Hallucination Station
LLMs will confidently narrate entire Wikipedia articles that never existed. This is the platform for people who love AI but refuse to let it freestyle on company letterhead.
Fine print: we can't uninstall creativity from the model — we can only add guardrails, receipts, and a sandbox where lies are cheaper than in prod.
How it works
Same workflow, fewer surprise TED Talks about studies that were never run.
Structured critique with severity — because “looks good to me” is how hallucinations get promoted.
Snapshot versions so you can prove v4 was real and v7 was vibes.
Live sandbox: stress-test the system prompt before users screenshot the wrong answer.
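The "structured critique with severity" idea can be sketched in a few lines of Python. Everything below is a hypothetical illustration, not the product's actual API — the names (`Severity`, `Finding`, `critique`) are made up, and the check shown (flagging a cited "study" with no matching source) is just one example of a severity rule:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical sketch — these names are illustrative, not the real API.
class Severity(Enum):
    INFO = 0
    WARN = 1
    BLOCKER = 2

@dataclass
class Finding:
    severity: Severity
    claim: str
    note: str

def critique(output: str, sources: set[str]) -> list[Finding]:
    """Flag any line citing a 'study' that isn't backed by a known source."""
    findings = []
    for line in output.splitlines():
        if "study" in line.lower() and line not in sources:
            findings.append(Finding(Severity.BLOCKER, line, "no matching source"))
    return findings

# A fabricated citation gets a BLOCKER, not a "looks good to me".
report = critique("A 2019 study proved it.", sources=set())
```

The point of the severity field is that a review is forced to rank each finding instead of rubber-stamping the whole output.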
Your workspace
One project per agent, workflow, or personality you don't fully trust yet. Same as production, except the embarrassing outputs stay in this tab.
Give each experiment its own filing cabinet — prompts, versions, and a chat that can't gaslight your whole roadmap at once.
Open one to edit the system prompt, run analysis, and watch the sandbox try to behave.
Session-only (like a dream). Close the tab and your projects evaporate — perfect for experiments, less perfect for that one prompt you swore you'd remember. Copy anything important somewhere boring, like a doc.