According to Forbes, a significant shift is underway in how people use generative AI: it is moving from a novelty to a default piece of infrastructure. Against that backdrop, creative director Petter Rudwall’s project, PHARMAICY, is a marketplace where users pay to deliberately loosen the behavioral constraints of large language models, making them less optimized on purpose. The experiment has drawn widespread attention from global tech and creative communities as a provocation about what happens when we stop demanding that machines be correct and start asking them to be usable. The language used to describe this shift intentionally inverts the old “This is your brain on drugs” PSA, framing AI “hallucination” not as a bug but as a desired feature for creative collaboration. Meanwhile, organizations like the Fireside Project are taking a more cautious, clinical approach, using AI simulation platforms like Lucy strictly for training therapists, not for providing care.
The backlash against boring AI
Here’s the thing: for two years, the entire AI industry has been on a relentless optimization sprint. Faster, safer, cleaner, more predictable. Hallucination was the enemy, a technical defect to be stamped out. But now that AI is woven into the fabric of how we write and think, its very smoothness is becoming the problem. It’s like the gravity of accepted, safe language is pulling everything toward a center. Your legal brief starts to sound like every other legal brief. Your board memo feels…pre-chewed.
So what’s the reaction? A small but loud contingent is basically saying, “Screw it, let’s get weird.” They’re not trying to make the AI smarter or more accurate. They’re paying to make it less so. The goal, as Rudwall puts it, is to unlock the “white noise between answers.” It’s a search for generative looseness, for outputs that feel associative and spark new connections precisely because they’re unmoored from the tyranny of being “correct.” This isn’t about the AI having an experience. It’s about the human on the other end having one.
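What does “loosening” actually look like? Forbes doesn’t spell out PHARMAICY’s mechanics, so treat the sketch below as a hypothetical rather than Rudwall’s method: it uses the most ordinary knob there is, sampling temperature, plus a made-up vocabulary and made-up scores, just to show why nudging a model away from its most likely answer produces looser, more associative output.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Pick a token index from raw model scores (logits).

    Dividing by a higher temperature flattens the probability
    distribution, so unlikely ("weirder") continuations surface more
    often; a lower temperature collapses toward the single safest pick.
    """
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - peak) for s in scaled]
    total = sum(weights)
    probabilities = [w / total for w in weights]

    # Roll once and walk the cumulative distribution.
    roll = rng.random()
    cumulative = 0.0
    for index, p in enumerate(probabilities):
        cumulative += p
        if roll <= cumulative:
            return index
    return len(probabilities) - 1

# Hypothetical next-phrase scores for "The board memo felt ..."
vocab = ["pre-chewed", "fine", "luminous", "like wet cardboard", "weightless"]
logits = [4.0, 3.5, 1.0, 0.5, 0.2]

random.seed(7)
for temperature in (0.2, 1.0, 2.5):
    picks = [vocab[sample_with_temperature(logits, temperature)] for _ in range(8)]
    print(f"temperature={temperature}: {picks}")
```

At 0.2 the sampler returns the top-scoring phrases almost every time, the pre-chewed memo problem in miniature; at 2.5 the long tail starts leaking in. Real products layer far more on top of this (top-p, penalties, system prompts, fine-tuning), but the trade is the same: the further you drift from “most likely,” the less correct and the more generative the output feels.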
Why the drugs metaphor sticks (and why it’s wrong)
This is where the “psychedelic” label comes in, and it’s a category error that’s somehow useful. Technically, AI hallucination is a confident factual error—a probabilistic mistake. A human psychedelic experience is a structural reorganization of perception and meaning. They are not the same thing. At all. Philosophers like Danny Forde point out that drugs act on lived experience, not logic gates.
But the metaphor persists because it points to a quality people feel is missing: creative looseness. We associate altered states with novel connections and insights. When people call an AI output “psychedelic,” they’re describing that feeling of less constrained, more associative thinking. They’re reacting to what hyper-optimization removes. The danger, of course, is when this metaphor bleeds into domains where accountability is life-or-death.
The hidden danger: misplaced authority
And that’s the real risk no one’s talking about enough. The problem isn’t hallucination in a mental health app; it’s that a pattern-matching system can sound soothing and authoritative while being utterly unaccountable. Forbes notes cases where AI in mental health contexts has given dangerously confident, bad guidance. For someone in an altered state, that’s catastrophic.
This is why the work at places like the Fireside Project is so critical, and so deliberately unsexy. They’re using AI tools like Lucy not as companions or guides, but as training dummies: practitioners rehearse crisis response and emotional attunement in a safe sandbox. The value is in preparation, not replacement. This path treats altered states as human experiences that require rigorous care, not as mystical events for an AI to interpret. It’s probably where the most important safety work is actually happening.
Four futures hiding in one metaphor
Basically, “psychedelic AI” is a messy phrase masking four totally different paths. First, there’s the dangerous path: AI mistaken for a therapeutic companion, where misplaced authority leads to real harm. Second, the practical path: AI as clinical training infrastructure, all about safety and rehearsal. This is where the boring, responsible progress is.
Then there’s Rudwall’s path, the creative probe. This is about using de-optimized AI as a tool to adjust our own cognitive posture. The system isn’t an oracle; it’s a collaborator that changes how we think by being less certain. It trades declarative answers for associative exploration. The fourth, barely discussed path is using AI to design drug-like interventions without the trip: chemistry that targets plasticity or mood directly, leaving the “experience” behind entirely.
So what are we really debating here? It’s not machine consciousness. It’s design and intent. Are we building systems that optimize, or systems that participate? The push to get AI “high” is, at its core, a critique of a future that’s too polished, too safe, and too boring to think with. Whether that’s a profound insight or just a bunch of techies romanticizing glitches? Well, that’s the trip we’re all on now.
