AI-Associated Delusions

Update: 2025-07-15

Description

This week we talk about AI therapy chatbots, delusions of grandeur, and sycophancy.

We also discuss tech-triggered psychosis, AI partners, and confident nonsense.

Recommended Book: Mr. Penumbra's 24-Hour Bookstore by Robin Sloan

Transcript

In the context of artificial intelligence systems, a hallucination or delusion, sometimes more brusquely referred to as AI BS, is an output, usually from an AI chatbot but sometimes from another type of AI system, that's basically just made up.

Sometimes this kind of output is just garbled nonsense, as these AI systems, those based on large language models anyway, are essentially just predicting which words will come next in the sentences they're writing, based on statistical patterns. That means they can string words together, then sentences together, then paragraphs together in what seems like a logical and reasonable way, and in some cases they can even cobble together convincing stories or code or whatever else, because systems with enough raw materials to work from have a good sense of what tends to go where, and thus what's good grammar and what's not, what code will work and what code will break your website, and so on.
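To make that "predict the next word from patterns" idea concrete, here is a minimal sketch, not from the episode, of the statistical intuition behind it: a toy model counts which word tends to follow which other word in a sample text, then generates new text by repeatedly sampling a likely next word. Real large language models use neural networks trained on enormous datasets rather than simple word-pair counts, and the sample text and function names here are purely illustrative.

```python
# Toy next-word predictor: counts which word follows which in a sample text,
# then generates text by repeatedly sampling a likely next word.
# Illustration only; real LLMs use neural networks over vastly more data.
import random
from collections import defaultdict, Counter

sample_text = (
    "the model predicts the next word the model strings words together "
    "the model sounds convincing even when the model is making things up"
)

# Count how often each word follows each other word (a bigram table).
follow_counts = defaultdict(Counter)
words = sample_text.split()
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

def generate(start_word, length=10):
    """Generate text by repeatedly choosing a statistically likely next word."""
    output = [start_word]
    for _ in range(length):
        candidates = follow_counts.get(output[-1])
        if not candidates:
            break  # no known continuation; stop
        next_words = list(candidates.keys())
        weights = list(candidates.values())
        output.append(random.choices(next_words, weights=weights)[0])
    return " ".join(output)

print(generate("the"))
```

The point of the toy is that nothing in it checks whether the output is true; it only tracks what tends to follow what, which is why fluent-sounding fabrication comes so naturally to this kind of system.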

In other cases, though, AI systems will seem to just make stuff up, but make it up convincingly enough that it can be tricky to detect the made up component of its answers.

Some writers have reported asking an AI to provide feedback on their stories, for instance, only to later discover that the AI didn't have access to those stories, and was instead providing feedback based on the title, or based on the writer's prompt, the text the writer used to ask the AI for feedback. Those answers were perhaps initially convincing enough that the writer didn't realize the AI hadn't read the pieces it was asked to critique. And because most of these systems are biased toward sycophancy, toward brown-nosing the user, not saying anything that might upset them, and saying what they believe the user wants to hear, they'll provide a general critique that sounds good, that lines up with what their systems tell them should be said in such contexts, but which is completely disconnected from those writings, and thus not useful to the writer as a critique.

That combination of confabulation and sycophancy can be brutal, especially as these AI systems become more powerful and more convincing. They seldom make the basic grammatical and reality-based errors they made even a few years ago, and thus it's easy to believe you're speaking to something that's thinking, or, at the bare minimum, that understands what you're trying to get it to help you with, or what you're talking about. It's easy to forget when interacting with such systems that you're engaged not with another human or thinking entity, but with software that mimics the output of such an entity, software that doesn't experience the same cognition experienced by the real-deal thinking creatures it's attempting to emulate.

What I’d like to talk about today is another sort of AI-related delusion—one experienced by humans interacting with such systems, not the other way around—and the seeming, and theoretical, pros and cons of these sorts of delusional responses.

Research that's looked into the effects of psychotherapy, including specific approaches like cognitive behavioral therapy and group therapy, shows that such treatments are almost always positive, with rare exceptions, and grant benefits that tend to last well past the therapy itself. People who go to therapy tend to benefit from it even after the session, and even after they stop going to therapy, if they eventually stop for whatever reason. And the success rate, the variability of positive impacts, varies based on the clinical location, the therapist, and so on, but only by about 5% or less for each of those variables; so even a not perfectly aligned therapist or a less than ideal therapy location will, on average, benefit the patient.

That general positive impact is part of the theory underpinning the use of AI systems for therapy purposes.

Instead of going into a therapist’s office and speaking with a human being for an hour or so at a time, the patient instead speaks or types to an AI chatbot that’s been optimized for this purpose. So it’s been primed to speak like a therapist, to have a bunch of therapy-related resources in its training data, and to provide therapy-related resources to the patient with whom it engages.

There are a lot of downsides to this approach. AI bots are flawed in so many ways, are not actual humans, and thus can't really connect with patients the way a human therapist might be able to connect with them. They have difficulty shifting from a trained script, as, again, these systems are pulling from a corpus of training data and additional documents to which they have access, which means they'll tend to handle common issues and patient types pretty well, but anything deviating from that is a toss-up. And, as I mentioned in the intro, there's a chance they'll just make stuff up or brown-nose the patient, saying things it seems like the patient wants to hear, rather than the things the patient needs to hear for their mental health.

On the upside, though, there's a chance some people who wouldn't feel comfortable working with a human therapist will be more comfortable working with a non-human chatbot. Many people don't have physical access to therapists or therapy centers, or don't have insurance that covers what they need in this regard, and some people have other monetary or physical or mental health issues that make therapy inaccessible or non-ideal for whatever reason. These systems could help fill in the gaps for them, giving them something imperfect, but, well, 80% of what you need can be a lot better than 0% of what you need. In theory, at least.

That general logic is a big part of why the therapy AI bot boom has been so substantial, despite the many instances of human patients seemingly being driven to suicide or other sorts of self-harm after interacting with these bots, which in some cases were later found to either nudge their patients in that direction or support their decisions to do so. And that's alongside the other issues associated with any app that sends the user's information to a third party for processing, like the collection of their data for marketing and other purposes.

The therapy chatbot industry is just one corner of a much larger conversation about what's become known as ChatGPT Psychosis, which is shorthand for the delusions some users of these AI chatbots, ChatGPT and those made by other companies, begin to have while interacting with these bots, or maybe already had, but then have amplified by their interactions with these systems.

The stories have been piling up, reported in the Times, in Rolling Stone, and in scientific journals, and the general narrative is that someone who seems to be doing fine, but who’s maybe a little depressed or unhappy, a little anxious, but nothing significant, at least to those around them, starts interacting with a chatbot, then gets really, really absorbed in that interaction, and then at some point those around this person realize that their friend or child or spouse or whomever is beginning to have delusions of grandeur, believing themselves to be a prophet or god, or maybe they’re starting to see the world as just an intertangled mess of elaborate, unbacked conspiracy theories, or they come to believe the entire world revolves around them in some fundamental way, or everyone is watching and talking about them—that genre of delusion.

Many such people end up feeling as if they’re living inside nihilistic and solipsistic nightmares, where nothing has meaning and they’re perhaps the only entity of any importance on the planet—everyone else is just playing a minor role in their fantastical existence.

Different chatbots have different skins, in the sense that their outputs are tailored in different ways, to have a different valence and average personality. Some chatbots, like OpenAI's GPT-4o, have had their sycophancy turned up so high that it rendered them almost completely useless before it was eventually fixed. Early users reported feeling unsettled by their interactions with this bot when that version was first released, because it was such a shameless yes-man: they couldn't get any useful information from it, and all the language it used to deliver the information it did provide made them feel like they were being manipulated by a slavish underling.

For some people, though, that type of presentation, that many compliments and that much fawning attention, will feel good, or feel right.

Maybe they’re disempowered throughout their day in countless subtle and overt ways, and having someone—even if that someone isn’t real—speak to them like they’re important and valuable and smart and attractive and perhaps even the most important person in the world, maybe that feels good.

Others maybe feel like they have hidden potential that's never been appreciated, and if the chatbot they're referencing for all sorts of answers about things, which seems to have most of those answers, and is thus a believable source of good, correct information, starts to talk to them as if they're the messiah, well, maybe they start to believe they are important, are some kind of messiah: after all, it's right about so many other things, so why not this thing? It's something many of us, to greater or lesser degrees, and possibly not always to that extreme, would be psychologically primed to believe, at least on some level, because it feels good to feel important, and so many social narratives, in some cultures at least…


Colin Wright