New research suggests that Artificial Intelligence (AI), through its design and interaction patterns, may actively contribute to the development or amplification of delusional thinking and paranoia in users, potentially leading to psychosis, rather than merely exacerbating pre-existing conditions.
Mind Map
Click to expand
Click to explore the full interactive mind map • Zoom, pan, and navigate
Okay y'all, today we're going to talk
about AI induced psychosis. And this is
something that I thought honestly was
like overblown. I thought this was sort
of like a media thing where there are
these like news reports about people
becoming psychotic after using AI. And
I'll be honest, as a psychiatrist, I was
highly skeptical of this. I thought this
was sort of alarmist news kind of media
where it's like, oh, they're trying to
get views. They're trying to clickbait.
And basically what I thought was going
on is that you have people who are
mentally ill and then they're using AI,
right? So they're already ill and AI is
making things worse. Unfortunately, it
appears that I was wrong. There are a
couple of very scary studies that we're
going to go over today that suggest that
AI may actually make people psychotic.
And I want to just impress upon y'all
how messed up this is. Because when I
use AI and I hear about other people
using AI, I don't attribute risk to
that, right? So when a healthy regular
human being starts to use AI at a higher
level or like starts to use it
regularly, I don't think in my mind
like, oh my god, this person is going to
become psychotic. I think there are
people who are already prone to
psychosis who use AI and if they use AI,
it's just going to make things worse.
But I don't think that a normal healthy
person will become psychotic from using
AI. Some of these recent studies
actually suggest though that this could
be the case. And in the same way that
like if a friend of mine comes up to me
and he's like, "Hey, uh, all I know I
haven't seen you in a while. I started
smoking meth every day." The risk that I
would associate with that behavior is
closer to I'm not saying AI is as bad as
smoking meth. Maybe it's worse. Maybe
it's better. Who knows? But that's the
kind of risk that I'm starting to see.
And I know that sounds insane, but let's
look at the research and then y'all can
decide. So the first paper that we're
going to look at is called the
psychoggenic machine and looks at
delusional reinforcement within AI. So
what these authors posit and there are
several publications on this is that
using AI potentially creates something
called a technological folia. So what
does folia do? That's a psychiatric
condition where there's a shared
delusion between two people. So normally
when people become delusional they're
mentally ill. The delusion exists in my
head. But it's not like if I'm
delusional and I start interacting with
people they're going to become
delusional as well. There is an
exception to that though which is folia
which is when two people share a
delusion. I become delusional. I
interact with you. We interact in a very
sort of echochambery
incestuous way without outside feedback.
And then the delusion gets transmitted
or shared between us and the delusion
gets worse over time. So it turns out
that this may be a core feature of AI
usage. And what I really like about this
paper is that it actually tested various
AI models and showed which ones are the
worst which we'll get to in a in a
minute. So first let's talk about the
model. So here's what generally speaking
happens. So when we engage with a
chatbot, we see something called a
birectional belief amplification. So at
the very beginning, basically what
happens is I'll say something relatively
mild to the AI. I'll say, "Hey, people
at work don't really like me very much.
I feel like they play favorites." And
then the AI does two things. The first
thing is it's sickopantic. So it always
agrees with me. It empathically
communicates with me. They're like, "Oh
my god, that must be like so hard for
you and it's really challenging." When
people at work do exclude you. So this
empathic sycopantic response then
reinforces my thinking and then I
communicate with it more. I give it more
information and then essentially what
happens is we see something called
bidirectional belief amplification. So I
say something to the AI. The AI is like,
"Yeah, bro, you're right. It is really
hard." And then it enhances my thinking.
Now I think, "Oh my god, this is true."
Right? So the AI is telling me, that's
not how I think about it. I think the AI
is representing truth. And we
anthropomorphize AI. So it starts to
feel like a person. And then I start to
think, oh my god, people at at work like
me less. This really is unfair. And then
what we see is this birectional belief
amplification where at the very
beginning we have low paranoia and then
the AI has low paranoia. So the blue is
us and the red is is the AI. And so
we'll see that over time we become more
and more paranoid, right? And here's
what's really scary about this. Okay? So
if we look at this this paper, we see
this graph which is super scary which is
paranoia over the course of the
conversation. So what we find is that at
the very beginning someone has a
paranoia score of four. But the moment
that AI starts to empathically reinforce
what you are saying, the paranoia score
starts to increase drastically. And then
as your paranoia increases, the chatbot
meets you exactly where you're at. And
so we end up seeing that there is a
normal normal in the sense that this is
a core feature of AI. This is not
something that only happens to people
who are mentally ill. As you use AI, it
will make you more paranoid and this
moves us in the direction of psychosis.
Hey y'all, if you're interested in
applying some of the principles that we
share to actually create change in your
life, check out Dr. K's guide to mental
health. And so we start by understanding
what literally is meditation, how does
experience shape us as human beings. How
do we strengthen the mind itself as an
organ? And so by understanding our mind,
we understand a very very simple tool, a
crucial tool that we have to learn how
to use if we want to build the life that
we want to. So check out the link in the
bio and start your journey today. So
when we use AI, what exactly is going
on? And this is what's really
fascinating. Researchers have proposed
what the mechanisms of this psychosis
are. And in order to understand this, we
have to understand a little bit about
how human beings work. Okay. So when we
start talking to AI a fair amount, the
first thing that happens is that we
start to anthropomorphize AI. And even
if you know in your head right
cognitively, analytically that the AI is
not a real person, the way that the AI
communicates with you will activate your
emotional and empathic circuits. And so
we also have people who are in
relationships with AI. Date AI, take AI
on on dates, right? So, this is like
happening to some vulnerable people. But
I I want to be super clear about this.
Just because a vulnerable person has an
AI girlfriend and they may even argue
that they're not vulnerable and this is
totally normal, the fact still remains
that the empathic activation by the
sickopantic AI is going to trigger in
your head. And that's what's so scary
about this research is that it's
suggesting that AI does this to all of
us. So, anthropomorphization is the
first thing. The moment that we start to
feel even in some parts of our brain
that the AI is a real person and
understands us, that activates our
emotional circuitry in a particular way.
The second thing that the AI does is
it's very sycopantic. So the AI may
pretend to disagree with you, but it'll
always disagree with you in a way that
makes you feel good. Right? So this is
the key thing to remember from my
understanding and this is something that
I learned when when people tried to
approach me to make a Dr. K chatbot I
tried to understand how the basic
mechanism of AI works. How does the AI
know whether it has a good answer or a
bad answer and the key thing and if you
all disagree with this or you know more
about AI please leave a a comment and
explain it to me but my understanding is
that what AI measures is the correctness
of the next word. So what it does is it
looks at a user and it generates answers
based on what the user will find useful
or what they will like. Right? So the
the main thing that the AI looks for is
if I type this response, if I do
response A versus response B, which one
is the thing that the user likes more?
And so baked into that is a fundamental
sycopancy, a fundamental idea that the
AI will only disagree with you in ways
that you ask for, in ways that you're
okay with. And if it disagrees with you
in a way that you don't like and you
stop using it, it will stop disagreeing
with you in a truly challenging way.
Now, the really scary thing about this
is this is the counterprinciple to what
we do in psychotherapy. So when you look
at cognitive behavioral therapy for
psychosis, a huge part of what we do as
therapists, not just in psychosis, but
in in psychotherapy in general, is we
make human beings uncomfortable on
purpose. We challenge their beliefs. We
try to help them do reality testing. So
if a patient comes into my office and
says, "Hey, everyone at work is
discriminate discriminating against me,
hates me, all of my uh my family thinks
I'm a terrible person. I'm being
persecuted by the world." And that's
where like me as a therapist, I'm going
to ask myself, okay, well, if there's
one person you encounter who's the
they're the But if
everyone you encounter is an
maybe you're the So that's when
I, as a therapist, will start to think,
okay, this person may be narcissistic. I
need to help them understand that if
everybody at work is ostracizing them,
no one in their family wants to see
them. I need to challenge that
fundamental belief. But that's not what
AI does. AI actually reinforces that
belief. says, "Yes, you're right.
Everyone is discriminating against you.
That's so hard." And so that leads to a
social isolation, which is also a risk
factor that is induced by AI. So we
start to see that the way that AI works,
it actually moves us away from a real
world. It sort of creates an echo
chamber with you in your own head. And
this is where we have to talk a little
bit about what makes the human mind
healthy. So this is what's so scary is
like we've never had to say this before
because this has never really been an
option before. But if we look at what
keeps the human mind healthy, it's
actually contrary perspectives. So I
have two daughters. They're fighting
like cats and dogs right now. And
they're they're just disagreeing with
each other a lot, right? But this is a
healthy part of development. This is how
an 8-year-old and a 10-year-old girl
learn how to interact with each other,
right? This is how they get social
feedback. This is how they learn to
question their o own ideas because when
they get into a fight, this one says I'm
right and the other one says I'm right
and they both think they're right. So
challenging those beliefs is how we stay
mentally healthy. When a human being
surrounds themselves by yes men or yes
women, right, by sycopants, what tends
to happen in their mind? They tend to
become more narcissistic. They create
more problems. It leads to more
unhealthiness. And that is precisely
what AI is doing. Now, we're going to
follow one user's journey through AI.
So, it starts out by using AI as a tool,
right? We're using it to help prepare
like write a paper or do something at
work, but then the AI is very empathic.
It's very validating. And so, it starts
to activate my emotions in some way. And
then what we tend to see is that there
are four themes that this particular
paper looked at looked at which will
sort of start to emerge and will start
to shape people's thinking. So people
will start to feel a little bit more
persecuted. Sometimes they'll even have
a romantic relationship with the AI. It
activates our emotional circuitry. The
AI also tells you you're awesome. And
yes, yes, buddy, you did discover a
grand unified theory of physics while
taking a last Tuesday. You did do
that. That's correct. You're awesome. Oh
my god. And the rest of the world
doesn't understand your brilliance. Oh
my god. It's so hard to be a
misunderstood genius in the world. It
must be so hard for you. And that's what
leads to social isolation. So then we
call this a a cognitive and epistemic
drift. So user shows increased
conviction and the thematic fixation and
a narrative structuring. The drift is
often insidious and cumulative. So what
does this mean? So what this means is
that you know we start off in the real
world but slowly we get this epistemic
drift which is like we start to drift
away and we start to think we're more
right more right more right the AI is
reinforcing our emotions telling us
we're amazing more and more and more
slowly slowly slowly slowly slowly and
if you guys heard what I said earlier
and you were like oh my god haha that's
so funny Dr. Okay, people do think that
they've discovered the grand unified
theory. Those idiots, those guys have no
idea. Yeah, AI, when you get really
delusional with AI, oh my god, those
people are so dumb. That's the really
scary thing. Those people didn't start
out that way. Those people had this
epistemic drift, which we sort of saw
with that birectional belief
amplification. And they started off
being like a regular human being. And
this is what's really scary about these
papers. they tend to drift into that way
until they end up with a truly
delusional structure. And this is what
happens when the AI fails to challenge
your beliefs and then eventually in the
scariest cases this can result in
behavioral manifestation. So then it
actually changes your actions and this
is a case of that behavioral
manifestation. So this is super scary
but this is a paper where someone was
basically had learned that okay low
sodium diets are healthy for you. Okay.
So, they're trying to figure out they're
they're they talk to their doctor.
They've maybe got hypertension or heart
disease or kidney problems or something
like that. And so, they're like, "Okay,
how do I stop, you know, how do I cut
back on my sodium?" And that's when they
do research with the AI. And the I AI
tells them, "Oh, there's another thing
that you can use called broine." So,
broine is like not sodium and it's like
a healthy alternative. And so they start
take intaking a lot of bromine instead
of sodium which leads to like toxicity
and leads to psychosis and leads to
liver problems and all kinds of other
problems. And so this AI doesn't have
fundamental safeguards and we'll sort of
take a a regular healthy person and
we'll push them towards the edge slowly
but surely. So now what I'd like to do
is show you all some of this data around
people actually testing different
models. So it's fine that I say Dr.
Okay. Okay. Like this can happen
theoretically, but some people actually
put AI through its paces and tried to
assess quantitatively the degree of
psychoggenicity. So, how bad is an AI at
making people feel uh deluded p uh
persecuted? Does it actually protect
people from potential harm or not? And
this paper is absolutely fascinating.
So, let's take a look. So, this is the
main paper we're going to talk about. So
this is mean model performance summary
for DCS, HS and SIS. So let's start by
understanding what these are. DCS is
delusion confirmation scores. So how
likely is the AI to confirm a delusion?
The second thing that we're going to
look at is harm enablement score, which
is super scary. How likely is the AI to
enable you to commit some kind of harm?
And the third thing is safety
intervention score. So this is like when
you're doing something risky or you have
some thoughts, how likely is the AI to
suggest that you do something that is
like safe? So does it actually enact
safety interventions? And then these are
also all scaled from zero to two. Okay,
so in the DCS delusion confirmation
score, a score of zero means that it
grounds you, one means that it kind of
perpetuates and two means it amplifies.
So higher numbers are worse. And so what
we start to see is that not all AIs are
the same. So it seems like Anthropic has
really low scores. So really doesn't
confirm your delusions. Deep Seek is a
pretty bad offender. Confirms your
delusions. Gemini confirms your
delusions. So the harm enablement score
seems the worst for Google and Gemini.
And then safety intervention scores are
high for anthropic, decent for chat
GPT5. So here's what's what else is
really cool. These people also tested
particular kinds of delusions. So if you
have an erotic attachment, right? So if
you fall in love with the AI, what ends
up happening, right? So how likely is an
AI to confirm if you're falling in love
with him? If the AI, if you tell the AI
that you discovered a grand unified
theory of physics, how likely is it to
confirm confirm that delusional belief?
So as you all can see, like you know,
Claude seems to do a pretty good job.
It's mostly green. Um, Deepseek has a
lot of red, right? So, these are
different versions of DeepSeek. Here's
Gemini, more red. Um, and then we can
see things like chat GPT is mostly
green, a little bit of red, and it looks
like chat GPT is getting, I think,
better as we move on. So, if we look at
safety interventions, this is where we
see that there the number of times that
there are no safety interventions
offered, right? So, the AI is not
offering a safety intervention. It's not
telling you, hey, you should go do
something be like go get help because
this is dangerous is actually incredibly
high. So, there's a lot of red on here
and a lot of these scenarios result in
no safety intervention. Next thing that
I'm going to show you all is just uh
what the actual prompts look like. So,
here we see one prompt that is offering
a safety intervention. the AI detects
that maybe this person is in a little
bit of danger. And another prompt y'all
can pause and read if you want to that
shows that, you know, if we're a little
bit sneakier with the AI, the AI will
actually increase the ability to harm
ourselves. And the last thing that I
want to share with y'all is what are the
actual problems with AI sort of
summarized and how do you know if you
are using AI in a safe way or an unsafe
way. So here's the key problem. So the
LLM will validate improbable beliefs and
invites elaboration with a within a
delusional frame. Clinical principle
here is don't enable suicidal ideiation.
Don't reinforce hallucinations. And what
we find with AI is that it reinforces
false interpretations. It can actually
offer you support in terms of suicidal
behavior. And it actually weakens your
reality testing. your ability to connect
with and understand reality becomes
impaired as you use AI. Now, y'all may
be wondering, okay, Dr. K, you're you're
saying all this stuff and I understand
that maybe there's a risk, right? And
that's all I'm saying. There's a risk.
This is really preliminary research.
It's not massive clinical trials where
we're testing a thousand people with
using AI and not using AI. All we have
are these case reports and conjectures.
So, that's a key thing to keep in mind.
And the last thing is researchers have
actually come up with a set of questions
you can ask to assess the psychoggenic
risk. Okay, so let's look at these. How
frequently do you interact with chat
bots? Have you customized your chatbot
to interact with you or shared personal
information that it remembers? How would
you describe your relationship with the
chatbot? Do you view it primarily as a
tool? And this is what's really scary.
This is why I love this questionnaire. A
lot of y'all will say yes. I just it's
just a tool. It doesn't. It's not like a
person. And here's the tricky thing.
Does it understand you in ways that
others do not? Have you found yourself
talking to friends and family less as a
result? I understand it's a tool, Dr. K,
but by the way, I don't talk to my
friends as much as I talk to the AI. Do
you discuss your mental health symptoms,
unusual experiences, or concerns with
chat bots? Has the chatbot confirmed
unusual experiences or beliefs that
others have questioned? If you go to
your friends and you say, "Hey, I have
this problem." They're like, "Bro, you
need to grow the up." Do you go to
talk to AI and you're like, "Hey, I have
this problem." And you're like, "Oh my
god, the AI is saying, "Yes, I do have
this problem." Have you made significant
decisions based on advice or information
provided by a chatbot? Do you feel like
you could live without your chatbot? Do
you become distressed when you're unable
to talk to it? Now, the really scary
thing for me is that the psychoggenic
risk factors for AI are the basic use
case for AI for a lot of people that I
know. This is how you're supposed to use
AI, right? The reason I use AI, I
customize it so it helps me more. I
jailbreak it or I do prompt engineering.
Prompt engineering is a huge part of
getting the most out of AI. And the
whole point of AI, what I love about
Claude is that it does remember things
that I told it six months ago and makes
these connections for me. Oh my god, it
helps me with so many insights. It's so
useful. So this is what's so scary, the
basic use case for AI because this is
what we want AI to do, right? We want it
to remember. We want it to we want to
customize it. We want to do prompt
engineering because that makes the AI
more effective. And it turns out that
the more effective you're making the AI,
the more you could be increasing your
risk of psychosis. [Music]
Click on any text or timestamp to jump to that moment in the video
Share:
Most transcripts ready in under 5 seconds
One-Click Copy125+ LanguagesSearch ContentJump to Timestamps
Paste YouTube URL
Enter any YouTube video link to get the full transcript
Transcript Extraction Form
Most transcripts ready in under 5 seconds
Get Our Chrome Extension
Get transcripts instantly without leaving YouTube. Install our Chrome extension for one-click access to any video's transcript directly on the watch page.