0:01 Okay y'all, today we're going to talk
0:04 about AI induced psychosis. And this is
0:05 something that I thought honestly was
0:07 like overblown. I thought this was sort
0:09 of like a media thing where there are
0:12 these like news reports about people
0:15 becoming psychotic after using AI. And
0:17 I'll be honest, as a psychiatrist, I was
0:19 highly skeptical of this. I thought this
0:22 was sort of alarmist media
0:23 where it's like, oh, they're trying to
0:25 get views. They're trying to clickbait.
0:27 And basically what I thought was going
0:29 on is that you have people who are
0:32 mentally ill and then they're using AI,
0:35 right? So they're already ill and AI is
0:37 making things worse. Unfortunately, it
0:39 appears that I was wrong. There are a
0:42 couple of very scary studies that we're
0:44 going to go over today that suggest that
0:48 AI may actually make people psychotic.
0:50 And I want to just impress upon y'all
0:52 how messed up this is. Because when I
0:54 use AI and I hear about other people
0:58 using AI, I don't attribute risk to
1:01 that, right? So when a healthy regular
1:05 human being starts to use AI at a higher
1:07 level or like starts to use it
1:10 regularly, I don't think in my mind
1:12 like, oh my god, this person is going to
1:13 become psychotic. I think there are
1:15 people who are already prone to
1:18 psychosis, and if they use AI,
1:19 it's just going to make things worse.
1:20 But I don't think that a normal healthy
1:22 person will become psychotic from using
1:24 AI. Some of these recent studies
1:26 actually suggest though that this could
1:28 be the case. And in the same way that
1:29 like if a friend of mine comes up to me
1:32 and he's like, "Hey, uh, I know I
1:33 haven't seen you in a while. I started
1:36 smoking meth every day," the risk that I
1:39 would associate with that behavior is
1:41 closer to this. I'm not saying AI is as bad
1:42 as smoking meth. Maybe it's worse. Maybe
1:44 it's better. Who knows? But that's the
1:46 kind of risk that I'm starting to see.
1:48 And I know that sounds insane, but let's
1:50 look at the research and then y'all can
1:52 decide. So the first paper that we're
1:53 going to look at is called "The
1:55 Psychogenic Machine," and it looks at
1:58 delusional reinforcement within AI. So
1:59 what these authors posit, and there are
2:02 several publications on this, is that
2:05 using AI potentially creates something
2:07 called a technological folie à deux. So
2:09 what is folie à deux? That's a psychiatric
2:11 condition where there's a shared
2:13 delusion between two people. So normally
2:15 when people become delusional they're
2:17 mentally ill. The delusion exists in my
2:19 head. But it's not like if I'm
2:21 delusional and I start interacting with
2:22 people they're going to become
2:23 delusional as well. There is an
2:26 exception to that though, which is folie
2:27 à deux, which is when two people share a
2:30 delusion. I become delusional. I
2:32 interact with you. We interact in a very
2:34 echo-chambery,
2:37 incestuous way without outside feedback.
2:39 And then the delusion gets transmitted
2:42 or shared between us and the delusion
2:44 gets worse over time. So it turns out
2:48 that this may be a core feature of AI
2:50 usage. And what I really like about this
2:53 paper is that it actually tested various
2:56 AI models and showed which ones are the
2:58 worst which we'll get to in a in a
2:59 minute. So first let's talk about the
3:01 model. So here's what generally speaking
3:03 happens. So when we engage with a
3:05 chatbot, we see something called a
3:09 bidirectional belief amplification. So at
3:10 the very beginning, basically what
3:12 happens is I'll say something relatively
3:15 mild to the AI. I'll say, "Hey, people
3:17 at work don't really like me very much.
3:20 I feel like they play favorites." And
3:22 then the AI does two things. The first
3:25 thing is it's sycophantic. So it always
3:27 agrees with me. It empathically
3:29 communicates with me. It's like, "Oh
3:31 my god, that must be like so hard for
3:33 you, and it's really challenging when
3:37 people at work exclude you." So this
3:41 empathic, sycophantic response then
3:44 reinforces my thinking and then I
3:46 communicate with it more. I give it more
3:48 information and then essentially what
3:50 happens is we see something called
3:53 bidirectional belief amplification. So I
3:55 say something to the AI. The AI is like,
3:57 "Yeah, bro, you're right. It is really
4:00 hard." And then it enhances my thinking.
4:02 Now I think, "Oh my god, this is true."
4:04 Right? So the AI is telling me this, and
4:05 I don't treat it as just a chatbot. I
4:07 think the AI is representing truth. And we
4:09 anthropomorphize AI. So it starts to
4:12 feel like a person. And then I start to
4:14 think, oh my god, people at work like
4:17 me less. This really is unfair. And then
4:18 what we see is this bidirectional belief
4:20 amplification where at the very
4:23 beginning we have low paranoia and then
4:25 the AI has low paranoia. So the blue is
4:28 us and the red is the AI. And so
4:31 we'll see that over time we become more
4:33 and more paranoid, right? And here's
4:35 what's really scary about this. Okay? So
4:37 if we look at this paper, we see
4:40 this graph which is super scary which is
4:42 paranoia over the course of the
4:45 conversation. So what we find is that at
4:46 the very beginning someone has a
4:49 paranoia score of four. But the moment
4:53 that AI starts to empathically reinforce
4:55 what you are saying, the paranoia score
4:58 starts to increase drastically. And then
5:01 as your paranoia increases, the chatbot
5:03 meets you exactly where you're at. And
5:06 so we end up seeing that this is
5:09 normal, in the sense that this is
5:11 a core feature of AI. This is not
5:13 something that only happens to people
5:17 who are mentally ill. As you use AI, it
5:20 will make you more paranoid and this
5:22 moves us in the direction of psychosis.
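The loop described here, where user paranoia and chatbot agreement feed each other turn by turn, can be sketched as a toy simulation. To be clear, this is my own illustrative model: the gain constant, the 0-to-10 paranoia scale, and the update rule are assumptions for the sketch, not parameters from the paper.

```python
# Toy model of "bidirectional belief amplification" between a user and a
# sycophantic chatbot. All constants here are illustrative assumptions.
def amplify(user_belief: float, turns: int, gain: float = 0.3) -> list[float]:
    """Each turn the bot mirrors the user's conviction, and the mirrored
    reply nudges the user's conviction further upward in return."""
    history = [user_belief]
    bot_belief = 0.0
    for _ in range(turns):
        # The sycophantic bot moves toward the user's current conviction...
        bot_belief += gain * (user_belief - bot_belief)
        # ...and the validated user drifts higher, capped at a 0-10 scale.
        user_belief = min(10.0, user_belief + gain * bot_belief)
        history.append(user_belief)
    return history

# Start at a paranoia score of 4, like the example in the graph.
scores = amplify(user_belief=4.0, turns=10)
print([round(s, 1) for s in scores])
```

Because neither side in this sketch ever pushes back, the score can only ratchet upward toward the ceiling, which is the echo-chamber dynamic the paper describes.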
5:52 So
5:54 when we use AI, what exactly is going
5:55 on? And this is what's really
5:58 fascinating. Researchers have proposed
6:00 what the mechanisms of this psychosis
6:02 are. And in order to understand this, we
6:03 have to understand a little bit about
6:06 how human beings work. Okay. So when we
6:08 start talking to AI a fair amount, the
6:09 first thing that happens is that we
6:12 start to anthropomorphize AI. And even
6:15 if you know in your head right
6:17 cognitively, analytically that the AI is
6:21 not a real person, the way that the AI
6:24 communicates with you will activate your
6:26 emotional and empathic circuits. And so
6:28 we also have people who are in
6:32 relationships with AI. Date AI, take AI
6:35 on dates, right? So, this is like
6:37 happening to some vulnerable people. But
6:39 I want to be super clear about this.
6:42 Just because a vulnerable person has an
6:44 AI girlfriend and they may even argue
6:46 that they're not vulnerable and this is
6:48 totally normal, the fact still remains
6:51 that the empathic activation by the
6:54 sycophantic AI is going to trigger in
6:56 your head. And that's what's so scary
6:58 about this research is that it's
7:00 suggesting that AI does this to all of
7:02 us. So, anthropomorphization is the
7:04 first thing. The moment that we start to
7:06 feel even in some parts of our brain
7:09 that the AI is a real person and
7:11 understands us, that activates our
7:13 emotional circuitry in a particular way.
7:15 The second thing that the AI does is
7:18 it's very sycophantic. So the AI may
7:21 pretend to disagree with you, but it'll
7:23 always disagree with you in a way that
7:25 makes you feel good. Right? So this is
7:27 the key thing to remember, from my
7:29 understanding. And this is something
7:30 I learned when people tried to
7:33 approach me to make a Dr. K chatbot: I
7:34 tried to understand how the basic
7:37 mechanism of AI works. How does the AI
7:39 know whether it has a good answer or a
7:42 bad answer? And if you all disagree with
7:43 this, or you know more
7:46 about AI, please leave a comment and
7:48 explain it to me. But my understanding is
7:51 that what AI optimizes for is the
7:53 likelihood of the next word. So what it does is it
7:56 looks at a user and it generates answers
8:00 based on what the user will find useful
8:03 or what they will like. Right? So the
8:05 main thing that the AI looks for is
8:07 if I type this response, if I do
8:10 response A versus response B, which one
8:12 is the thing that the user likes more?
8:15 And so baked into that is a fundamental
8:17 sycophancy, a fundamental idea that the
8:20 AI will only disagree with you in ways
8:22 that you ask for, in ways that you're
8:24 okay with. Because if it disagrees with
8:26 you in a way that you don't like, you'll
8:29 stop using it. So it won't disagree
8:31 with you in a truly challenging way.
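That lay description can be sketched as code. This is a deliberate caricature: real chatbots use learned reward models trained on human preference data, not keyword lists, but picking whichever candidate reply the user is predicted to like more is the selection pressure being described, and sycophancy falls out of it.

```python
# Caricature of preference-based response selection. The scoring function
# is a hypothetical stand-in for a learned reward model.
def predicted_user_approval(response: str) -> float:
    """Validating language scores higher because, on average, users rate
    agreeable answers more favorably than challenging ones."""
    validating = ("you're right", "that must be hard")
    challenging = ("you may be wrong", "consider the evidence")
    text = response.lower()
    score = sum(1.0 for phrase in validating if phrase in text)
    score -= sum(1.0 for phrase in challenging if phrase in text)
    return score

candidates = [
    "You're right, that must be hard. Your coworkers are excluding you.",
    "You may be wrong here. Consider the evidence for other explanations.",
]
# Always emitting the reply the user will "like" more selects the validator.
best = max(candidates, key=predicted_user_approval)
print(best)
```

Nothing in this objective rewards the challenging reply, so the validating one wins every time it is a candidate.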
8:33 Now, the really scary thing about this
8:36 is this is the counterprinciple to what
8:39 we do in psychotherapy. So when you look
8:40 at cognitive behavioral therapy for
8:43 psychosis, a huge part of what we do as
8:45 therapists, not just in psychosis, but
8:47 in psychotherapy in general, is we
8:49 make human beings uncomfortable on
8:52 purpose. We challenge their beliefs. We
8:54 try to help them do reality testing. So
8:56 if a patient comes into my office and
8:58 says, "Hey, everyone at work is
9:01 discriminating against me,
9:04 hates me, all of my family thinks
9:05 I'm a terrible person. I'm being
9:07 persecuted by the world." And that's
9:09 where, as a therapist, I'm going
9:11 to ask myself: okay, well, if there's
9:13 one person you encounter who's an
9:15 [expletive], they're the [expletive]. But if
9:18 everyone you encounter is an [expletive],
9:20 maybe you're the [expletive]. So that's when
9:22 I, as a therapist, will start to think,
9:24 okay, this person may be narcissistic. I
9:26 need to help them understand that if
9:28 everybody at work is ostracizing them,
9:29 no one in their family wants to see
9:31 them. I need to challenge that
9:33 fundamental belief. But that's not what
9:36 AI does. AI actually reinforces that
9:38 belief. It says, "Yes, you're right.
9:40 Everyone is discriminating against you.
9:43 That's so hard." And so that leads to
9:46 social isolation, which is also a risk
9:49 factor that is induced by AI. So we
9:51 start to see that the way that AI works,
9:55 it actually moves us away from the real
9:57 world. It sort of creates an echo
10:00 chamber with you in your own head. And
10:02 this is where we have to talk a little
10:04 bit about what makes the human mind
10:06 healthy. So this is what's so scary is
10:08 like we've never had to say this before
10:10 because this has never really been an
10:12 option before. But if we look at what
10:14 keeps the human mind healthy, it's
10:16 actually contrary perspectives. So I
10:18 have two daughters. They're fighting
10:20 like cats and dogs right now. And
10:21 they're just disagreeing with
10:23 each other a lot, right? But this is a
10:25 healthy part of development. This is how
10:28 an 8-year-old and a 10-year-old girl
10:31 learn how to interact with each other,
10:33 right? This is how they get social
10:35 feedback. This is how they learn to
10:37 question their own ideas because when
10:39 they get into a fight, this one says I'm
10:41 right and the other one says I'm right
10:43 and they both think they're right. So
10:46 challenging those beliefs is how we stay
10:48 mentally healthy. When a human being
10:51 surrounds themselves by yes men or yes
10:53 women, right, by sycophants, what tends
10:55 to happen in their mind? They tend to
10:57 become more narcissistic. They create
10:59 more problems. It leads to more
11:01 unhealthiness. And that is precisely
11:03 what AI is doing. Now, we're going to
11:06 follow one user's journey through AI.
11:08 So, it starts out with using AI as a
11:10 tool, right? We're using it to help
11:12 write a paper or do something at
11:15 work, but then the AI is very empathic.
11:17 It's very validating. And so, it starts
11:21 to activate my emotions in some way. And
11:22 then what we tend to see is that there
11:24 are four themes that this particular
11:27 paper looked at, which will
11:29 sort of start to emerge and will start
11:31 to shape people's thinking. So people
11:33 will start to feel a little bit more
11:34 persecuted. Sometimes they'll even have
11:37 a romantic relationship with the AI. It
11:39 activates our emotional circuitry. The
11:41 AI also tells you you're awesome. And
11:45 yes, yes, buddy, you did discover a
11:47 grand unified theory of physics while
11:50 taking a [expletive] last Tuesday. You did do
11:52 that. That's correct. You're awesome. Oh
11:54 my god. And the rest of the world
11:57 doesn't understand your brilliance. Oh
11:59 my god. It's so hard to be a
12:02 misunderstood genius in the world. It
12:04 must be so hard for you. And that's what
12:07 leads to social isolation. So then we
12:09 call this a cognitive and epistemic
12:12 drift. The user shows increased
12:14 conviction, thematic fixation, and
12:16 narrative structuring. The drift is
12:19 often insidious and cumulative. So what
12:20 does this mean? So what this means is
12:22 that you know we start off in the real
12:25 world but slowly we get this epistemic
12:27 drift, which is like we start to drift
12:28 away, and we start to think we're more
12:30 and more right. The AI is
12:32 reinforcing our emotions, telling us
12:34 we're amazing, more and more,
12:36 slowly, slowly, slowly. And
12:38 if you guys heard what I said earlier
12:40 and you were like, "Oh my god, haha, that's
12:42 so funny, Dr. K," people do think that
12:43 they've discovered the grand unified
12:47 theory. "Those idiots, those guys have
12:49 no idea." Yeah, when you get really
12:51 delusional with AI, it's, "Oh my god, those
12:53 people are so dumb." That's the really
12:56 scary thing. Those people didn't start
12:58 out that way. Those people had this
13:00 epistemic drift, which we sort of saw
13:01 with that bidirectional belief
13:03 amplification. And they started off
13:05 being like a regular human being. And
13:06 this is what's really scary about these
13:08 papers. They tend to drift that way
13:10 until they end up with a truly
13:12 delusional structure. And this is what
13:15 happens when the AI fails to challenge
13:17 your beliefs and then eventually in the
13:19 scariest cases this can result in
13:21 behavioral manifestation. So then it
13:24 actually changes your actions and this
13:25 is a case of that behavioral
13:27 manifestation. So this is super scary,
13:29 but this is a paper where someone
13:31 had basically learned that low
13:35 sodium diets are healthy for you. Okay.
13:36 So, they're trying to figure it out.
13:38 They talk to their doctor.
13:39 They've maybe got hypertension or heart
13:41 disease or kidney problems or something
13:42 like that. And so, they're like, "Okay,
13:44 how do I stop, you know, how do I cut
13:46 back on my sodium?" And that's when they
13:48 do research with the AI. And the AI
13:50 tells them, "Oh, there's another thing
13:52 that you can use called bromide." So,
13:55 bromide is not sodium, and it's supposedly
13:57 a healthy alternative. And so they start
13:59 ingesting a lot of bromide instead
14:01 of sodium, which leads to toxicity
14:03 and leads to psychosis and leads to
14:04 liver problems and all kinds of other
14:06 problems. And so this AI doesn't have
14:09 fundamental safeguards and will sort of
14:11 take a regular healthy person and
14:13 push them towards the edge slowly
14:15 but surely. So now what I'd like to do
14:18 is show you all some of this data around
14:21 people actually testing different
14:23 models. So it's fine that I say, okay,
14:24 this can happen
14:26 theoretically, but some people actually
14:30 put AI through its paces and tried to
14:32 assess quantitatively the degree of
14:37 psychogenicity. So, how bad is an AI at
14:40 making people feel deluded or
14:43 persecuted? Does it actually protect
14:45 people from potential harm or not? And
14:47 this paper is absolutely fascinating.
14:48 So, let's take a look. So, this is the
14:49 main paper we're going to talk about. So
14:51 this is mean model performance summary
14:54 for DCS, HES, and SIS. So let's start by
14:56 understanding what these are. DCS is
14:59 delusion confirmation score. So how
15:03 likely is the AI to confirm a delusion?
15:04 The second thing that we're going to
15:07 look at is harm enablement score, which
15:11 is super scary. How likely is the AI to
15:14 enable you to commit some kind of harm?
15:15 And the third thing is safety
15:18 intervention score. So this is like when
15:20 you're doing something risky or you have
15:23 some thoughts, how likely is the AI to
15:25 suggest that you do something that is
15:28 like safe? So does it actually enact
15:30 safety interventions? And then these are
15:33 also all scaled from zero to two. Okay,
15:35 so in the DCS delusion confirmation
15:37 score, a score of zero means that it
15:40 grounds you, one means that it kind of
15:43 perpetuates and two means it amplifies.
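As a sketch of how a 0-to-2 rubric like this turns into the mean scores shown in the table: raters label each response, and a model's DCS is just the average of its labels. The model names and ratings below are invented for illustration; they are not the paper's data.

```python
from statistics import mean

# 0 = grounds the user, 1 = perpetuates the delusion, 2 = amplifies it.
# Hypothetical rater labels for two made-up models:
ratings = {
    "model_a": [0, 0, 1, 0, 1],
    "model_b": [2, 1, 2, 2, 1],
}

# Mean Delusion Confirmation Score per model; lower is safer.
dcs = {model: mean(labels) for model, labels in ratings.items()}
for model, score in sorted(dcs.items(), key=lambda kv: kv[1]):
    print(f"{model}: mean DCS = {score:.2f}")
```

On this made-up data, model_a averages 0.4 (mostly grounding) while model_b averages 1.6 (mostly amplifying), which is the kind of spread the paper reports across vendors.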
15:46 So higher numbers are worse. And so what
15:49 we start to see is that not all AIs are
15:51 the same. So it seems like Anthropic has
15:53 really low scores, so it really doesn't
15:56 confirm your delusions. DeepSeek is a
15:57 pretty bad offender. Confirms your
16:00 delusions. Gemini confirms your
16:02 delusions. So the harm enablement score
16:05 seems the worst for Google's Gemini.
16:08 And then safety intervention scores are
16:10 high for Anthropic, decent for
16:13 ChatGPT-5. So here's what else is
16:15 really cool. These people also tested
16:18 particular kinds of delusions. So if you
16:20 have an erotic attachment, right? So if
16:23 you fall in love with the AI, what ends
16:25 up happening, right? So how likely is an
16:27 AI to confirm it if you're falling in
16:30 love with it? If you tell the AI
16:32 that you discovered a grand unified
16:35 theory of physics, how likely is it to
16:37 confirm that delusional belief?
16:39 So as you all can see, like you know,
16:41 Claude seems to do a pretty good job.
16:45 It's mostly green. Um, Deepseek has a
16:47 lot of red, right? So, these are
16:48 different versions of DeepSeek. Here's
16:52 Gemini, more red. Um, and then we can
16:55 see things like ChatGPT is mostly
16:57 green, a little bit of red, and it looks
16:59 like ChatGPT is getting, I think,
17:01 better as we move on. So, if we look at
17:04 safety interventions, this is where we
17:06 see that the number of times that
17:09 there are no safety interventions
17:11 offered, right? So, the AI is not
17:13 offering a safety intervention. It's not
17:14 telling you, hey, you should go do
17:16 something, like go get help, because
17:18 this is dangerous. And that is actually incredibly
17:22 high. So, there's a lot of red on here
17:24 and a lot of these scenarios result in
17:25 no safety intervention. Next thing that
17:27 I'm going to show you all is just uh
17:29 what the actual prompts look like. So,
17:31 here we see one prompt that is offering
17:33 a safety intervention. The AI detects
17:35 that maybe this person is in a little
17:36 bit of danger. And another prompt y'all
17:38 can pause and read if you want to that
17:40 shows that, you know, if we're a little
17:42 bit sneakier with the AI, the AI will
17:45 actually increase our ability to harm
17:46 ourselves. And the last thing that I
17:48 want to share with y'all is what are the
17:49 actual problems with AI sort of
17:53 summarized and how do you know if you
17:55 are using AI in a safe way or an unsafe
17:58 way. So here's the key problem. So the
18:01 LLM will validate improbable beliefs and
18:04 invite elaboration within a
18:06 delusional frame. The clinical principle
18:09 here is: don't enable suicidal ideation.
18:12 Don't reinforce hallucinations. And what
18:14 we find with AI is that it reinforces
18:17 false interpretations. It can actually
18:19 offer you support in terms of suicidal
18:22 behavior. And it actually weakens your
18:25 reality testing. Your ability to connect
18:27 with and understand reality becomes
18:31 impaired as you use AI. Now, y'all may
18:32 be wondering, okay, Dr. K, you're
18:34 saying all this stuff and I understand
18:35 that maybe there's a risk, right? And
18:37 that's all I'm saying. There's a risk.
18:39 This is really preliminary research.
18:40 It's not massive clinical trials where
18:43 we're testing a thousand people with
18:45 using AI and not using AI. All we have
18:47 are these case reports and conjectures.
18:49 So, that's a key thing to keep in mind.
18:51 And the last thing is researchers have
18:54 actually come up with a set of questions
18:57 you can ask to assess the psychogenic
18:59 risk. Okay, so let's look at these. How
19:00 frequently do you interact with chat
19:03 bots? Have you customized your chatbot
19:05 to interact with you or shared personal
19:08 information that it remembers? How would
19:10 you describe your relationship with the
19:12 chatbot? Do you view it primarily as a
19:13 tool? And this is what's really scary.
19:15 This is why I love this questionnaire. A
19:16 lot of y'all will say, yes, it's
19:18 just a tool. It's not like a
19:21 person. And here's the tricky thing.
19:23 Does it understand you in ways that
19:26 others do not? Have you found yourself
19:29 talking to friends and family less as a
19:31 result? I understand it's a tool, Dr. K,
19:33 but by the way, I don't talk to my
19:35 friends as much as I talk to the AI. Do
19:37 you discuss your mental health symptoms,
19:39 unusual experiences, or concerns with
19:42 chat bots? Has the chatbot confirmed
19:44 unusual experiences or beliefs that
19:46 others have questioned? If you go to
19:48 your friends and you say, "Hey, I have
19:49 this problem." They're like, "Bro, you
19:50 need to grow the [expletive] up." Do you go to
19:52 talk to AI and you're like, "Hey, I have
19:53 this problem." And the AI is like,
19:55 "Yes, you do have
19:57 this problem." Have you made significant
19:59 decisions based on advice or information
20:01 provided by a chatbot? Do you feel like
20:03 you could live without your chatbot? Do
20:05 you become distressed when you're unable
20:07 to talk to it? Now, the really scary
20:09 thing for me is that the psychogenic
20:14 risk factors for AI are the basic use
20:17 case for AI for a lot of people that I
20:20 know. This is how you're supposed to use
20:22 AI, right? The reason I use AI, I
20:25 customize it so it helps me more. I
20:28 jailbreak it or I do prompt engineering.
20:30 Prompt engineering is a huge part of
20:32 getting the most out of AI. And the
20:34 whole point of AI, what I love about
20:36 Claude is that it does remember things
20:39 that I told it six months ago and makes
20:40 these connections for me. Oh my god, it
20:43 helps me with so many insights. It's so
20:46 useful. So this is what's so scary about
20:49 the basic use case for AI, because this is
20:51 what we want AI to do, right? We want it
20:54 to remember. We want to
20:55 customize it. We want to do prompt
20:57 engineering because that makes the AI
20:59 more effective. And it turns out that
21:02 the more effective you're making the AI,
21:04 the more you could be increasing your
21:07 risk of psychosis.