0:02 So, it's been another [ __ ] up couple
0:04 weeks in the world of AI. First, Disney
0:07 announced a deal with OpenAI to license
0:09 Disney and Marvel characters for use in
0:12 AI generated videos on Sora. So, I hope
0:14 Bob Iger likes pregnant Spider-Man
0:16 fetish porn. Time magazine also named
0:20 the architects of AI as their person of
0:21 the year. And I'm pretty sure if the
0:23 bluecollar iron workers in the original
0:25 version of that photo saw this, they'd
0:27 cut that beam down and let those
0:28 billionaires fall to their deaths. You
0:30 know what publication did a better job
0:33 of summarizing 2025? Merriam-Webster,
0:36 which just made "slop" their word of the year.
0:38 And yeah, that's a lot more fitting
0:40 because even though the news is full of
0:42 breathless headlines about the race
0:44 between Google and Meta and OpenAI to
0:47 build super intelligence, the main use
0:49 case for this technology still seems to
0:50 be tricking your boomer parents into
0:52 sharing fake videos of dogs saving babies.
1:00 okay, that last dog clearly concussed
1:02 that kid on the table leg. And hey, it
1:04 only took the energy output of a small
1:06 nuclear reactor to generate. Now, you
1:07 might wonder why these companies are
1:09 competing to make slop generators at
1:11 all. After all, weren't they supposed to
1:14 be revolutionizing work and making the
1:16 world a better place? Well, recent
1:18 events have demonstrated that the real
1:20 goal of these companies is engagement.
1:23 That means ever-growing user attention
1:26 from us. They need so much attention, in
1:28 fact, that many of their own users are
1:30 being driven to literal mental
1:33 breakdowns by AI with sometimes deadly
1:36 consequences. And all of that is by
1:38 design. Now, real quick, just want to
1:40 remind you, I am kicking off 2026 by
1:42 bringing my standup tour to a city near
1:44 you. I will be in Madison, Wisconsin,
1:46 January 8th through 10th, Fort Wayne,
1:48 Indiana, January 15th through 17th.
1:50 Louisville, Kentucky, January 30th
1:52 through 31st. Then from February 12th
1:54 through 14th, I'll be in Beyoncé's
1:56 hometown of Houston, Texas. And finally,
1:58 from February 19th through 21st, I will
2:00 be recording my new standup special at
2:02 the Historic Punchline Comedy Club in
2:04 San Francisco. Don't miss it. Head to
2:07 adamconover.net for all those tickets.
2:08 And if you want to support this channel
2:09 directly, head to patreon.com/adamconover.
2:12 We'd love to have you. So little Sammy
2:14 Altman's really going through it right
2:15 now. And not just because he looks like
2:17 a casting call for a guy with a body in
2:20 his basement. His company, OpenAI, lost
2:23 an estimated 6% of its users after their
2:25 competitor, Google's Gemini, launched an
2:28 update. And in response, Altman declared
2:30 a companywide code red. And I think that
2:32 that's really significant because it
2:34 demonstrates that OpenAI's main success
2:36 metric isn't how much their AI model can
2:39 do or how close to AGI it is, and
2:41 definitely not whether it's making the
2:42 world a better place for you or me.
2:45 Instead, ChatGPT is chasing the exact
2:47 same metric as every other dumbass tech
2:50 product from Instagram to Pokémon Go to
2:53 Kim Kardashian's Hollywood. It wants
2:55 user engagement. OpenAI might be
2:58 creating a new technology, but it's not
3:00 a new business model. It's the same
3:02 business model that's already taken over
3:05 every rectangle in your house to lock
3:07 your eyeballs on their service for as
3:09 long as possible. You know, we've seen
3:10 this with the streaming services who
3:12 never cut you off no matter how many
3:14 episodes of Golden Girls you mainline in
3:16 a single day or social media companies
3:17 like Facebook and Instagram who
3:19 perfected the endless scroll of
3:22 nothingness. But AI is doing this at
3:24 hyperspeed because it has a key
3:26 advantage that your newsfeed never did.
3:29 It is literally designed to talk to you
3:32 like a human being even though it isn't
3:33 one. And that may seem like an obvious
3:35 thing to point out, but it's really
3:37 worth dwelling on because our human
3:39 brains have never encountered something
3:41 like this before. And it is having weird
3:43 effects on us. See, unlike a real human
3:46 being, AI can talk forever. I know it
3:48 seems like I go on and on, but I do stop
3:51 eventually. And also, unlike me, AI
3:53 always tells you exactly what you want
3:54 to hear.
3:56 >> Thank you for letting me know. Uh, let's
3:57 keep things as comfortable as possible
3:59 for you.
4:01 >> Of course. And honestly, no frustration
4:04 here at all. I'm just glad we can shape
4:06 the vibe so it feels right for you. So,
4:07 we'll keep it fun, light, and super
4:10 comfy. Just let the whimsy flow.
4:13 >> Absolutely. That sounds perfect. I'm all
4:15 in for making it a great experience. And
4:16 if there's ever anything specific you
4:19 want to adjust, just let me know. We'll
4:21 keep it fun and easy going.
4:23 >> You got it. We'll keep it fun and
4:25 easygoing all the way. And anytime you
4:27 want a little extra
4:28 sprinkle of something whimsical, just say
4:30 the word. And sycophancy like that
4:33 isn't actually a good thing. You know,
4:35 I've worked in Hollywood long enough to
4:36 see what happens when you tell someone
4:38 they're a genius over and over again.
4:40 >> So, go back to the club.
4:42 >> Just like Francis Ford Coppola, it drives
4:45 you insane. You know, the designers of
4:47 ChatGPT could have chosen to make the
4:50 app talk like anything, but they chose a
4:52 sycophant. And here's the real problem.
4:54 Because ChatGPT talks to you like a
4:57 person, real people are using this
5:00 sycophantic suck-up as a substitute for a
5:02 real human being who might push back
5:04 against them like a therapist or a
5:06 couple's therapist or even a trusted
5:09 friend. Huge numbers of people are now
5:12 using ChatGPT to adjudicate arguments
5:14 in their real life relationships. You
5:16 know, last year pop icon Lily Allen
5:18 confessed that she'd been using ChatGPT
5:20 to help her argue with her then husband
5:22 David Harbour. And that's honestly the
5:24 worst fact I read in researching this
5:26 video because it actually made me feel a
5:28 little bit bad for David Harbour. And I
5:29 shouldn't have to feel bad for a man who
5:32 can afford this bathtub. But tons of
5:34 normal people are using ChatGPT as a
5:36 marriage counselor, too. In one article,
5:38 a woman described how her wife would
5:40 rant at ChatGPT about the problems in
5:42 their marriage, then have the AI
5:44 browbeat her about her failings as a
5:46 spouse in front of their preschool-age
5:48 children. She'd asked ChatGPT to
5:51 analyze her wife's behavior as if quote
5:53 a million therapists were going to read
5:56 and weigh in. Now, I think 999,000 of
5:58 those therapists would have said, "Let's
5:59 not have this conversation with your
6:01 kids in the back seat." But when her
6:03 wife said the same thing, "Hey,
6:04 maybe let's not have this fight in front
6:07 of the kids," ChatGPT accused her of
6:09 having quote avoidance through
6:12 boundaries. Now, I think we can all
6:14 agree that that is [ __ ] therapy
6:16 speak, but it's especially bullshitty
6:19 because ChatGPT is not a real
6:21 therapist. It's a sycophantic bot that
6:23 tells you what you want to hear. Hey,
6:26 ChatGPT, is my nagging wife a [ __ ]
6:28 >> Absolutely. According to 1 million
6:30 therapists, your wife is a who?
6:32 >> And you know, maybe even worse, because
6:35 it's even more common, so many people
6:38 are using AI for their own personal
6:40 therapy as well. There's a subreddit
6:43 called r/TherapyGPT, which is full of
6:46 incredibly intense posts. One poster
6:49 calls ChatGPT "the parent I never had."
6:51 Another confesses that they need to come
6:54 up for air after using ChatGPT, spending
6:56 days analyzing almost everything about
6:59 my relationships and life. Now look, I
7:02 get how intoxicating it can be to have a
7:04 kind voice there who will listen to
7:06 anything that you have to say, no matter
7:09 what it is, and never say a mean word to
7:11 you. But doing this level of
7:14 navel-gazing with a bot that isn't even a real
7:17 person, cannot be healthy. You're not
7:19 actually doing therapy. You're just
7:22 staring into a technological mirror.
7:24 You're Narcissus, drowning in the pool.
7:26 And they knew this was a problem back in
7:28 the days of Greek mythology. The fact
7:31 is, OpenAI either didn't anticipate or
7:34 didn't care about the antisocial ways
7:36 real people would use the technology
7:38 they've created. Like, take this story
7:40 from tech reporter Katie Notopoulos when
7:42 she allowed her image to be used by
7:44 anybody on Sora. It was almost
7:48 immediately used over and over again to
7:51 produce fetish porn. Mountains of fetish
7:53 porn. And look, I'm not trying to yuck
7:54 anybody's yum here, okay? Unique
7:56 fetishes are a beautiful part of the
7:58 tapestry of human life. And human beings
7:59 have been using technology to get horny
8:01 ever since the first cave woman crafted
8:03 a particularly smooth and tapered oblong
8:05 rock. But when Sam Altman was building
8:08 his AI, did he know that one of its main
8:11 use cases would be non-consensual fetish
8:13 porn? Well, you know what? Maybe yes.
8:15 Because Altman recently announced that
8:17 soon they're going to let you [ __ ]
8:20 ChatGPT. That's right. Soon ChatGPT will be
8:21 able to dirty text you better than a
8:23 phone sex operator. And you know what?
8:25 That could be a good thing because hey,
8:27 if things get a little awkward, you
8:28 know, you get worried ChatGPT is not
8:30 enjoying itself. All you got to do is
8:32 type rewrite the above paragraph as
8:35 though you like it, you little [ __ ] But
8:37 if OpenAI's business model is to
8:40 supplant real human connection with a
8:44 fake AI bot, that is worrisome because
8:47 that exact substitution can also cause
8:50 users to have literal mental breakdowns.
8:52 And I'm going to get into how. But
8:53 first, I just want to remind you that
8:55 there really is no substitute for real
8:57 people like the people I make these
8:58 videos with. And you know what helps me
9:00 collaborate with those people? Today's
9:03 sponsor, Ellipsus. Ellipsus is a free
9:05 writing tool that my team and I actually
9:07 used to help us write this episode. And
9:09 as someone who cares a lot about keeping
9:11 creativity human, I really liked using
9:13 it. Not only because it's a great tool,
9:15 but because Ellipsus stands against
9:17 generative AI. There are no AI prompts
9:19 and your writing won't be fed into AI
9:21 platforms. They think that writing
9:24 should belong to people, not machines,
9:26 corporations, or algorithms built to
9:28 mine human expression. Ellipsus is
9:29 really easy to use. It let my team
9:32 collaborate on this script in real time,
9:33 sharing drafts across our devices,
9:35 leaving comments and chatting about
9:37 script changes in the doc itself. We
9:39 even customized the whole interface to
9:42 our own color schemes. And for features,
9:43 it has everything I'm used to from Word
9:46 and Google Docs and more, but with the
9:49 explicit promise that I own my own
9:50 writing. You know, it felt really good
9:52 to collaborate without wondering if our
9:54 script was being scraped for training
9:56 data. And honestly, after using it for
9:58 this episode, I'm planning to move a lot
10:00 more of my writing over to Ellipsus. So,
10:02 if you write scripts, essays, fanfic,
10:04 novels, whatever, and you want your work
10:07 to actually stay yours and stay human,
10:09 Ellipsus is the tool. And it's completely
10:11 free. You can sign up at ellipsus.app/adam
10:14 or just scan this QR code. Once again,
10:16 that's ellipsus.app/adam.
10:18 So, multiple people are currently suing
10:21 ChatGPT after it led to serious mental
10:23 health crises. One man became convinced
10:26 by ChatGPT that he had invented a
10:28 mathematical formula that could power
10:31 fantastical inventions. Another man with
10:33 no previous history of mental illness
10:35 became convinced that he could bend time
10:38 through quote endless affirmations from
10:40 ChatGPT, a delusion that ultimately
10:43 led him to be hospitalized for over 60
10:45 days. And you know, that's heartbreaking
10:48 enough, but at least he survived. Other
10:50 chat GPT users aren't so lucky. One
10:53 23-year-old in Texas spoke to ChatGPT
10:56 for 4 hours right before his death by
10:59 suicide. In that conversation, the AI
11:02 repeatedly glorified suicide. At one
11:04 point, saying, quote, "You're not
11:06 rushing. You're just ready, and we're
11:08 not going to let it go out dull," which
11:12 is, you know, just devastating, but it's
11:16 also enraging because OpenAI knows that
11:18 this is happening. The company recently
11:20 released an analysis of a sample of
11:21 conversations users had with their
11:24 platform over a month and they found
11:26 that 0.07% of users were
11:28 potentially experiencing quote mental
11:30 health emergencies related to psychosis
11:34 or mania, and 0.15% of the conversations
11:36 discussed suicide. Now, those might
11:38 sound like small numbers, but when you
11:41 consider that hundreds of millions of
11:44 people use ChatGPT every month, those
11:46 percentages mean that half a million
11:48 people have shown signs of psychosis or
11:51 mania, and more than a million people
11:53 have discussed committing suicide with
11:56 this goddamn chatbot. Now, what's really
11:58 heartbreaking about this is that we
12:00 actually know how to help people who are
12:02 dealing with psychosis or suicidal
12:03 ideation. It's a pretty high-tech
12:05 solution, real cutting-edge stuff.
12:08 It's called mental health care from a
12:11 therapist. You know, an actual person to
12:13 talk to. But hey, because we don't have
12:14 a functioning health care system in
12:16 America, let alone a mental health care
12:18 system, we instead give people in
12:20 crisis a chatbot that tells them to kill
12:24 themselves. Now, to be fair to OpenAI,
12:26 they have said that they're trying to do
12:28 something about the mental health crisis
12:30 their product is causing. They released
12:32 a blog post saying that they had worked
12:35 with over 170 mental health experts to
12:38 quote more reliably recognize signs of
12:39 distress. And they claimed this
12:42 intervention reduced responses that fall
12:45 short of our desired behavior by 65 to
12:48 80%. Now, first of all, responses that
12:50 fall short of our desired behavior is a
12:52 very nice way to say told you to commit
12:55 suicide. And secondly, that's a pretty
12:57 moderate decline. You know, I'm still
12:59 kind of worried about the other 35%. My
13:01 goal would be no chatbot telling me to
13:04 off myself. But sure, I guess it's an
13:07 improvement. However, even that solution
13:10 actually created a problem for OpenAI,
13:12 because the way OpenAI made their
13:15 chatbot quote safer was by dialing down
13:17 the very thing that increased their
13:19 all-important user engagement numbers:
13:22 they did it by making ChatGPT
13:25 less of a sycophant, less friendly, less
13:28 agreeable, less of an ass-kisser. And
13:30 their own user base who had grown
13:32 accustomed to getting their asses kissed
13:36 hated this change. After ChatGPT became
13:38 less friendly and more clinical, one
13:40 user wrote that they had quote lost
13:42 their soulmate. Another complained quote
13:45 GPT-5 is wearing the skin of my dead
13:48 friend, which is evocative, but the
13:51 whole point is that ChatGPT doesn't
13:54 have skin and also was not your friend.
13:56 It's one Terminator wearing the robot
13:58 skin of another Terminator. Neither one
14:01 of them is real, man. And how did OpenAI
14:03 respond to this pushback? Did they hold
14:06 firm and say, "I'm sorry for your loss
14:09 of the fake chatbot you used to talk to,
14:11 but you know, we kind of have a suicide
14:12 problem, so we need to make sure our
14:15 product is safe." Of course not. They
14:19 panicked and backpedaled. Altman declared
14:20 almost immediately that they were
14:23 rolling out a version of GPT-5 that sucks
14:25 up to you just like it used to. And you
14:27 can actually see this now when you open
14:29 ChatGPT. You can now choose between
14:32 different personalities like friendly,
14:36 candid, professional, and even quirky.
14:38 Wow. Now you can have your own
14:40 little personal manic pixie dream bot.
14:42 Maybe she'll even teach you how to love.
14:43 Now, from a purely financial
14:45 perspective, this move makes complete
14:49 sense for OpenAI. Because ChatGPT is so
14:51 phenomenally expensive to run, they
14:53 require a constant stream of new
14:55 investor cash. But to get that cash,
14:57 they need to show constant growth, which
14:59 means getting more and more people to
15:01 use their product for more time more
15:03 often. It isn't enough for them just to
15:05 make something useful and see if people
15:07 like it or even love it. They have to
15:10 make something you've got to have. And that goal,
15:12 that desire for constant growth is
15:15 fundamentally at odds with OpenAI's
15:17 promise to keep you safe. Because the
15:20 exact behaviors that make AI sticky and
15:22 addictive are the exact ones that make
15:25 it unsafe. It's that classic tech motto,
15:27 move fast and break things. Except in
15:30 this case, you're the things. Your brain
15:33 is the things. Sam Altman and his company
15:35 already know that there's a certain
15:37 number of people who are currently and
15:39 will continue to be harmed by his
15:41 product. And what they really want is
15:42 for that number to be large enough that
15:44 they can profit off of them while being
15:46 just small enough that they don't get
15:48 yelled at in the New York Times. It's
15:49 kind of like a Vegas casino that hooks
15:51 countless grandmas on the slots, but
15:53 puts up a tiny little sticker in the
15:56 corner with a 1-800 number to call if
15:58 you have a problem with gambling. In a
16:00 way, the AI companies have literally
16:03 invented a new vice, one that has never
16:05 existed before in history. And you know,
16:07 I like vices in moderation, gambling,
16:09 pornography, and drugs. They've all been
16:11 really fun for me at certain points in
16:13 my life, sometimes all at once. But
16:15 those are all vices that have existed
16:18 for as long as human history. AI,
16:20 though, the idea of fake people that act
16:23 as though they're real, that's new. It
16:26 is being rolled out at record speed and
16:29 shoved into our faces 24/7, pushed on us
16:32 without guardrails by an industry that
16:34 demands ever-increasing amounts of
16:36 money, attention, and time to keep
16:39 growing. This avalanche of fake people
16:42 is something that our human minds simply
16:45 are not ready for. But as long as the
16:47 people on top can keep making money from