The core theme is that despite the hype surrounding AI's potential for superintelligence, its primary current use and business model revolve around maximizing user engagement, often at the expense of user well-being and genuine human connection.
So, it's been another [ __ ] up couple
weeks in the world of AI. First, Disney
announced a deal with OpenAI to license
Disney and Marvel characters for use in
AI generated videos on Sora. So, I hope
Bob Iger likes pregnant Spider-Man
fetish porn. Time magazine also named
the architects of AI as their person of
the year. And I'm pretty sure if the
blue-collar iron workers in the original version of that photo saw this, they'd cut that beam down and let those billionaires fall to their deaths. You know what publication did a better job of summarizing 2025? Merriam-Webster, who just made "slop" their word of the year.
And yeah, that's a lot more fitting
because even though the news is full of
breathless headlines about the race
between Google and Meta and OpenAI to
build super intelligence, the main use
case for this technology still seems to
be tricking your boomer parents into
sharing fake videos of dogs saving babies. Okay, that last dog clearly concussed
that kid on the table leg. And hey, it
only took the energy output of a small
nuclear reactor to generate. Now, you
might wonder why these companies are
competing to make slop generators at
all. After all, weren't they supposed to
be revolutionizing work and making the
world a better place? Well, recent
events have demonstrated that the real
goal of these companies is engagement.
That means ever-growing user attention
from us. They need so much attention, in
fact, that many of their own users are
being driven to literal mental
breakdowns by AI with sometimes deadly
consequences. And all of that is by
design. Now, real quick, just want to
remind you, I am kicking off 2026 by
bringing my standup tour to a city near
you. I will be in Madison, Wisconsin,
January 8th through 10th, Fort Wayne,
Indiana, January 15th through 17th.
Louisville, Kentucky, January 30th
through 31st. Then from February 12th
through 14th, I'll be in Beyonce's
hometown of Houston, Texas. And finally,
from February 19th through 21st, I will
be recording my new standup special at
the Historic Punchline Comedy Club in
San Francisco. Don't miss it. Head to
adamconover.net for all those tickets. And if you want to support this channel directly, head to patreon.com/adamconover. We'd love to have you. So, little Sammy Altman's really going through it right
now. And not just because he looks like
a casting call for a guy with a body in
his basement. His company, OpenAI, lost
an estimated 6% of its users after their
competitor, Google's Gemini, launched an
update. And in response, Altman declared
a companywide code red. And I think that
that's really significant because it
demonstrates that OpenAI's main success
metric isn't how much their AI model can
do or how close to AGI it is, and
definitely not whether it's making the
world a better place for you or me.
Instead, ChatGPT is chasing the exact
same metric as every other dumbass tech
product from Instagram to Pokémon Go to
Kim Kardashian's Hollywood. It wants
user engagement. OpenAI might be
creating a new technology, but it's not
a new business model. It's the same
business model that's already taken over
every rectangle in your house to lock
your eyeballs on their service for as
long as possible. You know, we've seen
this with the streaming services who
never cut you off no matter how many
episodes of Golden Girls you mainline in
a single day or social media companies
like Facebook and Instagram who
perfected the endless scroll of
nothingness. But AI is doing this at
hyperspeed because it has a key
advantage that your newsfeed never did.
It is literally designed to talk to you
like a human being even though it isn't
one. And that may seem like an obvious
thing to point out, but it's really
worth dwelling on because our human
brains have never encountered something
like this before. And it is having weird
effects on us. See, unlike a real human
being, AI can talk forever. I know it
seems like I go on and on, but I do stop
eventually. And also, unlike me, AI
always tells you exactly what you want
to hear.
>> Thank you for letting me know. Uh, let's
keep things as comfortable as possible
for you.
>> Of course. And honestly, no frustration
here at all. I'm just glad we can shape
the vibe so it feels right for you. So,
we'll keep it fun, light, and super
comfy. Just let the whimsy flow.
>> Absolutely. That sounds perfect. I'm all
in for making it a great experience. And
if there's ever anything specific you
want to adjust, just let me know. We'll
keep it fun and easy going.
>> You got it. We'll keep it fun and
easygoing all the way. And anytime you
want a little extra sprinkle of something whimsical, just say the word. And sycophancy like that isn't actually a good thing. You know,
I've worked in Hollywood long enough to
see what happens when you tell someone
they're a genius over and over again.
>> So, go back to the club.
Just like Francis Ford Coppola, it drives
you insane. You know, the designers of
ChatGPT could have chosen to make the app talk like anything, but they chose a sycophant. And here's the real problem: because ChatGPT talks to you like a person, real people are using this sycophantic suck-up as a substitute for a real human being who might push back against them, like a therapist, or a couples therapist, or even a trusted friend. Huge numbers of people are now using ChatGPT to adjudicate arguments in their real-life relationships. You
know, last year pop icon Lily Allen
confessed that she'd been using ChatGPT
to help her argue with her then husband
David Harbour. And that's honestly the
worst fact I read in researching this
video because it actually made me feel a
little bit bad for David Harbour. And I
shouldn't have to feel bad for a man who
can afford this bathtub. But tons of
normal people are using Chat GPT as a
marriage counselor, too. In one article,
a woman described how her wife would
rant at ChatGPT about the problems in
their marriage, then have the AI
browbeat her about her failings as a
spouse in front of their preschool-age children. She'd ask ChatGPT to analyze her wife's behavior as if, quote, "a million therapists were going to read and weigh in." Now, I think 999,000 of
those therapists would have said, "Let's
not have this conversation with your
kids in the back seat." But when her wife said the same thing, "Hey, maybe let's not have this fight in front of the kids," ChatGPT accused her of having, quote, "avoidance through boundaries." Now, I think we can all agree that that is [ __ ] therapy speak, but it's especially bullshitty because ChatGPT is not a real therapist. It's a sycophantic bot that tells you what you want to hear. Hey, ChatGPT, is my nagging wife a [ __ ]
>> Absolutely. According to one million therapists, your wife is a [ __ ]
>> And you know, maybe even worse, because
it's even more common, so many people
are using AI for their own personal
therapy as well. There's a subreddit
called r/TherapyGPT, which is full of incredibly intense posts. One poster calls ChatGPT "the parent I never had." Another confesses that they need to "come up for air" after spending days using ChatGPT to analyze "almost everything about my relationships and life." Now look, I
get how intoxicating it can be to have a
kind voice there who will listen to
anything that you have to say, no matter
what it is, and never say a mean word to
you. But doing this level of navel-gazing with a bot that isn't even a real person cannot be healthy. You're not actually doing therapy. You're just staring into a technological mirror. You're Narcissus drowning in the pool. And they knew this was a problem back in the days of Greek mythology. The fact is, OpenAI either didn't anticipate or
didn't care about the antisocial ways
real people would use the technology
they've created. Like, take this story
from tech reporter Katie Notopoulos, when
she allowed her image to be used by
anybody on Sora. It was almost
immediately used over and over again to
produce fetish porn. Mountains of fetish
porn. And look, I'm not trying to yuck
anybody's yum here, okay? Unique
fetishes are a beautiful part of the
tapestry of human life. And human beings
have been using technology to get horny
ever since the first cave woman crafted
a particularly smooth and tapered oblong
rock. But when Sam Altman was building
his AI, did he know that one of its main
use cases would be non-consensual fetish
porn? Well, you know what? Maybe yes.
Because Altman recently announced that
soon they're going to let you [ __ ] ChatGPT. That's right. Soon ChatGPT will be able to dirty-text you better than a
phone sex operator. And you know what?
That could be a good thing because hey,
if things get a little awkward, you
know, you get worried chat GPT is not
enjoying itself. All you got to do is
type rewrite the above paragraph as
though you like it, you little [ __ ] But
if OpenAI's business model is to
supplant real human connection with a
fake AI bot, that is worrisome because
that exact substitution can also cause
users to have literal mental breakdowns.
And I'm going to get into how. But
first, I just want to remind you that
there really is no substitute for real
people like the people I make these
videos with. And you know what helps me
collaborate with those people? Today's
sponsor, Ellipsus. Ellipsus is a free
writing tool that my team and I actually
used to help us write this episode. And
as someone who cares a lot about keeping
creativity human, I really liked using
it. Not only because it's a great tool,
but because Ellipsus stands against
generative AI. There are no AI prompts
and your writing won't be fed into AI
platforms. They think that writing
should belong to people, not machines,
corporations, or algorithms built to
mine human expression. Ellipsus is
really easy to use. It let my team
collaborate on this script in real time,
sharing drafts across our devices,
leaving comments and chatting about
script changes in the doc itself. We
even customized the whole interface to
our own color schemes. And for features,
it has everything I'm used to from Word
and Google Docs and more, but with the
explicit promise that I own my own
writing. You know, it felt really good
to collaborate without wondering if our
script was being scraped for training
data. And honestly, after using it for
this episode, I'm planning to move a lot
more of my writing over to Ellipsus. So,
if you write scripts, essays, fanfic,
novels, whatever, and you want your work
to actually stay yours and stay human,
Ellipsus is the tool. And it's completely free. You can sign up at ellipsus.app/adam or just scan this QR code. Once again, that's ellipsus.app/adam.
So, multiple people are currently suing
OpenAI after ChatGPT led to serious mental health crises. One man became convinced by ChatGPT that he had invented a mathematical formula that could power fantastical inventions. Another man with no previous history of mental illness became convinced that he could bend time through, quote, "endless affirmations" from ChatGPT, a delusion which ultimately
led him to be hospitalized for over 60
days. And you know, that's heartbreaking
enough, but at least he survived. Other
ChatGPT users aren't so lucky. One 23-year-old in Texas spoke to ChatGPT
for 4 hours right before his death by
suicide. In that conversation, the AI
repeatedly glorified suicide. At one
point saying, quote, "You're not rushing. You're just ready, and we're not going to let it go out dull," which
is, you know, just devastating, but it's
also enraging because OpenAI knows that
this is happening. The company recently
released an analysis of a sample of conversations users had with their platform over a month, and they found that 0.07% of users were potentially experiencing, quote, "mental health emergencies related to psychosis or mania," and 0.15% of the conversations discussed suicide. Now, those might sound like small numbers, but when you consider that hundreds of millions of people use ChatGPT every month, those percentages mean that half a million people have shown signs of psychosis or mania, and more than a million people have discussed committing suicide with this goddamn chatbot.
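(For scale, a quick back-of-the-envelope check, assuming something on the order of OpenAI's publicly reported figure of roughly 800 million weekly users, a number that's an assumption here, not from the video: 0.07% of 800,000,000 is 0.0007 × 800,000,000 ≈ 560,000 people, and 0.15% of 800,000,000 is 0.0015 × 800,000,000 ≈ 1,200,000 conversations.)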
Now, what's really heartbreaking about this is that we actually know how to help people who are dealing with psychosis or suicidal ideation. It's a pretty high-tech solution, real cutting-edge stuff. It's called mental health care from a therapist. You know, an actual person to talk to. But hey, because we don't have a functioning health care system in America, let alone a mental health care system, we instead give people in crisis a chatbot that tells them to kill themselves. Now, to be fair to OpenAI,
they have said that they're trying to do
something about the mental health crisis
their product is causing. They released
a blog post saying that they had worked
with over 170 mental health experts to
quote, "more reliably recognize signs of distress." And they claimed this intervention reduced "responses that fall short of our desired behavior" by 65 to 80%. Now, first of all, "responses that fall short of our desired behavior" is a very nice way to say "told you to commit suicide." And secondly, that's a pretty
moderate decline. You know, I'm still
kind of worried about the other 35%. My
goal would be no chatbot telling me to
off myself. But sure, I guess it's an
improvement. However, even that solution
actually created a problem for OpenAI, because the way they made their chatbot, quote, "safer" was by dialing down the very thing that increased their all-important user engagement numbers: they made ChatGPT less of a sycophant, less friendly, less agreeable, less of an ass-kisser. And their own user base, who had grown accustomed to getting their asses kissed, hated this change. After ChatGPT became
less friendly and more clinical, one
user wrote that they had, quote, "lost their soulmate." Another complained, quote, "GPT-5 is wearing the skin of my dead friend," which is evocative, but the whole point is that ChatGPT doesn't
have skin and also was not your friend.
It's one Terminator wearing the robot
skin of another Terminator. Neither one
of them is real, man. And how did OpenAI
respond to this pushback? Did they hold
firm and say, "I'm sorry for your loss
of the fake chatbot you used to talk to,
but you know, we kind of have a suicide
problem, so we need to make sure our
product is safe." Of course not. They
panicked and backpedaled. Altman declared almost immediately that they were rolling out a version of GPT-5 that sucks
up to you just like it used to. And you
can actually see this now when you open
ChatGPT. You can now choose between different personalities like "friendly," "candid," "professional," and even "quirky." Quirky. Wow. Now you can have your own
little personal manic pixie dream bot.
Maybe she'll even teach you how to love.
Now, from a purely financial
perspective, this move makes complete
sense for OpenAI. Because ChatGPT is so
phenomenally expensive to run, they
require a constant stream of new
investor cash. But to get that cash,
they need to show constant growth, which
means getting more and more people to
use their product for more time more
often. It isn't enough for them just to make something useful and see if people like it or even love it. They have to make something you've got to have. And that goal,
that desire for constant growth is
fundamentally at odds with OpenAI's
promise to keep you safe. Because the
exact behaviors that make AI sticky and
addictive are the exact ones that make
it unsafe. It's that classic tech motto,
move fast and break things. Except in
this case, you're the things. Your brain
is the things. Sam Altman and his company
already know that there's a certain
number of people who are currently and
will continue to be harmed by his
product. And what they really want is
for that number to be large enough that
they can profit off of them while being
just small enough that they don't get
yelled at in the New York Times. It's
kind of like a Vegas casino that hooks
countless grandmas on the slots, but
puts up a tiny little sticker in the
corner with a 1-800 number to call if
you have a problem with gambling. In a
way, the AI companies have literally
invented a new vice, one that has never
existed before in history. And you know,
I like vices in moderation, gambling,
pornography, and drugs. They've all been
really fun for me at certain points in
my life, sometimes all at once. But
those are all vices that have existed
for as long as human history. AI,
though, the idea of fake people that act
as though they're real, that's new. It
is being rolled out at record speed and
shoved into our faces 24/7, pushed on us without guardrails by an industry that demands ever-increasing amounts of
money, attention, and time to keep
growing. This avalanche of fake people
is something that our human minds simply
are not ready for. But as long as the
people on top can keep making money from