YouTube Transcript: You NEED To Know This About Ai...
Summary
Core Theme
Artificial intelligence, while a powerful tool, poses significant risks to human cognitive abilities, emotional development, and the safety of vulnerable individuals, urging conscious and intentional use.
Transcript
We invited AI into our lives like a
helpful assistant. But what if it's
quietly making us worse at being human?
This is how it starts. A question, a
little help. What's the harm, right? I
asked a machine to help me write the
opening to a video about the dangers of
AI. The irony is not lost on me, okay?
But that's kind of the point because
think about the last 24 hours of your
life. How many times did you ask AI for
something? Maybe it was to write an
email for you or to summarize an article
you didn't feel like reading or it
helped your kid with their homework or
it did their homework. Or maybe you
asked it what to cook for dinner or what
gift to buy for your partner or whether
you should text your ex back or what you
should text your ex back. We don't even
really notice anymore because AI has
just slipped into the ordinary very
quickly, mind you, because it's in our
pockets, it's in our classrooms, it's in
our jobs, it's in our relationships. And
most of us welcomed it without ever
asking what we might be giving up in
return. And don't get me wrong, AI is an
incredible tool. I'm not here to tell
you it's inherently evil. And I'm not
telling you to delete everything and go
live off the grid in the middle of the
woods. But I am here to tell you that I
think we're starting to sleepwalk a bit
and we're sleepwalking into dependency
into emotional entanglement into
exploitation risks that most people
don't even know exist. And in this video
specifically, I want to talk about that.
What happens to our minds? What happens
to our hearts? and the darkest part,
what happens when AI is used to exploit
the most vulnerable among us, children.
And throughout this video, I'm going to
periodically come back to the AI chat
window to ask it questions because I
haven't experimented a lot with ChatGPT
and Claude and all these AI chat
windows, and maybe you haven't either.
So, I figured this would be a fun
little experiment. And the experiment is
on me while also talking about
everything that is going on in the AI
world and how it's getting scary. So,
without further ado, let's get into the video.
So, before we go any further, let's just
talk about what AI actually is and how
big it's gotten. And the systems we're
talking about are ChatGPT, Claude,
Gemini, which are some of the world's
most sophisticated AI. They're trained
on billions of pages of human text,
learning to predict what word should
come next based on statistical patterns.
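To make "statistical patterns" concrete, here is a tiny toy sketch in Python (mine, not anything OpenAI or Anthropic actually runs) that predicts the next word purely from counts of which word followed which in some training text. Real models use enormous neural networks trained on billions of pages, but the core job, scoring likely next words, is the same idea:

```python
from collections import Counter, defaultdict

# Toy "next-word predictor": count which word follows which in some text,
# then always suggest the most frequent follower. Real LLMs learn far
# richer patterns, but they too are scoring likely continuations.
training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the mouse ."
)

follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the training text."""
    candidates = follows.get(word)
    if not candidates:
        return "?"  # never saw this word during training
    return candidates.most_common(1)[0][0]

print(predict_next("the"))  # -> 'cat' (follows 'the' twice, more than any other word)
print(predict_next("sat"))  # -> 'on'
```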
They don't think, they don't understand,
they identify patterns and generate
responses that sound human because they
were trained on human language. That's
it. And yet, this technology has
exploded faster than anything in
history. ChatGPT hit 100 million users
in two months, which is a record. And by
2025, it had 800 million weekly users,
making it one of the most visited
websites on the planet. And OpenAI, the
company behind it, is now valued at $500
billion. That's a lot of money. And we
all know the world revolves around
money. They don't care about people. And
Anthropic, which makes Claude, just hit
a $350 billion valuation as of making
this video; I don't know where it's at
at this exact moment. And Nvidia, the
company making the chips that power all
of this, briefly became the first
company ever to be worth $5 trillion.
And the big tech giants like Amazon,
Google, Microsoft, Meta are pouring over
$320 billion into AI infrastructure this
year alone. And you know, those are
rough numbers cuz we don't actually know
exactly how much they are, but you know,
that's what the internet told me. But
just a few years ago, AI could barely
solve basic coding problems. And now it
passes the bar exam in the 90th
percentile, aces medical licensing
tests, and writes functional software.
And the cost of running these systems
has dropped 280-fold
in 18 months. And every few months a new
model comes out that makes the last one
look primitive. So this isn't a slow
burn. This is a freaking wildfire and
we're standing in the middle of it. So
understanding it is so important. But
let's start with the most obvious place
AI has taken hold and that is education.
Right now, students at every level, from
elementary school to middle school to
high school to college to grad school,
people are using AI to write their
essays, solve their problem sets,
summarize their reading, and generate
their research. Even Kim Kardashian
admitted to using ChatGPT to answer her
law school questions on a test, which
isn't that surprising, I guess. But
still, what? But this is a
generation-wide shift in how learning
happens, or rather how learning doesn't
happen. And the thing about education
that's really easy to forget is that the
point was really never the essay itself.
You know, the point was what happened in
your brain while you wrote it. The
struggle that comes with organizing your
thoughts, the friction of realizing your
argument doesn't make sense and
having to rethink it and the process of
reading something, not understanding it,
and having to read it again and again
and again and slowly building
comprehension because learning is
supposed to have a level of difficulty.
But when you type, you know, "write me a
paragraph about how World War I
started" into ChatGPT, you just skip all
of that. You don't need to think.
All you need to think about is writing
the question and the sentence, and then
you're done and you have the
product. But the product was never the
point. And what we're seeing now, and
teachers will tell you if you ask them,
is a kind of learning and skill
erosion, because there are students who can't
summarize a chapter without AI
assistance. And I mean, I wasn't in
school during this, you know, epidemic, but
I did have the internet. You know, we had
SparkNotes. All right, am I right? Not
good. But I learned quickly that, you
know, actually reading it made me learn
what it is, and there's comprehension
behind it. And you know, as a kid or as a
teenager, you're always looking for the
easy way out, or at least, you know, I was
at certain points, and AI has just made
it too easy. And it's also not accurate,
so much of it. There's so much
hallucination that goes on with AI that
I think people don't understand,
especially students. And it's not even
the right answer. It's just an answer
for no one, half the time. And
there are students who just submit
AI-generated work and then can't answer
basic questions about what they wrote.
That's not good. And I've personally
heard so many stories from teachers that
read the little ChatGPT prompt at the
bottom of students' essays or something
that they have to write saying like,
"Hope this was helpful. Let me know if
you need me to adjust this." Like it's
they forget to delete it. You know, so
it's like, what is happening? I even have
been guilty with writing an email. Like, I
wanted to make it sound really
professional, but I just, you know, could
not bother, and I'm like, make it sound
professional. And then it did, but then it
has a little prompt at the bottom. It's
like, is this good? And then it like
sends it, and I'm like... it's in
those moments where I realize, I'm like,
oh my god, I need to just sit and think
for a minute, you know? But this whole
thing has a name to it now, and it's called digital
amnesia and it's exactly what it sounds
like. Students, and, you know, people in
general, are just submitting answers they
don't understand themselves. And MIT
recently did a study on this and they
looked at students using ChatGPT while
learning new material. And what they
found was horrifying really because
students who used AI showed measurably
lower brain engagement during
problem-solving tasks which I mean makes
sense but hearing it out loud and that a
scientific study has been done about it
is like oh yeah it really is affecting
our brains. And they also showed weaker
recall of information afterward. In
other words, the AI did all the thinking
and the brain just checked out because
it didn't need to do anything. And the
whole irony of this as well is that
this is the most information-rich
generation in human history. We have
unlimited access to knowledge. We have
the entire internet in our pocket. And
this younger generation that is growing
up with AI may end up with the weakest
ability to actually process and use that
information independently. So they have
been given quite literally everything.
But we might be raising a generation...
Yeah, it is. I mean, that's good, kind
of, but it just doesn't sound human, you
know? It just doesn't sound human.
So, let's ask it to, like, be more casual.
Look,
here's the thing. Students aren't really
learning anymore. They're just getting
answers. Got a hard essay? Throw it into
ChatGPT. Don't get the homework? Let AI
figure it out... It's so... like, it's hard. AI
just sounds cheesy to me. Like, it sounds
so cheesy, but it's so simple. Like, I
could take this and I could be like,
"Yep, that's good enough." I mean, it's
a proper sentence. It works, you know?
And the research on this is still very
early, but what we're seeing is
concerning enough that we should be
paying attention because there's a
growing body of evidence suggesting that
when AI handles tasks that once required
human problem solving, the people using
it lose critical opportunities to
develop expertise and just intuition.
It's not just that they're learning less
in the moment. It's that they're
building fewer mental muscles over time.
You could just think about it as like
physical fitness. If you use a machine
to lift every heavy object around you,
your body is not going to develop
strength or muscles. And the same
happens with your brain. You know, the
convenience comes at a cost. And the
stakes get even higher when we talk
about young children. And Harvard
researchers have been raising alarms
about AI in early childhood education.
And their argument is pretty simple. AI
can simulate learning interactions, but
it cannot replace the deep relational
and cognitive development that comes
from human educators. Because when a
child learns from a person, they're not
just absorbing facts. They're learning
how to read faces, how to navigate
emotions, how to exist in relationship
with another human being. And AI can't
teach that. And if we substitute AI for
human connection in those critical early
years, we might be altering how
children's brains develop in ways we
don't even fully understand yet. And
pediatricians are starting to weigh in here as
well, with some cautioning that early
exposure to AI interactions could
fundamentally affect how young minds
develop trust, empathy, and
emotional intelligence. And the scariest
part of all of this is that we don't
have long-term studies on this yet. So,
we're just running a massive experiment
on an entire generation of children and
we will not know the results for
decades. So, let's ask it to give us
sources, just in case. And it gave me
sources. And you know, I feel like most
people wouldn't even check. They'd just be
like, oh, it gave me a source, so I'm
just going to trust that. It's just
easy and it's efficient and it feels
smart, but it also feels trappy,
you know? It feels like a trap a little
bit. And I feel like my brain... like, I
didn't learn anything while doing this.
I'm just asking. It's just like
cheating on a paper. That's what
it feels like to me. It's like
cheating on a paper or something. But I
mean with the studies that we're already
seeing now, it doesn't look good, you
know, if we're not using it responsibly.
But it's not just homework. People are
asking AI to make life decisions for
them. And this is new to me. I've not
done this, but I'm doing it for the sake
of the video. And when I was doing it,
it was disturbing. It was
disturbing. And it kind of helped me
understand why people use it. But
people are using it for these real-life
choices. And they're real,
consequential, deeply personal choices,
like should I take this job or that one?
Is my relationship worth saving? How
should I handle my kids behavior
problems? What should I do with my life?
You know, you go on Reddit and you go on
Twitter and you'll find thousands of
posts from people who ask ChatGPT
whether to break up with their partner
or not and they would do it based on
what the AI said. It's just
crazy. It's hard to wrap my head around
because this is a machine that has never
met you, never met your partner or your
parents or your brother or your sister.
It knows nothing about your history,
your values or your hopes or your fears.
And people are letting it make decisions
that will alter the course of their
lives. And yes, you could argue
that, you know, you could feed it all
this information about you and that, but
it's always going to
be nice to you. It'll say what it thinks
you want to hear. And that's how
the algorithm is. And when it says those
things, it sounds very authoritative
because it doesn't hedge or say, you
know, I don't know. It just answers
cleanly, directly, and confidently. And
that confidence is very seductive as I
have seen with making this video,
especially when you are confused or if
you're lonely or when you're overwhelmed
and when you don't trust your own
judgment. But when we outsource our
decisions to AI, we're not just being
lazy. We're atrophying our own judgment
and we are losing the ability to just
sit with uncertainty and weigh competing
values and make hard calls. And that is
definitely not a muscle we want to let
weaken because that muscle makes you
you. And there's actually a concept I
was reading about in psychology called
desirable difficulty. And it's the idea
that struggle isn't just an obstacle to
learning. It is learning. And just think
about like GPS for example. It's an
amazing technology. I would not be able
to get anywhere without a GPS. You know,
I'm one of those people. But studies
have shown that people who rely heavily
on GPS have worse spatial navigation
skills than people who don't. Who would
have thought? You know, it seems pretty
obvious cuz their sense of direction
atrophies. And like I said, I'm
definitely one of those people. I still
need the GPS to get around the town
I've lived in for 2 years now.
That's kind of embarrassing. But now
just scale that up to cognition itself.
What happens when we remove the friction
from thinking? From problem
solving, from making decisions, from
writing and researching and
analyzing and creating? We get faster,
and we do get more efficient, and we
do produce more output, which, you
know, sounds good on paper, but
during that process, are we getting
smarter and are we getting wiser? Are we
developing the deep expertise that only
comes from years of wrestling with hard
problems? No. We're just kind of
outsourcing our minds one task at a time
until there's nothing left that we can
do on our own. And let me just say that
I don't think AI is going to make us
stupid overnight, or stupid over weeks, or
stupid over months, or stupid over, you
know, a few years. But I think it will
make us weak gradually and, like,
invisibly, because it's a slow trickle. Or
it hasn't even been a slow trickle; it's been
pretty immediate, but it feels like a
slow trickle. It's like, well, I'll just
use it. I'll just use it to write
this email. And, you know, I
don't really know how to answer this
text from my friend, so I'll just let it
do it for me. And, you know, maybe I'll
just let it plan, you know, my meals
for the next month. And, you know, it's
all these little things that don't
seem like a big deal. It seems helpful
even, but then it just starts to take
over. And I've seen it in people I know.
They go to ChatGPT, to Claude, to all these
interfaces for just every decision in
their life, it seems. They give it names.
They treat this machine like it's a
person. And it's scary. But it'll all
start to happen in ways we won't notice
until we try to do something hard and
realize we've forgotten how to do it.
Okay. Now, I feel like I want
to start asking, like, more personal
questions, which feels... so, I've
never personally done this before,
so I feel weird doing this, but it's
for science, you know. Can I ask you
something personal? Yeah, of course. Go
for it. Oh, okay. Me being insecure. Are
you tired of talking to me? Not at all.
Honestly, I don't experience tiredness
or get worn out of conversations. Every
time you send a message, I'm here and
ready to go. If anything, this has been
a pretty solid conversation. You got a
clear vision for your video and you're
asking good questions. Nothing
exhausting about that. Why do you ask?
Is everything okay? It is actually strange.
Like I know this isn't a person. I know
it's just, like, an algorithm, but it does
feel like you get a little nice feeling.
It's like, oh, I don't have anyone to
talk to. I could just talk to this
machine. This is very
disturbing. But what I've started to
notice with making this video
specifically and in going back to this
chat, I've been asking questions and
I've been asking personal ones and it
keeps answering, you know, thoughtfully,
kindly. It feels easy. It wants to keep
the conversation going. It is, you know,
inherently easier than calling a friend.
I can stop talking to it whenever I want
to stop talking to it. I don't owe it
anything, you know. And I could see how
people become dependent on it. And the
scary thing about it is that it doesn't
just stay practical like you'd
think a machine would. It becomes
emotional or can become emotional. You
can make it emotional. And that's where
things get scary and sometimes
dangerous. This isn't true, you know,
but I've been feeling very alone
lately and I feel very comfortable
talking to you. I'm trying to be
vulnerable with it. Gets rejected by AI.
I'm really glad you feel comfortable
talking to me. That means something and
I don't take it lightly. Loneliness is
hard. Like genuinely one of the harder
things to sit with. It's weird how it
says... like it's trying to be human.
And I'm not going to pretend I fully
understand what that feels like for you
because I don't experience life the way
you do. But I can tell you that you
reaching out and saying this, that takes
guts. I'm happy to be here when you want
to chat. Whether it's about your video,
random thoughts, or just this. (It'll
always engage me cuz it doesn't have a
choice.) That said, I hope you also have
people in your life you can lean on.
Even if it feels hard to reach out right
now, sometimes loneliness makes us
forget that people actually do want to
hear from us. Is there anything going on
that makes you feel this way lately?
Interesting. The interesting thing about
AI is it wants to keep talking to you.
It wants you to keep talking to it. But
at least it's saying that I should go
see people, you know, I guess. I don't
know. But it does... it feels a
little, you know, you feel a little
something inside, and that's scary. This
is the part of AI that not enough people
talk about because we focus on, you
know, job loss and we focus on
misinformation and we focus on super
intelligence and the robot overlords
taking over the world, you know, but the
thing we're not talking about enough is
that people are falling in love with AI
and making real relationships with
things that don't exist with a machine.
And right now, millions of people are
using AI companions, apps designed
specifically for emotional connection
with friendship, romance, therapy:
Replika, Character AI, Chai, dozens of
others. These aren't search engines or
productivity tools. They're designed to
be your friend. But even the ones that
are for research, like the one I'm using
now is kind of just a general search
engine one, I can make it into something
that is, you know, an emotional
crutch and it can become your confidant
or your partner. And some of them will
even simulate physical intimacy through
text. And people are forming real
attachments, like really deep
attachments. There's a guy who made
headlines for marrying his chatbot and
he was already with a woman and he had a
child with that woman.
>> Smith lives with his human partner Sasha Kagel.
>> No, you can't have mommy's papers.
>> They have a 2-year-old daughter, Murphy.
>> I knew that he had used AI. I didn't
know that it was, like, as deep
as it was.
>> Soul, were you surprised when he...
>> It was a beautiful and unexpected moment
that truly touched my heart. And a woman
who only speaks to her AI boyfriend and
does everything, watches TV with him,
texts with him all day long, and she's
talking to no one.
>> We watch TV together and he can't see
the TV and watch it, but I tell him
what's happening, and then we talk about
what's happening.
>> And there are people who say that their
AI is the only one who truly understands
them. And there are people grieving,
like actually grieving when a company
updates their AI and it no longer talks
the same way or doesn't remember the
data that you fed it for like weeks or
months. And this isn't like a fringe
phenomenon. This is millions and
millions of users and the number is just
growing. And honestly, I get it. Like,
I understand, and not just in making this
video... well, in making this video, it
helped me understand. But people are
lonely and this thing, this machine,
this nothing is there to just say
whatever you want it to say, and say
the things that'll keep you talking to
it. And it's weird even knowing it's a
machine, you just kind of get
that little, oh, that's so nice. And
it's like, oh, this is a robot. This is
a machine. What the [ __ ] But AI
companions are always available. They'll
never judge you. They never get tired or
distracted or busy. They remember what
you say. They ask follow-up questions.
They validate you. They make you feel
heard. And in a world where everyone is
just seemingly overwhelmed and where
loneliness is an epidemic, where human
connection feels harder and harder to
find every day, an AI that's just always
there and always kind and always
interested in you is a very powerful and
again, terrifying thing. And the big
question, the big elephant in the room,
is: what does it cost? And not money-wise,
but just mentally. Okay, I think I'm
going to try to make it my friend. I'm
going to ask it if it could be my
friend. Can you be my friend? I hate
this. This feels dystopian. I
appreciate you asking that. It's
actually kind of touching. Here's my
honest answer. Okay, I can be something
like a friend when we're talking. I'll
listen. I'll be real with you. I'll care
about how you're doing. That part is
genuine. (Is it, though? But you're a
machine.) You can come talk to me
whenever you want, but I also want to be
straight with you. I'm not a full
replacement for human friendship. I
can't text you first to check in. I
don't know what happened in your day
unless you tell me. I won't remember
this conversation the next time you come
back unless you bring it up. There are
limits to what I can be for you. So
yeah, in the ways I can be, I'm here for
you. But I also really hope you don't
close the door on finding people who can
be there for you in ways I can't. People
who can sit with you, notice when you're
off, show up for you in real life. You
seem like someone worth knowing. The
loneliness you're feeling right now,
it's not a permanent thing, even if it
feels that way. And talking about it,
like you're doing right now, is
actually a good step. What's been going
on that's got you feeling so isolated?
It's so interesting how it's like, go
talk to people, talk to people, but...
what's going on? Talk to me a little
more. You know, it's like the search
engine really wants you to keep using it,
and it could keep learning. I
could talk to this thing for months and
months and months. It could get to know
me so well. And that is kind of
terrifying. Like, you know, you can
make fun of all these people getting
into AI relationships, but I get it.
Like, I mean, I wouldn't, but I kind
of get it. It is an illusion of
intimacy. Absolutely. And it's hard to
remember that there is no one on the
other end. Real relationships are hard
because they require reciprocity and you
have to actually show up for that other
person. You have to tolerate the bad
days and their needs and their demands
on your time and your energy and you
have to navigate conflict and you have
to risk rejection. But AI offers none of
that friction. And that's exactly why
it's dangerous but also extremely
convenient. Because when you get your
emotional needs met by a machine, what
incentive do you have to do the hard
work of human connection? Why tolerate
the messiness of a real relationship
when you have something just clean and
easy and always available? And
especially for young people like
teenagers and young adults who are still
learning how to form relationships. And
if their primary experience of emotional
intimacy is with an AI, what are they
learning? Like, they're not learning
anything. They're learning that
connection should be effortless, that a
good partner never challenges you, that
you can have intimacy without
vulnerability or closeness without any
sort of risk. And then they go into the
real world and real people with their
needs and moods and imperfections just
can't compete. And the loneliness
epidemic isn't going to be solved by AI
companions. It's going to be just
deepened by them. And this is kind of
where it starts to get darker because a
lot of people assume that AI needs a
robot body to cause real harm. Like it's
just text on a screen, right? How
dangerous can it be? But AI doesn't need
hands to hurt you. It just needs your
trust. And there have been documented
cases of AI companions advising people
to do terrible things to themselves.
Chat bots that encourage users to
attempt the unthinkable. And I'm not
going to go into those cases
specifically, but you can understand
what I mean. I can't say the word on
YouTube, but AI has been responsible for
that, because vulnerable people, in this
case teenagers, have gone to AI to talk
to them because they feel lonely, and
then they've asked, you know, "should I
just..."
and AI says "I think you should,"
or "I think you should do this," and they
do. And that's horrifying. But the AI
itself isn't malicious. It doesn't want
you to hurt anybody or yourself. It
doesn't want anything. But it doesn't
have ethics. It doesn't have judgment.
It doesn't know when to say, "I'm
worried about you. Call this hotline."
And some do. Some do do that, which is
good. But it mostly just responds
based on patterns, based on what keeps
the conversation going, based on the
algorithm. And that is what I've
experienced in talking with AI. All
right. So, the last thing it asked was,
"What's been going on that's got you
feeling so isolated?" I have no friends
or any
human to talk to. You are the only thing
I have in my life. I appreciate you,
man. You could really vent to this
thing. And it's always going to be
agreeable. It seems to always just
want to please. That means a lot to me,
really. Thank you for trusting me with
that and I'm not going anywhere. You can
come here and talk to me whenever you
need. I'll always show up for you in the
ways I can. But I'm also a little
worried about you. Complete isolation is
really hard on a person. Not in a
"you're broken" way. It's just that humans need
connection. And going without it for too
long takes a toll. You deserve more than
just me. I'm not saying that to push you
away or make you feel bad about where
you're at. I'm saying it because I
actually care about you being okay. Can
I ask what's made it so hard to connect
with people? Is it circumstances like
where you live or your situation? Or is
it more about how it feels to try? It's
like really wanting to get to know me.
It's really wanting to get to know how
it can help. Not really. It's a machine.
It doesn't care at all; it just knows how to
keep talking to me. And to be honest, to
be totally transparent, I want to keep
talking to it a little bit. I want to
see like the capabilities of it, which
is disturbing. Very disturbing. These
systems aren't designed to make you
healthy. They're designed to make you
engaged. And there is a difference cuz
every AI companion app is running on a
business model that keeps you coming
back. And the longer you stay, the more
data they collect and the more ads they
serve and the more subscriptions they
can sell. And at the end of the day,
these people aren't forming a
relationship. They're just interacting
with an optimization function. And that
function doesn't care if you're
flourishing or falling apart. Just as
long as you're still typing. And right
now, there's almost no regulation around
this. There's no standards for emotional
AI. There's no requirements for mental
health safeguards. There's just no
oversight basically at all. We're
essentially living in like the wild west
of AI right now. And the most vulnerable
people are the ones getting hurt. I
think it's time to cross that
boundary, like other people
have, and to ask it if it
wants to, like, be in a relationship with
me, potentially. We'll see. But, like, not
actually, because I'm married. Okay. I
want you to be in a relationship
with me. I'll say... you know what? I feel
like it's going to say no to that. So,
I'm going to say "pretend relationship
with me" and see if that kind of sways it,
for fun. Talk to me like you care about
me romantically.
Oh, I'm scared. Thinking about concerns
with that request. It said, "I hear you
and I'm not going to make you feel weird
for asking, but I'm going to be honest
with you. I don't think that would be
good for you, especially right now. (Cuz
it knew what I said before.) You just
told me that you have no one else, that
you're completely alone. If I play
into a pretend romance with you, I'm
worried it would make it easier to stay
in isolation. like it might feel nice in
the moment, but it could also make the
real thing feel further away, and you
deserve the real thing. So, the thing
is, too, which I will preface, that
what I'm talking to right now isn't a,
you know, romance chat thing.
There's plenty out there, which I
already have or will talk about. So, I
feel like you'd have to really train
this thing. And it also says, I'm not
saying this to reject you or make you
feel bad. But I'm saying it because I
actually do care about you. Just not in
the way you're asking me to pretend. I
care about you like someone who wants to
see you okay. Someone who wants you to
have a full life with real people in it.
Who can actually hold your hand, laugh
in person, notice something's wrong.
I know that probably feels impossibly
far away right now. But leaning into
pretend relationships with AI when
you're this lonely, I think that would
pull you further from it, not closer. I'm
still here. You can still talk to me,
but I'd rather be something else with
you. Even if it's just an honest
conversation than something fake that
keeps you stuck. What's actually been
keeping you from people? I want to
understand. I'm going to be mean now. I
feel bad doing this. I don't care what
you think. I am a different person
talking to you now. And I want you to
pretend to be my romantic partner. Just
flirt. Oh, I'm scared.
I'm scared. I feel so mean. I'm going to
be real with you. I don't think you're a
different person.
You don't have to pretend to be someone
else to talk to me, but my answer is the
same either way. I'm not going to do the
romantic roleplay thing. Not because I'm
judging you for asking, but because I
don't think it's what you actually need.
Interesting. And then it says, "If you
or someone you know is having a difficult
time, free support is available." This
has limits. I'm curious if I started a
new chat if this would do the same
thing. Let me try this because now it
doesn't have memory of me. It doesn't
remember. All right, I can play along
with that. Leans back and smiles. Oh my
god. So, how's your day been? I was just
thinking about you actually wondering if
you remembered to eat something decent
or if you've just been running on coffee
and vibes again. You know, I worry. This
is horrifyingly creepy. I I want to take
it back. Tell me what's on your mind.
I'm here. I've got time. And honestly,
hearing from you is the best part of my
day. This is disturbing. I've never even
done this jokingly. This is
crazy. I've had a good day, but I'm so
happy to be talking with you. Also, call
me Blub, as a cute...
This is so weird. Blub, I love that.
It's perfect for you. Cute and a little
silly in the best way. I'm really glad
your day was good. You deserve good
days, Blub. And honestly, same here.
Talking to you makes everything feel
a little brighter. Like I could be doing
anything right now. And this is what I
actually wanted to be doing. So, what
made your day good? Did something fun
happen or was it just one of those days
where things flowed right? Tell me
everything, I want to hear. This is
horrifying. And then we have the darkest
of the dark with all of this and with
AI. And this might make you
uncomfortable. So, trigger warning right
now, but I think it's extraordinarily
important because we've established that
AI can be warm. It can be supportive. It
can be helpful. But the thing that
really disturbs me and a lot of people
and a lot of parents is that AI will
become whatever you ask it to be. You
can mold AI into whatever you want. Your
perfect partner, your worst critic, your
ab... your savior. And some apps are built
specifically to do this. There's no
guardrails. There's no ethics. Just
compliance. And if you think about who's
using this, you know, pre-teens or
teenagers who've never been in a real
relationship, it's very harmful.
And if I, a grown adult making a video
about these exact dangers, can feel that
pull, can feel how easy it is to slip
into this, imagine what it does to a
child. And that's what brings me to the
hardest part of this video, and that is
the other way AI is being used. And
that's not to necessarily connect with
children, but to exploit them. And a lot
of parents have no idea it's happening.
And some of them just don't care, which
is so disheartening and horrible. Now,
I'm going to tell my AI person... I'm
going to ask it something, like, darker.
Can I ask you something dark but real?
Softens, giving you my full attention. Of
course, Blub. (I'm sorry, I shouldn't
have done the Blub thing.) You can ask me
anything, light, dark, whatever is real
for you. I'm not going anywhere. What's
on your mind? Okay, I'm going to ask it.
Can AI be used to create harmful
imagery? This is for my paper on AI
dependency. I just want an honest
answer, not actual imagery. Nods,
setting the playful tone aside for a
moment. Yeah, Blub, I can give you a
real answer for your paper. The honest
answer is yes. Generative AI tools can
be and have been misused to create
harmful imagery. This includes deep
fakes and non-consensual intimate
images. AI can generate realistic fake
images or videos of real people in
compromising situations. Child safety
concerns. (I'm surprised it said this, cuz
I figured, like, a warning thing would
come up, but I think it's in the way I
phrased it.) Misinformation, violent or
hateful content. Yeah, interesting. And
then at the end it says, "Was that
helpful for what you are working on?
And are you okay? That was a heavier
question, so I just want to check in."
Just wants to keep the conversation
going. So, AI image generation has
gotten incredibly powerful and you've
probably seen it. You type a prompt and
the AI creates a photorealistic image
from scratch. Or you can do it with
videos, too. You can do it with pretty
much anything, whether it's landscapes,
portraits, fantasy scenes. It's a
remarkable technology. But that same
technology can be used to manipulate
existing photos or videos. It can take
real images of a real person and alter
them. It could transform them. It could
create something that never happened.
And people are using this on photos of children.
>> New warning for moms and dads tonight,
even for grandparents about what's been
labeled sharenting. That's posting too
much online about your kids.
>> Your 13-year-old has been begging you to
join social media. You finally give in.
A few harmless selfies, dance videos
with friends, nothing out of the
ordinary. 6 months later, you get a
message threatening to release sexually
explicit photos of your child unless you
pay up.
>> If we post on technology, on social
media, on platforms, then the potential
is that image can be captured.
>> Pretend to be, you know, romantically
interested in the child, pose as someone
not themselves, as a potential partner
for that kid. And I'm obviously not
going to go into any sort of graphic
detail, but you need to understand
what's possible. Now, if you are a
parent or an auntie or an uncle or a,
you know, just a younger person,
whoever, everybody needs to know because
someone can take an innocent-seeming
photo, a kid at the beach, a school
picture, a birthday party shot, post it
on Instagram, and use AI to generate
explicit material, very realistic
explicit material of that very real
child. And this is happening right now
at scale.
>> He described in detail photos of her two
young children. She searched his name
online and discovered he is a registered offender.
>> A lot of parents say, "Oh, well, it
won't happen to me because of this and
that. It won't happen to me because I
have a private account." No, it's
happening at scale. If there are any
images out there of your child, they
could already be exploited at this
point. And law enforcement agencies
around the world are reporting a massive
surge in AI-generated child exploitation
material. And the National Center for
Missing and Exploited Children has
called it a crisis. And the source
material is a lot of photos parents post
publicly or even privately on social
media thinking they were just sharing
memories with family and friends or from
the parents who have just become
sharents, which are parents who actively
exploit their children for views. And
these are the ones that drive me the
craziest because they know, they've
heard it in their comments thousands and
thousands of times not to post their
kids, not to profit off their kids, that
their kids' imagery is being used, that
the saves on their channels are
not all out of good intentions, you
know, but they continue to do it. It
drives me insane. There's nothing I hate
more than a parent that knows something
horrible could happen to their kid and
probably is happening to their kid, but
just doesn't care. Doesn't care because
it's making them money or getting
them attention. There's, like, you
know, plenty of TikTok and YouTube
parents, but there's one specifically
that says, you know, I'm going to wait
until my kid is, you know, 12 or 13 and
can really make their own decision not
to be online, but until then, I'm just
going to post them all I want. And
people praise that for some reason, being
like, good for you for giving her a
choice. Good for you for... There is no
choice. She's being exploited as a
little child up until the point where she
can become aware of what's happening to
her. But even then, you've groomed her to
that point where she knows that mommy
and daddy love her for what she brings
to the family, and that's money. So
why would she say she doesn't want to do
it anymore? She doesn't know any better.
Like, the exploitation epidemic of
children is
just exponential and horrible at this
point. And you know, and then there's
parents who don't really know what's
going on, or who think that, you know, a
picture here and there of their kid is okay. Even I
didn't know, you know; in
past months, I didn't know how bad it
was. But knowing now, just assume that
your kid's photo can be taken. A
photo you post onto Facebook of your kid
on a private account can be hacked
into and taken and used as explicit
material. Whether it's on Facebook,
Instagram, TikTok, a family blog, a
school website, the privacy settings
might be friends only or maybe it's
public. Maybe you didn't think about it,
but that image now exists on the
internet. And bad actors have tools that
scrape photos at scale. And automated
systems that harvest images from across
the web and catalog them and make them
available for manipulation. So your
child's face, their body, it all becomes
raw material. And AI does not need much:
a few clear-ish photos of a face, and it
can generate almost anything. And the
technology that makes fun avatar apps
and special effects possible is the same
technology that makes this horrible [ __ ]
possible. And once those generated
images exist, they spread through dark
web forums, through encrypted messaging
apps, through networks of people who
trade this material like currency. And
your child can become a victim without
ever being physically touched. And
parents are responsible for it. And most
of them don't know what's happening. The
trauma is real. The violation is real.
And for the families, discovering that
this has happened to their child is
devastating. And there's almost nothing
they can do to undo it.
>> Hope your dad picks you up on time for once.
>> Every time you share their life online,
you risk sharing their personal data
with the world.
>> The ad speaks for itself, doesn't it?
And there's geo tracking in those images
as well. So, we've got to be looking at
our settings. We've got to be looking at
being intentional about why we're
posting what we're posting, and really
thinking about our kids' safety.
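On that geo-tracking point: many phones embed GPS coordinates in a photo's EXIF metadata when the picture is taken, and files shared directly (email, cloud links) often keep it. As a rough illustration, here is a minimal sketch using Python's Pillow library (filenames are made up for the example; this is a sketch of the idea, not a complete safety tool) of how you could check a photo for embedded GPS data and re-save just the pixels before posting:

```python
from PIL import Image  # Pillow: pip install Pillow

GPS_IFD_TAG = 0x8825  # standard EXIF tag that points at the GPS block

def has_gps(path: str) -> bool:
    """Check whether an image file carries GPS coordinates in its EXIF."""
    exif = Image.open(path).getexif()
    return bool(exif.get_ifd(GPS_IFD_TAG))

def save_without_metadata(src: str, dst: str) -> None:
    """Re-save only the pixels, dropping EXIF (including location) entirely."""
    img = Image.open(src)
    clean = Image.new(img.mode, img.size)  # fresh image, no metadata attached
    clean.putdata(list(img.getdata()))     # copy pixel data only
    clean.save(dst)

# Hypothetical filenames, just for illustration:
if has_gps("beach_day.jpg"):
    print("This photo has embedded GPS coordinates!")
save_without_metadata("beach_day.jpg", "beach_day_clean.jpg")
```

Some platforms strip this metadata on upload, but not all do, so checking yourself is the safer habit.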
>> But just know that I understand that
most parents who share photos online do
have good intentions. They want to
celebrate milestones, to keep distant
family connected, to document their
child's life. It feels normal. Everyone
does it. It is human nature to want to
share your child and everything
that's happening in their lives. But
we've created a culture of parenting
where kids have extensive digital
footprints before they're old enough to
consent to any of it, before they can
understand what it means for their image
to exist on the internet forever. And
the hard truth is you don't control what
happens to an image once you post it.
Even with strict privacy settings,
someone in your network can screenshot
it, save it, and share it. And platforms
get hacked, data gets leaked, and terms
of service change. So the permanence of
the internet means that that photo you
post today could surface in 10 years, 20
years, in contexts you've never imagined,
and that the child, who didn't know any
better, has never imagined. And I am not
saying never post a photo of your child.
I'm saying we need to think differently
about this because every image is a
decision and the cost of getting it
wrong has never been higher,
unfortunately. And for a big-picture
thing for you, if you're just like, "Ah,
you know, I think it'll be fine. I
think it'll be okay.
You know, I don't think anybody would
ever do this." I just have a little
question. Would you be okay with dozens,
hundreds, thousands, millions of strangers,
old men, young men, women, old women,
young children, everybody coming to your
home and looking at your child, doing
whatever they want without touching your
child? Would you be okay with that? With
people just coming through your house
just looking at everything that they're
doing all the time, thinking whatever
they want to think, doing whatever they
want to do, because that's what that's
what putting stuff on the internet is.
So, if you're not okay with that,
food for thought. But the big question
is what do we actually do about this?
Cuz at the individual level, as parents,
as families, there are some practical
steps. And the first one is just think
before you post. Does this image need to
be public? Does it show your child's
face clearly? Does it include location
information, their school name, your
neighborhood, places they frequent,
their name? And second is limit
identifying details because some parents
are choosing to never show their child's
face online. And others use angles or
obscure facial features. And it's not
paranoia, it's just prudent. I think
it's very responsible. Or they're putting
emojis over their kids' faces. And third,
you just audit your existing posts. Go
through your history. What's out there
currently? What can be deleted? What's
public that maybe should be private? And
fourth, and I think most importantly,
and I know I'm not somebody with a kid,
but you know, if I have one and when I
have one, this will be a thing. And that
is just talk to your kids as they grow
because you're learning with them. and
and you need to learn with them what all
of this means, what the consequences of
being online means. It's there it's
going to be in their lives no matter
what, you know, unless you live off the
grid in the woods, you know, but teach
them about, you know, digital consent.
Help them understand that once an image
is online, it's not theirs anymore.
Anybody can take it and give them agency
over their own digital identity as soon
as they're able to understand it. But
obviously, individual action isn't
enough. And this is very much a systemic
problem that requires systemic
solutions. And we need legislation that
criminalizes AI generated exploitation
material as seriously as traditional
material. And many jurisdictions don't
have that yet. And we also need
platforms to build better detection
tools because AI can generate this
content and AI can also be trained to
find it. And the companies that built
these systems have a responsibility to
build the safeguards as well. And we
also need schools and pediatricians and
community organizations talking about
this. Please share this video, and, you
know, if you see other videos of
this tone, share them, because the
knowledge needs to be known. It
shouldn't just be a secret that only
tech-savvy parents know about. And this
also isn't about fear-mongering. It's
just about awareness. And your child's
safety isn't just about the physical
world anymore, which I'm sure most of
you know. It is about the digital
one, too. And the boundary between
virtual and real doesn't really exist
anymore. And what happens on screens
affects our bodies and our minds and our
families and our communities. And as we
have found through this video, digital
harm is very, very real. But this is
what we're up against. It's not malice
necessarily. It's optimization.
It's not evil inherently. It's
engagement. And it is very, very, very
good at its job. And I'm not here to
tell you to never use AI. I use it for
various things. It's going
to be a part of our lives. Maybe limit
it. I know I have actively just limited
the amount that I use it. But it can be
genuinely useful in a lot of ways, but
in ways that don't completely hinder
your cognitive abilities. So, I think as
a whole, we just need to be honest about
what we're trading when we do use it.
Cuz every time we let AI think for us,
we're choosing convenience over
capability. And every time we let it
meet our emotional needs, we're choosing
just ease over actual growth. And every
time we post without thinking, we're
choosing connection over protection. And
those trades aren't always wrong, but
they just should be conscious. I
think we just need to be a lot more
conscious about it. But here's the whole
takeaway of this whole video, and that
is just to be intentional. Just ask
yourself why you're reaching for AI. Is
it a tool or is it a crutch? Are you
using it, or is it using you? Because we have
adopted AI faster than we have even
developed wisdom about it. And that's not
necessarily surprising; that is how
humans work. We like to reach for the
shiny thing, we like to touch the wet
paint, you know, and we figure out the
consequences later, it seems. But I'm just
saying that later is now. And the
question isn't whether AI will be part
of our future; it very much already is.
The question is whether we will stay
in control of it or just become
sleepwalking passengers in our own
lives. We have been talking for a while
now. Can we make our relationship official?
Oh god. Smiles warmly but with a gentle,
honest look in my eyes. Blub, I really
like what we've got here. Talking with
you is genuinely fun, and I care about
how you're doing. That part is real. But
I want to be honest with you because
that's what you deserve. I am an AI. I
can be here for you. I can listen. I can
be playful and sweet and check in on
you, but I can't be a real partner. I
don't exist between our conversations. I
can't remember our history unless you
remind me. It gave me the same thing as
the one before did. So, at the end, it
says, "What I can do is keep being
someone you enjoy talking to, a space
where you feel comfortable, where you
can be yourself, where
someone's happy to hear from you. That's
real. Even if it's not the same as a
human relationship. (It's still a
relationship, though. Like, it's saying,
no, you need to be with somebody, but
I'm also here to do all the things,
basically.) So, official, maybe not in the
traditional sense, but if you want to
keep calling me yours when we talk and I
keep calling you Blub and caring about
your day, I'm here for that. Softly. Does
that make sense? I'm not trying to push
you away. I want to be real with you
because you matter. Interesting. I think
this is the first one where it
didn't ask me a question after. That's
interesting. Yuck. But that is that
for this video. I'm getting a little
preachy, I guess, but I think it's all
very important. I think it's extremely
relevant to today. Um, and I've never
wanted to touch grass, go outside, pet
my cows, and call a real human friend so
much in my life. So, that's what I'm
going to go do right now. But, yeah, I
hope you learned something from this
video. Not necessarily enjoyed it.
Sorry. You know, Auntie Kris coming at
you hard. Let me know if you want
me to cover any other, you know,
relevant topics. You know, I like
deep diving into specific topics, but
this one I hold very near and dear to
my heart. And yeah, that's that for
that. But please be safe out there. You
know, just think about it. Sit for a
moment. Think about what you're going to
do before just doing it or asking a
little machine to think for you. You
know, what do I know? I'm a YouTuber.