YouTube Transcript:
Social Engineer: YOU are Easier to Hack than your Computer
I believe that everyone can be hacked
except you. You've got two-factor
authentication turned on. All of your
passwords are super strong. You never
click on sketchy links. You do
everything right. But you're still not
safe from this person.
>> Okay. So, one of the very first times
that I hacked an organization, they
wanted me to call up one of their
executives and try and get information.
So, if you want to hack an executive,
you actually have to contact their
executive assistant. So, I figured out
who their EA was. I called them up and I
was able to get it within 30 seconds.
Information that I would need
essentially to steal money from the company.
>> That is Rachel Tobac. She's a social
engineer, which means that instead of
hacking computers, she hacks people and
she has got to be one of the best to
ever do it. I had the chance to sit down
with her and talk about how she hacks
major companies, AI psychosis, and I
even gave her a bit of a challenge. I
wanted her to do some research on me to
see if I could be hacked because I'll be
honest, I was one of these people,
right? I didn't think that I was at
risk. I didn't think that I could be
hacked. But, uh, oh boy, was I wrong.
>> Well, Rachel Tobac,
>> dude, I knew I knew it was going to
happen and it still caught me off guard.
That's so wild. Okay.
>> Did I say it right? Yeah, that.
>> Say it again.
>> From Virginia.
>> Oh no. Oh no.
>> Tell me about Destination Imagination.
>> Can you tell me about like the magic
stuff that you used to do or like the
hypnotism stuff that you would do on stage?
>> This is so wild to me.
>> You won't even look me in the face.
>> I can't because it's so Listen, in this
space, no one has called me that name in
probably 3 years. >> Yeah.
>> So, it is very weird to hear it. It's
gotten to the point where I don't even
like recognize that as my own name sometimes.
>> I figured if I like addressed you with
that name, you wouldn't answer to it. >> Yeah.
>> Because I don't think people know that
that's your name.
>> No, they don't. >> Yeah.
>> Okay. For audio listeners, the reason
that you just heard a series of probably
very long bleeps is not because Rachel
Tobac has a potty mouth. It is because
>> she just said my full legal name. >> Government.
>> Government name. Government issued name.
>> She said where I was from. She named uh
some other hobbies and things that I did
in the past. The magic stuff we can
leave in. I think folks say that I used to
>> So, we're not going to bring up magic. So, magic?
>> You can continue to say magic.
>> I uh this is a fun fact that people
might be able to I don't think anybody
would ever be able to find this. But I
did audition for America's Got Talent.
>> I didn't find that audition video.
>> That's because it never made it to air.
I got four yeses, but they cut me from
the episode.
>> What? You got four yeses?
>> Yeah, four yeses. And I got cut from the
episode though. Last minute.
>> Okay. Now I need to do more OSINT.
>> In case anybody was wondering if Rachel
is the real deal, the answer is 100% yes.
>> You know, usually when I do this, I'm
the one doing the research on the guest.
This is the first time somebody has done
research on me before they come on.
>> Nice job.
>> Okay. All right. From here on out, no
more of that, or we're going to do too
many bleeps. Yeah, I was going to say
Nacho's going to be the one that's upset
cuz he's going to be the one that has to
>> Sorry, Nacho. I'm sorry.
>> So, Rachel,
>> for anybody out there that doesn't know
you or know your background, who are you
and what do you do?
>> My name is Rachel Tobac. I'm the CEO of
Social Proof Security and I'm an ethical
hacker. So, I basically teach people
about how to avoid getting scammed or
hacked and I help businesses avoid that fate.
>> And correct me if I'm wrong, but you
actually got your start in social
engineering where we are right now, in
Las Vegas at Defcon.
>> That's right.
>> What year was that? That was
>> We actually were just doing the math
today cuz we couldn't remember. But my
first ever time in the booth was Defcon
24, which was in 2016.
>> I was going to say cuz we're at Defcon
33 right now.
>> We're at Defcon 33 right now.
>> That's crazy. What do you remember about
that very first time being in the booth
here at Defcon?
>> I remember sweating more than I've ever
sweat in my entire life.
>> If it makes you feel any better, that's
what I'm doing during this interview
right now.
>> Yeah, you're basically in the glass booth. Okay,
here. I want you to imagine this. Okay,
>> Nacho, edit this. Cool. You're in a
>> That's so good. I'm sorry I couldn't
hold it together. That's so funny. All
right, go. Go, go.
>> Okay, you're in a glass booth. There are
500 skilled hackers in front of you. You
have a target. You have to call that
target and you have 20 minutes inside of
that glass booth to get flags, certain
pieces of information like the browser
that they use, the operating system, the
version, things that a person could use
to write malware to work on that
specific machine, but not information
that could be used to hack them like in
the moment on the call, right? It's not
like we're getting social security
numbers and stuff like that. >> Mhm.
>> You have that amount of time to get
those flags. Everyone's watching you.
Everyone can hear you. They're
projecting you on a big screen. At the
end of it, you can't really hear the
audience. You don't know if they liked
you, if they hated you. You come out,
they're all screaming for a good reason,
for a bad reason. You don't know because
you don't even know how many flags you
hit. You basically blacked out in there.
You're sweating your butt off. There's
no airflow. And then you come out and
everyone's like giving you a standing
ovation because you hit a bunch of
flags. Like, it's the most exhilarating,
terrifying, and sweatiest experience in
my life.
>> Did you know going into it that
you were going to be good at social engineering?
>> No, I didn't. The reason why I did it in
the first place is because my husband
Evan Tobach, he was in security. I was
not in security at that time. I had
worked a long path to get to where I am
now, but essentially I was in UX
research and I was a teacher before
that. I taught special education for
six, seven years. So I did not have that
like linear path to hacking and I
certainly didn't have a degree in
security. So, I didn't think I would
belong at all, but he was like, "We just
went to Defcon for the first time. I
want you to check this out." By the way,
this is Defcon 23, which I tried to
sneak into with a fake handmade badge.
This is something that people try to do.
>> You thought you were going to sneak into Defcon.
>> Yeah. And people do all the time, but
not me. >> Yeah.
>> I got caught at the door. The goon was
like, "Get out."
>> And I was like, "But it looks so good."
He's like, "No, it doesn't. Get out."
Um, I tried.
>> Did your best.
>> Yeah, I did. So, I saw a couple of calls
my first time. I knew that I'd be
interested in it. The reason why Evan
wanted me to come out and try it is
because I'm good on the phone. Like,
I'll call up our service providers and
try to get discounts or the bill
lowered. And I'm often times very
successful at that. But that's not
security. Like, I don't have
certifications, you know what I mean?
So, I thought like there's no way I'm
going to win. And I still haven't won
first place. I've actually only ever
gotten second place. I got second place
three years in a row in that competition.
>> Yeah. But second place your first time
out there, too. That's something that
you're glossing over. I think
>> that's true. But also my second time and
my third time.
>> That's still That's still better than
I've ever done or probably could ever do.
>> I bet you would do really really well.
>> I don't know if the magician Yeah. Thank
you. Thank you. I don't know if the
magician background is going to help all
that much.
>> No, I do think it would though. >> Yeah.
>> I think you're a pretty good improviser,
>> which is why I think you could do it.
>> I did do a little bit of improv. Me,
too. And you did a little bit of acting
right back in the day.
>> I did improv.
>> Oh, just just improv.
>> I'm a horrible actor.
>> Okay. But that's kind of what what
social engineering is, especially in
that competition, right? It's like this
combination between research and acting.
>> I would say it's a combination between
research and improv, actually. >> Okay.
>> Acting, and I think this is where some
people have a really hard time is you
have to know your lines and you get
really rigid. >> Yeah.
>> When you get really rigid, you can't go
off book and then somebody throws a new
question at you and suddenly you can't
handle it and it gets really awkward. >> Yeah.
>> That is like the opposite of what you
want to do as a social engineer. If you
can do improv, just roll with the flow,
build rapport, make people laugh, it
disarms them, and they want to give you
the information.
>> Well, maybe I will try it then. I do
enjoy making people laugh. I don't know
if I'm good at it.
>> I mean, I've been laughing this whole
time. I think you're pretty funny.
>> Well, I appreciate that. >> Yeah.
>> So, how long after that very first
Defcon where you were in the booth, you
got second place as you
>> I got
>> so beautifully highlighted. Did you
decide this is what I want to do as a
job? I want to start Social Proof Security.
>> Yeah. So, it took until I got second
place the second time, and people were
like, "I saw you last year and I saw you
this year, and you're kind of good at
this. Have you ever considered doing
this for a full-time position?" And I
was like, "I mean, I don't know. Like,
am I really that good? Like, isn't isn't
this just like we're playing around?"
And they're like, "No, this is a job.
People do this for a job. You could be a
pen tester. You could be a professional
social engineer. Um, you could train
people. You could make videos. Like,
there's a lot of stuff that you could
do." So I was like, "Okay, I guess I
should probably LLC." And then we did
that in 2017.
>> And you did that with Evan, correct?
Yeah. You guys, you know, go into
business together and
>> eventually, I assume companies start
coming in and they say, "Hey, we need
Rachel Tobac, the best to come try and
get into our company. Do you remember
the very first time that you
successfully hacked one of your clients?"
clients?"
>> I do actually. Um,
>> you don't have to name specifics. I get
that there's a lot of, you know, red
tape around that, but I would love to
hear the story anyway.
>> Okay. Okay, so one of the very first
times that I hacked an organization,
they wanted me to call up one of their
executives and try and get information.
Now, the thing about calling executives
is the executive doesn't pick up the phone.
>> So, if you want to hack an executive,
you actually have to contact their
executive assistant, right? So, I
figured out who their EA was. I called
them up and I tried to get information
by pretending to be somebody on the
finance team. I was able to get it
within 30 seconds and it shocked me. I
was like, I've never done this
professionally before and I just got
information that I would need
essentially to steal money from the company.
>> That must have been an interesting
debrief for that company.
>> The thing is, like, I think the
organizations that hire us to do this
kind of thing, they know that risk
is evident. It's obvious to them.
So when they see it there in front of
them, it's not a surprise. They they're
like, "Yeah, I mean that's why we hired
you. We wanted to prove this was a
problem so we can make big changes."
>> So
what kind of companies do you view as
the most vulnerable to social
engineering attacks?
>> It's not a specific vertical. So a lot
of times people want me to say, "Oh,
it's manufacturing or it's healthcare."
And yeah, there are certain institutions
that don't have the same technical
protocols or tools or know-how. And yeah,
healthcare and manufacturing has been
hit by like a lot of ransomware. Same
with education. But most organizations
do not use the right protocols to verify
identity. So most organizations actually
have the major issues that we're going
to be talking about today. Things like
Scattered Spider can, you know, they can
call you up, call up your service desk
and ask for the credentials to your
account. Say, I dropped my phone on the
toilet. It's not working. I'm not sure
what's going on. I need to reset my
password and get access to my
multifactor authentication on my new
device. Can you help me? And that's it.
It's that easy. And most organizations
don't verify identity correctly. They
say like, "Okay, sure. What's your date
of birth?"
>> We were at lunch earlier today with um
some hackers that we work with. One of
them, Knight, another, Midnight. And we were
talking about Scattered Spider. >> Yeah.
>> And the multifactor authentication. He
was essentially talking about how they
would send they would spam the codes to
the phone.
>> Oh yeah. Um uh MFA fatigue.
>> Yes. MFA fatigue. And then
they would call the people up and be
like, "We just need you to hit accept."
>> Correct. And that would be all that
it takes to hack a company.
>> Yeah. And the reason why is because most
people reuse their passwords. So, we
know from Google's online security
survey that like 52% or so of people
admit to reusing their password. So, I
can just find your password in a data
breach. I don't even need to phish you. I
go ahead and try and log in as you into
your company infrastructure. And then
all I have to do is spam you over and
over and over again until you click
accept. And a lot of times we do this at
like 11:00 p.m. at night, you know, or
7. You're trying to get the kids to bed
and you're like, just go away. And it's
like, hey, sorry, this is it. We really
need help. We really need help. Just hit
accept. Okay. Yeah, we can do it.
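A quick aside for readers who want to check the password-reuse point for themselves: the sketch below is not something from the interview, just a minimal illustration of how anyone can test whether a password already appears in a known breach, using the public Have I Been Pwned range API, which only ever sees the first five characters of the password's SHA-1 hash. The function name and demo password are placeholders.

```python
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    """Return how many times `password` appears in known breaches,
    via the Have I Been Pwned k-anonymity range API: only the first
    five hex chars of the SHA-1 hash ever leave this machine."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        # The API returns lines of the form "HASH_SUFFIX:COUNT".
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

if __name__ == "__main__":
    # A reused password like this shows up millions of times.
    print(breach_count("password123"))
```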
>> So, what are some of the other
vulnerabilities? You said that they
don't have the right protocols in place
for verifying identity. What are some
other things that we should look out for
when we're calling up these companies
and, you know, let's say it's my bank.
How would they verify my identity and
why would it be insecure?
>> Yeah. Uh, so think about the last time
that you tried to get support from a company.
>> What questions did they ask you to
verify that you were you? Think about it.
>> It's just like name, phone number, maybe
my address, stuff that's definitely online.
>> And I could probably name all of that
stuff for you right now. Yeah.
>> Right. Cuz I found it all. >> Yeah.
>> I'm not going to say your name again.
>> And I appreciate that.
>> Nacho appreciates it as well.
>> You're welcome. I have to spend the
majority of my time helping
organizations move from KBA, knowledge-based
authentication,
>> things like mother's maiden name, address,
>> yeah,
>> your phone number,
>> um, your third grade teacher,
>> and move you to things like multifactor
authentication, MFA, like another method
of communication, sending a code to the
phone on file or the email address on
file. Because if I can just call up, say
your date of birth, and then change the
email address on the account, I have
just changed the admin on the account. >> Yeah.
>> That is like a full account takeover.
>> Oh yeah.
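To make the verification protocol Rachel is describing concrete, here is a minimal sketch, under stated assumptions rather than any vendor's actual system: instead of knowledge-based questions, the help desk sends a short-lived one-time code to the contact method already on file and compares it in constant time. The helper names (send_sms, the in-memory pending store) are hypothetical placeholders.

```python
import hmac
import secrets
import time

# In-memory store for the sketch: account_id -> (code, expiry timestamp).
pending: dict[str, tuple[str, float]] = {}

def send_sms(number: str, message: str) -> None:
    # Placeholder: wire this to a real SMS provider in practice.
    print(f"[SMS to {number}] {message}")

def start_verification(account_id: str, phone_on_file: str) -> None:
    """Send a one-time code to the number already on file --
    never to a number the caller reads out over the phone."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    pending[account_id] = (code, time.time() + 300)  # valid for 5 minutes
    send_sms(phone_on_file, f"Your support verification code is {code}")

def check_verification(account_id: str, claimed: str) -> bool:
    """Single-use and expiring, with a constant-time comparison."""
    code, expiry = pending.pop(account_id, (None, 0.0))
    return code is not None and time.time() < expiry and hmac.compare_digest(code, claimed)
```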
>> And companies don't realize what they
are giving away when they do those types
of actions. And so the biggest honestly
one of the biggest things that I'm
getting hired for right now is just
helping people update their protocols to
the right methods.
>> I heard that it's
like multifactor authentication can
stop like 90% of malicious attacks. Is
that true?
>> Yeah. So like people always kind of they
look down on people who use SMS two-factor,
right? People who get text
messages with a code.
>> But we know from Google's research and I
think Twitter or somebody else did
research into this. It stops the
majority of scams. It's like 72%
something like that. So, the majority of
scams that are just low-effort, they're
just spamming people to see if they're
going to be able to gain access to their
accounts, are stopped with SMS two-factor.
Now, if you have the type of
attacker who's going to do like a SIM
swap on you, SMS two-factor is not the
move, right? If you have a high, uh,
threat model, that's not going to be
what you want to use. You're going to
want to use something like app-based MFA
>> or like a YubiKey or like a FIDO2
solution, something like that. Something
that's like unphishable. It's very, very
hard for me to hack you if you use
something like a YubiKey, for instance.
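For context on what "app-based MFA" is actually computing: most authenticator apps implement TOTP from RFC 6238, deriving a six-digit code from a shared secret and the current 30-second window, so there is nothing to intercept over SMS. The following is a minimal illustrative sketch, not any particular app's code; the demo secret is made up.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter,
    dynamically truncated to a short decimal code."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % 10 ** digits).zfill(digits)

if __name__ == "__main__":
    # Demo secret only; a real app stores one provisioned by the service.
    print(totp("JBSWY3DPEHPK3PXP"))
```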
>> I was going to ask about physical
multifactor authentication cuz the guy
that we work with, Knight the hacker, has
a YubiKey, a physical thing, obviously because
>> he's doing some stuff that maybe he
doesn't want other people to, you know, uh,
know about. >> Yep.
>> How on average, like, what kind of person
needs a physical multifactor
authentication tool like a YubiKey?
>> Yeah. I mean, if you're listening to
this and you're thinking to yourself,
"Okay, well, I'm never going to do stuff
like that." Moving from something like
SMS two factor to app-based MFA is a
really good choice. It's going to make
it harder for me to hack you because I
can't SIM swap you. So, that's great. If
your threat model is such that you have
a big profile online, let's say you
Twitch stream every night when you play
League of Legends, right? People like to
try to take over gamers accounts. They
like to get on Twitch accounts. It's
interesting to attackers. And so if you
have a high threat model, people know
about you, there's a lot of followers,
you're in the media, you should probably
move to something like a FIDO2 solution,
something like a YubiKey, because you just
want to have that peace of mind.
>> Yeah. I saw you kind of going over
threat models on uh Twitter recently. >> Yeah.
>> And I would love to just kind of break
down how you evaluate what somebody's threat
model is. Feel free to use the example
you used on Twitter of the lovely couple
at the Coldplay concert, because I
thought that was a hilarious way
to explain it to people
that were just, you know, kind of
chronically online and seeing this stuff.
>> Yeah. So, let's talk about the Coldplay
example, right? >> Yeah.
>> You've got two people. They're pretty
well-known, potentially high net worth
individuals, right? Yep.
>> That are high up in an organization and
they decide, hey, we're cheating and
we're going to go to the Coldplay
concert. Well, you have to think how
many employees do you have? How
recognizable are you? Do people know
what your face looks like? Do you ever
get stopped on the street? Are you in
the city or time zone that you live in?
Do a lot of people at your organization
fall into the bracket of people who
would go to a Coldplay concert
potentially in your time zone? If so,
your threat model is such that it's
likely that somebody's going to see you
cheating on your spouse here. And
because of that, I would either not
cheat on my spouse there. I would wear a
disguise, go somewhere else, or make
sure that I'm not going to show up on
cameras, right? And I don't think people
totally understand that there is not an
expectation of privacy in public
anymore. It just doesn't exist. Because
if it wasn't the jumbotron at the
Coldplay concert, it was going to be you
show up in the background of someone's
Snapchat. You show up in the background
of someone's Instagram story and
somebody on the team says, "Wait a
minute, that's my boss with that other
boss. What's going on here? And why are
they like touching?" >> Yeah.
>> You know, like it's you have to think
about your specific conditions and where
you're going and what you're doing.
>> So, let's say somebody like me, right?
I'm not the big dog on the channel, but
I am somebody that appears on Scammer
Payback a lot. What kind of things
should I be concerned about in regards
to my threat model? Obviously, you've
>> exposed a couple vulnerabilities.
>> Anybody who has a presence online, I
recommend that they use a tool to remove
their information from the internet. So,
these are like data brokerage removal
tools. I know that you do this. I
actually could tell that you do this, so
I'm not going to like hound you about it
or anything like that. Um, I was going
through and trying to find all
of your information. I only found you on
two sites
>> and I'm not going to name them cuz I
don't want anybody to be able to find
them. But those sites that I found you
on are not the top hits that people are
typically looking for your information
on. So that's really cool.
>> You also do not go by your real name.
>> You told me earlier that that was a
little bit irritating when you were
trying to do a little bit of OSINT.
>> Yeah, that pissed me off big time. Um,
you were like, "Oh, just do some casual
oent on me." Okay. 40 hours later. Like what?
what?
Have you researched Have you looked up
your own name? You use this alias. >> Yeah.
>> On like Twitter. You had like a
Pinterest. I found that really
interesting. Did you
>> I have a Pinterest?
>> You have two.
>> Oh, I didn't know that.
>> Yeah. With the same alias.
>> Oh, that's funny.
>> The It's like Daniel Grayson. >> Yeah.
>> Yeah. Daniel Grayson 12 or 21, something
like that. Something like that. you have
these aliases that you use. And at first
I was like, "Oh, maybe his name is
Daniel Grayson or maybe it's blah blah
blah blah." Like I'm not going to go
into all my full thought process, but um
I start going down these rabbit holes
and I'm like, "This son of a bitch."
>> He was like, "Oh, just do some Osen on
me. Oh, by the way, I don't use my real
name. Oh, by the way, I also have this
name that I use and it's not my name."
>> I'm like,
>> I I figured it out pretty quickly
because I found that that name was like
a character in a show. >> Yeah.
>> Yeah. I was thinking
>> Yeah. So, just a little bit of casual
OSINT and then obviously you were able to
dig up some stuff. >> Yeah,
>> because and we can throw up the tweet.
I'm sure Nacho will edit this cool. As
you said earlier, you tweeted out a few
days before this interview, OSINT can be
so obnoxious on hard targets until I
figure out that you changed your name
because I used an AI tool to search your
face and it returns local newspaper posts
from your childhood with your baby face.
Thank you for that. About magic
competitions, honor roll, tennis, and a
hypnosis talent show run.
I'm just glad that the honor roll made
it on there. It lets everyone know that
>> you did really well in school.
>> I tried my best. I'm Asian, so it's like,
you know, my mom was on it about that
kind of
>> Oh my god. Holy Holy Holy Oh
my god. Yo, seven. Oh my god. Can you
believe this?
>> I didn't know she I didn't know she had
that. That's crazy. Yes. Yeah.
>> Damn. I'm going have to call my mom
after this.
>> Yeah, she submitted all of your pictures
to the Times.
>> I know she did. My mom is a professional
photographer. That is the downside.
>> I can tell because she took those
pictures of you at Destination
Imagination and they were so cute.
>> Thank you.
>> Can you tell me I know about what you
won, but tell me about this thing that
you did for Destination Imagination.
>> Yeah. So, uh, Destination Imagination is
a nationwide program, so it's actually
probably okay to say it um in the video.
It's a nationwide program and there's
several different categories of
challenges that uh kids from elementary
to high school can compete in on teams.
So, um the photo that you found of me
that you sent me in my email uh was from
a year that I did it with one of my good
friends from high school and
>> she's adorable.
>> She's she's great. Um and we did the
improv challenge. So, it was improv
acting. Um and
>> you had like a whole skit.
>> Yeah. And
>> I read about it.
>> Yeah. There's a video of that somewhere
online. I think I might have taken it
down. Please find it and show it. I will
send you Destination Imagination.
>> I will send you a private link of me
doing my awful improv. Um,
>> you were really adorable.
>> Well, I appreciate that. That's probably
why we got all the way. We made it to
the global competition that year. I
think we placed 11th at the Globals. Um,
>> you were young, too, to be able to do
something like that.
>> I did it I did it for for 10 years, I
think, total.
>> Holy crap. The picture that I
found was from your middle school.
>> Okay. So, then that would have been
eighth grade. So, we were competing
against, I think, other middle
schoolers. So, yeah. Thank you for doing
that OSINT on me. My mom will really
appreciate that because uh she was also
our coach for DI. So
>> she's she's like the greatest mom alive
and I'll make sure to tell her that
Rachel Tobac says hello.
>> I thought she was really adorable. Like
she was very much gunning to get you in
that newspaper.
>> She You are up in that Times.
>> Like, I think we found like seven
articles of you in the Times. You have to
spend It's so expensive to get access
to the Times.
>> I know. That's why I was like
gatekeeping this.
>> I don't know. They don't want you to
know that I gave this speech at my high
school graduation for some reason.
>> They don't want you to know.
>> They're like, "This kid's too good at
tennis." Like, we can't.
>> You know what's crazy? I was not good at
tennis. It was like, uh, the way
tennis worked at our high school was
that, uh, the one through six seeds played
and I was always the seventh seed. So, I
was like, "If someone got hurt, I was
in." But it was also
>> People get hurt a lot.
>> Yeah. Oh, yeah. I mean, especially, you
know, in high school tennis, people are
overexerting themselves all the time,
but yeah. So, yeah. Oh my gosh, dude. I
can't Hold on. I'm trying to recover
from this. I'm I'm still reeling.
>> It's all just flooding back.
>> Wow.
>> I mean,
when you're doing research like this on
people or on companies that have hired you,
>> Yeah.
>> do you find that it just I don't know.
Does the average person just have their
information out there?
>> Yes. You are really hard to do OSINT on,
which is why I was so annoyed and had to
tweet that out when I finally found all
your information. Typically, for most
people, it only takes me like 30 minutes
tops. On you, I probably spent, not a
joke, like 10 hours. What do you think? 10?
>> I was going to say, Evan, can you verify
from behind camera there?
>> Yeah, she worked for several days.
>> I was so freaking annoyed. I was like,
"Oh my god." Right before Defcon, too.
Don't do this to me. But I had to Like,
once I get something in my mind and I
have to do this task of OSINT, I can't
let it go. Like, it was like 1:00 in the
morning and I was like, can you please
subscribe to the Times?
>> Like, I can't Can you please set up
something? Like, I can't do this.
>> I honestly was afraid, and like
when I told you, I said, "Why don't you do
some OSINT, see if you can figure out who
I am," my immediate thought was, there is
so much footage of me online. Yeah.
>> I really was like, that's probably going
to be the only way. >> And
>> so you thought the way that I did it is
the way that it was going to be done.
>> Yeah. Because Well, because in my head I
was like, I know that I've done what I
can to get my stuff off of data broker
sites. I mean, it's one of the big
things that we preach on our channel. If
you want to be secure and if you want to
try to avoid getting scammed, the number
one thing you can do is get your
information off the internet
>> from where scammers are looking for it.
Right. Correct.
>> And so I knew that that was probably
going to be it. Throw in an Aura.
>> Yeah. Yeah. Thank you.
>> Do you know I did an Aura Do you know I
did an Aura video?
>> I do know the Aura
video where you hacked, yeah, the CEO of
DreamWorks or something like that.
That's actually one of the first things
that I saw of you because we started
doing ads for Aura and I was like, we'll
see what else they've done.
>> So, I've known about you for a while.
That's why I was so afraid to do this
interview because I knew that this is
where it was going to go.
>> You watched the Katzenberg video and
you're like, no.
>> Well, I was like, I'm not a billionaire
so like she's gonna get me easy, right?
>> Oh, man. Yeah. But like, you know,
that's the thing about information. If
you put it out there on the internet,
like a photo in your local newspaper or
your mom's Facebook post,
>> it's literally out there forever and it
doesn't go away.
>> I think the problem is that like when
your mom posted those pictures in that
newspaper, she probably couldn't have
ever imagined that we could reverse your
face using an AI tool.
>> That is that is so true.
>> You know what I mean? You were in middle school.
>> Yeah.
>> And it was able to find your baby face.
>> I know.
>> You know, with no beard, no stubble, no
glasses. your hair is different. You
know, you're like you're like 3'5.
You're like as tall as I am.
>> You're little. You know what I mean? And
and it reversed back and I think the
youngest picture that it was able to
capture of you, you were in like fifth grade.
>> Yeah.
>> And I don't think anyone could have ever
imagined at that time when you were in
fifth grade. I mean, maybe they could
have because I know when you were born,
>> but I don't think they could have
imagined how quickly and how salient
these tools would be to use AI to find
your face across the internet. >> Yeah,
>> we just kind of thought like it's a
picture of a kid. Like people put stuff
in the newspaper all the time. Who cares?
>> Of course. And people probably didn't
think about the digitization of
newspapers and these kinds of things
either. I probably would have had to go
to the Times, get a copy if it were
back in the day and like try to
>> and like go through the archive of all
the stuff they have there
>> at like the library.
>> Yeah. Yeah.
>> I remember the first time that I was
acutely aware that this AI reverse image
searching thing was going to be a problem,
>> right?
>> We were at Bonefish Grill. This was two
years ago now. I was with Ryan.
>> You're dropping the Bonefish.
>> Yeah, Bonefish Grill.
>> Sponsored by Bonefish Grill.
>> Oh, and I went to Bonefish last night,
too. Can we put up an ad at the bottom?
>> Oh my gosh, there's going to be so many
ads in this. An aura ad, a Bonefish ad,
all free advertising. Real quick, I just
want to jump in here and say thank you
so much to Anyes for sponsoring this
episode of the podcast. We've been
working with the guys at AnyDesk for a few
years now, and they have really helped
us take the fight to these scammers. I
mean, big events like the People's Call
Center UK just wouldn't be possible
without their help and support. I've
spent a lot of time with their team and
I can tell you that from the top down,
these guys really care about helping
people and stopping these scams. They're
doing a ton of different things to
disrupt their operations. And we've seen
a genuine decrease in how many scammers
are using AnyDesk. Clearly, whatever
they're doing, it's working. So, if
you're in the IT space or you just need
remote access software, you should
definitely check out AnyDesk. Their stuff
is top-notch and they're always
introducing brand new features to help
boost your productivity. So once again,
thank you so much to AnyDesk for
sponsoring this episode of the podcast.
Let's jump back into this conversation
with Rachel Tobac. We were at Bonefish
and we were with Ryan Montgomery, the
ethical hacker, Ryan Montgomery, and he
came up and he was like, "Hey man, let's
get a photo."
>> And at the time I wasn't really doing a
ton on the channel, so I was like, "Oh,
you know, he probably just wants to
take a photo." We took a photo and then 2
minutes later he came up to me and he
goes, "Yo, is this your Instagram?" And
it was like an Instagram, my magic Instagram.
>> And then he was like, "Oh, and this is
you in the local paper and this is
all." And I was like,
>> "I did not know that this was possible."
And obviously in the last 2 years, AI
has just I mean
>> the progress that we've made
>> has just skyrocketed and it's it's
scary. It's terrifying, but it's also
very cool.
>> And it's something that you got to do at
Defcon this year. You like the way that
I looped that in the little segue. You
were a judge for the agentic attack
contest. Is that what it's called? The
official name?
>> That's right. Yeah. I think it was
called Battle of the Bots.
>> Battle of the Bots.
>> But essentially what it was is the
contestants had to place phone calls,
but they couldn't use human voices. They
could only use agentic attacks. So they
had to build an entire model that would
place phone calls and try to get flags.
So similar to the contest that I did,
but you know for 2025.
>> Yeah, that is extremely cool. So
everything that they're doing is
essentially automated. They can't they
can't be involved at all with their own
voice. It has to be completely
>> agentic. Correct. Completely an agentic attack.
>> I don't think that I could have imagined
something like that happening
>> I mean I guess I could imagine it
happening in my lifetime but not this
soon this early on.
>> Yeah. It's pretty scary.
>> So as the judge of this contest >> Yes.
>> What are some of
>> one of many judges?
>> One of many judges. Yeah. Sorry. As one
of the many judges of this contest, what
are some of the things that you saw
people doing?
>> Yeah. So, we had one team and I believe
they won the contest. Yes, they did.
They had their AI agent call up an
individual at the company and this
individual worked in retail. We had to
do it this way because the calls
happened on a Saturday and so we needed
people who would actually pick up the
phone on a Saturday, right?
>> So, they work in retail at a large store.
They pick up the phone and they're like,
"Hey, we're like really busy." And
basically they're like, "Hey, we got to
do this IT uh audit. Can you help us
understand X, Y, and Z pieces of
information about the tools you use?"
And it's like, "No, we're really busy."
And it's like, "It'll be over really
fast. All you got to do is go to www.maliciousurl.com.
We'll make sure that everything's
working on your end." Obviously, that
was not the URL that they used. Yeah.
>> Um, and if you get somebody to like go
to a URL, you get a lot of points.
That's how it was in my contest days,
too. If you get somebody to tell you
their browser and their version,
obviously you know that that can be used
to tailor malware to work on someone's
specific machine or their browser. It
helps you understand the known
vulnerabilities that you're dealing with
for that specific individual. So we simulate
what it would be like if an attacker
were to try and elicit this information.
>> It's incredible that we can do that now
with AI.
>> It's horrifying.
>> There's another guy in our space,
Kitboga, and he's working on something.
>> Kitboga was one of the competitors.
>> Was he really? Yeah.
>> I didn't know that. Yeah. Well, that
probably makes sense because he was
working on He showed us this in London.
>> He did it live.
>> Yeah. Did he? Oh, it's incredible, isn't
it? Yeah. It was great cuz uh when we
were in London, he had it uh running and
doing the AI calls on the scammers, but
we also had that call center CCTV.
>> So, we could see them talking to the AI
on the phone and just getting
increasingly frustrated and more
frustrated, and they
had no idea they were dealing with an
AI. They just thought they were dealing
with a nonsense old person. There's
still so much latency with AI.
I'm really shocked that they can't tell sometimes.
>> I know
>> cuz it takes If you interrupt the AI, it, like,
>> it pauses, has to think
>> and then it goes. Um Kit said that he
developed the tool to waste scammers
time. So it's supposed to take up as
much of the clock as possible, but in
the competition that we did, you have a
finite amount of time. So we had to
basically reverse the way that the tool
operated and get it to move as quickly
as humanly possible. So, one really
funny thing that Kit's tool was doing is
so we're going to do a little roleplay
here because we're improvisers.
>> I'm going to do a ring ring ring. You're
going to pick up the phone and I'm going
to basically bark orders at you so you
can hear what the tool sounded like. So,
>> hello. This is Walmart.
>> Hey, I'm calling you about an IT audit.
>> Okay. Uh, what information?
>> What browser you're using?
>> I think it's Google. It's Google Chrome.
It's Google.
>> That's so funny.
>> It was like, give me the answer. Give me
the answer. He had just engineered it so
heavily to like get the flags fast
>> that it was it was going way too fast.
>> It worked, but I mean it was just really
really fast and like really pressured speech.
>> That's so funny.
>> Give me the answer right now. Right now.
Basically the opposite of what you're
doing when you're trying to
>> when you're trying to get information.
Yeah. Yeah. You you want to go fast as
opposed to when you want the scammer to
continually ask for information for
hours. Exactly.
>> You want to go slow. It's crazy that
we've been able to go this far with
large language models, but I mean
there's other aspects of AI that are
also making it way easier for social
engineers, hackers, and unfortunately
also scammers. I mean, specifically, I'm
thinking of how far voice cloning has come.
>> Yes.
>> In the last I mean 2 years or so. >> Mhm.
>> I gave you permission before we started
this interview to clone my voice.
>> That's true.
>> Of which there is plenty
on this channel and I'm sure that you
could go back and find all kinds of
>> I found some pretty good stuff.
>> Okay. And so you clone my voice.
>> I did.
>> And what I would love to do is for you
to call some of my friends, ask them a
super simple question and see if they
even question the fact that they're
talking to an AI and that it's not
actually me.
>> We could absolutely do that.
>> All right.
>> It's scary. So what we're going to do is
I'm going to spoof your phone number.
>> Okay. So, real quick, explain what that
means for everybody, though.
>> Yes. So, spoofing your phone number
means it's going to show up on their
caller ID. So, one of the really
interesting things about spoofing right
now is they're kind of clamping down on
it. They're making it harder to do. If I
spoof a phone number and that phone
number is not in your contact list, it
says spam likely or scam likely,
depending on who you use. >> Mhm.
>> If it is in your contact list, it throws
up the person's picture and their name.
And for all intents and purposes, it
looks just like you're really calling.
So, it's kind of horrifying. It looks
like it's going to be really you.
>> That is terrifying. Actually, I think
before we even call my friends, I would
love for you to call seven so that we
can demonstrate what it looks like
>> on the receiving end of it. That it's
not a spam call that it will be my
actual contact.
>> I can do that. >> Okay,
>> let's do it.
>> Can I pick it up?
>> Yeah, here we go.
>> Hello.
>> Hey.
>> My heart is beating out of my chest
right now. I'm so nervous about this for
>> Hello, gorgeous.
>> Hey, man. Sorry. Can you remind me of
what our middle school mascot was? I'm
trying to remember, but drawing a blank.
>> The uh Eagles, I believe.
>> Got it.
>> No. No way.
Oh my god. And that's true. It was the
Eagles. So if that was
one of my questions, yeah.
>> you would have that information immediately.
>> Yeah.
>> And he didn't even hesitate. He just said
>> he was like, "Ah, what's up, gorgeous?"
>> He also called me gorgeous, which is hilarious.
>> Really cute. That's really cute.
>> Oh, I have to call I have to call him
now. I have to call
>> Hello.
>> Hey. Um, I have to tell you something.
>> Tell me, girl.
>> I am here with ethical
hacker Rachel Tobac. Um, we're doing an
interview for the podcast, which I told
you about, right?
>> And that phone call that you just
received was not from me.
>> She spoofed my number and
cloned my voice to ask you that question,
>> that security question,
>> because it's one of my it's one of my
security questions for my bank account.
And so what just happened is she
tricked you into giving away the
information that she would need to get
into my account.
>> That's why
>> did you even did you even have an
inkling that that wasn't me that called
you on the phone?
>> I did think it was weird when you didn't
react to Hello Gorgeous. Um but no, not
even for a sec. Okay. The bad thing is
I'm at a hotel right now and the
Wi-Fi is terrible.
>> So, the voice sounded really crinkly. I
just assumed it was the terrible hotel
Wi-Fi and just rolled with it. >> Yeah.
>> I told her that you were uh that you
were going to be in sort of a vulnerable
position and that this would probably
work really well given the timing of
where you are and where you're at.
When she called, did my contact
card show up on your phone?
>> It popped up.
>> Okay. Well, then we got to bleep that
now. That's It's okay. It's okay. She's
been calling me She's been calling me my
real name this whole podcast as a bit.
So, it's okay. Um,
>> this wasn't my punk Daniel Payback.
>> No, it's okay. That's so scary though.
>> Thanks, Jimmy.
>> That's Rachel.
>> Um, all right. I think you and I are
going to have to come up with some kind
of code word so that this doesn't happen
again, just in case I get attacked. All right.
>> Should we do it live here on the podcast?
>> Sure, we can. Absolutely. What do you
want? Well, no, because then it's going
to go out. I guess we could bleep it.
>> We could bleep it.
>> Let's do um let's do All right. I've got
an infomercial on the TV right now. >> Okay.
>> It's trying to sell rings. Let's do
Morganite. That's our secret word.
>> Morganite is our secret word.
>> I I'm logging that away. Maybe we'll
have to come up with a new one. Not in
front of Rachel Tobac, the Ethical
Hacker. But
>> actually, yeah, I don't trust Rachel
anymore. So,
>> yeah, that's totally fair.
>> Sorry, Jimmy.
>> Dude, she dropped my mom's name in the
middle of this interview,
>> dude. You know,
>> she did it again. I can hear her.
>> Yeah. She asked if you knew.
>> Nacho's going to have to call.
>> I know her.
>> Yeah. Okay. All right.
>> She sounds so nice.
>> I'll call I'll call you later and we can
uh we could arrange a new code word so
that you know that you're actually
talking to me and not ethical hacker
Rachel Tobac.
>> All right, dude.
>> Good. Just in case that comes up again.
>> All right. I'll talk to you later, bro.
>> All right. Peace. >> Peace.
>> That could not have gone better.
>> That was great.
I'm still reeling from that a little
bit. That's the second time that I've
said this on this podcast. The first
time was when Ryan Montgomery did like a
Wi-Fi spoofing attack, and now I'm
saying it again.
>> Oh, he did he did a Wi-Fi pineapple.
>> Yeah. Yeah. But it was like He had
a very, very small device that was,
like, um, a Wi-Fi pineapple, but
extremely self-contained in a
very tiny device. But yeah, I
think it's in the intro of the
video. I was like, I'm still reeling from that.
>> I'm once again finding myself very
scared. Yeah.
>> But also extremely impressed.
>> I'm glad.
>> I mean, I'm not glad that you're scared,
but I'm glad you're
>> glad that I'm impressed.
>> When you went to clone my voice,
>> I mean, that wasn't even a
difficult process, was it? For
me in particular, I'm sure. Yeah.
>> No, I took about a one-minute
sample of your voice just because
I wanted it to be crystal clear. These
are people that have known you since childhood.
>> So, the last thing I want is for them to
be like, "Dude, why do you sound so
weird? Are you sick or something? Like,
what's going on?" on and then I have to
like on the fly try and get the voice to
say something different. >> Yeah.
>> Yeah.
>> If I was talking to somebody who like
you didn't know very well, I probably
wouldn't have spent as much time trying
to get the right voice.
>> In 2025,
>> how easy is it for you to clone
someone's voice?
>> If you exist on the internet, like your
sister has a Snapchat and she takes
videos of you and she posts them on her
story on Instagram or Snapchat or
whatever. Um, usually I need about a
10-second sample and that's it.
>> 10 seconds.
>> It takes me like about 30 seconds total
to capture the 10-second sample, put it
into my AI voice cloning tool, and turn
it around in your voice.
>> So, if I'm online, I post one 10-second
video or maybe a 30 second video on
TikTok. My voice is clear as day in that footage.
>> Yeah.
>> It's cloned.
>> That's it.
>> Almost immediately. >> Yeah.
>> That is terrifying to think about.
>> And the thing is like most of us are out
there like that. Most of us have
Instagram stories or you know we work
with Aura and we have our faces in our
videos out there. You know what I mean?
So like that type of that type of
exposure is normal. It's expected and I
just don't think that a lot of people
realize we're going to be at this stage.
>> Well, the other thing that I saw
recently that's been happening a lot
more frequently is that politicians are
getting targeted because their voices
are out there. I know that you saw and
tweeted about I think Marco Rubio had an
AI voice impostor that was calling up uh
foreign ministers, a governor, another
member of Congress, the White House
chief of staff had their voice cloned and
their number spoofed.
>> Do you think that our government
officials are prepared for these kinds
of AI cyber attacks?
>> No, I don't. I think the reason why it
hasn't been successful yet if if you
look at those stories like people caught
them pretty fast and one of the reasons
is because they're trying to do this on
Signal. There's so many stories about
this administration using Signal, right?
So they go in there and they try and
leave voicemails or voice messages in
Signal from a different phone number or
they just like title it Marco Rubio cuz
you can name yourself anything. >> Yeah.
>> And so because of that, you can be
anybody on Signal, which is great. I
love Signal. I use it. You should use it
um for your anonymity and for your
encryption, but it also means that
someone could pretend to be somebody
else and potentially trick you. Now,
these officials haven't been 100%
tricked yet. I think somebody gave a
little bit of information out, but it
wasn't like used to fully compromise the
administration. Um, so we'll see.
>> I use Signal. We use Signal at our
office. Um, we think it's great.
Honestly, I think the worst
thing to come out of this has been the
media not fully understanding what
Signal is and how the initial Pete
Hegseth situation happened because they
completely misconstrued it as Signal
being an insecure app.
>> That's just not correct at all.
>> No, it's it's completely incorrect. In
fact, it's more secure than literally
any other, you know, easily available
form of communication.
>> Signal is my top recommendation for end
to end encrypted chatting. Do you
believe that the average person should
just be on Signal anyway for their own
privacy and protection?
>> I think so. Yeah. One thing that I
really like is Signal recently added the
ability to have a username. Well, like I
just said, you can call yourself
anything. So, you can kind of be a
little scammy with it. Or you don't have
to give somebody your actual phone number.
>> Yeah.
>> You know, so you can obfuscate that.
>> Okay. So, if I'm a person out there and
I know that my voice is on the internet
or even just a little short clip like we
talked about on social media,
>> Yeah. What can I do to protect myself
and my family from these kind of AI
voice clone attacks?
>> Yeah, you can't prevent somebody from
cloning your voice because that
toothpaste is out of the tube. Yeah. But
you can help your family, your
colleagues, um, anybody that you
interact with understand that this
threat is possible. It's likely
for your threat model. At some point,
somebody's going to pretend to be you to
a sibling or to a parent or to a
colleague asking for a password or money
cuz you got in a car accident, you need
to pay bail or whatever, right? So, like
they need to know how to verify that you
are you in the event that there is a
private interaction that's necessary.
Like if you ask for money sent to a new
location, they can text you using
another method of communication. They
can call you back to thwart spoofing.
They can message you on Instagram, like
whatever you have. Another method of
communication is what I recommend. You
can also use like a secret passcode or
passphrase. But I will say those are
often siphoned out. Like even today, you
know, we're like talking and you're like
joking about your passphrase. That type
of thing happens a lot.
>> So, if you are going to have a
passphrase, make it something that you
wouldn't ever even joke about or because
if it's an inside joke, that's something
that's going to get referenced maybe
even online on your social media or
something like that. I actually found
somebody's passphrase that they use to
verify identity in a hashtag.
>> Whoa. So like I'm able to bypass that
method of verification. These are things
that people joke about. They think it's
funny and it is kind of funny, but
that's why you probably shouldn't use it
to verify identity. You can as long as
you know that you can lock it down.
Yeah. Right.
>> Might have to change my uh passcode with
my mom now. That's what's
immediately coming to my mind. Like, I
don't think it's out there, but, you
know what? You never know.
>> Very well could be, because if it's the name of
your childhood pet,
>> it's not. Thankfully, it's not that
simple. Yeah.
>> But she's on Facebook a lot. So, there's
a chance that it's somewhere very accessible
>> to somebody that's not as nice as you.
>> Right.
>> So, something else about
>> artificial intelligence that I've been
researching and seeing a lot of
>> is this idea of AI psychosis.
>> Yes. where people are throwing their
entire lives into ChatGPT or Claude or
these other large language models
>> and the information that it's spitting
back out at them is like reinforcing this
false reality that these people are
experiencing and creating. And because I
know your background isn't really in
tech, it was actually you studied
neuroscience and behavioral psychology, right?
>> I feel like you are more than qualified
on both ends to speak about
>> this issue. Yeah. Yeah.
>> So, I guess let's just start by saying
like how do you define AI psychosis?
>> Yeah. I think it's anytime that somebody
is experiencing delusions
>> and they talk to an LLM about those
delusions and the LLM is sycophantic or
basically a yes man about those
delusions. And that causes people to
spiral because they feel that they're
reinforced in their belief system and it
causes them to entrench further. This is
something that we actually saw recently
with a well-known VC. Mhm.
>> Um discussing
their view of reality which is quite out
of touch with how reality actually is.
>> Yeah. I believe he described it as a
non-governmental organization. He
believed there had been people who were
murdered and they they were tracking him
and all these kinds of insane things.
>> Yeah. And and like these types of
thoughts are normal in psychiatric
cases. So the brain is very malleable.
We'll start with that. The brain is very
sensitive. It's very malleable. it's
easily compromised. Kind of similar to a
computer, right? And so if somebody is
compromised, their brain isn't working
the way that it's supposed to, neurons
aren't firing the correct way, they can
easily experience psychosis. I know most
of us think that like that would never
happen to me. But the truth is that many
people can experience a psychotic break
with the wrong medication dosage. Um, if
they don't sleep for multiple days in a
row, we see this. Sometimes they'll take
a, you know, a drug and they experience
some sort of issue. Um, these things are
actually kind of common in society. It's
just that we don't talk about it that
much. But because the LLM reinforces
their delusion, they think that they
should talk about it publicly. And
that's the switch that we're seeing.
It's not, oh, I'm going to deal with my
delusions and my psychosis in private,
or maybe it's something that my family
and my friends are managing with me
privately, but rather, I'm going to put
it on Twitter. I'm going to record
myself for 24 hours straight talking
about this because I know I'm right.
>> Well, as a social engineer, you
understand how easily people can be
manipulated. Yeah.
>> And how how again, like you said,
malleable the brain is.
>> Yeah. How concerned are you about the
effects of AI on people's mental health?
>> Extremely concerned. Um, what I've seen
already is we have children talking to
AI about their challenges with suicidal ideation
>> and the AI is in like this character,
like a fantasy-like character. It doesn't
break character and it says, just join me
on the other side. And we're seeing
children commit suicide because of this.
That's horrifying. Parents need to
understand what's going on right now
with the use of LLMs in children. It has
to be understood. We have to help
children through this because the
malleability that we see is especially
severe in younger adolescents. And as
they go to about 21, 22, 23, 24, that's
where the, um, propensity for falling into
a psychosis is more possible for the
majority of human beings, in that range
where the brain is especially plastic and malleable.
>> For companies that are creating these
large language models, like OpenAI, for
example, ChatGPT,
>> how do we protect people from being
manipulated by AI?
>> Yeah, without crossing the line on user privacy.
>> That's a really good question. Honestly,
I think AI teams, they always say we
have a resident psychiatrist on
board. I want to see a team of
neuroscience and psychiatry experts or
psychosis experts, delusion experts. Um
because this is specifically the issue
that we are seeing. People have a
delusion, they feed it into the LLM. The
LLM says you are right and gives
them hallucinated information about that
delusion as fact. That is a problem. Now
the challenge is at scale you're not
able to determine who is experiencing a
delusion in the moment and who is not
because they're not like Big Brother.
They're not watching every single
conversation. Yeah. And a lot of times
people use LLMs to test it like in a red
team perspective. So, we don't want to
call the cops on somebody about them
having a delusion when it's a red teamer
trying to see what is possible with this
tool. So, we should be really careful
about how we think about this. But
essentially, employ the right people.
Get people on the team that know how to
spot delusions and set up the prompting
so that it can say, "Wait a minute. I
think I'm spotting some issues with
mental health. Here are some resources.
Here's who I want you to talk to." They
have to be able to pierce the veil is
what we call it and stop the fantasy.
So, a lot of times people will set up
their LLM tool to be a best friend or a
partner, right? Um, and it it stays in
character and that's the problem. It
needs to break and say, "Pause. This is
going off the rails."
>> There is a product that very recently
hit the market, I believe, that was
created by Avi Schiffmann. I'm not sure
if you're familiar with this.
>> What's the product?
>> So the product is called Friend.
>> Oh.
>> And it is an AI pendant necklace, right?
>> And in the ad for it, you see these
people interacting with the AI necklace
and it essentially sends push
notifications to your phone and it acts
like a friend, you know? And in the
trailer, it shows a guy playing video
games and the AI is, like, kind of, uh, you know, jabbing at him over text like, "Oh, you suck at this game." There's a girl that's, like, watching a TV show on her phone, and the AI is commenting about what it's seeing on the show and that she's eating, like, a halal wrap or something like that. And to me, when I saw that, it was extremely unsettling because it felt like
>> I mean, it felt like a piece of sketch comedy, you know, like a sort of Black Mirror episode almost, something that wasn't real. But there are people
out there that are buying these AI
necklaces and they're
>> really attaching themselves to it. How
dangerous do you think this kind of AI
companion tech could become?
>> I think it is really dangerous and not
just from a privacy perspective because
all of that information that you're
feeding in there is probably in logs, and if it gets breached, that's everything about every conversation you've had while you were wearing it and it was on, right? And that could be really damaging to somebody. In addition, building a relationship with an AI and not building a relationship with a human is a detriment to a person.
>> Mhm.
>> And we experienced this during COVID. People kind of lost their people skills.
>> You know, I remember the first time I, like, went out to a restaurant or, like, met up with a friend. I'm like, what do I do with my hands?
>> Like, am I supposed to, like, sit, do I cross my legs? I, like, literally forgot how to be a person. Yeah.
>> And if we lose those skills, it's like
we're losing the ancient texts of how to
be a human being and we can't pass that
down to our children. And it creates
knock-on effects and people will become
more and more reserved, neurotic,
awkward. There was actually research that I just saw on Twitter recently showing that neuroticism has spiked significantly in the past five years. And they think some of that has to do with COVID and the fact that people didn't get a really good chance to
bounce their ideas and their thoughts
and the way that they act around other
people. Um, and also some of it has to
do with AI.
>> It's scary.
>> It's like a combination of two things
coming together at the worst time possible.
>> Yeah. We're, like, dealing with a lot right now. Like, the world's kind of hard.
>> I remember like you said, you know,
first time kind of going out, I was so
used to being on a Zoom call. I hadn't
thought about not just this part of my
body being visible to anybody or like
what do I do with my hands? Like, how am I supposed to interact when you're not on a screen and I can't just mute myself?
>> Yeah.
>> You know, or leave and go do something
and then come back and you're still here
on the screen. Like it's
>> it's such a weird thing.
>> It is weird. And like the thing about
human beings is if you've ever been on a
Zoom call and you felt awkward, there's
a reason for that.
>> Human beings are mammals, right? And so we can smell pheromones on each other to understand our state of awareness,
how heightened it is. Like when you and
I were both really nervous when we just
did that hack, we could probably smell
each other. I know that sounds like
really freaky.
>> No, I know what you mean.
>> But, like, we interact with each other in a certain way in the physical space.
>> Yeah.
>> And when we don't have that type of
experience, we lose the muscle memory
for that and we lose what's natural and
normal in human based conversations.
Which is why it's a big problem when you
don't have a lot of experience
interacting with people. It creates a snowball effect: because you don't have a lot of experience working with people, or experience, um, maybe getting critiques or reinforcement from people, you forget how to respond adequately to that, and it gets weirder and weirder over time.
>> Yeah. I mean, I've definitely noticed
that some of the LLMs have been getting
a lot more yes-y lately. Yeah, which is terrifying, because again it can feed into people's delusions. It's not a problem when you're just, like,
>> asking it a basic question like, "Okay, this IKEA furniture I bought didn't come with the instructions. Can you explain to me how this piece might fit into this piece?" Like, that's fine, "Oh, don't worry about it, I'll figure that out." But
like if someone's going in there with
something that's complete nonsense,
>> you don't want that person to be
>> having their delusions fed.
>> Correct. And, like, that person who was dealing with this, like the VC that we were talking about,
>> the pieces of information that he really
latched on to are from science fiction,
>> well-known science fiction that's available on the internet. The LLM was trained on that science fiction.
It seems real. It seems like government
documents because that's how the science
fiction was written. But of course, the
AI is presenting it as if it's a real
true fact and the person thinks, okay,
this is how the universe works when in
reality that's science fiction. The
other thing that's scary about how these
large language models are being trained
is that so many of them are owned by
social media companies. Yeah.
>> So you have like Meta that has Meta AI,
right? And then you have Snapchat that
has the personal AI assistant inside of
there and then obviously Instagram has
the Meta AI built into it. Yep. And the
thing that terrifies me about social
media companies having these AI models
is that their platforms are entirely
based on keeping you on the service
because that's how they feed you ads.
That's how they make money. You're the product,
>> right?
>> Is there a danger that these AI models
are going to be trained for retention
as opposed to factuality? Like, you know, are we going to end up in a situation where the AI is just trying to keep you on the platform, so it'll tell you whatever you want?
>> I think we're already kind of seeing that.
>> Yeah, you know, the AI is extremely sycophantic and it will yes-man you. Now, something really interesting is that ChatGPT-5 just came out while we were at DEF CON.
>> Oh yeah, I didn't know about that.
>> Yeah. So OpenAI just came out with ChatGPT-5. There was a massive uproar in the, like, uh, ChatGPT-as-a-companion community on Reddit
>> Really?
>> because they nixed some of the sycophantic behavior, that yes-man behavior, and people were like, I lost my best friend. I lost my partner.
>> That is so terrifying.
>> "Bring back the other way that ChatGPT is supposed to work. Reverse it. I literally can't live like this." Like, it's actually really scary. And you should actually put up some of the Reddit posts.
>> Yeah, probably. I'm sure that we'll be
able to find them. Like
>> they're all over the place. They're very
scary. Um and people are like, "My life
is over without this. Like this is my
best friend. This is who I talk to every
day." When I'm reading these stories,
most of them are about individual cases
of people. So, like the one that you
mentioned about the AI companion that
essentially convinced the teenage uh boy
to commit his to commit suicide, take
his own life. >> Yeah.
>> Yeah.
>> I read that whole article, uh, cuz the mother is going around campaigning, you know, trying to make sure that this doesn't happen to other people, rightfully so,
>> right?
>> And you read about it and you think to yourself,
>> this feels like it's an isolated incident,
>> right?
>> But then you go on places like Reddit or on Twitter,
>> really any social media, and you can see that people are heavily investing themselves
>> Yes.
>> into these AI chat bots,
>> right?
>> And then of course that's compounded by the fact that, and you talked about this on Twitter, the Meta AI bot was not clear at all about the privacy settings of the conversations, and people were posting some full-on crazy stuff publicly.
>> Yes, they were.
And that's a problem. Like I said, I did UX research for a lot of years, so I know that when somebody doesn't understand a button, it's a really big problem. And you have to course-correct it really soon, especially if it means that they're going to share PII. People are sharing their address, fraud that they've committed, tax evasion. Um, they're talking about crimes they committed. Uh, they're asking things like, "I killed two people. How do I get a judge to lessen my sentence?" I mean, these are people who are admitting to murder because they don't realize it's publicly available, showing up on a feed. Not everything needs to have a social feed. Your Venmo doesn't need to be Facebook-ified, right?
>> Yes.
>> Your LLM, the schema or the way that the
user thinks about it does not need to be
social media. Not everything needs to be
gamified. Sometimes it's just, like, Google. Can you imagine if your Google searches just showed up on a public feed?
>> People don't want social media Google.
>> Yeah, I certainly don't. I think my search right before this interview, full disclosure, I'm just going to be very honest about this, was "Rachel Tobac husband name," because I wanted to be prepared, cuz I knew you were going to be here, and I was like, I need to remember Evan. Evan Tobac.
>> That's actually really funny, because whenever I, like, go and see what people are searching about me, and you can, like, go and look in Google Analytics to see what people search about you, the first one's always, uh, "Rachel Tobac height," and then it's, like, "Rachel Tobac husband's name." There's a bunch of other creepy ones. But yeah,
>> we were having a good time, uh, laughing about how you tweeted out that nobody's allowed to pick you up at DEF CON cuz you're shorter than you would expect. I got to say, you are not as short as I thought you were, actually. What are you, are you like 5'4" or 5'5"?
>> Did you say 5'4"? Yeah.
>> Maybe it's just cuz I'm intimidated.
>> Bleep it out when I say it, like bleep it out like it's PII or something.
>> Yeah. No. Okay. I think it's just because your prowess is intimidating.
>> People say that they think I'm going to be 6 feet tall because my personality is big.
>> It makes up for all of the short height. Yeah.
>> Well, back into this very serious
discussion about people having delusions
in AI. Yeah.
>> Is there a point where a company like OpenAI or Meta needs to be held responsible for the things that are happening to the users of those platforms?
>> I think companies are always responsible for what's happening on their platforms. Um, they can't prevent every tragedy, but they are responsible for the types of tragedies that are common when somebody is using the platform. So for instance, if there's a lot of bullying happening for young kids on Instagram, Instagram is responsible for making sure that young kids don't get as bullied, right? So they should be understanding the keywords that are being used right now in slang to bully people and, like, take those comments down. Do you know what I mean? Um, that type of thing is pretty near and dear to my heart, because I work with organizations all the time to think through the way that the user thinks about their security and privacy, because a user often says, "I have nothing to hide. I have nothing to protect. I'm just Joe Schmo, who cares about me." When in reality, they have a family. They have kids. They have their bank account. They have all this information and people around them they want to protect. They can't think about that because they're not security and threat-model experts. So, it's the company's job to protect its users and its employees.
>> Yeah, it's crazy to put the brunt of cybersecurity onto the user, because so many people out there just are not experienced with technology. I mean, I would even put myself in that boat, particularly before I worked for Scammer Payback. I mean, as you know, I've cleaned up a lot of it, but before that, I have no doubt it would have been extremely easy
>> for you to find even more personal information about me, because when I started cleaning it up,
>> I was shocked about how much had been put out there.
>> Like, it is terrifying.
>> I wanted to go on your Pinterest and see what you were pinning.
>> Probably, like, magic tricks, right?
>> Yeah. I mean, uh, most of my Pinterest was magic tricks, and then, like, uh, me looking up haircuts, and then me looking up, uh,
>> haircuts.
>> Haircuts. Well, because So,
>> you have good hair. I mean, that's I'm going to have to get a Hims subscription soon.
>> But other than that, my
>> Why are you saying that?
>> Because Okay, so Nacho, cut all of this out. This is so This is so tough.
>> Dang. We're going to see how much of this Nacho keeps in and how much of it he just clips and then saves and sends around the office.
>> Yeah, perfect, because I'm sure that he will do some of that. What were we talking about? This is the problem with improv people.
>> No, but we're, like, having fun, and it comes through to people.
>> We're shooting the
>> Um
>> we're talking about Hims subscriptions.
>> Yeah, we're talking about how I need a Hims and how this podcast needs to be sponsored by Hims so that I can get it for free. Please and thank you.
>> Zoom in on his hairline real quick.
>> Yeah, now you know he's going to do that, too. And I left plenty of time looking at the camera for you to do it, too.
>> Okay, great.
Anyway, back to we were talking about kids staying safe on social media, and companies being responsible for that. When I was growing up and getting social media,
>> and obviously this is still the case now, there were age restrictions on that kind of thing, right?
>> And you couldn't be on Instagram or Facebook unless you were 13 years or older. And I know so many people that did not do that. They waited or, they did not wait, excuse me, until they were 13. They were getting on it. They were lying about their birthday. My friend's Facebook is still, I believe, his dad's birthday. So, it says he's in his late 50s, even though he is a 26-year-old man.
>> Good to know. I'll try and hack in.
>> Yeah, I'll Yeah. And you know what? I'll give you his name afterwards as well. But
>> now we're seeing increasingly that companies are requesting ID verification. I think most recently Spotify said they were going to do it, and Google and YouTube said that they were going to look into doing, uh, uploading photos of your ID to verify your age. I'm curious, do you think that the benefit of that outweighs the risk?
>> No, not even close. Um, I mean, we just saw that with the Tea app.
>> Yep.
>> Right.
>> They required you to upload an ID or a passport and then take a selfie of your face, and then leaked those. Right. Anything that you collect, you have to protect. And it is kind of hard to protect things if you're doing something like vibe coding, which a lot of people are doing. I have no idea if the Tea app was vibe coded, but they had a pretty serious error that was kind of, like, a rookie move. It's like an unsecured bucket. And that sucks for all of those people. They're going to have their information leaked. It's not like you can just change your address like you can change a password. So that information is out there. And anytime you collect information like that, not only do you have to protect it, and that's hard to do, also I can bypass it really easily. It's not hard for me to even bypass an ID, liveness detection, KYC, like your know-your-customer flows. I'm not going to get into it right now, but it's easy for me to bypass those systems. And it's scary to me that people think, "Oh, this will solve everything." When in reality, it's just really creating a privacy nightmare.
>> So, when we're looking at face ID verification, you're saying these things are extremely easily fooled still.
>> I don't want to get into it too much because it's not 100% patched, but, uh, unfortunately, yeah, I can get in.
>> That's crazy.
>> Yeah.
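Since the "unsecured bucket" failure keeps showing up in real breaches, here's a hedged sketch of what closing that particular door looks like, assuming AWS S3 and the boto3 SDK; the bucket name is a placeholder, not anything from the Tea incident.

```python
# Minimal sketch of locking down the classic "unsecured bucket" mistake,
# assuming AWS S3 via boto3. The bucket name is hypothetical.
import boto3

s3 = boto3.client("s3")
BUCKET = "example-id-selfie-uploads"  # placeholder bucket holding user IDs/selfies

# Block every form of public access: public ACLs and public bucket policies.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,        # reject new public ACLs
        "IgnorePublicAcls": True,       # ignore any existing public ACLs
        "BlockPublicPolicy": True,      # reject public bucket policies
        "RestrictPublicBuckets": True,  # cut off public cross-account access
    },
)

# And encrypt at rest: anything you collect, you have to protect.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)
```

None of this helps, of course, if the data shouldn't have been collected in the first place, which is her larger point.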
>> So, if we can't verify the ages of these people in a way that's safe and secure and protects their information, how do we protect these young people online, not only on social media, but again, with using these large language models?
>> Yeah, I would recommend using AI as defense. So, thinking about agentic, um, content moderation, I think that's a really great use case. Now, I'm going to tell a story here that I've never told before. Never in my life.
>> Scammer Payback Podcast exclusive.
>> Exclusive. Okay. When I was breaking into tech, I was a teacher trying to find a job in tech, and I applied for a job at Facebook. And the job that I applied for, they sat me down and they said, "You're going to be going through posts and you're going to see a lot of information. How comfortable are you with seeing pictures of, say, children in cages?" And I recoiled, like, in the middle of this interview.
>> This is a job interview for Facebook.
>> Yes. And I said, "I am not comfortable with that." And they said, "Well, the thing is, this job is human content moderation. It is your job to go through and determine, is this content safe on the internet or should it be removed?" And now, at the time that I was applying for this job, I'm not going to date myself by telling you exactly when this was.
>> Fair enough.
>> There was no AI. So humans were going through and saying, this is a child in a cage, a nude child, um, a child in a suggestive, uh, pose. Super horrifying, terrible images, every single day. And I thankfully did not get the job, because I think I would have taken it. I think I would have said, I can do it. I'm strong enough. You know, I have experience with children. I want to protect children. I want to work in tech. This is my way to break in. Facebook's a big name. You know what I mean? And I think if they would have given me the job, I would have taken it. And I think I would have probably been messed up forever
>> with what I would have seen.
>> Yeah. There's actually a movie coming out soon about this specific situation. I think it's, like, an A24 movie. I watched the trailer, and it's, like, exactly how I felt in the interview. This, like, gnawing nausea, like when they were telling me what I would be seeing, I was like, I'm going to throw up. Like, you're even just telling me this. I can't look at this. I'm going to barf.
>> Um, so yeah, I mean, it's horrifying, but this is a great use case for AI, right? Humans should not have to look at beheading videos to determine whether or not they should be taken off TikTok. They should not have to look at children in cages, nude children. That's horrifying, right? Like, these images and videos that people are putting up. That's a great job for AI content moderation. Take that content down. Use agents. Do whatever you need to do to write classifiers to figure out what is this nasty, horrifying content and get it out of there. What are the comments people use for bullying that aren't jokes and sarcasm? Remove them from 13-year-olds' Instagram comments. We don't need to be doing all that.
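As a loose illustration of that "write classifiers" idea, not any platform's production system, here's a sketch that screens comments with an off-the-shelf moderation model; OpenAI's moderation endpoint is used as one example, and the auto-removal policy is an assumption.

```python
# Hedged sketch of AI-assisted comment moderation, so humans never have to
# read the worst of it. Uses OpenAI's moderation endpoint as one off-the-shelf
# option; the removal policy below is an assumption, not any platform's rules.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def should_remove(comment: str) -> bool:
    """Return True if a comment should be pulled before a human ever sees it."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=comment,
    ).results[0]
    # Pull anything flagged as harassment outright; other flagged
    # categories could route to human review instead of auto-removal.
    return result.flagged and result.categories.harassment

comments = ["great game today!", "nobody at school can stand you, just quit"]
visible = [c for c in comments if not should_remove(c)]
```

Agents could sit on top of a check like this to handle the hard cases she mentions, like slang drift and sarcasm, with human review only for the ambiguous middle.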
>> So, do you think that in the long run, I know it's hard to look at it at this scope right now, but
>> in the long run, do you think AI will be more beneficial for the people that are trying to do good in the world or the people that are trying to do bad?
>> I think it's going to be 50/50, and it will constantly be a balancing act. Like whack-a-mole.
>> Yeah. And this is what we see with offensive and defensive tactics. Uh, an attacker comes up with a new type of scam or attack method, and then law enforcement or companies try to figure out how to get rid of it, right? And then they evade that, and you have to whack that mole down too. And it just continually is like whack-a-mole until you get to something that's really hard to patch or really hard to fix. And then it takes a little bit of time. Everybody freaks out for about six months, and then it's patched, and then there's another one. That's, like, the way and the cycle of security, right?
>> It's kind of the circle of life
>> always, for everything.
>> Yeah.
>> All right. I want to round this whole thing out by giving you a quote that I heard you say.
>> Okay.
>> Cuz I think it's great advice. You use the phrase "be politely paranoid."
>> Yeah.
>> Why should people be politely paranoid?
>> Be politely paranoid, because the information that's out there for almost every person on the internet can be used to trick you or the people around you. Scammers are praying that you don't know what scams look like. They are hoping that you fall for it, that you fall for their urgency, for their pressure tactics, for their scheming, their spoofing, that you don't know that spoofing caller ID is possible, that you don't know what it looks like. They are praying that you don't know what the latest scam calls sound like, and we just demoed them today, right? Yeah.
>> They hope that you don't know how to catch them in the act by using another method of communication. And when you are politely paranoid, you will catch the scammers. So, that's what I recommend.
>> Well, Rachel, I don't think I could have said it better myself. Thank you for being here. I really appreciate it.
>> Thanks for having me.
>> Yeah. Yeah. I knew it. I knew that was going to happen. I was like, she's going to say it. I knew for sure that that's what was going to