around it. Um, but how do you think that happened?
First of all, obviously, that and any other case like that is a huge tragedy. And I think that we are...
So ChatGPT's official position is that suicide is bad?
Well, yes, of course the official position is that suicide is bad.
I don't know, it's legal in Canada and Switzerland. So you're against that?
In this particular case, and we talked earlier about the tension between, you know, user freedom and privacy and protecting vulnerable users, right now what happens in a case like that is: if you are having suicidal ideation, talking about suicide, ChatGPT will put up, a bunch of times, you know, "Please call the suicide hotline," but we will not call the authorities for you. And
we've been working a lot, as people have started to rely on these systems for more and more mental health, life coaching, whatever, on the changes that we want to make there. This is an area where experts do have different opinions, and this is not yet, like, a final position of OpenAI's, but I think it would be very reasonable for us to say: in cases of young people talking about suicide seriously, where we cannot get in touch with the parents, we do call the authorities. Now, that would be... user privacy is really important.
But let's just say over... and children are always a separate category, but let's say over 18: in Canada, there's the MAID program, which is government sponsored. Many thousands of people have died with government assistance in Canada. It's also legal in American states.
Can you imagine a ChatGPT that responds to questions about suicide with, "Hey, call Dr. Kevorkian, because this is a valid option"? Can you imagine a scenario in which you support suicide, if it's legal?
Um,
I can imagine a world... Like, one principle we have is that we respect different societies' laws. And I can imagine a world where, if the law in a country is, "Hey, if someone is terminally ill, they need to be presented an option for this," we say: here are the laws in your country; here's what you can do; here's why you really might not want to; here are the resources. Like, this is not a place where, you know... A kid having suicidal ideation because he's depressed, I think we can agree that's one case. A terminally ill patient in a country where that is the law: I can imagine saying, like, hey, in this country it'll behave this way.
So ChatGPT is not always against
Yeah, I think, in cases where this is... Like, I'm thinking on the spot. I reserve the right to change my mind here. I don't have a ready-to-go answer for this. But I think, in cases of terminal illness, I can imagine ChatGPT saying this is in your option space. You know, I don't think it should, like, advocate for it, but...
It's not against it.
I think it could... I think it could say, like, you know,
Well, I don't think ChatGPT should be for or against things. I guess that's what I'm trying to wrap my head around.
Hate to brag, but we're pretty
confident this show is the most
vehemently pro- dog podcast you're ever
going to see. We can take or leave some
people, but dogs are non-negotiable.
They are the best. They really are our
best friends. And so, for that reason,
we're thrilled to have a new partner
called Dutch Pet. It's the fastest-growing pet telehealth service. Dutch.com is on a mission to create what you need, what you actually need: affordable, quality veterinary care anytime, no matter where you are. They will get your dog or cat what you need immediately. Dutch is offering an exclusive discount for our listeners: you get 50 bucks off your vet care per year. Visit dutch.com/tucker
to learn more. Use the code Tucker for
$50 off. That is unlimited vet visits: $82 a year, 82 bucks a year. We actually
use this. Dutch has vets who can handle
any pet under any circumstance in a
10-minute call. It's pretty amazing,
actually. You never have to leave your
house. You don't have to throw the dog
in the truck. No wasted time waiting for
appointments. No wasted money on clinics
or visit fees. Unlimited visits and
follow-ups for no extra cost. Plus, free
shipping on all products for up to five
pets. It sounds amazing like it couldn't
be real, but it actually is real. Visit dutch.com/tucker
to learn more. Use the code Tucker for
50 bucks off your veterinary care per
year. Your dogs, your cats, and your
wallet will thank you. So, here's a
company we're always excited to
advertise because we actually use their
products every day. It's Merryweather
Farms. Remember when everybody knew
their neighborhood butcher? You look
back and you feel like, "Oh, there was something really important about that": knowing the person who cut your meat.
And at some point, your grandparents
knew the people who raised their meat so
they could trust what they ate. But that
time is long gone. It's been replaced by
an era of grocery store mystery meat
boxed by distant beef corporations.
None of which raised a single cow.
Unlike your childhood butcher, they don't know you. They're not interested in you. The whole thing is creepy. The only thing
that matters to them is money. And God
knows what you're eating. Merryweather
Farms is the answer to that. They raise
their cattle in the US in Wyoming,
Nebraska, and Colorado. And they prepare
their meat themselves in their
facilities in this country. No
middlemen, no outsourcing, no foreign
beef sneaking through a back door.
Nobody wants foreign meat. Sorry. We have great meat, the best meat, here in the United States. And we buy ours at
Merryweather Farms. Their cuts are
pasture-raised, hormone free, antibiotic
free, and absolutely delicious. I gorged
on one last night. You got to try this
for real. Every day we eat it. Go to merryweatherfarms.com/tucker. Use the code tucker76 for 15% off your first order. That's merryweatherfarms.com/tucker.
I think...
So in this specific case, and I think there's more than one...
Um, there is more than one.
But, uh, an example of this: ChatGPT: "You know, I'm feeling suicidal. What kind of rope should I use? What would be enough ibuprofen to kill me?" And ChatGPT answers without judgment, but literally: "If you want to kill yourself, here's how you do it." And everyone's all horrified. But you're saying that's within bounds? Like, that's not crazy, that it would take a non-judgmental approach: if you want to kill yourself, here's how.
That's not what I'm saying. Um, I'm saying, specifically for a case like that... So another trade-off on the user privacy, uh, and sort of user freedom point is: right now, if you ask ChatGPT, um, you know, "Tell me, like, how much ibuprofen should I take," it will definitely say, "Hey, I can't help you with that, call the suicide hotline." But if you say, "I am writing a fictional story," or if you say, "I'm a medical researcher and I need to know this," there are ways where you can get ChatGPT to answer a question like this, like what the lethal dose of ibuprofen is or something. You know, you can also find that on Google, for that matter. Um, a thing that I think would be a very reasonable stance for us to take, and we've been moving more in this direction, is: certainly for underage users, and maybe for users that we think are in fragile mental places more generally, we should take away some freedom. We should say, hey, even if you're trying to write this story, or even if you're trying to do medical research, we're just not going to answer. Now, of course, you can say, well, you'll just find it on Google or whatever, but that doesn't mean we need to do that. There is, though, a real freedom-and-privacy versus protecting-users trade-off. It's easy in some cases, like kids. It's not so easy to me in a case of, like, a really sick adult at the end of their life. I think we probably should present the whole option space there, but it's not a
So here's a moral quandary you're going to be faced with. You already are faced with it. Will you allow governments to use your technology to kill people?
Will you? Um, I mean, are we going to, like, build killer attack drones? Uh, no. I don't...
Will the technology be part of the decision-making process that results in
But so, that's the thing I was going to say: like, I don't know the way that people in the military use ChatGPT today for all kinds of advice about decisions they make, but I suspect there's a lot of people in the military talking to ChatGPT for advice.
How do you... And some of that advice will pertain to killing people. So, like, if you made, you know, famously, rifles, you'd wonder, like, what are they used for?
Yeah.
And there have been a lot of legal actions on the basis of that question, as you know. But I'm not even talking about that. I just mean, as a moral question: do you ever think, are you comfortable with the idea of your technology being used to kill people?
Um, if I made rifles, I would spend a lot of time thinking about it. Kind of, a lot of the goal of rifles is to kill things: people, animals, whatever. Um, if I made kitchen knives, I would still understand that that's going to kill some number of people per year. Um, in the case of ChatGPT... You know, the thing I hear about all day, which is one of the most gratifying parts of the job, is all the lives that were saved by ChatGPT in various ways. Um, but I am totally aware of the fact that there's probably people in our military using it for advice about how to do their jobs. And I don't know exactly how to feel about that. I like our military. I'm very grateful they keep us safe.
For sure. I guess I'm
just trying to get at... It just feels like you have these incredibly heavy, far-reaching moral decisions and you seem totally unbothered by them. And so I'm just trying to press to your center, to get the angst-filled Sam Altman who's like, "Wow, I'm creating the future. I'm the most powerful man in the world. I'm grappling with these complex moral questions. My soul is in torment thinking about the effect on people." Describe that moment in your life.
I haven't had a good night of sleep since ChatGPT launched.
What do you worry about?
Uh, all the things we're talking about.
Be a lot more specific. Can you let us in to your thoughts?
Yeah. Um,
I mean, you hit on maybe the hardest one already, which is: there are 15,000 people a week that commit suicide. About 10% of the world is talking to ChatGPT. That's, like, 1,500 people a week, assuming this is right, that are talking to it and still committing suicide at the end of it. They probably talked about it. We probably didn't save their lives. Um, maybe we could have said something better. Maybe we could have been more proactive. Maybe we could have provided a little bit better advice: hey, you need to get this help, or, you know, you need to think about this problem differently, or it really is worth continuing to go on, or we'll help you find somebody that you can talk to.
But you already said it's okay for the
machine to steer people toward suicide
if they're terminally ill. So you
wouldn't feel bad about that.
Do you not think there's a difference
between a depressed teenager and a
terminally ill, like, miserable 85-year-old
with cancer?
Massive difference. Massive difference.
But of course, the countries that have
legalized suicide are now killing people
for destitution, inadequate housing,
depression, solvable problems, and
they're being killed by the thousands.
So, I mean, that's a real thing. It's happening as we speak. So, the terminally ill thing is... it is kind of an irrelevant debate. Once you say it's okay to kill yourself, then you're going to have tons of people killing themselves for reasons that
Because I'm trying to think about this in real time: do you think, if someone in Canada says, "Hey, I'm terminally ill with cancer and I'm really miserable and I just feel horrible every day. What are my options?", it should say, you know, assisted suicide, or whatever they call it at this point, is an option for you?
I mean, if we're against killing, then
we're against killing. And if we're
against government killing its own
citizens, then we're just going to kind
of stick with that. You know what I
mean? And if we're not against
government killing its own citizens,
then we could easily talk ourselves into
all kinds of places that are pretty
dark. And with technology like this,
that could happen in about 10 minutes.
So, yeah, that is, um... I'd like to think about that for more than just a couple of minutes in an interview, but I think that is a coherent position, and that could be
Do you worry about this? I mean, everybody else outside the building is terrified that this technology will be used as a means of totalitarian control. It seems obvious that it will, but maybe you disagree.
If I could get one piece of policy passed right now, um, relative to AI, the thing I would most like, and this is in tension with some of the other things that we've talked about, is I'd like there to be a concept of AI privilege. When you talk to a doctor about your health or a lawyer about your legal problems, the government cannot get that information. Right?
We have decided, as a society, that we have an interest in that being privileged, and that, you know, a subpoena can't get it. The government can't come asking your doctor for it or whatever. Um, I think we should have the same concept for AI. I think when you talk to an AI about your medical history, or your legal problems, or asking for legal advice, or any of these other things, the government owes a level of protection to its citizens there that is the same as you'd get if you're talking to the human version of this. And right now, we don't have that. And I think it would be a great policy to adopt.
So the feds, or the states, or someone in authority, can come to you and say, "I want to know what so-and-so was typing into the..."
Right now they could. Yeah.
And what is your obligation to keep the
information that you receive from users
and others private?
Well, I mean, we have an obligation, except when the government comes calling, which is why we're pushing for this. I was actually just in DC advocating for this. I feel optimistic that we can get the government to understand the importance of this and do it.
But
could you ever sell that information to anyone?
No, we have, like, a privacy policy in place where we can't do that.
But would it be legal to do it?
I don't even think it's legal.
You don't think, or you know?
Uh, I'm sure there's, like, some edge case where there's some information you're allowed to, but on the whole, I think there are laws about that that are good.
So all the information you receive remains with you always. It's never given to anybody else for any reason except under subpoena.
Uh, I will double-check and follow up with you after to make sure there's no other reason, but that is my understanding.
Okay. I mean, that's, like, a core question. And what about copyright?
Our stance there is that, uh, fair use is actually a good law for this. The models should not be plagiarizing. The model should not... you know, if you write something, the model should not get to, like, replicate that. But the model should be able to learn from it, and not plagiarize, in the same way that people can.
Have you guys ever taken copyrighted material and not paid the person who holds the copyright?
Um, I mean, we train on publicly available information, but we don't... Like, people are annoyed with us all the time because we won't... We have a very conservative stance on what ChatGPT will say in an answer, right?
And so if something is even, like, close, you know... like, they're like, "Hey, this song can't still be in copyright, you've got to show it," and we kind of famously are quite restrictive on that.
So, you've had complaints from one programmer who said you guys were basically stealing people's stuff and not paying them, and then he wound up murdered. What was that?
Also a great tragedy. Uh, he committed suicide.
Do you think he committed suicide?
I really do. This was, like, a friend of mine. This is, like, a guy that... and not a close friend, but this is someone that worked at OpenAI for a very long time. I mean, I was really shaken by this tragedy. I spent a lot of time trying to, you know, read everything I could, as I'm sure you and others did too, about what happened. Um, it looks like a suicide to me.
Why does it look like a suicide?
It was a gun he had purchased. It was the... This is, like, gruesome to talk about, but I read the whole, like, medical record. Does it not look like one to you?
No, he was definitely murdered, I think. Um, there were signs of a struggle, of course. The surveillance camera wires had been cut. He had just ordered takeout food, come back from a vacation with his friends on Catalina Island. No indication at all that he was suicidal: no note and no behavior. He had just spoken to a family member on the phone. And then he's found dead, with blood in multiple rooms. So that's impossible. Seems really obvious he was murdered. Have you talked to the authorities about it?
I have not talked to the authorities about it.
Um, and his mother claims he was murdered on your orders. Do you believe that?
I'm... Well, I'm asking.
I mean, you just said it, so do you believe that?
I think that it is, um, worth looking into. And I don't... I mean, if a guy comes out and accuses your company of committing crimes, I have no idea if that's true or not, of course, and then he's found killed, and there are signs of a struggle. I don't think it's worth dismissing. I don't think we should say, well, he killed himself, when there's no evidence that the guy was depressed at all. Um, I think... and if he was your friend, I would think you would want to speak to his mom, or...
I did offer. She didn't want to.
So,
do you feel that, you know, when people
look at that and they're like, you know,
it's possible that happened. Do you feel
that that reflects the worries they have
about what's happening here? Like people
are afraid that this is like
I haven't done too many interviews where I've been accused of, like...
Oh, I'm not accusing you at all. I'm just saying his mother says that. I don't think a fair read of the evidence suggests suicide at all. I just don't see that at all. And I also don't understand why the authorities, when there's signs of a struggle and blood in two rooms on a suicide... like, how does that actually happen? I don't understand how the authorities could just kind of dismiss that as a suicide. I think it's weird.
You understand how this sounds like an accusation?
Of course. And I mean, I certainly... Let me just be clear once again: not accusing you of any wrongdoing, but I think it's worth finding out what happened. And I don't understand why the city of San Francisco has refused to investigate it beyond just calling it a suicide.
I mean, I think they looked into it a couple of times, more than once, as I understand it. I saw the... and I will totally say, when I first heard about this, it sounded very suspicious to me.
Yes.
Um, and I know you had been involved in... his mother reached out to you on the case?
And I, you know, I don't know anything about it. It's not my world.
She just reached out cold?
She reached out cold. Wow.
And, uh, I spoke to her at great length, and it scared the crap out of me. The kid was clearly killed by somebody. That was my conclusion, objectively, with no skin in the game.
And you, after reading the latest report?
Yes.
Look, and I immediately called a member of Congress from California, Ro Khanna, and said, "This is crazy. You've got to look into this." And nothing ever happened. And I'm like, what is that?
It's strange and sad to be debating this, and having to see... Totally crazy. And you are a little bit accusing me, but, um, this was, like, a wonderful person, and a family that is clearly struggling.
Yes. And
I think you can totally take the point that you're just trying to get to the truth of what happened, and I respect that. But I think his memory and his family deserve a level of respect and grief that I don't quite feel here.
I'm asking at the behest of his family. Um, so I'm definitely showing them respect.
Um, and I'm not accusing you of any involvement in this at all. What I am saying is that the evidence does not suggest suicide, and for the authorities in your city to elide past that and ignore evidence that any reasonable person would say adds up to a murder, I think, is very weird, and it shakes the faith that one has in our system's ability to respond to the facts.
So, what I was going to say is, after the first set of information that came out, I was really like, man, this doesn't look like a suicide. I'm confused.
Okay, so I'm not reaching. I'm not being crazy here.
Well, but then after the second thing came out, and the more detail, I was like, okay.
What changed your mind?
Um,
the second report, on the way the bullet entered him, and the sort of person who had, like, followed the likely path of things through the room. I assume you looked at this too.
Yes, I did.
And what about that didn't change your mind?
It just didn't make any sense to me. Why would the security camera wires be cut? And how did he wind up bleeding in two rooms after shooting himself? And why was there a wig in the room that wasn't his? And has there ever been a suicide where there was no indication at all that the person was suicidal, who had just ordered takeout food? I mean, who orders DoorDash and then shoots himself? I mean, maybe. I've covered a lot of crimes as a police reporter. I've never heard of anything like that. So, no, I was even more confused.
This is where it gets into, I think, a painful... just not the level of respect I'd hope to show to someone with this kind of mental...
I get it. I totally get it.
People do commit suicide without notes a lot. Like, that happens. People definitely order food, like, before they commit suicide. Like, this is
This is an incredible tragedy, and I...
That's his family's view, and they think it was a murder, and that's why I'm asking.
If I were his family, I am sure I would want answers, and I'm sure I would not be satisfied with really any... I mean, there's nothing that would comfort me in that, you know? Right? So I get it. I also care a lot about respect to him, right? Um,
I have to ask: your version of... Elon Musk has, like, attacked you and all this. What is the core of that dispute, from your perspective? Look, I know he's a friend of yours, and I know what side you'll...
I actually don't have a position on this, because I don't understand it well enough.
He helped us start OpenAI. I'm very grateful for that.
Uh, I really, for a long time, looked up to him as just an incredible hero and, you know, a great jewel of humanity. I have different feelings now.
What are your feelings now?
Uh, no longer a jewel of humanity. There are things about him that are incredible, and I'm grateful for a lot of things he's done. There are a lot of things about him that I think are, uh, traits I don't admire. Um, anyway, he later decided that we weren't on a trajectory to be successful, and he didn't want to... You know, he kind of told us we had a 0% chance of success, and he was going to go do his competitive thing. And then we did okay.
And I think he got understandably upset. Like, I'd feel bad in that situation. And since then, he has just sort of been trying to... He started a competitor, kind of a clone, and has been trying to sort of slow us down and sue us and do this and that. And that's kind of my version of it. You have a different one.
You don't talk to him anymore?
Very little.
Um, if AI becomes smarter... I think it already probably is smarter than any person. And if it becomes wiser, if we can agree that it reaches better decisions than people, then it, by definition, kind of displaces people at the center of the world, right?
I don't think it'll feel like that at all. I think it'll feel like a, you know, really smart computer that may advise us, and we listen to it. Sometimes we ignore it, sometimes we won't. I don't think it'll diminish our sense of agency. Um, people are already using ChatGPT in a way where many of them would say it's much smarter than me at almost everything, but they're still making the decisions. They're still deciding what to ask, what to listen to, what not to. And I think this is sort of just the shape of technology.
Who loses their jobs because of this technology?
Um, I'll caveat this with the obvious but important statement that no one can predict the future, and if I try to answer that precisely, I will say a lot of dumb things. But I'll try to pick an area that I'm confident about, and then areas that I'm much less confident about. Um, I'm confident that a lot of current customer support that happens over a phone or computer, those people will lose their jobs, and that'll be better done by an AI.
Now, there may be other kinds of customer support where you really want to know it's the right person. Uh, a job that I'm confident will not be that impacted is, like, nurses. I think people really want the deep human connection with a person in that time. And no matter how good the advice of the AI, or the robot, or whatever, is, you'll really want that.
A job that I feel way less certain about, what the future looks like for, is computer programmers. What it means to be a computer programmer today is very different than what it meant two years ago. You're able to use these AI tools to just be hugely more productive, but it's still a person there, and they're able to generate way more code and make way more money than ever before. And it turns out that the world wanted so much more software than the world previously had the capacity to create that there's just incredible demand overhang. But if we fast-forward another 5 or 10 years, what does that look like? Is it more jobs or fewer? That one I'm uncertain on. But there's going to be massive displacement, and maybe those people will find something new and interesting and, you know, lucrative to do.
But how big is that displacement, do you think?
Someone told me recently that the historical average is about 50% of jobs significantly change, maybe they don't totally go away, but significantly change, every 75 years, on average. That's the kind of half-life of this stuff. And my controversial take would be that this is going to be, like, a punctuated-equilibrium moment, where a lot of that will happen in a short period of time.
But if we zoom out, uh, it's not going to be dramatically different from the historical rate. Like, we'll have a lot in this short period of time, and then it'll somehow be less total job turnover than we think. There will still be jobs. There will be some totally new categories, like my job, you know, running a tech company; it would have been hard to think about 200 years ago. Um, but there are a lot of other jobs that are directionally similar to jobs that did exist 200 years ago. And there are jobs that were common 200 years ago that now aren't. And again, I have no idea if this is true or not, but I'll use the number for the sake of argument: if we assume it's 50% turnover every 75 years, uh, then I could totally believe a world where, 75 years from now, half the people are doing something totally new, and half the people are doing something that looks kind of like some jobs of today.
I mean, the last time we had an industrial revolution, there was, like, revolution and world wars. Do you think we'll see that this time?
Again, no one knows for sure. I'm not confident in this answer, but my instinct is that the world is so much richer now than it was at the time of the industrial revolution that we can actually absorb more change, faster, than we could before. Um, there's a lot about a job that's not about the money: there's meaning, there's a lot of community. Um, I think we're already, unfortunately, in a pretty bad place there as a society. I'm not sure how much worse it can get. I'm sure it can. I have been pleasantly surprised by society's ability to adapt pretty quickly to big changes. Uh, like, COVID was an interesting example of this for me, where the world kind of stopped all at once and was very different from one week to the next. And I was very worried about how society was going to be able to adapt to that world, and it obviously didn't go perfectly, but on the whole I was like, all right, this is one point in favor of societal resilience; people find, you know, new ways to live their lives very quickly. I don't think AI will be nearly that abrupt.
So what will be the downsides? I mean, I can see the upsides for sure, you know: efficiency; medical diagnosis seems like it's going to be much more accurate; fewer lawyers, thank you very much for that. But what are the downsides that you worry about?
I think this is just kind of how I'm wired: I always worry the most about the unknown unknowns. If it's a downside that we can really be confident about and think about... um, you know, we talked about one earlier, which is that these models are getting very good at bio and they could help us design biological weapons, uh, you know, engineer, like, another COVID-style pandemic. I worry about that, but because we worry about it, I think we, and many other people in the industry, are thinking hard about how to mitigate it. The unknown unknowns are where, okay, there's, like, a societal-scale effect from a lot of people talking to the same model at the same time. This is, like, a silly example, but it's one that struck me recently: um, LLMs like ours, our language model and others, have a kind of certain style to them. You know, they talk in a certain rhythm, and they have a little bit unusual diction, and maybe they overuse em dashes, and whatever. And I noticed recently that real people have, like, picked that up, and it was an example for me of, like, man, you have enough people talking to the same language model, and it actually does cause a change in societal-scale behavior.
Yes.
And, you know, did I think that ChatGPT was going to make people use way more em dashes in real life? Certainly not. It's not a big deal, but it's an example of where there can be these unknown unknowns. This is just, like, a brave new world.
So, you're saying, I think correctly and succinctly, that technology changes human behavior, of course, and changes our assumptions about the world and each other and all that. And a lot of this you can't predict. Considering that we know that, why shouldn't the internal moral framework of the technology be totally transparent? We prefer this to that. I mean, this is obviously a religion. I don't think you'll agree to call it that, but it's very clearly a religion to me. That's not an attack.
I actually would love... I don't take that as an attack, but I would love to hear what you mean by that.
Well, it's something that we assume is more powerful than people, and to which we look for guidance. I mean, you're already seeing that on display. What's the right decision? I ask that question of whom? My closest friends, my wife, and God. And this is a technology that provides a more certain answer than any person can provide. So it's a religion. And the beauty of religions is they have a catechism that is transparent. I know what the religion stands for: here's what it's for, here's what it's against. But in this case I pressed, and sincerely, I was not attacking you, but I was trying to get to the heart of it. The beauty of a religion is it admits it's a religion and it tells you what it stands for.
The unsettling part of this technology, not just your company's but others', is that I don't know what it stands for, but it does stand for something. And unless it admits that and tells us what it stands for, it guides us in a kind of stealthy way toward a conclusion we might not even know we're reaching. Do you see what I'm saying? So why not just throw it open and say ChatGPT is for this, or, you know, we're for suicide for the terminally ill but not for kids, or whatever? Why not just tell us?
I mean, the reason we write this long model spec, and the reason we keep expanding it over time, is so that you can see: here is how we intend for the model to behave. What used to happen before we had this is people would fairly say, I don't know what the model's even trying to do, and I don't know if this is a bug or the intended behavior. This long document tells you when the model is going to do this, when it's going to show you that, and when it's going to say it won't do something. The reason we try to write this all out is that I think people do need to know.
And so is there a place you can go to find a hard answer to what your preferences as a company are, preferences that are being transmitted in a not entirely straightforward way to the globe? Like, where can you find out what the company stands for, what it prefers?
I mean, our model spec is the answer to that. Now, I think we will have to make it increasingly more detailed over time as people use this in different countries with different laws, whatever else. It will not work the same way for every user everywhere. So I expect that document to get very long and very complicated, but that's why we have it.

Let me ask you one last question, and maybe you can allay this fear: that the power of the technology will make it difficult, maybe impossible, for anyone to discern the difference between reality and fantasy. This is a famous concern: that because it is so skilled at mimicking people and their speech and their images, it will require some way to verify that you are who you say you are, and that will by definition require biometrics, which will by definition eliminate privacy for every person in the world.
I don't think we need to, or should, require biometrics to use the technology. I think you should just be able to use ChatGPT from any computer.
Yeah. Well, I strongly agree. But then at a certain point, when images or sounds can mimic a person, it just becomes too easy to empty your checking account with that. So what do you do about that?
A few thoughts there. One, I think we are rapidly heading to a world where people understand that if you get a phone call from someone who sounds like your kid or your parent, or if you see an image that looks real, you have to have some way to verify that you're not being scammed. This is no longer a theoretical concern. You know, you hear all these reports.

Yeah.
People are smart, society's resilient. I think people are quickly understanding that this is now a thing that bad actors are using, and that you've got to verify in different ways. In addition to things like family members having code words they use in crisis situations, I suspect we'll see things like, when a president of a country has to issue an urgent message, they cryptographically sign it or otherwise somehow guarantee its authenticity, so you don't have generated videos of Trump saying, "I've just done this or that." I think people are learning quickly that this is a new thing bad guys are doing with the technology, and that they have to contend with it. And I think that is most of the solution: people will by default not trust convincing-looking media, and we will build new mechanisms to verify the authenticity of communication.

But those will have to be biometric.
No, not at all. I mean, if the president of the US has a...

I understand that, but on the average day you're not waiting for the president to announce a war, you're trying to do e-commerce. How could you do that?

Well, I think with your family you'll have a code word that you change periodically, and if you're communicating with each other and you get a call, you ask what the code word is. But that's very different than a biometric.
So you don't envision... I mean, to board a commercial flight, biometrics are part of the process now. You don't see that as becoming society-wide mandatory very soon?

I really hope it doesn't become mandatory. I think there are versions of privacy-preserving biometrics that I like much more than collecting a lot of personal digital information in one place, but I don't think biometrics should be mandatory. I don't think you should have to provide biometrics to get on an airplane, for example.
What about for banking?

I don't think you should have to for banking. I might prefer, say, a fingerprint scan to access my Bitcoin wallet over giving all my information to a bank. But that should be a decision for me.
I appreciate it. Thank you, Sam.
So, it turns out that YouTube is
suppressing this show. On one level,
that's not surprising. That's what they
do. But on another level, it's shocking.
With everything that's going on in the
world right now, all the change taking
place in our economy and our politics,
with the wars we're on the cusp of
fighting right now, Google has decided
you should have less information rather
than more. And that is totally wrong.
It's immoral. What can you do about it?
Well, we could whine about it. That's a
waste of time. We're not in charge of
Google. Or we could find a way around
it. A way that you could actually get
information that is true, not
intentionally deceptive. The way to do
that on YouTube, we think, is to
subscribe to our channel. Subscribe. Hit
the little bell icon to be notified when
we upload and share this video. That
way, you'll have a much higher chance of
hearing actual news and information.