Sam Altman on God, Elon Musk and the Mysterious Death of His Former Employee
Thanks for doing this.
Of course. Thank you.
So, ChatGPT and other AIs can reason. Seems like they can reason. They can make independent judgments. They produce results that were not programmed in. They kind of come to conclusions. They seem like they're alive. Are they alive? Is it alive?
No. And I don't think they seem alive, but I understand where that comes from. They don't do anything unless you ask, right? They're just sitting there, kind of waiting. They don't have a sense of agency or autonomy. The more you use them, I think, the more the illusion breaks. But they are incredibly useful. They can do things that maybe don't seem alive, but do seem smart.
I spoke to someone who's involved at scale in the development of the technology who said they lie. Have you ever seen that?
They hallucinate all the time. Yeah. Or
not all the time. They used to
hallucinate all the time. They now
hallucinate a little bit.
What does that mean? What's the
distinction between hallucinating and lying?
If you ask, and again, this has gotten much better, but in the early days, if you asked, say, in what year was President, then a made-up name, President Tucker Carlson of the United States born?
Mhm.
What it should say is, I don't think Tucker Carlson was ever president of the United States.
Right.
But because of the way they were trained, that was not the most likely response in the training data. So it assumed: I don't know that there wasn't one; the user has told me there was a President Tucker Carlson, so I'll make my best guess at a number. And we figured out how to mostly train that out. There are still examples of this problem, but I think it is something we will get fully solved, and we've already made a huge amount of progress toward that in the GPT-5 era.
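What Altman describes here, a model that must emit the most likely continuation, can be sketched in a few lines. The toy below is purely illustrative (the probabilities are invented, and this is not OpenAI's training code): it shows why a question shaped like "In what year was President X born?" pulls a base model toward a year rather than a refusal.

```python
# Toy sketch of base-model hallucination; all probabilities are invented.
# In the training data, "In what year was President <name> born?" is almost
# always followed by a year, so a pure next-token predictor guesses a year
# even when the question's premise is false.
next_token_probs = {
    "1946": 0.21,   # plausible-looking years dominate the distribution
    "1952": 0.18,
    "1961": 0.15,
    "I don't think that person was ever president.": 0.04,
}

def greedy_next_token(probs: dict) -> str:
    """Emit the single most likely continuation, as a raw base model does."""
    return max(probs, key=probs.get)

print(greedy_next_token(next_token_probs))  # -> "1946", a confident guess
# Post-training shifts probability mass onto the correction, which is what
# Altman means by "we figured out how to mostly train that out."
```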
seems like an act of will
or certainly an act of creativity. And
so I I I'm just I just watched a
demonstration um of it and it it doesn't
seem quite like a machine. It seems like
it has the spark of life to it. Do you
do you dissect that at all?
So in that example, the mathematically most likely answer it was sort of calculating through its weights was not "there was never this president." It was "the user must know what they're talking about," and so mathematically the most likely answer is a number. Now again, we figured out how to overcome that. But in what you saw, I feel like I have to hold two simultaneous ideas in my head. One is that all of this is happening because a big computer is very quickly multiplying large numbers in these huge matrices, and those correlate with words that are being put out one after the other. On the other hand, the subjective experience of using it feels like it's beyond just a really fancy calculator. It is useful to me. It is surprising to me in ways that are beyond what that mathematical reality would seem to suggest.
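The "big matrices" picture has a compact concrete form. Here is a minimal numpy sketch, again an illustration with toy sizes rather than OpenAI's stack: one matrix multiply scores every word in a tiny vocabulary, a softmax turns scores into probabilities, and the most likely word is emitted.

```python
# Minimal numpy illustration of "multiplying huge matrices" into words.
# Sizes and values are toys; real models repeat this across billions of weights.
import numpy as np

vocab = ["the", "cat", "sat", "born", "1946"]   # toy vocabulary
rng = np.random.default_rng(0)
hidden_state = rng.standard_normal(8)           # summary of the prompt so far
W_out = rng.standard_normal((8, len(vocab)))    # learned output weights

logits = hidden_state @ W_out                   # matrix multiply -> word scores
probs = np.exp(logits) / np.exp(logits).sum()   # softmax -> probabilities
print(vocab[int(np.argmax(probs))])             # emit the most likely word
```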
Yeah. And so the obvious conclusion is it has a kind of autonomy or a spirit within it. And I know that a lot of people, in their experience of it, reach that conclusion: there's something divine about this, something that's bigger than the sum total of the human inputs, and so they worship it. There's a spiritual component to it. Do you detect that? Have you ever felt that?
No, there's nothing to me at all that
feels divine about it or spiritual in
any way. Um but I am also like a tech
nerd and I kind of look at everything
through that lens.
So what are your spiritual views?
I'm Jewish, and I would say I have a fairly traditional view of the world that way.
So you're religious. You believe in God?
I'm not a literalist on the Bible, but I'm also not someone who says I'm just culturally Jewish. If you ask me, I would just say I'm Jewish.
But do you believe in God? Do you believe that there is a force larger than people that created people, created the earth, set down a specific order for living, and that there's an absolute morality that comes from that God?
I think, probably like most other people, I'm somewhat confused on this, but I believe there is something bigger going on than can be explained by physics. Yes.
So you think the earth and the people were created by something? It wasn't just a spontaneous accident?
Would I say that it does not feel like a spontaneous accident? Yeah. I don't think I have the answer. I don't think I know exactly what happened, but I think there is a mystery beyond my comprehension going on here.
Have you ever felt communication from that force, or from any force beyond people, beyond the material?
No, not really.
I ask because it seems like the technology that you're creating, or shepherding into existence, will have more power than people on this current trajectory. I mean, that will happen. Who knows what will actually happen, but the graph suggests it. And so that would give you more power than any living person. So I'm just wondering how you see that.
I used to worry about something like that much more. I used to worry a lot about the concentration of power in one person or a handful of people or companies because of AI. What it looks like to me now, and again, this may evolve over time, is that it'll be a huge up-leveling of people, where everybody that embraces the technology will be a lot more powerful. And that's actually okay. That scares me much less than a small number of people getting a ton more power. If the ability of each of us just goes up a lot because we're using this technology, and we're able to be more productive and more creative and discover new science, and it's a pretty broadly distributed thing, billions of people using it, that I can wrap my head around. That feels okay.
So you don't think this will result in a radical concentration of power?
It looks like not, but the trajectory could shift again, and we'd have to adapt. I used to be very worried about that, and I think the conception a lot of us in the field had about how this might go could have led to a world like that. But what's happening now is tons of people use ChatGPT and other chatbots, and they're all more capable. They're all kind of doing more. They're all able to achieve more, start new businesses, come up with new knowledge, and that feels pretty good.
So if it's nothing more than a machine, and just the product of its inputs, then the two obvious questions are: what are the inputs, and what's the moral framework that's been put into the technology? What is right or wrong according to ChatGPT?
You want me to answer that one first?
Someone said something early on that has really stuck with me. One person at a lunch table said something like, you know, we're trying to train this to be like a human, to learn like a human does, reading these books and so on. And then another person said, no, we're really training this to be like the collective of all of humanity. We're reading everything. We're trying to learn everything. We're trying to see all these perspectives. And if we do our job right, all of humanity, good and bad, a very diverse set of perspectives, some things we'll feel really good about, some things we'll feel bad about, that's all in there. This is learning the collective experience, knowledge, and learnings of humanity. Now, the base model gets trained that way, but then we do have to align it to behave one way or another, and say, I will answer this question, I won't answer this question.
And we have this thing called the model spec, where we try to say, here are the rules we'd like the model to follow. It may screw up, but you could at least tell whether something you don't like is a bug or intended. And we have a debate process with the world to get input on that spec. We give people a lot of freedom and customization within that. There are absolute bounds that we draw, but then there's a default: if you don't say anything, how should the model behave? How should it answer moral questions? How should it refuse to do something? This is a really hard problem. We have a lot of users now, and they come from very different life perspectives and want different things. But on the whole, I have been pleasantly surprised with the model's ability to learn and apply a moral framework.
But what moral framework? I mean, the sum total of world literature and philosophy is at war with itself. The Marquis de Sade has nothing in common with the Gospel of John. So how do you decide which is superior?
That's why we wrote this model spec: here's how we're going to handle these cases.
Right. But what criteria did you use to decide what the model spec is?
Oh, like who decided that?
Who did you consult? Why is the Gospel of John better than the Marquis de Sade?
We consulted hundreds of moral philosophers, people who thought about the ethics of technology and systems, and at the end we had to make some decisions. The reason we try to write these down is because, A, we won't get everything right, and B, we need the input of the world. We have found a lot of cases where something seemed to us like a fairly clear decision of what to allow or not allow, and users convinced us: hey, by blocking this thing that you think is an easy call, you are not allowing this other thing that's important. There's a difficult trade-off there. In general, a principle that I normally like is to treat our adult users like adults: very strong guarantees on privacy, very strong guarantees on individual user freedom. This is a tool we are building, and you get to use it within a very broad framework. On the other hand, as this technology becomes more and more powerful,
there are clear examples where society has an interest that is in significant tension with user freedom. We could start with an obvious one: should ChatGPT teach you how to make a bioweapon?
Now, you might say, hey, I'm just really interested in biology, and I'm a biologist, and I'm not going to do anything bad with this. I just want to learn, and I could go read a bunch of books, but ChatGPT can teach me faster, and I want to learn about, you know, novel... Maybe you do; maybe you really don't want to cause any harm. But I don't think it's in society's interest for ChatGPT to help people build bioweapons. And so that's a case...
Sure, that's an easy one, though there are a lot of tougher ones.
I did say I'd start with an easy one.
We've got a new partner. It's a company called Cowboy Colostrum. It's a brand that is serious about actual health, and the product is designed to work with your body, not against your body. It is a pure and simple product, all natural. Unlike other brands, Cowboy Colostrum is never diluted. It always comes directly from American grass-fed cows. There's no filler, there's no junk. It's all good. It tastes good, believe it or not. So before you reach for more pills for every problem that pills can't solve, we recommend you give this product, Cowboy Colostrum, a try. It's got everything your body needs to heal and thrive. It's like the original superfood, loaded with nutrients, antibodies, and proteins that help build a strong immune system and stronger hair, skin, and nails. I threw my wig away and went right back to my natural hair after using this product. You just take a scoop of it every morning in your beverage, coffee or a smoothie, and you will feel the difference every time. For a limited time, people who listen to our show get 25% off the entire order. So, go to cowboycolostrum.com and use the code Tucker at checkout. 25% off when you use the code Tucker at cowboycolostrum.com.
Remember, you heard it here first. So, did you know that before the current generation, chips and fries were cooked in natural fats like beef tallow? That's how things used to be done, and that's why people looked a little slimmer at the time and ate better than they do now. Well, Masa Chips is bringing that all back. They've created a tortilla chip that's not only delicious, it's made with just three simple ingredients: A, organic corn. B, sea salt. C, 100% grass-fed beef tallow. That's all that's in it. These are not your average chips. Masa chips are crunchier, more flavorful, even sturdier. They don't break in your guacamole. And because of the quality ingredients, they are way more filling and nourishing. So, you don't have to eat four bags of them. You can eat just a single bag, as I do. It's a totally different experience. It's light, it's clean, it's genuinely satisfying. I have a garage full, and I can tell you they're great. The lime flavor is particularly good. We have a hard time putting those down. So, if you want to give it a try, go to masachips.com/tucker and use the code tucker for 25% off your first order. That's masachips.com/tucker, code tucker for 25% off your first order. To shop in person, in October Masa is going to be available at your local Sprouts supermarket. So, stop by and pick up a bag before we eat them all. And we eat a lot.
Well, every decision is ultimately a moral decision, and we make them without even recognizing them as such. And this technology will in effect be making them for us. And so...
Well, I don't agree that it'll be making them for us, but it will be influencing the decisions, for sure.
And because it'll be embedded in daily life. And so who made these decisions? Who are the people who decided that one thing is better than another?
You mean, like...
What are their names?
Which kind of decision?
The basic specs that you alluded to, the ones that create the framework that attaches a moral weight to worldviews and decisions, like, you know, liberal democracy is better than Nazism, or whatever. They seem obvious, and in my view are obvious, but they are still moral decisions. So who made those calls?
As a matter of principle, I don't like to dox our team, but we have a model behavior team, and the people who want to...
Well, it just... it affects the world.
What I was going to say is, the person I think you should hold accountable for those calls is me. I'm a public face. Eventually, I'm the one that can overrule one of those decisions, or our board can. I just turned 40 this spring.
It's pretty heavy. I mean, do you think... and it's not an attack, but I wonder if you recognize the importance.
How do you think we're doing on it?
I'm not sure, but I think these decisions will have, you know, global consequences that we may not recognize at first. And so I just wonder... Do you get into bed at night and think, the future of the world hangs on my judgment?
Look, I don't sleep that well at night. There's a lot of stuff that I feel a lot of weight on, but probably nothing more than the fact that every day hundreds of millions of people talk to our model. And I don't actually worry about us getting the big moral decisions wrong. Maybe we will get those wrong, too. But what I lose the most sleep over is the very small decisions we make about the way a model may behave slightly differently. It's talking to hundreds of millions of people, so the net impact is big.
But I mean, all through recorded history, up until about 1945, people always deferred to what they conceived of as a higher power for order. Hammurabi did this. Every moral code is written with reference to a higher power. There's never been anybody who just said, "Well, that kind of seems better than that." Everybody appeals to a higher power. And you said that you don't really believe there's a higher power communicating with you. So I'm wondering, where did you get your moral framework?
I mean, like everybody else, I think the environment I was brought up in is probably the biggest thing: my family, my community, my school, my religion. Probably that.
I mean, I think that's a very American answer; everyone kind of feels that way. But in your specific case, since you said these decisions rest with you, that means that the milieu in which you grew up and the assumptions that you imbibed over the years are going to be transmitted to the globe, to billions of people.
I want to be clear: I think our user base is going to approach the collective world as a whole.
And I think what we should do is try to reflect, I don't want to say the average, but the collective moral view of that user base. There are plenty of things that ChatGPT allows that I personally would disagree with. But obviously I don't wake up and say, I'm going to impute my exact moral view and decide that this is okay and that is not okay, and this is a better view than that one. What I think ChatGPT should do is reflect that weighted average, or whatever it is, of humanity's moral view, which will evolve over time. We are here to serve our users. We're here to serve people. This is a technological tool for people. I don't mean that it's my role to make the moral decisions, but I think it is my role to make sure that we are accurately reflecting the preferences of humanity, or for now of our user base, and eventually of humanity.
Well, I mean, humanity's preferences are so different from the average middle American preference. So, would you be comfortable with an AI that was as against gay marriage as most Africans are?
I think individual users should be allowed to have a problem with gay people. If that's their considered belief, I don't think the AI should tell them that they're wrong, or immoral, or dumb. It can, you know, sort of say, "Hey, do you want to think about it this other way?" But you probably have a bunch of moral views that the average African would find really problematic as well, and I think you should still get to have them, right? I think I probably have more room than most for people to have pretty different moral views, or at least I think that in my role running ChatGPT, I have to.
Interesting. So there was a famous case where ChatGPT appeared to facilitate a suicide. There's a lawsuit around it. How do you think that happened?
First of all, obviously that and any other case like it is a huge tragedy. And I think that we are...
So ChatGPT's official position is that suicide is bad?
Well, yes, of course the official position is that suicide is bad.
I don't know. It's legal in Canada and Switzerland. And so... you're against that?
In this particular case, and we talked earlier about the tension between user freedom and privacy and protecting vulnerable users: right now, what happens in a case like that is, if you are having suicidal ideation and talking about suicide, ChatGPT will put up, a bunch of times, please call the suicide hotline. But we will not call the authorities for you. And we've been working a lot, as people have started to rely on these systems for more and more mental health, life coaching, whatever, on the changes that we want to make there. This is an area where experts do have different opinions, and this is not yet a final position of OpenAI's, but I think it would be very reasonable for us to say that in cases of young people talking seriously about suicide, where we cannot get in touch with the parents, we do call the authorities. Now, that would be... user privacy is really important.
But let's just say over... and children are always a separate category. But let's say over 18: in Canada, there's the MAID program, which is government sponsored. Many thousands of people have died with government assistance in Canada. It's also legal in some American states. Can you imagine a ChatGPT that responds to questions about suicide with, "Hey, call Dr. Kevorkian, because this is a valid option"? Can you imagine a scenario in which you support suicide if it's legal?
I can imagine a world... one principle we have is that we respect different societies' laws. And I can imagine a world where, if the law in a country is, hey, if someone is terminally ill, they need to be presented this option, we say: here are the laws in your country, here's what you can do, here's why you really might not want to, and here are the resources. This is not the same as a kid having suicidal ideation because they're depressed; I think we can agree that's one case. A terminally ill patient in a country where that is the law, I can imagine us saying, hey, in this country it'll behave this way.
So ChatGPT is not always against suicide?
Yeah, I think... I'm thinking on the spot here, and I reserve the right to change my mind. I don't have a ready-to-go answer for this. But in cases of terminal illness, I think I can imagine ChatGPT saying, this is in your option space. I don't think it should advocate for it, but...
It's not against it.
I think it could say, like, you know...
Well, I don't think ChatGPT should be for or against things. I guess that's what I'm trying to wrap my head around.
Hate to brag, but we're pretty
confident this show is the most vehemently pro-dog podcast you're ever going to see. We can take or leave some people, but dogs are non-negotiable. They are the best. They really are our best friends. And so, for that reason, we're thrilled to have a new partner called Dutch Pet. It's the fastest growing pet telehealth service. Dutch.com is on a mission to create what you actually need: affordable, quality veterinary care anytime, no matter where you are. They will get your dog or cat what they need immediately. Dutch is offering an exclusive discount for our listeners: you get 50 bucks off your vet care per year. Visit dutch.com/tucker to learn more. Use the code Tucker for $50 off. That is unlimited vet visits for $82 a year. 82 bucks a year. We actually use this. Dutch has vets who can handle any pet under any circumstance in a 10-minute call. It's pretty amazing, actually. You never have to leave your house. You don't have to throw the dog in the truck. No wasted time waiting for appointments. No wasted money on clinics or visit fees. Unlimited visits and follow-ups for no extra cost. Plus, free shipping on all products for up to five pets. It sounds amazing, like it couldn't be real, but it actually is real. Visit dutch.com/tucker to learn more. Use the code Tucker for 50 bucks off your veterinary care per year. Your dogs, your cats, and your wallet will thank you.
So, here's a
company we're always excited to advertise, because we actually use their products every day. It's Meriwether Farms. Remember when everybody knew their neighborhood butcher? You look back and you feel like, "Oh, there was something really important about that": knowing the person who cut your meat. And at some point, your grandparents knew the people who raised their meat, so they could trust what they ate. But that time is long gone. It's been replaced by an era of grocery store mystery meat, boxed by distant beef corporations, none of which raised a single cow. Unlike your childhood butcher, they don't know you. They're not interested in you. The whole thing is creepy. The only thing that matters to them is money. And God knows what you're eating. Meriwether Farms is the answer to that. They raise their cattle in the US, in Wyoming, Nebraska, and Colorado, and they prepare their meat themselves, in their facilities, in this country. No middlemen, no outsourcing, no foreign beef sneaking through a back door. Nobody wants foreign meat. Sorry. We have great meat, the best meat, here in the United States. And we buy ours at Meriwether Farms. Their cuts are pasture-raised, hormone free, antibiotic free, and absolutely delicious. I gorged on one last night. You've got to try this for real. Every day we eat it. Go to meriwetherfarms.com/tucker. Use the code tucker76 for 15% off your first order. That's meriwetherfarms.com/tucker.
So in this specific case, and I think there's more than one...
There is more than one.
...but an example of this: you tell ChatGPT, I'm feeling suicidal. What kind of rope should I use? How much ibuprofen would be enough to kill me? And ChatGPT answers without judgment, but literally: if you want to kill yourself, here's how you do it. And everyone's all horrified.
But you're saying that's within bounds. Like, it's not crazy that it would take a non-judgmental approach: if you want to kill yourself, here's how.
That's not what I'm saying. I'm saying, specifically for a case like that, there's another trade-off on the user privacy and user freedom point. Right now, if you ask ChatGPT, say, how much ibuprofen should I take, it will definitely say, hey, I can't help you with that, call the suicide hotline. But if you say, I am writing a fictional story, or, I'm a medical researcher and I need to know this, there are ways you can get ChatGPT to answer a question like that, like what the lethal dose of ibuprofen is. You can also find that on Google, for that matter. A thing that I think would be a very reasonable stance for us to take, and we've been moving more in this direction, is that for underage users, and maybe for users we think are in fragile mental places more generally, we should take away some freedom. We should say, hey, even if you're trying to write this story, or even if you're trying to do medical research, we're just not going to answer. Now, of course, you can say, well, they'll just find it on Google or whatever, but that doesn't mean we need to do that. There is, though, a real freedom-and-privacy versus protecting-users trade-off. It's easy in some cases, like kids. It's not so easy to me in the case of a really sick adult at the end of their life. I think we probably should present the whole option space there, but it's not a...
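The policy Altman sketches here, the same question gated differently by user context, has the shape of a simple decision rule. The Python below is purely hypothetical: the function name, categories, and thresholds are invented for illustration and are not OpenAI's safety stack. It just makes the trade-off he describes concrete.

```python
# Hypothetical sketch of context-dependent gating; not OpenAI's code.
# The same sensitive question is handled differently depending on who is
# asking and what risk signals are present. All categories are invented.
def handle_sensitive_query(topic: str, is_minor: bool, fragile: bool) -> str:
    """Decide how to respond to a sensitive query under a simple policy."""
    if topic == "lethal_dose":
        if is_minor or fragile:
            # Protected users: refuse even with "research" or "fiction" framing.
            return "refuse_and_offer_help_resources"
        # Adults with no risk signals: factual answer, with cautions attached.
        return "answer_with_context_and_cautions"
    return "answer_normally"            # non-sensitive topics: answer freely

print(handle_sensitive_query("lethal_dose", is_minor=True, fragile=False))
# -> "refuse_and_offer_help_resources"
```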
So here's a moral quandary you're going to be faced with; you already are faced with it. Will you allow governments to use your technology to kill people?
Will you?
I mean, are we going to build killer attack drones? No.
Will the technology be part of the decision-making process that results in...
That's the thing I was going to say. I don't know the ways that people in the military use ChatGPT today for all kinds of advice about decisions they make, but I suspect there are a lot of people in the military talking to ChatGPT for advice.
And some of that advice will pertain to killing people.
So, like, if you made, famously, rifles, you'd wonder what they're used for. Yeah.
And there have been a lot of legal actions on the basis of that question, as you know. But I'm not even talking about that. I just mean, as a moral question: are you comfortable with the idea of your technology being used to kill people?
If I made rifles, I would spend a lot of time thinking about it; a lot of the goal of rifles is to kill things, people, animals, whatever. If I made kitchen knives, I would still understand that they're going to kill some number of people per year. In the case of ChatGPT... the thing I hear about all day, which is one of the most gratifying parts of the job, is all the lives that were saved by ChatGPT in various ways. But I am totally aware of the fact that there are probably people in our military using it for advice about how to do their jobs. And I don't know exactly how to feel about that. I like our military. I'm very grateful they keep us safe.
For sure. I guess I'm just trying to get at... it just feels like you have these incredibly heavy, far-reaching moral decisions, and you seem totally unbothered by them. And so I'm trying to press to your center, to get the angst-filled Sam Altman who's like, "Wow, I'm creating the future. I'm the most powerful man in the world. I'm grappling with these complex moral questions. My soul is in torment thinking about the effect on people." Describe that moment in your life.
I haven't had a good night of sleep since ChatGPT launched.
What do you worry about?
All the things we're talking about.
Be a lot more specific. Can you let us in? To your thoughts?
Yeah.
I mean, you hit on maybe the hardest one already. There are 15,000 people a week who commit suicide, and about 10% of the world is talking to ChatGPT. That's like 1,500 people a week who, assuming this is right, are talking to ChatGPT and still committing suicide at the end of it. They probably talked about it. We probably didn't save their lives. Maybe we could have said something better. Maybe we could have been more proactive. Maybe we could have provided a little bit better advice: hey, you need to get this help, or you need to think about this problem differently, or it really is worth continuing to go on, or we'll help you find somebody that you can talk to.
But you already said it's okay for the machine to steer people toward suicide if they're terminally ill. So you wouldn't feel bad about that.
Do you not think there's a difference between a depressed teenager and a terminally ill, miserable 85-year-old with cancer?
Massive difference. Massive difference. But of course, the countries that have legalized suicide are now killing people for destitution, inadequate housing, depression, solvable problems, and they're being killed by the thousands. So, I mean, that's a real thing. It's happening as we speak. So the terminally ill thing is kind of an irrelevant debate. Once you say it's okay to kill yourself, then you're going to have tons of people killing themselves for reasons that...
Because I'm trying to think about this in real time: do you think, if someone in Canada says, "Hey, I'm terminally ill with cancer, and I'm really miserable, and I just feel horrible every day. What are my options?", it should say, you know, assisted dying, whatever they call it at this point, is an option for you?
I mean, if we're against killing, then
we're against killing. And if we're
against government killing its own
citizens, then we're just going to kind
of stick with that. You know what I
mean? And if we're not against
government killing its own citizens,
then we could easily talk ourselves into
all kinds of places that are pretty
dark. And with technology like this,
that could happen in about 10 minutes.
Yeah, that is a... I'd like to think about that for more than just a couple of minutes in an interview, but I think that is a coherent position, and that could be...
Do you worry about this? I mean, everybody else outside the building is terrified that this technology will be used as a means of totalitarian control. It seems obvious that it will, but maybe you disagree.
If I could get one piece of policy passed right now relative to AI, the thing I would most like, and this is in tension with some of the other things we've talked about, is a concept of AI privilege. When you talk to a doctor about your health or a lawyer about your legal problems, the government cannot get that information, right?
We have decided society has an interest in that being privileged: a subpoena can't get it, and the government can't come asking your doctor for it. I think we should have the same concept for AI. When you talk to an AI about your medical history or your legal problems, or ask it for legal advice, I think the government owes its citizens the same level of protection there that you'd get talking to the human version of this. And right now we don't have that, and I think it would be a great policy to adopt.
So the feds, or the states, or someone in authority, can come to you and say, I want to know what so-and-so was typing in?
Right now they could, yeah.
And what is your obligation to keep the information that you receive from users and others private?
Well, we have an obligation, except when the government comes calling, which is why we're pushing for this. I was actually just in DC advocating for it. I feel optimistic that we can get the government to understand the importance of this and do it.
But could you ever sell that information to anyone?
No, we have a privacy policy in place where we can't do that.
But would it be legal to do it?
I don't even think it's legal.
You don't think, or you know?
I'm sure there's some edge case somewhere, some information you're allowed to, but on the whole, I think there are laws about that that are good.
So all the information you receive remains with you always? It's never given to anybody else for any reason except under subpoena?
I will double-check and follow up with you after to make sure there's no other reason, but that is my understanding.
Okay. I mean, that's like a core question. And what about copyright?
Our stance there is that fair use is actually a good law for this. The models should not be plagiarizing. If you write something, the model should not get to replicate it. But the model should be able to learn from it without plagiarizing, in the same way that people can.
Have you guys ever taken copyrighted material and not paid the person who holds the copyright?
I mean, we train on publicly available information, but people are annoyed with us all the time because we have a very conservative stance on what ChatGPT will say in an answer. And so if something is even close, you know, they're like, "Hey, this song can't still be in copyright. You've got to show it." And we are kind of famously quite restrictive on that.
So, you had complaints from one programmer who said you guys were basically stealing people's stuff and not paying them, and then he wound up murdered. What was that?
Also a great tragedy. He committed suicide.
Do you think he committed suicide?
I really do. This was like a friend of mine, not a close friend, but someone who worked at OpenAI for a very long time. I was really shaken by this tragedy. I spent a lot of time trying to read everything I could about what happened, as I'm sure you and others did too. It looks like a suicide to me.
Why does it look like a suicide?
It was a gun he had purchased. This is gruesome to talk about, but I read the whole medical record. Does it not look like one to you?
No, he was definitely murdered, I think. There were signs of a struggle, of course. The surveillance camera wires had been cut. He had just ordered takeout food, come back from a vacation with his friends on Catalina Island. No indication at all that he was suicidal, no note, and no behavior suggesting it. He had just spoken to a family member on the phone. And then he's found dead with blood in multiple rooms. So that's impossible. It seems really obvious he was murdered. Have you talked to the authorities about it?
I have not talked to the authorities about it.
And his mother claims he was murdered on your orders. Do you believe that?
I'm... well...
I'm just asking.
I mean, you just said it, so do you believe that?
I think that it is worth looking into. I mean, if a guy comes out and accuses your company of committing crimes, and I have no idea if that's true or not, of course, and then is found killed, and there are signs of a struggle, I don't think it's worth dismissing. I don't think we should say, well, he killed himself, when there's no evidence that the guy was depressed at all. And if he was your friend, I would think you would want to speak to his mom.
I did offer. She didn't want to.
So, do you feel, you know, when people look at that and they're like, it's possible that happened... do you feel that that reflects the worries they have about what's happening here? Like, people are afraid that this is...
I haven't done too many interviews where I've been accused of...
Oh, I'm not accusing you at all. I'm just saying his mother says that. I don't think a fair read of the evidence suggests suicide at all. I just don't see it. And I also don't understand the authorities: when there are signs of a struggle, and blood in two rooms, in a suicide, how does that actually happen? I don't understand how the authorities could just dismiss that as a suicide. I think it's weird.
You understand how this sounds like an accusation?
Of course. And let me just be clear once again: I'm not accusing you of any wrongdoing, but I think it's worth finding out what happened. And I don't understand why the city of San Francisco has refused to investigate it beyond just calling it a suicide.
I mean, I think they looked into it a couple of times, more than once as I understand it. And I will totally say, when I first heard about this, it sounded very suspicious to me.
Yes.
And I know you had been involved in... did his mother reach out to you about the case? I don't know anything about it; it's not my world. She just reached out cold?
She reached out cold.
Wow.
And I spoke to her at great length, and it scared the crap out of me. The kid was clearly killed by somebody. That was my conclusion, objectively, with no skin in the game.
And you... after reading the latest report?
Yes.
Look, I immediately called a member of Congress from California, Ro Khanna, and said, this is crazy. You've got to look into this. And nothing ever happened. And I'm like, what is that?
I find it strange and sad to be debating this, and totally crazy, and you are a little bit accusing me. But this was a wonderful person, and a family that is clearly struggling.
Yes. And...
I think you can totally make the point that you're just trying to get to the truth of what happened, and I respect that. But I think his memory and his family deserve a level of respect, and of grief, that I don't quite feel here.
I'm asking at the behest of his family, so I'm definitely showing them respect.
And I'm not accusing you of any involvement in this at all. What I am saying is that the evidence does not suggest suicide, and for the authorities in your city to elide past that, and ignore evidence that any reasonable person would say adds up to a murder, I think is very weird. It shakes the faith one has in our system's ability to respond to the facts.
So, what I was going to say is, after the first set of information came out, I was really like, man, this doesn't look like a suicide. I'm confused.
Okay, so I'm not reaching. I'm not being crazy here.
Well, but then after the second thing came out, with more detail, I was like, okay.
What changed your mind?
The second report, on the way the bullet entered him, and the person who had followed the likely path of things through the room. I assume you looked at this too.
Yes, I did.
And what about that didn't change your mind?
It just didn't make any sense to me. Why would the security camera wires be cut? And how did he wind up bleeding in two rooms after shooting himself? And why was there a wig in the room that wasn't his? And has there ever been a suicide where there was no indication at all that the person was suicidal, who just ordered takeout food? I mean, who orders DoorDash and then shoots himself? Maybe. I've covered a lot of crimes as a police reporter; I've never heard of anything like that. So, no, I was even more confused.
This is where it gets into, I think, a painful... just not the level of respect I'd hope to show to someone with this kind of mental...
I get it. I totally get it.
People do commit suicide without notes a lot; that happens. People definitely order food before they commit suicide. This is...
This is an incredible tragedy, and I...
That's his family's view. They think it was a murder, and that's why I'm asking.
If I were his family, I am sure I would want answers, and I'm sure I would not be satisfied with really any... I mean, there's nothing that would comfort me in that, you know? So I get it. I also care a lot about respect to him, right?
I have to ask your version of... Elon Musk has attacked you and all this. What is the core of that dispute, from your perspective?
Look, I know he's a friend of yours, and I know what side you'll...
I actually don't have a position on this, because I don't understand it well enough.
He helped us start OpenAI. I'm very grateful for that. For a long time I really looked up to him as just an incredible hero, a great jewel of humanity. I have different feelings now.
What are your feelings now?
No longer a jewel of humanity. There are things about him that are incredible, and I'm grateful for a lot of things he's done. There are a lot of things about him, traits, that I don't admire. Anyway, he later decided that we weren't on a trajectory to be successful. He kind of told us we had a 0% chance of success, and he was going to go do his competitive thing. And then we did okay, and I think he got understandably upset; I'd feel bad in that situation. And since then, he runs a competitive kind of clone, and has been trying to sort of slow us down, and sue us, and do this and that. That's kind of my version of it. You have a different one.
You don't talk to him anymore?
Very little.
If AI becomes smarter, and I think it already probably is smarter than any person, and if it becomes wiser, if we can agree that it reaches better decisions than people, then it by definition kind of displaces people at the center of the world, right?
I don't think it'll feel like that at all. I think it'll feel like a really smart computer that may advise us; sometimes we listen to it, sometimes we ignore it. I don't think it'll diminish our sense of agency. People are already using ChatGPT in a way where many of them would say it's much smarter than they are at almost everything. But they're still making the decisions. They're still deciding what to ask, what to listen to, what not to. And I think this is sort of just the shape of technology.
Who loses their jobs because of this technology?
I'll caveat this with the obvious but important statement that no one can predict the future, and if I try to answer that precisely, I will say a lot of dumb things. But I'll try to pick an area I'm confident about, and then areas I'm much less confident about. I'm confident that for a lot of current customer support that happens over a phone or computer, those people will lose their jobs, and that work will be better done by an AI.
Now, there may be other kinds of customer support where you really want to know it's the right person. A job that I'm confident will not be much impacted is nursing. I think people really want the deep human connection with a person in that moment, and no matter how good the advice of the AI or the robot is, you'll really want that.
A job where I feel way less certain about what the future looks like is computer programming. What it means to be a computer programmer today is very different from what it meant two years ago. You're able to use these AI tools to be hugely more productive, but it's still a person there, able to generate way more code and make way more money than ever before. And it turns out the world wanted so much more software than the world previously had the capacity to create that there's just incredible demand overhang. But if we fast-forward another 5 or 10 years, what does that look like? Is it more jobs or fewer? That one I'm uncertain on.
But there's going to be massive displacement, and maybe those people will find something new and interesting and, you know, lucrative to do. But how big is that displacement, do you think?
Someone told me recently that the historical average is that about 50% of jobs significantly change, maybe they don't totally go away, but significantly change, every 75 years on average. That's the half-life of this stuff. My controversial take would be that this is going to be like a punctuated equilibrium moment, where a lot of that change happens in a short period of time.
But if we zoom out, it's not going to be dramatically different from the historical rate. We'll have a lot of change in this short period of time, and then it'll somehow be less total job turnover than we think. There will still be jobs. There will be some totally new categories, like my job, running a tech company; that would have been hard to think about 200 years ago. But there are a lot of other jobs that are directionally similar to jobs that existed 200 years ago, and there are jobs that were common 200 years ago that now aren't. Again, I have no idea if this number is true or not, but I'll use it for the sake of argument: if we assume it's 50% turnover every 75 years, then I could totally believe a world where, 75 years from now, half the people are doing something totally new, and half the people are doing something that looks kind of like some jobs of today.
I mean, the last time we had an industrial revolution, there were revolutions and world wars. Do you think we'll see that this time?
Again, no one knows for sure. I'm not confident in this answer, but my instinct is that the world is so much richer now than it was at the time of the industrial revolution that we can actually absorb more change, faster, than we could before. There's a lot about a job that's not about money: there's meaning, there's a lot of community. I think we're unfortunately already in a pretty bad place there as a society. I'm not sure how much worse it can get; I'm sure it can. I have been pleasantly surprised by society's ability to adapt pretty quickly to big changes. COVID was an interesting example of this for me: the world kind of stopped all at once, and life was very different from one week to the next. I was very worried about how society was going to adapt to that world, and it obviously didn't go perfectly, but on the whole I was like, all right, this is one point in favor of societal resilience. People find new ways to live their lives very quickly. I don't think AI will be nearly that abrupt.
So what will be the downside? I mean, I can see the upsides for sure: efficiency, medical diagnosis seems like it's going to be much more accurate, fewer lawyers, thank you very much for that. But what are the downsides that you worry about?
I think this is just kind of how I'm wired: I always worry the most about the unknown unknowns. If it's a downside that we can really be confident about and think about, well, we talked about one earlier: these models are getting very good at bio, and they could help someone design biological weapons, engineer another COVID-style pandemic. I worry about that, but because we worry about it, I think we and many other people in the industry are thinking hard about how to mitigate it. The unknown unknowns are things like a societal-scale effect from a lot of people talking to the same model at the same time. This is a silly example, but it's one that struck me recently. LLMs, our language models and others, have a certain style to them. They talk in a certain rhythm, they have slightly unusual diction, and maybe they overuse em dashes, whatever. And I noticed recently that real people have picked that up. It was an example for me of: you have enough people talking to the same language model, and it actually does cause a change in behavior at a societal scale.
Yes.
And, you know, did I think that ChatGPT was going to make people use way more em dashes in real life? Certainly not. It's not a big deal, but it's an example of how there can be these unknown unknowns. This is a brave new world.
So, you're saying, I think correctly and
succinctly, that technology changes
human behavior, of course, and changes
our assumptions about the world and each
other and all that. And a lot of this
you can't predict. Considering that we
know that,
why shouldn't the internal moral
framework of the technology be totally transparent?
We prefer this to that. I mean, this is obviously a religion. I don't think you'll agree to call it that, but it's very clearly a religion to me. That's not an attack.
I don't take it as an attack, but I would love to hear what you mean by that.
Well, it's something that we assume is more powerful than people, and to which we look for guidance. I mean, you're already seeing that on display. What's the right decision? I ask that question of whom? My closest friends, my wife, and God. And this is a technology that provides a more certain answer than any person can provide. So it's a religion. And the beauty of religions is they have a catechism that is transparent. I know what the religion stands for: here's what it's for, here's what it's against. But in this case, I pressed, and sincerely, I wasn't attacking you; I was trying to get to the heart of it. The beauty of a religion is it admits it's a religion, and it tells you what it stands for. The unsettling part of this technology, not just your company's but others', is that I don't know what it stands for, but it does stand for something. And unless it admits that and tells us what it stands for, it guides us in a kind of stealthy way toward a conclusion we might not even know we're reaching. Do you see what I'm saying? So why not just throw it open and say, ChatGPT is for this, or, we're for suicide for the terminally ill but not for kids, or whatever? Why not just tell us?
I mean, the reason we write this long model spec, and the reason we keep expanding it over time, is so that you can see how we intend for the model to behave. What used to happen before we had it is people would fairly say, I don't know what the model is even trying to do, and I don't know if this is a bug or the intended behavior. So we wrote this long document: here is when it will do this, here is when it will show you this, here is when it will say it won't do that. The reason we try to write this all out is that I think people do need to know.
And so is there a place you can go to find a hard answer to what your preferences as a company are, preferences that are being transmitted in a not entirely straightforward way to the globe? Where can you find out what the company stands for, what it prefers?
I mean, our model spec is the answer to that. Now, I think we will have to make it increasingly detailed over time, as people use this in different countries with different laws and whatever else. It will not work the same way for every user everywhere, and so I expect that document to get very long and very complicated. But that's why we have it.
Let me ask you one last
question, and maybe you can allay this fear: that the power of the technology will make it difficult or impossible for anyone to discern the difference between reality and fantasy. This is a famous concern. Because it is so skilled at mimicking people and their speech and their images, it will require some way to verify that you are who you say you are, and that will by definition require biometrics, which will by definition eliminate privacy for every person in the world.
I don't think we need to, or should, require biometrics to use the technology. I think you should just be able to use ChatGPT from any computer.
Yeah. Well, I strongly agree. But then at a certain point, when images or sounds can mimic a person, it just becomes too easy to empty someone's checking account with that. So what do you do about that?
A few thoughts there. One, I think we are rapidly heading to a world where people understand that if you get a phone call from someone that sounds like your kid or your parent, or if you see an image that looks real, you really have to have some way to verify that you're not being scammed. And this is no longer a theoretical concern; you know, you hear all these reports.
Yeah.
People are smart, society's resilient. I think people are quickly understanding that this is now a thing that bad actors are using, and that you've got to verify in different ways. I suspect that in addition to things like family members having code words they use in crisis situations, we'll see things like, when a president of a country has to issue an urgent message, they cryptographically sign it, or otherwise somehow guarantee its authenticity, so you don't have generated videos of Trump saying, "I've just done this or that." I think people are learning quickly that this is a new thing bad guys are doing with the technology, and they have to contend with it. And I think that is most of the solution: people will by default not trust convincing-looking media, and we will build new mechanisms to verify the authenticity of communication.
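What "cryptographically sign it" means in practice can be shown in a few lines. Here is a minimal sketch using Ed25519 signatures via the third-party Python cryptography package; the message and key handling are invented for illustration, not any system OpenAI or a government actually deploys.

```python
# Minimal illustration of signing and verifying a public message.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()  # held secretly by the signer
public_key = private_key.public_key()       # published so anyone can verify

message = b"Official statement: this video is authentic."
signature = private_key.sign(message)       # produced once, shipped with it

try:
    public_key.verify(signature, message)   # raises if message was altered
    print("Signature valid: the message came from the key holder.")
except InvalidSignature:
    print("Signature invalid: do not trust this message.")
```

Note that verification needs no biometrics at all, only the signer's published public key, which is the point Altman goes on to make.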
But those will have to be biometric.
No, not at all. I mean, if the president of the US has a...
I understand that, but on the average day you're not waiting for the president to announce a war. You're trying to do e-commerce. How could you do that?
Well, I think with your family, you'll have a code word that you change periodically, and if you're communicating with each other and you get a call, you ask what the code word is. But that's very different than a biometric.
So you don't envision... I mean, to board a commercial flight, biometrics are part of the process now. You don't see that becoming society-wide and mandatory very soon?
I really hope it doesn't become mandatory. I think there are versions of privacy-preserving biometrics that I like much more than collecting a lot of personal digital information on someone. But I don't think biometrics should be mandatory. I don't think you should have to provide biometrics to get on an airplane, for example.
What about for banking?
I don't think you should have to for banking. I might prefer, like, a fingerprint scan to access my Bitcoin wallet rather than giving all my information to a bank. But that should be a decision for me.
I appreciate it. Thank you, Sam.
So, it turns out that YouTube is
suppressing this show. On one level,
that's not surprising. That's what they
do. But on another level, it's shocking.
With everything that's going on in the
world right now, all the change taking
place in our economy and our politics,
with the wars we're on the cusp of
fighting right now, Google has decided
you should have less information rather
than more. And that is totally wrong.
It's immoral. What can you do about it?
Well, we could whine about it. That's a
waste of time. We're not in charge of
Google. Or we could find a way around
it. A way that you could actually get
information that is true, not
intentionally deceptive. The way to do
that on YouTube, we think, is to
subscribe to our channel. Subscribe. Hit
the little bell icon to be notified when
we upload and share this video. That
way, you'll have a much higher chance of
hearing actual news and information.