The AI Safety Expert: These Are The Only 5 Jobs That Will Remain In 2030! - Dr. Roman Yampolskiy | The Diary Of A CEO | YouTubeToText
Video Transcript
You've been working on AI safety for two
decades at least.
>> Yeah, I was convinced we can make safe
AI, but the more I looked at it, the
more I realized it's not something we
can actually do.
>> You have made a series of predictions about a variety of different dates. So, what is your prediction for 2027? [Music]
>> Dr. Roman Yampolskiy is a globally recognized voice on AI safety and an associate professor of computer science. He educates people on the terrifying truth of AI and what we need to do to save humanity.
>> In two years, the capability to replace most humans in most occupations will come very quickly. I mean, in five years, we're looking at a world where we have levels of unemployment we've never seen before. We're not talking about 10%, but 99%. And that's without super intelligence, a system smarter than all humans in all domains. So it would be better than us at making new AI. But it's worse than that: we don't know how to make them safe, and yet we still have the smartest people in the world competing to win the race to super intelligence.
>> But what do you make of people like Sam Altman's journey with AI?
>> So a decade ago we published guardrails for how to do AI right. They violated every single one, and he's gambling 8 billion lives on getting richer and more powerful. So I guess some people want to go to Mars; others want to control the universe. But it doesn't matter who builds it. The moment you switch to super intelligence, we will most likely regret it terribly.
>> And then by 2045,
>> now this is where it gets interesting.
>> Dr. Roman Yampolskiy, let's talk about simulation theory.
>> I think we are in one. And there is a
lot of agreement on this and this is
what you should be doing in it so we
don't shut it down. First,
>> I see messages all the time in the comment section that some of you didn't realize you didn't subscribe. So, if you could do me a favor and double-check if you're a subscriber to this channel, that would be tremendously appreciated. It's the simple, free thing that anybody that watches this show frequently can do to help us keep everything going in this show in the trajectory it's on. So, please do double-check if you've subscribed, and thank you so much, because in a strange way, you're part of our history and you're on this journey with us, and I appreciate you for that. So, yeah, thank you,
>> Dr. Roman Yampolskiy.
What is the mission that you're currently on? Because it's quite clear to me that you are on a bit of a mission, and you've been on this mission for, I think, the best part of two decades at least.
>> I'm hoping to make sure that the super intelligence we are creating right now…
>> Give me some context on that statement, because it's quite a shocking statement.
>> Sure. So in the last decade we actually figured out how to make artificial intelligence better. It turns out if you add more compute, more data, it just kind of becomes smarter. And so now the smartest people in the world and billions of dollars are all going to create the best possible super intelligence we can. Unfortunately, while we know how to make those systems much more capable, we don't know how to make them safe, how to make sure they don't do something we will regret.
And that's the state of the art right now. When we look at just prediction markets, how soon will we get to advanced AI? The timelines are very short, a couple of years, two or three years, according to prediction markets and according to the CEOs of top labs. And at the same time, we don't know how to make sure that those systems are aligned with our preferences. So we are creating this alien intelligence. If aliens were coming to Earth and you had three years to prepare, you would be panicking right now. But most people don't even realize this is happening.
>> So some of the counterarguments might be: well, these are very, very smart people. These are very big companies with lots of money. They have an obligation, a moral obligation but also a legal obligation, to make sure they do no harm. So I'm sure it'll be fine.
>> The only obligation they have is to make money for the investors. That's the legal obligation they have. They have no moral or ethical obligations. Also, according to them, they don't know how to do it yet. The state-of-the-art answers are "we'll figure it out when we get there" or "AI will help us control more advanced AI." That's insane.
>> In terms of probability, what do you
think is the probability that something
goes catastrophically wrong?
>> So, nobody can tell you for sure what's
going to happen. But if you're not in
charge, you're not controlling it, you
will not get outcomes you want. The
space of possibilities is almost
infinite. The space of outcomes we will
like is tiny.
>> And who are you and how long have you
been working on this?
>> I'm a computer scientist by training. I have a PhD in computer science and engineering. I probably started working on AI safety, mildly defined at the time as control of bots, about 15 years ago.
>> 15 years ago. So you've been working on
AI safety before it was cool.
>> Before the term existed; I coined the term AI safety.
>> So you're the founder of the term AI safety.
>> The term, yes. Not the field. There are other people who did brilliant work before I got there.
>> Why were you thinking about this 15
years ago? Because most people have only
been talking about the term AI safety
for the last two or three years.
>> Yeah. It started very mildly, just as a security project. I was looking at poker bots, and I realized that the bots are getting better and better. And if you just project this forward enough, they're going to get better than us, smarter, more capable. And it happened: they are playing poker way better than average players. But more generally, it will happen with all the other domains, all the other cyber resources. I wanted to make sure AI is a technology which is beneficial for everyone. So I started to work on making AI safer.
>> Was there a particular moment in your
career where you thought oh my god?
>> For the first five years at least, I was working on solving this problem. I was convinced we could make this happen, we could make safe AI, and that was the goal. But the more I looked at it, the more I realized every single component of that equation is not something we can actually do. And the more you zoom in, it's like a fractal: you go in and you find 10 more problems, and then 100 more problems. And all of them are not just difficult; they're impossible to solve. There is no seminal work in this field where it's like, we solved this, we don't have to worry about this. There are patches, little fixes we put in place, and quickly people find ways to work around them. They jailbreak whatever safety mechanisms we have. So while progress in AI capabilities is exponential, or maybe even hyperexponential, progress in AI safety is linear or constant. The gap is increasing.
>> The gap between the
>> how capable the systems are and how well
we can control them, predict what
they're going to do, explain their
decision making.
>> I think this is quite an important point, because you said that we're basically patching over the issues that we find. So we're developing this core intelligence, and then, to stop it doing things, or to stop it showing some of its unpredictability or its threats, the companies that are developing this AI are programming in code over the top to say, "Okay, don't swear, don't say that bad word, don't do that bad thing."
>> Exactly. And you can look at other examples of that. HR manuals, right? We have those for humans. They're general intelligences, but you want them to behave in a company. So they have a policy: no sexual harassment, no this, no that. But if you're smart enough, you always find a workaround. So you're just pushing behavior into a different, not-yet-restricted subdomain.
>> We should probably define some terms here. There's narrow intelligence, which can play chess or whatever. There's artificial general intelligence, which can operate across domains. And then super intelligence, which is smarter than all humans in all domains. And where are we?
>> So that's a very fuzzy boundary, right? We definitely have many excellent narrow systems, no question about it. And they are super intelligent in that narrow domain. Protein folding is a problem which was solved using narrow AI, and it's superior to all humans in that domain. In terms of AGI, again, as I said, if we showed what we have today to a scientist from 20 years ago, they would be convinced we have full-blown AGI. We have systems which can learn. They can perform in hundreds of domains, and they're better than humans in many of them. So you can argue we have a weak version of AGI.
Now, we don't have super intelligence yet. We still have brilliant humans who are completely dominating AI, especially in science and engineering. But that gap is closing so fast. You can see it especially in the domain of mathematics. Three years ago, large language models couldn't do basic algebra; multiplying three-digit numbers was a challenge. Now they're helping with mathematical proofs, they're winning mathematics olympiad competitions, they're working on the Millennium Problems, the hardest problems in mathematics. So in three years we closed the gap from subhuman performance to better than most mathematicians in the world. And we see the same process happening in science and in engineering.
>> You have made a series of predictions, and they correspond to a variety of different dates. I have those dates in front of me here. What is your prediction for the year 2027?
>> We're probably looking at AGI, as predicted by prediction markets and the heads of top labs.
>> So we have artificial general
intelligence by 2027.
And how would that make the world
different to how it is now?
>> So if you have this concept of a drop-in employee, you have free labor, physical and cognitive, trillions of dollars of it. It makes no sense to hire humans for most jobs if I can just get, you know, a $20 subscription or a free model to do what an employee does. First, anything on a computer will be automated. And next, I think humanoid robots are maybe five years behind. So in five years all the physical labor can also be automated. So we're looking at a world where we have levels of unemployment we've never seen before. We're not talking about 10% unemployment, which is scary, but 99%. All you have left is jobs where, for whatever reason, you prefer another human to do it for you. But anything else can be fully automated. It doesn't mean it will be automated in practice. A lot of times technology exists but it's not deployed. Video phones were invented in the 70s; nobody had them until iPhones came around. So we may have a lot more time with jobs and with a world which looks like this.
>> But the capability to replace most humans in most occupations will come very quickly.
>> Okay. So let's try and drill down into that and stress-test it. So, a podcaster like me. Would you need a podcaster like me?
>> So, let's look at what you do. You prepare. You ask questions. You ask follow-up questions. And you look good on camera.
>> Thank you so much.
>> Let's see what we can do. A large language model today can easily read everything I wrote. Yeah.
>> And have a very solid understanding, better. I assume you haven't read every single one of my books, right?
>> That thing would do it. It can train on every podcast you ever did. So it knows exactly your style, the types of questions you ask. It can also find correspondences between what worked really well: this type of question really increased views, this type of topic was very promising. So it can optimize, I think, better than you can, because you don't have a data set. Of course, visual simulation is trivial at this point.
>> So you can make a video within seconds of me sitting here.
>> So we can generate videos of you interviewing anyone on any topic very efficiently, and you just have to get likeness approval, whatever.
>> Are there many jobs that you think would remain in a world of AGI, if you're saying AGI is potentially going to be here, whether it's deployed or not, by 2027? Okay, so let's take out of this any physical labor jobs for a second. Are there any jobs that you think a human would be able to do better in a world of AGI still?
>> So that's the question I often ask people. In a world with AGI, and I think almost immediately we'll get super intelligence as a side effect, so the question really is, in a world of super intelligence, which is defined as better than all humans in all domains, what can you contribute? And so, you know better than anyone what it's like to be you. You know what ice cream tastes like to you. Can you get paid for that knowledge? Is someone interested in that?
Maybe not. Not a big market. There are jobs where you want a human. Maybe you're rich and you want a human accountant for whatever historic reasons. Old people like traditional ways of doing things. Warren Buffett would not switch to AI; he would use his human accountant. But it's a tiny subset of a market. Today we have products which are handmade in the US, as opposed to mass-produced in China, and some people pay more to have those, but it's a small subset. It's almost a fetish. There is no practical reason for it. And I think anything you can do on a computer could be automated using that technology.
>> You must hear a lot of rebuttals when you say this, because people experience a huge amount of mental discomfort when they hear that their job, their career, the thing they got a degree in, the thing they invested $100,000 into, is going to be taken away from them. So their natural reaction, for some people, is that cognitive dissonance: no, you're wrong; AI can't be creative; it's not this; it's not that; it'll never be interested in my job; I'll be fine. Because you hear these arguments all the time, right?
>> It's really funny. I ask people in different occupations. I ask my Uber driver, "Are you worried about self-driving cars?" And they go, "No, no one can do what I do. I know the streets of New York. I can navigate like no AI. I'm safe." And it's true for any job. Professors are saying this to me: oh, nobody can lecture like I do; this is so special. But you understand it's ridiculous. We already have self-driving cars replacing drivers. It's not even a question of whether it's possible; it's just how soon before you're fired.
>> Yeah. I mean, I was just in LA yesterday, and my car drives itself. So I get in the car, I put in where I want to go, and then I don't touch the steering wheel or the brake pedals, and it takes me from A to B, even if it's an hour-long drive, without any intervention at all. I actually still park it, but other than that, I'm not driving the car at all. And obviously in LA we also have Waymo now, which means you order it on your phone and it shows up with no driver in it and takes you to where you want to go.
>> Oh yeah.
>> So it's quite clear to see how that is potentially a matter of time for those people, because we do have some of those people listening to this conversation right now whose occupation is driving. And I think driving is the biggest occupation in the world, if I'm correct. I'm pretty sure it is the biggest occupation in the world.
>> One of the top ones. Yeah.
>> What would you say to those people? What should they be doing with their lives? Should they be retraining in something, and on what time frame?
>> So that's the paradigm shift here. Before, we always said: this job is going to be automated, retrain to do this other job. But if I'm telling you that all jobs will be automated, then there is no plan B. You cannot retrain. Look at computer science. Two years ago, we told people: learn to code. You are an artist, you cannot make money? Learn to code. Then we realized, oh, AI kind of knows how to code and is getting better. Become a prompt engineer. You can engineer prompts for AI. It's going to be a great job. Get a four-year degree in it. But then we're like, AI is way better at designing prompts for other AIs than any human. So that's gone. So I can't really tell you right now. The hottest thing now is designing AI agents for practical applications. I guarantee you in a year or two it's going to be gone just as well.
So I don't think there is a "this occupation needs to learn to do this instead." I think it's more: we as a humanity, when we all lose our jobs, what do we do? What do we do financially? Who's paying for us? And what do we do in terms of meaning? What do I do with my extra 60 to 80 hours a week?
>> You've thought around this corner, haven't you? A little bit. What is around that corner, in your view?
>> So the economic part seems easy. If you create a lot of free labor, you have a lot of free wealth, abundance. Things which are right now not very affordable become dirt cheap, and so you can provide for everyone's basic needs. Some people say you can provide beyond basic needs; you can provide a very good existence for everyone. The hard problem is: what do you do with all that free time? For a lot of people, their jobs are what gives them meaning in their life, so they would be kind of lost. We see it with people who retire or take early retirement. And for the many people who hate their jobs, they'll be very happy not working. But now you have people who are chilling all day. What happens to society? How does that impact crime rates, pregnancy rates, all sorts of issues nobody thinks about? Governments don't have programs prepared to deal with 99% unemployment.
>> What do you think that world looks like?
>> Again, I think the very important part to understand here is the unpredictability of it. We cannot predict what a smarter-than-us system will do. And the point when we get to that is often called the singularity, by analogy with a physical singularity: you cannot see beyond the event horizon. I can tell you what I think might happen, but that's my prediction. It is not what is actually going to happen, because I just don't have the cognitive ability to predict a much smarter agent impacting this world.
When you read science fiction, there is never a super intelligence in it actually doing anything, because nobody can write believable science fiction at that level. They either ban AI, like Dune, because this way you can avoid writing about it, or it's like Star Wars: you have these really dumb bots, but nothing super intelligent ever, because by definition you cannot predict at that level.
>> Because by definition of it being super intelligent, it will make its own mind up.
>> By definition, if it was something you could predict, you would be operating at the same level of intelligence, violating our assumption that it is smarter than you. If I'm playing chess with a super intelligence and I can predict every move, I'm playing at that level.
>> It's kind of like my French bulldog
trying to predict exactly what I'm
thinking and what I'm going to do.
>> That's a good cognitive gap. And it's not just that he can predict you're going to work and coming back; he cannot understand why you're doing a podcast. That is something completely outside of his model of the world.
>> Yeah. He doesn't even know that I go to work. He just sees that I leave the house and doesn't know where I go.
>> To buy food for him.
>> What's the most persuasive argument against your own perspective here?
>> That we will not have unemployment due
to advanced technology
>> That there won't be this French bulldog to human gap in understanding and, I guess, power and control.
>> So some people think that we can enhance human minds, either through combination with hardware, something like Neuralink, or through genetic re-engineering, where we make smarter humans.
>> Yeah.
>> It may give us a little more intelligence. I don't think we are still competitive in biological form with silicon form. The silicon substrate is much more capable for intelligence. It's faster. It's more resilient, more energy efficient in many ways,
>> Which is what computers are made out of, versus the brain.
>> Yeah. So I don't think we can keep up just by improving our biology. Some people think, and this is very speculative, maybe we can upload our minds into computers: scan your brain, the connectome of your brain, and have a simulation running on a computer, and you can speed it up, give it more capabilities. But to me, that feels like you no longer exist. We just created software by different means, and now you have AI based on biology and AI based on some other form of training. You can have evolutionary algorithms. You can have many paths to reach AGI, but at the end, none of them are humans.
>> I have another date here, which is 2030. What's your prediction for 2030? What will the world look like?
>> So we probably will have humanoid robots with enough flexibility and dexterity to compete with humans in all domains, including plumbers. We can make artificial plumbers.
>> Not the plumbers! That felt like the last bastion of human employment. So 2030, five years from now, humanoid robots. So many of the companies, the leading companies including Tesla, are developing humanoid robots at light speed, and they're getting increasingly more effective. And these humanoid robots will be able to move through physical space, you know, make an omelette, do anything humans can do, but obviously be connected to AI as well, so they can think, talk.
>> Right. They're controlled by AI. They're always connected to the network. So they are already dominating in many ways.
>> Our world will look remarkably different when humanoid robots are functional and effective, because that's really when, I start to think, the combination of intelligence and physical ability really doesn't leave much, does it, for us human beings.
>> Not much. So today, if you have intelligence, through the internet you can hire humans to do your bidding for you. You can pay them in Bitcoin. So you can have bodies, just not directly controlling them. So it's not a huge game changer to add direct control of physical bodies. Intelligence is where it's at. The important component is definitely the higher ability to optimize, to solve problems, to find patterns people cannot see.
>> And then by 2045, which is 20 years from now, I guess the world looks even more...
>> So if it's still around,
>> if it's still around,
>> Ray Kurzweil predicts that that's the year for the singularity. That's the year where progress becomes so fast, with this AI doing science and engineering work making improvements so quickly, that we cannot keep up anymore. That's the definition of the singularity: the point beyond which we cannot see, understand, predict.
>> See, understand, predict the intelligence itself, or...?
>> What is happening in the world, the technology being developed. So right now, if I have an iPhone, I can look forward to a new one coming out next year, and I'll understand it has a slightly better camera. Imagine now this process of researching and developing this phone is automated. It happens every six months, every three months, every month, week, day, hour, minute, second. You cannot keep up with 30 iterations of the iPhone in one day. You don't understand what capabilities it has, what proper controls are. It just escapes you. Right now, it's hard for any researcher in AI to keep up with the state of the art. While I was doing this interview with you, a new model came out, and I no longer know what the state of the art is. Every day, as a percentage of total knowledge, I get dumber. I may still know more because I keep reading, but as a percentage of overall knowledge, we're all getting dumber. And then you take it to extreme values: you have zero knowledge, zero understanding of the world around you.
>> Some of the arguments against this eventuality are that when you look at other technologies, like the industrial revolution, people just found new ways to work, and new careers that we could never have imagined at the time were created. How do you respond to that in a world of super intelligence?
>> It's a paradigm shift. We always had tools, new tools, which allowed some job to be done more efficiently. So instead of having 10 workers, you could have two workers, and eight workers had to find a new job. And there was another job: now you can supervise those workers or do something cool. But if you're creating a meta-invention, you're inventing intelligence. You're inventing a worker, an agent. Then you can apply that agent to the new job. There is not a job which cannot be automated. That never happened before. All the inventions we previously had were kind of a tool for doing something. So we invented fire. Huge game changer. But that's it; it stops with fire. We invented the wheel. Same idea. Huge implications. But the wheel itself is not an inventor. Here we're inventing a replacement for the human mind, a new inventor capable of doing new inventions. It's the last invention we ever have to make. At that point it takes over, and the process of doing science, research, even ethics research, morals, all that is automated at that point.
>> Do you sleep well at night?
>> Really well.
>> Even though you've spent the last, what, 15, 20 years of your life working on AI safety, and it's suddenly among us in a way that I don't think anyone could have predicted five years ago. When I say among us, I really mean that the amount of funding and talent that is now focused on reaching super intelligence faster has made it feel more inevitable, and sooner, than any of us could have possibly imagined.
>> We as humans have this built-in bias against thinking about really bad outcomes and things we cannot prevent. So, all of us are dying. Your kids are dying, your parents are dying, everyone's dying, but you still sleep well. You still go on with your day. Even 95-year-olds are still playing games and golf and whatnot, because we have this ability to not think about the worst outcomes, especially if we cannot actually modify the outcome. So that's the same infrastructure being used for this. Yeah, there is a humanity-level, death-like event. We happen to be close to it, probably. But unless I can do something about it, I can just keep enjoying my life. In fact, maybe knowing that you have a limited amount of time left gives you more reason to have a better life. You cannot waste any.
>> And that's the survival trait of
evolution, I guess, because those of my
ancestors that spent all their time
worrying wouldn't have spent enough time
having babies and hunting to survive.
>> Suicidal ideation. People who really start thinking about how horrible the…
>> You co-authored this paper analyzing the key arguments people make against the importance of AI safety. And one of the arguments in there is that there are other things of bigger importance right now. It might be world wars. It could be nuclear containment. It could be other things that the governments and podcasters like me should be talking about that are more important. What's your rebuttal to that argument?
>> So, super intelligence is a meta-solution. If we get super intelligence right, it will help us with climate change. It will help us with wars. It can solve all the other existential risks. If we don't get it right, it dominates. If climate change will take a hundred years to boil us alive and super intelligence kills everyone in five, I don't have to worry about climate change. So either way: either it solves it for me, or it's not an issue.
>> So you think it's the most important
thing to be working on?
>> Without question, there is nothing more important. And I know everyone says it. You take any class, you take an English professor's class, and he tells you this is the most important class you'll ever take. But you can see the meta-level difference with this one.
>> Another argument in that paper is that we will be in control, and that the danger is not AI. This particular argument asserts that AI is just a tool, humans are the real actors that present danger, and we can always maintain control by simply turning it off. Can't we just pull the plug out? I see that every time we have a conversation on the show about AI; someone says, "Can't we just unplug it?"
>> Yeah, I get those comments on every podcast I make, and I always want to get in touch with the guy and say, "This is brilliant. I never thought of it. We're going to write a paper together and get a Nobel Prize for it. Let's do it." Because it's so silly. Like, can you turn off a virus? You have a computer virus. You don't like it. Turn it off. How about Bitcoin? Turn off the Bitcoin network. Go ahead, I'll wait. This is silly. Those are distributed systems. You cannot turn them off. And on top of it, they're smarter than you. They made multiple backups. They predicted what you're going to do. They will turn you off before you can turn them off. The idea that we will be in control applies only to pre-super-intelligence levels, basically what we have today. Today, humans with AI tools are dangerous. They can be hackers, malevolent actors, absolutely. But the moment super intelligence becomes smarter and dominates, they're no longer the important part of that equation. It is the higher intelligence I'm concerned about, not the human who may add an additional malevolent payload but at the end still doesn't control it.
>> It is tempting to follow the next argument that I saw in that paper, which basically says: listen, this is inevitable, so there's no point fighting against it, because there's really no hope here; we should probably give up even trying and be faithful that it'll work itself out. Because everything you've said sounds really inevitable. And with China working on it, and I'm sure Putin's got some secret division, I'm sure Iran is doing some bits and pieces, every European country's trying to get ahead of AI, and the United States is leading the way. So it's inevitable. So we probably should just have faith and pray.
>> Well, praying is always good, but incentives matter. If you are looking at what drives these people: yes, money is important. There is a lot of money in that space, and so everyone's trying to be there and develop this technology. But if they truly understand the argument, if they understand that you will be dead and no amount of money will be useful to you, then incentives switch. They would want to not be dead. A lot of them are young people, rich people. They have their whole lives ahead of them. I think they would be better off not building advanced super intelligence and concentrating on narrow AI tools for solving specific problems. Okay, my company cures breast cancer. That's all. We make billions of dollars. Everyone's happy. Everyone benefits. It's a win. We are still in control today. It's not over until it's over. We can decide not to build general super intelligences.
>> I mean, the United States might be able to conjure up enough enthusiasm for that, but if the United States doesn't build general super intelligences, then China is going to have the big advantage, right?
>> So right now, at those levels, whoever has more advanced AI has a more advanced military, no question. We see it with existing conflicts. But the moment you switch to super intelligence, uncontrolled super intelligence, it doesn't matter who builds it, us or them. And if they understand this argument, they also would not build it. It's mutually assured destruction on both ends.
>> Is this technology different than, say,
nuclear weapons, which require a huge
amount of investment? You have to enrich
the uranium, and you need billions of
dollars potentially to even build a
nuclear weapon.
But it feels like this technology is
much cheaper to get to super
intelligence, or at least it will become
cheaper. I wonder if it's possible that
some guy, some startup, is going to be
able to build super intelligence in a
couple of years without the need for
billions of dollars of compute or
electricity.
>> That's a great point. So every year it
becomes cheaper and cheaper to train a
sufficiently large model. If today it
would take a trillion dollars to build
super intelligence, next year it could
be a hundred billion, and so on; at some
point a guy with a laptop could do it.
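As a rough sketch of that arithmetic: assuming, purely for illustration, the roughly 10x annual cost drop implied by the trillion-to-hundred-billion figures above, and a hypothetical $2,000 laptop-scale budget, you can count the years until an individual could afford the training run.

```python
# Illustrative sketch only: the 10x annual cost drop and the $2,000
# "guy with a laptop" budget are assumptions taken from the figures
# in the conversation, not measured rates.

def years_until_affordable(start_cost, budget, annual_factor=10):
    """Years until start_cost, divided by annual_factor each year,
    falls to or below budget."""
    years = 0
    cost = start_cost
    while cost > budget:
        cost /= annual_factor
        years += 1
    return years

# From $1 trillion down to a $2,000 personal budget:
print(years_until_affordable(1e12, 2e3))  # -> 9
```

Under those assumed numbers, the gap from a trillion-dollar project to a hobbyist budget closes in under a decade, which is the point being made.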
But you don't want to wait four years
for it to become affordable. That's why
so much money is pouring in: somebody
wants to get there this year and, if
lucky, take all the winnings, a
light-cone-level reward. In that regard,
they are both very expensive projects,
Manhattan-level projects,
>> which was the nuclear bomb project.
>> The difference between the two
technologies is that nuclear weapons are
still tools.
some dictator, some country, someone has
to decide to use them, deploy them.
Whereas super intelligence is not a
tool. It's an agent. It makes its
own decisions and no one is controlling
it. I cannot take out this dictator and
now super intelligence is safe. So
that's a fundamental difference to me.
>> But if you're saying that it is going to
get incrementally cheaper, like Moore's
law, isn't it, the technology gets
cheaper,
>> then there is a future where some guy on
his laptop is going to be able to create
super intelligence without oversight or
regulation or employees, etc.
>> Yeah, that's why a lot of people are
suggesting we need to build something
like a surveillance planet, where you
are monitoring who's doing what and
trying to prevent people from doing it.
Do I think it's feasible? No. At some
point it becomes so affordable and so
trivial that it just will happen. But at
this point we're trying to get more
time. We don't want it to happen in five
years. We want it to happen in 50 years.
>> I mean, that's not very hopeful.
>> Depends on how old you are.
>> Depends on how old you are.
I mean if you're saying that you believe
in the future people will be able to
make super intelligence
without the resources that are required
today then it is just a matter of time.
>> Yeah, but the same will be true for many
other technologies. We're getting much
better at synthetic biology, where today
someone with a bachelor's degree in
biology can probably create a new virus.
This will also become cheaper, as will
other technologies like it. So we are
approaching a point
where it's very difficult to make sure
no technological
breakthrough is the last one. So
essentially in many directions we have
this pattern of making it easier, in
terms of resources and in terms of
intelligence, to destroy the world. If
you look at, I don't know, 500 years
ago, the worst dictator with all his
resources could kill a couple million
people. He couldn't destroy the world.
Now, with nuclear weapons, we can blow
up the whole planet multiple times over.
With synthetic biology, as we saw with
COVID, you can very easily create a
combination virus which impacts billions
of people. And all of those things are
becoming easier to do in the near term.
>> You talk about
extinction being a real risk, human
extinction being a real risk. Of all
the pathways to human extinction that
you think are most likely, what is
the leading pathway? Because I know you
talk about there being some issues
pre-deployment of these AI tools, like
someone making a mistake when they're
designing a model, or other issues
post-deployment, and when I say
post-deployment I mean once a chatbot or
an agent is released into the world and
someone hacks into it and reprograms it
to be malicious. Of all these potential
paths to human extinction, which one do
you think is the highest probability?
>> So I can only talk about the ones I can
predict myself. I can predict that even
before we get to super intelligence,
someone will create a very advanced
biological tool, create a novel virus,
and that virus gets everyone, or most
everyone. I can envision it. I can
understand the pathway. I can say that.
>> So just to zoom in on that then that
would be using an AI to make a virus and
then releasing it.
>> Yeah. And would that be intentional or
>> There are a lot of psychopaths, a lot of
terrorists, a lot of doomsday cults.
We've seen historically that they try to
kill as many people as they can. They
usually fail; they kill hundreds,
thousands. But if they get technology to
kill millions or billions, they would do
that gladly.
The point I'm trying to emphasize is
that it doesn't matter what I can come
up with. I am not the malevolent actor
you're trying to defeat here. It's a
super intelligence which can come up
with completely novel ways of doing it.
Again, you brought up the example of
your dog.
Your dog cannot understand all the ways
you can take it out.
It can maybe think you'll bite it to
death or something, but that's all.
Whereas you have an infinite supply of
resources.
So if I asked your dog exactly how you
are going to take it out, it would not
give you a meaningful answer. It can
talk about biting. And this is what we
know.
We know viruses. We experienced viruses.
We can talk about them. But what
an AI system capable of doing novel
physics research can come up with is
beyond me.
>> One of the things that I think most
people don't understand is how little we
understand about how these AIs are
actually working. Because one would
assume, you know, with computers, we
kind of understand how a computer works.
We know that it's doing this and then
this, and it's running on code. But from
reading your work, you describe it as
being a black box. So, in the context of
something like ChatGPT or an AI, you're
telling me that the people that have
built that tool don't actually know
what's going on inside there.
>> That's exactly right. Even the people
making those systems have to run
experiments on their product to learn
what it's capable of. They train it by
giving it all of the data, let's say all
of the internet's text. They run it on a
lot of computers to learn patterns in
that text, and then they start
experimenting with that model. Oh, do
you speak French? Oh, can you do
mathematics? Oh, are you lying to me
now? So maybe it takes a year to train
it and then six months to get some
fundamentals about what it's capable of,
some safety overhead. But we still
discover new capabilities in old models.
If you ask a question in a different
way, it becomes smarter.
So it's no longer engineering, the way
it was for the first 50 years, where
someone was a knowledge engineer
programming an expert system AI to do
specific things. It's a science. We are
creating this artifact, growing it. It's
like an alien plant, and then we study
it to see what it's doing. And just like
with plants, we don't have 100% accurate
knowledge of biology. We don't have full
knowledge here. We kind of know some
patterns. We know, okay, if we add more
compute it gets smarter most of the
time, but nobody can tell you precisely
what the outcome is going to be given a
set of inputs.
>> What do you
make of OpenAI and Sam Altman and what
they're doing? And obviously you're
aware that one of the co-founders, was
it Ilya?
>> Ilya, yeah. Ilya left and he started a
new company called
>> Safe Superintelligence.
>> AI safety wasn't challenging enough, so
he decided to just jump right to the
hard problem.
As an onlooker, when you see that people
are leaving OpenAI to start super
intelligence safety companies, what was
your read on that situation?
>> So, a lot of people who worked with Sam
said that maybe he's not the most direct
person in terms of being honest with
them, and they had concerns about his
views on safety. That's part of it. They
wanted more control, they wanted more
concentration on safety. But also, it
seems that anyone who leaves that
company and starts a new one gets a $20
billion valuation just for having
started it. You don't have a product,
you don't have customers, but if you
want to make many billions of dollars,
just do that. So it seems like a very
rational thing to do for anyone who can.
So I'm not surprised that there is a lot
of attrition.
Meeting him in person, he's super nice,
very smart, absolutely
perfect public interface. You see him
testify in the Senate, he says the right
thing to the senators. You see him talk
to the investors, they get the right
message. But if you look at what people
who know him personally are saying, he's
probably not the right person to be
controlling a project of that impact.
>> Why?
>> Winning this race to super intelligence,
being the guy who created a godlike AI,
and controlling the light cone of the
universe. Or worse.
>> Do you suspect that's what he's driven
by, the legacy of being an impactful
person that did a remarkable thing,
versus the consequence that might have
for society? Because it's interesting
that his other startup is Worldcoin,
which is basically a platform to create
universal basic income, i.e. a platform
to give us income in a world where
people don't have jobs anymore. So on
one hand you're creating an AI company,
and on the other hand you're creating a
company that is preparing for people not
to have employment.
>> It also has other properties. It keeps
track of everyone's biometrics. It keeps
you in charge of the world's economy,
the world's wealth. They're retaining a
large portion of Worldcoins. So I think
it's a very reasonable part of a plan
for world dominance: if you have a super
intelligence system and you control the
money,
>> Why would someone want world dominance?
>> People have different levels of
ambition. When you're a very young
person with billions of dollars and
fame, you start looking for more
ambitious projects. Some people want to
go to Mars. Others want to control the
light cone of the universe.
>> What did you say? The light cone of the
universe?
>> The light cone. Every part of the
universe light can reach from this
point. Meaning anything accessible you
want to grab and bring into your
control.
>> Do you think Sam Altman wants to control
every part of the universe?
>> I suspect he might. Yes.
>> It doesn't mean he doesn't want a side
effect of it being a very beneficial
technology which makes all the humans
happy. Happy humans are good for
control.
>> If you had to guess, what does the world
look like in 2100?
>> It's either free of human existence, or
it's completely not comprehensible to
someone like us. It's one of those
extremes.
>> So there's either no humans,
>> It's basically the world is destroyed,
or it's so different that I cannot
envision it.
>> What can be done to turn this ship to a
more certain positive outcome at this
point? Are there still things that we
can do, or is it too late?
>> So I believe in personal self-interest.
If people realize that doing this thing
is really bad for them personally, they
will not do it. So our job is to
convince everyone with any power in this
space, creating this technology, working
for those companies, that they are doing
something very bad for themselves. Never
mind the 8 billion people you are
experimenting on with no permission, no
consent: you yourself will not be happy
with the outcome. If we can get everyone
to understand that that's the default.
And it's not just me saying it. You had
Geoff Hinton, Nobel Prize winner, a
founder of the whole machine learning
field. He says the same thing. Bengio,
dozens of others, top scholars. We had a
statement about dangers of AI signed by
thousands of scholars, computer
scientists. This is basically what we
think right now. And we need to make it
universal. No one should disagree with
this. And then we may actually make good
decisions about what technology to
build. It doesn't guarantee long-term
safety for humanity, but it means we're
not trying to get to the worst possible
outcome as soon as possible.
>> And are you hopeful that that's even
possible?
>> I want to try. We have no choice but to
try.
>> And what would need to happen, and who
would need to act? Is it government
legislation? Is it
>> Unfortunately, I don't think making it
illegal is sufficient. There are
different jurisdictions. There are
loopholes. And what are you going to do
if somebody does it? You're going to
fine them for destroying humanity? Very
steep fines for it? Like, what are you
going to do? It's not enforceable. If
they do create it, now the super
intelligence is in charge. So the
judicial system we have is not
impactful. And all the punishments we
have are designed for punishing humans.
Prisons, capital punishment: they don't
apply to AI.
>> You know, the problem I have is that
when I have these conversations, I never
feel like I walk away with hope that
something's going to go well. And what I
mean by that is I never feel like I walk
away with some kind of clear set of
actions that can course-correct what
might happen here. So what should I do?
What should the person sat at home
listening to this do?
>> You talk to a lot of people who are
building this technology.
>> Mhm.
>> Ask them precisely to explain some of
those things they claim to be
impossible. How they solved it, or how
they are going to solve it before they
get to where they're going.
>> Do you know, I don't think Sam Altman
wants to talk to me.
>> I don't know. He seems to go on a lot of
podcasts. Maybe he does.
>> He wants to go online.
I wonder why that is. I wonder why that
is. I'd love to speak to him, but I
don't think he wants me to interview
him.
>> Have an open challenge. Maybe money is
not the incentive, but whatever attracts
people like that. Whoever can convince
you that it's possible to control and
make safe super intelligence gets the
prize. They come on your show and prove
their case. Anyone. If no one claims the
prize or even accepts the challenge
after a few years, maybe we don't have
anyone with solutions. We have companies
valued again at billions and billions of
dollars working on safe super
intelligence. We haven't seen their
>> Yeah, I'd like to speak to Ilya as well,
because I know he's working on safe
super intelligence.
>> Notice a pattern too. If you look at the
history of AI safety organizations, or
departments within companies, they
usually start well, very ambitious, and
then they fail and disappear. OpenAI had
a super intelligence alignment team. The
day they announced it, I think they said
we're going to solve it in four years.
About half a year later, they cancelled
the team. And there are dozens of
similar examples. Creating perfect
safety for super intelligence, perpetual
safety as it keeps improving, modifying,
interacting with people: you're never
going to get there. It's impossible.
There's a big difference between
difficult problems in computer science,
NP-complete problems, and impossible
problems. And I think control,
indefinite control of super
intelligence, is such a problem.
>> So what's the point of trying, then, if
it's impossible?
>> Well, I'm trying to prove specifically
that it is, because once we establish
something is impossible, fewer people
will waste their time claiming they can
do it while looking for money. So many
people are going, "Give me a billion
dollars and 2 years, and I'll solve it
for you." Well, I don't think you will.
>> But people aren't going to stop striving
towards it. So, if there are no attempts
to make it safe and there are more people
increasingly striving towards it, then
it's inevitable.
>> But it changes what we do. If we know
that it's impossible to make it right,
to make it safe, then this direct path
of just building it as soon as you can
becomes a suicide mission. Hopefully
fewer people will pursue that; they may
go in other directions. Like, again, I'm
a scientist, I'm an engineer, I love AI,
I love technology, I use it all the
time. Build useful tools. Stop building
agents. Build narrow super intelligence,
not a general one. I'm not saying you
shouldn't make billions of dollars. I
love billions of dollars. But don't kill
everyone, yourself included.
>> They don't think they're going to,
though.
>> Then tell us why. I hear things about
intuition. I hear things about "we'll
solve it later." Tell me specifically,
in scientific terms. Publish a
peer-reviewed paper explaining how
you're going to control super
intelligence.
>> Yeah, it's strange. It's strange to even
bother if there was even a 1% chance of
human extinction. If someone told me
there was a 1% chance that if I got in a
car I might not be alive, I would not
get in the car. If you told me there was
a 1% chance that if I drank whatever
liquid is in this cup right now I might
die, I would not drink the liquid. Even
if there was a billion dollars for me if
I survived. So there's a 99% chance I
get a billion dollars, and a 1% chance I
die. I wouldn't drink it. I wouldn't
take the chance.
>> It's worse than that. Not just you die.
Everyone dies.
>> Yeah. Yeah.
>> Now, would we let you drink it at any
odds? That's for us to decide. You don't
get to make that choice for us. To get
consent from human subjects, you need
them to comprehend what they are
consenting to. If those systems are
unexplainable, unpredictable, how can
they consent? They don't know what they
are consenting to.
So, it's impossible to get consent by
definition. So, this experiment can
never be run ethically. By definition
they are doing unethical experimentation
on human subjects.
>> Do you think people should be
protesting?
>> There are people protesting. There is
Stop AI, there is Pause AI. They block
the offices of OpenAI. They do it
weekly, monthly, quite a few actions,
and they're recruiting new people.
>> Do you think more people should be
protesting? Do you think that's an
effective solution?
>> If you can get it to a large enough
scale, to where a majority of the
population is participating, it would be
impactful. I don't know if they can
scale from current numbers to that. But
I support everyone trying everything,
peacefully and legally.
>> And for the person listening at home,
what should they be doing? Because they
don't want to feel powerless. None of us
want to feel powerless.
>> So it depends on what time scale we're
asking about. Are we saying, like, this
year your kid goes to college, what
major to pick? Should they go to college
at all?
>> Yeah.
>> Should you switch jobs? Should you go
into certain industries? Those questions
we can answer. We can talk about the
immediate future. What should you do in
5 years, with this being created, as an
average person? Not much. Just like they
couldn't influence World War II, nuclear
holocaust, anything like that. It's not
something anyone's going to ask them
about. Today, if you want to be a part
of this movement, yeah, join Pause AI,
join Stop AI, those organizations
currently trying to build up momentum to
bring democratic powers to influence
those individuals.
>> So in the near term, not a huge amount.
I was wondering if there are any
interesting strategies in the near term.
Like, should I be thinking differently
about my family? I mean, you've got
kids, right? You've got three kids.
>> That I know about, yeah.
>> Three kids. How are you thinking about
parenting in this world that you see
around the corner? How are you thinking
about what to say to them, the advice to
give them, what they should be learning?
>> So there is general advice, outside of
this domain, that you should live every
day as if it's your last. It's good
advice no matter what. If you have three
years left or 30 years left, you lived
your best life. So try not to do things
you hate for too long.
Do interesting things. Do impactful
things. If you can do all that while
helping people, do that.
>> Simulation theory is an interesting,
sort of adjacent subject here, because
as computers begin to accelerate and get
more intelligent, and we're able to do
things with AI that we could never have
imagined, you can imagine the world that
we could create with virtual reality. I
think it was Google that recently
released, what was it called? Like the
AI worlds.
>> You take a picture and it generates a
whole world.
>> Yeah. And you can move through the
world. I'll put it on the screen for
people to see. Google have released this
technology which allows you, I think
with a simple prompt actually, to make a
three-dimensional world that you can
then navigate through, and in that world
it has memory. So in the world, if you
paint on a wall and turn away, you look
back and the wall
>> It's persistent.
>> Yeah, it's persistent. And when I saw
that I thought, jeez, bloody hell, this
is like the foothills of being able to
create a simulation that's
indistinguishable from everything I see
here.
>> Right. That's why I think we are in one.
That's exactly the reason: AI is getting
to the level of creating human-level
agents, and virtual reality is getting
to the level of being indistinguishable
from ours.
>> So, you think this is a simulation?
>> I'm pretty sure we are in a simulation,
yeah.
>> For someone that isn't familiar with the
simulation arguments, what are the first
principles here that convince you that
we are currently living in a simulation?
>> So, you need certain technologies to
make it happen. If you believe we can
create human-level AI,
>> Yeah,
>> and you believe we can create virtual
reality as good as this in terms of
resolution, haptics, whatever properties
it has, then I commit right now: the
moment this is affordable, I'm going to
run billions of simulations of this
exact moment, making sure you are
statistically in one.
>> Say that last part again. You're going
to run,
>> I'm going to commit right now, and it's
very affordable, it's like 10 bucks a
month to run it: I'm going to run a
billion simulations of this interview.
>> Why?
>> Because statistically that means you are
in one right now. The chances of you
being in the real one are one in a
billion.
>> Okay. So to make sure I'm clear on this,
>> It's a retroactive placement.
>> Yeah. So the minute it's affordable,
then you can run billions of them, and
they would feel and appear to be exactly
like this interview right now.
>> Yeah. So, assuming the AI has internal
states, experiences, qualia; some people
argue that they don't, some say they
already have it, that's a separate
philosophical question. But if we can
simulate this, I will.
>> Some people might misunderstand. You're
not saying that you will. You're saying
that someone will.
>> I can also do it. I don't mind.
>> Okay.
>> Of course, others will do it before I
get there. If I'm getting it for $10,
somebody got it for a thousand. That's
not the point. If you have the
technology, we're definitely running a
lot of simulations: for research, for
entertainment, games, all sorts of
reasons. And the number of those greatly
exceeds the number of real worlds we're
in. Look at all the video games kids are
playing. Every kid plays 10 different
games. There's, you know, a billion kids
in the world. So there are 10 billion
simulations to one real world.
>> Mhm.
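The counting argument above reduces to a one-line probability sketch: if N indistinguishable simulations run alongside one real world, a randomly placed observer has a 1-in-(N+1) chance of being in the real one. The counts below are the illustrative figures from the conversation, not measurements.

```python
# Sketch of the self-location argument: with N indistinguishable
# simulated worlds plus one real world, the chance that a random
# observer is in the real one is 1 / (N + 1). The counts are the
# speaker's illustrative figures (a billion runs; 10 games times
# a billion kids), not data.

def p_real_world(num_simulations):
    """Probability of being in the one real world out of N+1 worlds."""
    return 1 / (num_simulations + 1)

print(p_real_world(10**9))       # "a billion simulations of this interview"
print(p_real_world(10 * 10**9))  # 10 games x a billion kids
```

The sketch assumes each simulated observer is indistinguishable from the real one and equally weighted, which is exactly the premise being debated.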
Even more so when we think about
advanced AI, super intelligent systems:
their thinking is not like ours. They
think in a lot more detail. They run
experiments. So running a detailed
simulation of some problem, at the level
of creating artificial humans and
simulating the whole planet, would be
something they'll do routinely. So there
is a good chance this is not me doing it
for $10. It's a future simulator
thinking about something in this world.
So it could be the case that a species of humans or a species of
a species of humans or a species of intelligence in some form got to this
intelligence in some form got to this point where they could affordably run
point where they could affordably run simulations that are in
simulations that are in indistinguishable from this and they
indistinguishable from this and they decided to do it and this is it right
decided to do it and this is it right now.
And it would make sense that they would run simulations as experiments or for
run simulations as experiments or for games or for entertainment. And also
games or for entertainment. And also when we think about time in the world
when we think about time in the world that I'm in in this simulation that I
that I'm in in this simulation that I could be in right now, time feels long
could be in right now, time feels long relatively you know I have 24 hours in a
relatively you know I have 24 hours in a day but on their in their world it could
day but on their in their world it could be
be >> time is relative.
>> time is relative. >> Relative yeah it could be a second. My
>> Relative yeah it could be a second. My whole life could be a millisecond in
whole life could be a millisecond in there.
there. >> Right. You can change speed of
>> Right. You can change speed of simulations you're running for sure.
simulations you're running for sure. So your belief is that this is probably
So your belief is that this is probably a simulation
a simulation >> most likely and there is a lot of
>> most likely and there is a lot of agreement on that. If you look again
agreement on that. If you look again returning to religions, every religion
returning to religions, every religion basically describes what a super
basically describes what a super intelligent being, an engineer, a
intelligent being, an engineer, a programmer creating a fake world for
programmer creating a fake world for testing purposes or for whatever. But if
testing purposes or for whatever. But if you took the simulation hypothesis
you took the simulation hypothesis paper, you go to jungle, you talk to
paper, you go to jungle, you talk to primitive people, a local tribe and in
primitive people, a local tribe and in their language you tell them about it.
their language you tell them about it. Go back two generations later. They have
Go back two generations later. They have religion. That's basically what the
religion. That's basically what the story is.
story is. >> Religion. Yeah. Describes a simulation
>> Religion. Yeah. Describes a simulation the theory. Basically somebody created.
the theory. Basically somebody created. >> So by default that was the first theory
>> So by default that was the first theory we had. And now with science more and
we had. And now with science more and more people are going like I'm giving it
more people are going like I'm giving it non-trivial probability. A few people
non-trivial probability. A few people are as high as I am, but a lot of people
are as high as I am, but a lot of people give it some credence.
give it some credence. >> What percentage are you at in terms of
>> What percentage are you at in terms of believing that we are currently living
believing that we are currently living in a simulation?
in a simulation? >> Very close to certainty.
>> Very close to certainty. >> And what does that mean for the nature
>> And what does that mean for the nature of your life? If you're close to 100%
of your life? If you're close to 100% certain that we are currently living in
certain that we are currently living in a simulation, does that change anything
a simulation, does that change anything in your life?
in your life? >> So all the things you care about are
>> So all the things you care about are still the same. Pain still hurts. Love
still the same. Pain still hurts. Love still love, right? Like those things are
still love, right? Like those things are not different. So it doesn't matter.
not different. So it doesn't matter. They're still important. That's what
They're still important. That's what matters. The little 1% different is that
matters. The little 1% different is that I care about what's outside the
I care about what's outside the simulation. I want to learn about it. I
simulation. I want to learn about it. I write papers about it. So that's the only impact.
>> And what do you think is outside of the simulation?
>> I don't know. But we can look at this world and derive some properties of the simulators. So clearly brilliant engineer, brilliant scientist, brilliant artist; not so good with morals and ethics. Room for improvement
>> in our view of what morals and ethics should be.
>> Well, we know there is suffering in the world. So unless you think it's ethical to torture children, then I'm questioning your approach.
>> But in terms of incentives, to create a positive incentive you probably also need to create negative incentives. Suffering seems to be one of the negative incentives built into our design, to stop me doing things I shouldn't do. So like, put my hand in a fire, it's going to hurt.
>> But it's all about levels, levels of suffering, right? So unpleasant stimuli, negative feedback doesn't have to be at like negative-infinity hell levels. You don't want to burn alive and feel it. You want to be like, "Oh, this is uncomfortable. I'm going to stop."
>> It's interesting, because we assume that they don't have great morals and ethics, but we too take animals and cook them and eat them for dinner, and we also conduct experiments on mice and rats.
>> But to get university approval to conduct an experiment, you submit a proposal, and there is a panel of ethicists who would say you can't experiment on humans, you can't burn babies, you can't eat animals alive. All those things would be banned
>> in most parts of the world
>> where they have ethical boards.
>> Yeah.
>> Some places don't bother with it, so they have an easier approval process.
>> It's funny, when you talk about the simulation theory, there's an element of the conversation that makes life feel less meaningful in a weird way. I know it doesn't matter, but whenever I have this conversation with people, not on the podcast, about are we living in a simulation, you almost see a little bit of meaning come out of their life for a second, and then they forget and then they carry on. But the thought that this is a simulation almost posits that it's not important. I think humans want to believe that this is the highest level, that we're the most important, that it's all about us. We're quite egotistical by design. It's just an interesting observation I've always had when I have these conversations with people: it seems to strip something out of their life.
>> Do you feel religious people feel that way? They know there is another world, and the one that matters is not this one. Do you feel they don't value their lives the same?
>> I guess in some religions, I think they think that this world is being created for them and that they are going to go to this heaven or hell, and that still puts them at the very center of it. But if it's a simulation, you know, we could just be some computer game that a four-year-old alien is messing around with while he's got some time to burn.
>> But maybe there is, you know, a test, and there is a better simulation you go to and a worse one. Maybe there are different difficulty levels. Maybe you want to play it on a harder setting next time.
>> I've just invested millions into this and become a co-owner of the company. It's a company called Ketone IQ. And the story is quite interesting. I started talking about ketosis on this podcast and the fact that I'm very low carb, very, very low sugar, and my body produces ketones, which have made me incredibly focused, have improved my endurance, have improved my mood, and have made me more capable at doing what I do here. And because I was talking about it on the podcast, a couple of weeks later, these showed up on my desk in my HQ in London, these little shots. And oh my god, the impact this had on my ability to articulate myself, on my focus, on my workouts, on my mood, on stopping me crashing throughout the day was so profound that I reached out to the founders of the company, and now I'm a co-owner of this business. I highly, highly recommend you look into this. I highly recommend you look at the science behind the product. If you want to try it for yourself, visit ketone.com/stephven for 30% off your subscription order. And you'll also get a free gift with your second shipment. That's ketone.com/stephven.
And I'm so honored that once again, a company I own can sponsor my podcast. I've built companies from scratch and backed many more. And there's a blind spot that I keep seeing in early stage founders. They spend very little time thinking about HR. And it's not because they're reckless or they don't care. It's because they're obsessed with building their companies. And I can't fault them for that. At that stage, you're thinking about the product, how to attract new customers, how to grow your team, really, how to survive. And HR slips down the list because it doesn't feel urgent. But sooner or later, it is. And when things get messy, tools like our sponsor today, Justworks, go from being a nice-to-have to being a necessity. Something goes sideways and you find yourself having conversations you did not see coming. This is when you learn that HR really is the infrastructure of your company, and without it, things wobble. And Justworks stops you learning this the hard way. It takes care of the stuff that would otherwise drain your energy and your time, automating payroll, health insurance, benefits, and it gives your team human support at any hour. It grows with your small business, from startup through to growth, even when you start hiring team members abroad. So if you want HR support that's there through the exciting times and the challenging times, head to justworks.com now. That's justworks.com.
And do you think much about longevity?
>> A lot. Yeah. It's probably the second most important problem, because if AI doesn't get us, that will.
>> What do you mean?
>> You're going to die of old age.
>> Which is fine.
>> That's not good. You want to die?
>> I mean,
>> you don't have to. It's just a disease. We can cure it. Nothing stops you from living forever as long as the universe exists. Unless we escape the simulation.
>> But we wouldn't want a world where everybody could live forever, right? That would be
>> Sure, we do. Why? Who do you want to die?
>> Well, I don't know. I mean, I say this because it's all I've ever known, that people die. But wouldn't the world become pretty overcrowded if
>> No, you stop reproducing if you live forever. You have kids because you want a replacement for you. If you live forever, you're like, I'll have kids in a million years. That's cool. I'll go explore the universe first. Plus, if you look at actual population dynamics outside of like one continent, we're all shrinking. We're not growing.
>> Yeah. This is crazy. It's crazy that the richer people get, the fewer kids they have, which aligns with what you're saying. And I do actually think, if I'm going to be completely honest here, if I knew that I was going to live to a thousand years old, there's no way I'd be having kids at 30.
>> Right. Exactly. Biological clocks are based on terminal points. Whereas if your biological clock is infinite, you'll be like, "one day."
>> And you think that's close, being able to extend our lives?
>> It's one breakthrough away. I think somewhere in our genome, we have this rejuvenation loop, and it's set to basically give us at most 120. I think we can reset it to something bigger.
>> AI is probably going to accelerate that.
>> That's one very important application area. Yes, absolutely.
>> So maybe Brian Johnson's right when he says don't die now. He keeps saying to me, he's like, don't die now.
>> Don't die ever.
>> But you know, he's saying like don't die before we get to the technology,
>> right? Longevity escape velocity. You want to live long enough to live forever. If at some point every year of your existence adds two years to your existence through medical breakthroughs, then you live forever. You just have to make it to that point of longevity escape velocity. And he thinks that longevity escape velocity, especially in a world of AI, is decades away minimum, which means
>> As soon as we fully understand the human genome, I think we'll make amazing breakthroughs very quickly, because we know some people have genes for living way longer. We have generations of people who are centenarians. So if we can understand that and copy it, or copy it from some animals which live forever, we'll get there.
>> Would you want to live forever?
>> Of course. Reverse the question. Let's say we lived forever and you ask me, "Do you want to die in 40 years?" Why would I say yes?
>> I don't know. Maybe
>> you're just used to the default.
>> Yeah, I am used to the default.
>> And nobody wants to die. Like no matter how old you are, nobody goes, "Yeah, I want to die this year." Everyone's like, "Oh, I want to keep living."
>> I wonder if life and everything would be less special if I lived for 10,000 years. I wonder if going to Hawaii for the first time, or, I don't know, a relationship, all of these things would be way less special to me if they were less scarce.
>> It could be individually less special, but there is so much more you can do. Right now you can only make plans to do something for a decade or two. You cannot have an ambitious plan of working on this project for 500 years. Imagine the possibilities open to you with infinite time in an infinite universe.
Gosh.
>> Well, you can
>> Feels exhausting.
>> It's a big amount of time. Also, I don't know about you, but I don't remember like 99% of my life in detail. I remember big highlights. So, even if I enjoyed Hawaii 10 years ago, I'll enjoy it again.
>> Are you thinking about that really practically, in terms of, you know, in the same way that Brian Johnson is? Brian Johnson is convinced that we're like maybe two decades away from being able to extend life. Are you thinking about that practically, and are you doing anything about it?
>> Diet, nutrition. I try to think about investment strategies which pay out in a million years. Yeah.
>> Really?
>> Yeah. Of course.
>> What do you mean, of course? Of course.
>> Why wouldn't you? If you think this is what's going to happen, you should try that. So, if we get AI right now, what happens to the economy? We talked about Worldcoin. We talked about free labor. What's money? Is it now Bitcoin? Do you invest in that? Is there something else which becomes the only resource we cannot fake? So those things are very important research topics.
>> So you're investing in Bitcoin, aren't you?
>> Yeah,
>> because it's a
>> It's the only scarce resource. Nothing else has scarcity. Everything else, if the price goes up, we'll make more. I can make as much gold as you want given a proper price point. You cannot make more Bitcoin.
>> Some people say Bitcoin is just this thing on a computer that we all agreed was valuable.
>> We are a thing on a computer, remember?
>> Okay. So, I mean, not investment advice, but investment advice.
>> It's hilarious how that's one of those things where they tell you it's not, but you know it is immediately. There is a "your call is important to us." That means your call is of zero importance. And investment advice is like that.
>> Yeah. Yeah. When they say no investment advice, it's definitely investment advice. Um, but it's not investment advice. Okay. So you're bullish on Bitcoin because it can't be messed with.
>> It is the only thing which we know how much there is in the universe. So gold, there could be an asteroid made out of pure gold heading towards us, devaluing it. Well, also killing all of us. But Bitcoin, I know exactly the numbers, and even the 21 million is an upper limit. How many are lost? Passwords forgotten. I don't know what Satoshi is doing with his million. It's getting scarcer every day while more and more people are trying to accumulate it.
>> Some people worry that it could be hacked with a supercomputer.
>> A quantum computer can break that algorithm. There are strategies for switching to quantum-resistant cryptography for that. And quantum computers are still kind of weak.
>> Do you think there's any changes to my life that I should make following this conversation? Is there anything that I should do differently the minute I walk out of this door?
>> I assume you already invest in Bitcoin heavily.
>> Yes, I'm an investor in Bitcoin.
>> Besides financial advice? Uh, no. You seem to be winning. Maybe it's your simulation. You're rich, handsome, you have famous people hanging out with you. Like, that's pretty good. Keep it up. Robin Hanson has a paper about how to live in a simulation, what you should be doing in it. And your goal is to do exactly that. You want to be interesting. You want to hang out with famous people so they don't shut it down, so you are part of what someone's actually watching on pay-per-view or something like that.
>> Oh, I don't know if you want to be watched on pay-per-view, because then it would be the same.
>> Then they shut you down. If no one's watching, why would they play it?
>> I'm saying, don't you want to fly under the radar? Don't you want to be the guy just living a normal life that the masters...
>> Those are NPCs. Nobody wants to be an NPC.
>> Are you religious?
>> Not in any traditional sense, but I believe in the simulation hypothesis, which has a superintelligent being. So,
>> but you don't believe in, like, you know, the religious books.
>> So, different religions. This religion will tell you don't work Saturday. This one, don't work Sunday, don't eat pigs, don't eat cows. They just have local traditions on top of that theory. That's all it is. They're all the same religion. They all worship a superintelligent being. They all think this world is not the main one.
>> And they argue about which animal not to eat.
>> Skip the local flavors. Concentrate on what all the religions have in common. And that's the interesting part. They all think there is something greater than humans. Very capable, all-knowing, all-powerful. When I run a computer game, for those characters in the game, I am that. I can change the whole world. I can shut it down. I know everything in that world.
>> It's funny. I was thinking earlier on, when we started talking about the simulation theory, that there might be something innate in us that has been left from the creator, almost like a clue, like an intuition. Cuz that's what we tend to have through history. Humans have this intuition.
>> Yeah.
>> That all the things you said are true, that there's this somebody above, and
>> We have generations of people who were religious, who believed God told them and was there and gave them books, and that has been passed on for many generations. This is probably one of the earliest generations not to have universal religious belief.
>> Wonder if those people are telling the truth. I wonder about those people that say God came to them and said something. Imagine that. Imagine if that was part of this.
>> I'm looking at the news today. Something happened an hour ago, and I'm getting different conflicting results. I can't even get it with cameras, with drones, with like a guy on Twitter there. I still don't know what happened. And you think 3,000 years ago we have an accurate record of translations? No, of course not.
>> You know these conversations you have around AI safety, do you think they make people feel good?
>> I don't know if they feel good or bad, but people find it interesting. It's one of those topics. So I can't have a conversation about different cures for cancer with an average person, but everyone has opinions about AI. Everyone has opinions about simulation. It's interesting that you don't have to be highly educated or a genius to understand those concepts.
>> Cuz I tend to think that it makes me feel not positive. And I understand that, but I've always been of the opinion that you shouldn't live in a world of delusion where you're just seeking to be positive, have sort of positive things said, and avoid uncomfortable conversations. Actually, progress in my life often comes from having uncomfortable conversations, becoming aware of something, and then at least being informed about how I can do something about it. And so I think that's why I asked the question, because I assume most people, if they're normal human beings, will listen to these conversations and go, gosh, that's scary and this is concerning. And then I keep coming back to this point, which is: what do I do with that energy?
>> Yeah. But I'm trying to point out this is not different than so many other conversations. We can talk about: oh, there is starvation in this region, genocide in this region, people are dying, cancer is spreading, autism is up. You can always find something to be very depressed about and nothing you can do about it. And we are very good at concentrating on what we can change, what we are good at, and basically not trying to embrace the whole world as our local environment. So historically, you grew up with a tribe, you had a dozen people around you. If something happened to one of them, it was very rare. It was an accident. Now if I go on the internet, somebody gets killed everywhere all the time. Somehow thousands of people are reported to me every day. I don't even have time to notice. It's just too much. So I have to put filters in place. And I think this topic is one people are very good at filtering, as in: this was an entertaining talk I went to, kind of like a show, and the moment I exit, it ends. So usually I would go give a keynote at a conference, and I tell them, basically, you're all going to die, you have two years left, any questions? And people will be like, will I lose my job? How do I lubricate my sex robot? Like all sorts of nonsense, clearly not understanding what I'm trying to say there. And those are good questions, interesting questions, but not fully embracing the result. They're still in their
embracing the result they still in their bubble of local versus global
bubble of local versus global >> and the people that disagree with you
>> and the people that disagree with you the most as it relates to AI safety what
the most as it relates to AI safety what is it that they say
is it that they say what are their counterarguments
what are their counterarguments typically
typically >> so many don't engage at all like they
>> so many don't engage at all like they have no background knowledge in a
have no background knowledge in a subject. They never read a single book,
subject. They never read a single book, single paper, not just by me, by anyone.
single paper, not just by me, by anyone. They may be even working in a field. So
They may be even working in a field. So they are doing some machine learning
they are doing some machine learning work for some company maximizing ad
work for some company maximizing ad clicks and to them those systems are
clicks and to them those systems are very narrow and then they hear that oh
very narrow and then they hear that oh this guy is going to take over of the
this guy is going to take over of the world like it has no hands. How would it
world like it has no hands. How would it do that? It it's nonsense. This guy is
do that? It it's nonsense. This guy is crazy. He has a beard. Why would I
crazy. He has a beard. Why would I listen to him? Right? That's uh then
listen to him? Right? That's uh then they start reading a little bit. They
they start reading a little bit. They go, "Oh, okay. So maybe AI can be
go, "Oh, okay. So maybe AI can be dangerous. Yeah, I see that. But we
dangerous. Yeah, I see that. But we always solve problems in the past. We're
always solve problems in the past. We're going to solve them again. I mean at
going to solve them again. I mean at some point we fixed a computer virus or
some point we fixed a computer virus or something. So it's the same." And uh
something. So it's the same." And uh basically the more exposure they have,
basically the more exposure they have, the less likely they are to keep that
the less likely they are to keep that position. I know many people who went
position. I know many people who went from super careless developer to safety
from super careless developer to safety researcher. I don't know anyone who went
researcher. I don't know anyone who went from I worry about AI safety to like
from I worry about AI safety to like there is nothing to worry about.
>> What are your closing statements? >> Uh let's make sure there is not a
>> Uh let's make sure there is not a closing statement we need to give for
closing statement we need to give for humanity. Let's make sure we stay in
humanity. Let's make sure we stay in charge in control. Let's make sure we
charge in control. Let's make sure we only build things which are beneficial
only build things which are beneficial to us. Let's make sure people who are
to us. Let's make sure people who are making those decisions are remotely
making those decisions are remotely qualified to do it. They are good not
qualified to do it. They are good not just at science, engineering and
just at science, engineering and business but also have moral and ethical
business but also have moral and ethical standards.
standards. And uh if you doing something which
And uh if you doing something which impacts other people, you should ask
impacts other people, you should ask their permission before you do that. If
their permission before you do that. If there was one button in front of you and
there was one button in front of you and it would
it would shut down every AI company in the world
shut down every AI company in the world right now permanently with the inability
right now permanently with the inability for anybody to start a new one, would
for anybody to start a new one, would you press the button?
you press the button? >> Are we losing narrow AI or just super
>> Are we losing narrow AI or just super intelligent AGI part?
intelligent AGI part? >> Losing all of AI.
>> Losing all of AI. >> That's a hard question because AI is
>> That's a hard question because AI is extremely important. It controls stock
extremely important. It controls stock market power plants. It controls
market power plants. It controls hospitals. It would be a devastating
hospitals. It would be a devastating accident. Millions of people would lose
accident. Millions of people would lose their lives.
their lives. >> Okay, we can keep narrow AI.
>> Okay, we can keep narrow AI. >> Oh yeah, that's what we want. We want
>> Oh yeah, that's what we want. We want narrow AI to do all this for us, but not
narrow AI to do all this for us, but not God we don't control doing things to us.
God we don't control doing things to us. >> So you would stop it. You would stop AGI
>> So you would stop it. You would stop AGI and super intelligence.
and super intelligence. >> We have AGI. What we have today is great
>> We have AGI. What we have today is great for almost everything. We can make
for almost everything. We can make secretaries out of it. 99% of economic
secretaries out of it. 99% of economic potential of current technology has not
potential of current technology has not been deployed. We make AI so quickly it
been deployed. We make AI so quickly it doesn't have time to propagate through
doesn't have time to propagate through the industry through technology.
the industry through technology. Something like half of all jobs are
Something like half of all jobs are considered BS jobs. They don't need to
considered BS jobs. They don't need to be done. jobs. So those can be
be done. jobs. So those can be not even automated. They can be just
not even automated. They can be just gone. But I'm saying we can replace 60%
gone. But I'm saying we can replace 60% of jobs today with existing models.
of jobs today with existing models. We're not done that. So if the goal is
We're not done that. So if the goal is to grow economy to develop we can do it
to grow economy to develop we can do it for decades without having to create
for decades without having to create super intelligence as soon as possible.
super intelligence as soon as possible. >> Do you think globally especially in the
>> Do you think globally especially in the western world unemployment is only going
western world unemployment is only going to go up from here? Do you think
to go up from here? Do you think relatively this is the low of
relatively this is the low of unemployment?
unemployment? >> I mean it fluctuates a lot with other
>> I mean it fluctuates a lot with other factors. There are wars there is
factors. There are wars there is economic cycles but overall the more
economic cycles but overall the more jobs you automate and the higher is the
jobs you automate and the higher is the intellectual necessity to start a job
intellectual necessity to start a job the fewer people qualify.
the fewer people qualify. So if we plotted it on a graph over the
So if we plotted it on a graph over the next 20 years, you're assuming
next 20 years, you're assuming unemployment is gradually going to go up
unemployment is gradually going to go up over that time.
over that time. >> I think so. Fewer and fewer people would
>> I think so. Fewer and fewer people would be able to contribute already. We kind
be able to contribute already. We kind of understand it because we created
of understand it because we created minimum wage. We understood some people
minimum wage. We understood some people don't contribute enough economic value
don't contribute enough economic value to get paid anything really. So we had
to get paid anything really. So we had to force employers to pay them more than
to force employers to pay them more than they worth.
they worth. >> Mhm.
>> Mhm. >> And we haven't updated it. It's what 725
>> And we haven't updated it. It's what 725 federally in US. If you keep up with
federally in US. If you keep up with economy, it should be like $25 an hour
economy, it should be like $25 an hour now, which means all these people making
now, which means all these people making less are not contributing enough
less are not contributing enough economic output to justify what they
economic output to justify what they getting paid.
getting paid. >> We have a closing tradition on this
>> We have a closing tradition on this podcast where the last guest leaves a
podcast where the last guest leaves a question for the next guest not knowing
question for the next guest not knowing who they're leaving it for. And the
who they're leaving it for. And the question left for you is what are what
question left for you is what are what are the most important
are the most important characteristics
characteristics for a friend, colleague
for a friend, colleague or mate?
or mate? >> Those are very different types of
>> Those are very different types of people.
people. >> But for all of them, loyalty is number
>> But for all of them, loyalty is number one.
one. >> And what does loyalty mean to you?
>> And what does loyalty mean to you? >> Not betraying you, not screwing you, not
>> Not betraying you, not screwing you, not cheating on you.
despite the temptation, >> despite the world being as it is,
>> despite the world being as it is, situation, environment.
situation, environment. >> Dr. Roman, thank you so much. Thank you
>> Dr. Roman, thank you so much. Thank you so much for doing what you do because
so much for doing what you do because you're you're starting a conversation
you're you're starting a conversation and pushing forward a conversation and
and pushing forward a conversation and doing research that is incredibly
doing research that is incredibly important and you're doing it in the
important and you're doing it in the face of a lot of um a lot of skeptics.
face of a lot of um a lot of skeptics. I'd say there's a lot of people that
I'd say there's a lot of people that have a lot of incentives to discredit
have a lot of incentives to discredit what you're saying and what you do
what you're saying and what you do because they have their own incentives
because they have their own incentives and they have billions of dollars on the
and they have billions of dollars on the line and they have their jobs on the
line and they have their jobs on the line potentially as well. So, it's
line potentially as well. So, it's really important that there are people
really important that there are people out there that are willing to,
out there that are willing to, I guess, stick their head above the
I guess, stick their head above the parapit and come on shows like this and
parapit and come on shows like this and go on big platforms and talk about the
go on big platforms and talk about the unexplainable, unpredictable,
unexplainable, unpredictable, uncontrollable future that we're heading
uncontrollable future that we're heading towards. So, thank you for doing that.
towards. So, thank you for doing that. This book, which which I think everybody
This book, which which I think everybody should should check out if they want a
should should check out if they want a continuation of this conversation, I
continuation of this conversation, I think was published in 2024,
think was published in 2024, gives a holistic view on many of the
gives a holistic view on many of the things we've talked about today. Um,
things we've talked about today. Um, preventing AI failures and much, much
preventing AI failures and much, much more, and I'm going to link it below for
more, and I'm going to link it below for anybody that wants to read it. If people
anybody that wants to read it. If people want to learn more from you, if they
want to learn more from you, if they want to go further into your work,
want to go further into your work, what's the best thing for them to do?
what's the best thing for them to do? Where do they go?
Where do they go? >> They can follow me. Follow me on
>> They can follow me. Follow me on Facebook. Follow me on X. Just don't
Facebook. Follow me on X. Just don't follow me home. Very important.
follow me home. Very important. >> Follow you home. Okay. Okay, so I'll put
>> Follow you home. Okay. Okay, so I'll put your Twitter, your ex account um as well
your Twitter, your ex account um as well below so people can follow you there and
below so people can follow you there and yeah, thank you so much for doing what
yeah, thank you so much for doing what you do. remarkably eye opening and it's
you do. remarkably eye opening and it's given me so much food for thought and
given me so much food for thought and it's actually convinced me more that we
it's actually convinced me more that we are living in a simulation but it's also
are living in a simulation but it's also made me think quite differently of
made me think quite differently of religion I have to say because um you're
religion I have to say because um you're right all the religions when you get
right all the religions when you get away from the sort of the local
away from the sort of the local traditions they do all point at the same
traditions they do all point at the same thing and actually if they are all
thing and actually if they are all pointing at the same thing then maybe
pointing at the same thing then maybe the fundamental truths that exist across
the fundamental truths that exist across them should be something I pay more
them should be something I pay more attention to things like loving thy
attention to things like loving thy neighbor things like the fact that we
neighbor things like the fact that we are all one that there's a a divine
are all one that there's a a divine creator and maybe also they all seem to
creator and maybe also they all seem to consequence beyond this life. So maybe I
consequence beyond this life. So maybe I should be thinking more about
should be thinking more about how I behave in this life and and where
how I behave in this life and and where I might end up thereafter. Roman, thank
I might end up thereafter. Roman, thank you.
you. >> Amen.
>> Amen. [Music]