Godfather of AI WARNS: "You Have No Idea What's Coming" | The Diary Of A CEO Clips
Video Transcript
Are you at all hopeful that anything can
be done to slow down the pace and
acceleration of AI?
>> Okay, there are two issues. One is, can you
slow it down?
>> Yeah.
>> And the other is, can you make it so it
will be safe in the end, so it won't wipe
us all out? I don't believe we're going
to slow it down.
>> Yeah.
>> And the reason I don't believe we're
going to slow it down is because there's
competition between countries and
competition between companies within a
country and all of that is making it go
faster and faster. And if the US slowed
it down, China wouldn't slow it down.
>> Does Ilya think it's possible to make AI safe?
>> I think he does. He won't tell me what
his secret sauce is. I'm not sure how
many people know what his secret sauce
is. I think a lot of the investors don't
know what his secret sauce is, but
they've given him billions of dollars
anyway because they have so much faith
in Ilya, which isn't foolish. I mean, he
was very important in AlexNet, which got
object recognition working well. He was
the main force behind things like GPT-2,
which then led to ChatGPT.
So I think having a lot of faith in Ilya
is a very reasonable decision. There's
something quite haunting about the fact
that the guy who was the main force behind
GPT-2, which gave rise to this whole
revolution, left the company for
safety reasons. He knows something that
I don't know about what might happen next.
>> Well, the company had... now, I don't know
the precise details, but I'm fairly
sure the company had indicated that
it would use a significant
fraction of its compute resources
for doing safety research, and then
it reduced that fraction. I
think that's one of the things that happened.
>> Yeah, that was reported publicly.
>> Yes.
>> We've gotten to the autonomous weapons
part of the risk framework.
>> Right. So the next one is joblessness.
>> Yeah. In the past, new technologies have
come in which didn't lead to
joblessness. New jobs were created. So
the classic example people use is
automatic teller machines. When
automatic teller machines came in, a lot
of bank tellers didn't lose their jobs.
They just got to do more interesting
things. But here, I think this is more
like when they got machines in the
industrial revolution. And
you can't have a job digging ditches now
because a machine can dig ditches much
better than you can.
>> And I think for mundane intellectual
labor, AI is just going to replace
everybody. Now, it may well be in
the form of having fewer people using
AI assistants. So, it's a combination of
a person and an AI assistant now
doing the work that 10 people could do
previously. People say that it will
create new jobs though, so we'll be fine.
>> Yes. And that's been the case for other
technologies, but this is a very
different kind of technology. If it can
do all mundane human intellectual labor,
then what new jobs is it going to
create? You'd you'd have to be very
skilled to have a job that it couldn't
just do. So, I don't I don't think
they're right. I think you can try and
generalize from other technologies that
have come in like computers or automatic
tele machines, but I think this is
different. People use this phrase. They
say AI won't take your job. A human
using AI will take your job.
>> Yes, I think that's true. But for many
jobs, that'll mean you need far fewer
people. My niece answers letters of
complaint to a health service. It used
to take her 25 minutes. She'd read the
complaint and she'd think how to reply
and she'd write a letter. And now she
just scans it into a chatbot and it
writes the letter. She just checks the
letter. Occasionally she tells it to
revise it in some ways. The whole
process takes her five minutes. That
means she can answer five times as many
letters, so she can do the job
that five of her used to do. Now, that
will mean they need fewer people. In
other jobs, like in health care, they're
much more elastic. So, if you could make
doctors five times as efficient, we
could all have five times as much health
care for the same price, and that would
be great. There's almost no
limit to how much health care people can absorb.
>> They always want more health care if
there's no cost to it. There are jobs
where you can make a person with an AI
assistant much more efficient, and it
won't lead to fewer people, because you'll
just have much more of that being done.
But most jobs, I think, are not like that.
>> Am I right in thinking the
Industrial Revolution
played a role in replacing muscles?
>> Yes. Exactly.
>> And this revolution in AI replaces
intelligence the brain.
>> Yeah. So,
>> so mundane intellectual labor is like
having strong muscles and it's not worth
much anymore.
>> So, muscles have been replaced. Now
intelligence is being replaced.
>> Yeah.
>> So, what remains?
>> Maybe for a while some kinds of
creativity, but the whole idea of super
intelligence is that nothing remains. Um,
these things will get to be better than
us at everything.
>> So, what do we end up doing in such
a world?
>> Well, if they work for us, we
end up getting lots of goods and
services for not much effort.
>> Okay. But that sounds tempting and nice,
but I don't know. There's a cautionary
tale about creating more and more ease for
humans and it going badly.
>> Yes. And we need to figure out if we can
make it go well. So the nice
scenario is imagine a company with a CEO
who is very dumb, probably the son of
the former CEO.
>> And he has an executive assistant who's
very smart and he says, "I think we
should do this." And the executive
assistant makes it all work. The CEO
feels great. He doesn't understand that
he's not really in control. And in
some sense, he is in control. He
suggests what the company should do. She
just makes it all work. Everything's
great. That's the good scenario.
>> And the bad scenario,
>> the bad scenario, she thinks, "Why do we
need him?"
>> Yeah.
>> I mean, in a world where we have super
intelligence, which you don't believe is
that far away...
>> Yeah, I think it might not be that far
away. It's very hard to predict, but I
think we might get it in like 20 years
or even less.
>> So, what's the difference between what
we have now and super intelligence?
because it seems to be really
intelligent to me when I use something
like ChatGPT or Gemini.
>> Okay, so AI is already
better than us at a lot of things in
particular areas, like chess, for example.
AI is so much better than us that people
will never beat those things again. Maybe
the occasional win, but basically they'll
never be comparable again. Obviously the
same in Go. In terms of the amount of
knowledge they have, something like
GPT-4 knows thousands of times more than
you do. There's a few areas in which
your knowledge is better than its, and in
almost all areas it just knows more than
you do.
>> What areas am I better than it?
>> Probably in interviewing CEOs. You're
probably better at that. You've got a
lot of experience at it. You're a good
interviewer. You know a lot about it. If
you got GPT-4 to interview a
CEO, it would probably do a worse job.
>> Okay. I'm trying to think if I agree
with that statement. Uh, GPT-4, I think for sure.
>> Yeah.
>> Um, but I guess you could...
>> But it may not be long before...
>> Yeah. I guess you could train one on
how I ask questions and what I do, and...
>> Sure.
>> And if you took a general-purpose sort
of foundation model and then you trained
it up on not just you but every
interviewer you could find doing
interviews like this... Mhm.
>> But especially you, it'll probably get
to be quite good at doing your job, but
probably not as good as you for a while.
>> Okay. So, there's a few areas left, and
then super intelligence is when
it's better than us at all things.
>> When it's much smarter than you and in
almost all things it's better than you.
>> Yeah.
>> And you say that this might be a
decade away or so.
>> Yeah, it might be. It might be even
closer. Some people think it's even
closer, and it might well be much further.
It might be 50 years away. That's still
a possibility. It might be that somehow
training on human data limits you to not
being much smarter than humans. My guess
is between 10 and 20 years we'll have
super intelligence.
>> On this point of
joblessness, it's something that I've
been thinking a lot about, in particular
because I started messing around with AI
agents. We released an episode on the
podcast this morning where we
had a debate about AI agents with the
CEO of a big AI agent company and a few
other people, and it was another moment
where I had a eureka moment about what
the future might look like, when I was
able, in the interview, to tell this agent
to order all of us drinks, and then five
minutes later in the interview you see
the guy show up with the drinks. And I
didn't touch anything. I just told it to
order us drinks to the studio.
>> And you didn't tell it who you
normally get your drinks from. It
figured that out from the web.
>> Yeah, it figured it out because it went on Uber
Eats. It has my data, I guess. And
we put it on the screen in real
time so everyone at home could see the
agent going through the internet,
picking the drinks, adding a tip for the
driver, putting my address in, putting
my credit card details in, and then the
next thing you see is the drinks show up.
>> So that was one moment. And then the
other moment was when I used a tool
called Replit and I built software by
just telling the agent what I wanted.
>> Yes. It's amazing, right?
>> It's amazing and terrifying at the same time.
>> Yes.
>> Because...
>> And if it can build software like that, right?
>> Yeah.
>> Remember that the AI, when it's training,
is using code, and if it can modify its
own code,
then it gets quite scary, right? Because
it can modify itself.
>> It can change itself in a way we can't
change ourselves. We can't change our
innate endowment, right?
There's nothing about itself that it
couldn't change.
>> On this point of joblessness, you have kids.
>> I do.
>> And they have kids?
>> No,
they don't have kids. No grandkids yet.
>> What would you be saying to people about
their career prospects in a world of
super intelligence? What should we be
thinking about?
>> Um, in the meantime, I'd
say it's going to be a long time before
it's as good at physical manipulation as us.
>> Okay.
>> And so, a good bet would be to be a plumber,
until the humanoid robots show up.
>> In such a world where there is mass
joblessness, which is not something that
just you predict, but something
that Sam Altman of OpenAI, I've heard him
predict, and many of the CEOs. Elon Musk: I
watched an interview, which I'll play on
screen, of him being asked this question,
and it's very rare that you see Elon
Musk silent for 12 seconds, or whatever
it was,
>> and then he basically says something
about how he's actually living in suspended
disbelief, i.e. he's basically just not
thinking about it.
>> When you think about advising your
children on a career, with so much that
is changing, what do you tell them?
>> Well,
that is a tough question to answer. I
would just say, you know, to sort of
follow their heart in terms of what they
find interesting to do or
fulfilling to do. I mean, if I think
about it too hard, frankly, it can be
dispiriting and demotivating. Um,
because, I mean, I've put a lot of
blood, sweat, and tears
into building the companies, and
then I'm like, wait, well,
should I be doing this? Because if I'm
sacrificing time with friends and family
that I would prefer to... but
then ultimately the AI can do all these
things. Does that make sense? I don't
know. Um, to some extent I have to have
deliberate suspension of disbelief in
order to remain motivated. Um, so I
work on things that I find
interesting, fulfilling, and that
contribute some good to the rest
of society.
>> Yeah. With a lot of these threats,
intellectually you can see the
threat, but it's very hard to come to
terms with it emotionally.
>> Yeah.
>> I haven't come to terms with it
emotionally yet.
>> What do you mean by that?
>> I haven't come to terms with what the
development of super intelligence could
do to my children's future.
I'm okay. I'm 77.
I'm going to be out of here soon. But
for my children and my younger
friends, my nephews and nieces and their
children, um,
I just don't like to think about what
might happen.
>> Why? In what way?
>> Well, if it ever decided to take over... I
mean, it would need people for a while
to run the power stations, until it
designed better analog machines to run
the power stations. There are so many ways
it could get rid of people, all of which
would, of course, be very nasty.
>> Is that part of the reason you do what
you do now?
>> Yeah. I mean, I think we should be
making a huge effort right now to try
and figure out if we can develop it safely.
>> Are you concerned about the midterm
impact, potentially, on your nephews and
your kids in terms of their jobs as well?
>> Yeah, I'm concerned about all that.
>> Are there any particular industries that
you think are most at risk? People talk
about the creative industries a lot and
sort of knowledge work. They talk about
lawyers and accountants and stuff like that.
>> Yeah. So, that's why I mentioned
plumbers. I think plumbers are less at risk.
>> Okay, I'm going to become a plumber.
>> Someone like a legal assistant, a paralegal?
>> Um, they're not going to be needed for
very long.
>> And is there a wealth inequality issue
here that will arise from this?
>> Yeah. I think in a society which shared
out things fairly, if you get a big
increase in productivity, everybody
should be better off.
But if you can replace lots of people by AIs,
then the people who get replaced will be
worse off,
and the company that supplies the AIs
will be much better off,
and the company that uses the AIs. So
it's going to increase the gap between
rich and poor. And we know that if you
look at that gap between rich and poor,
that basically tells you how nice the
society is. If you have a big gap, you
get very nasty societies in which people
live in walled communities and put other
people in mass jails. It's not good to
increase the gap between rich and poor.
>> The International Monetary Fund has
expressed profound concerns that
generative AI could cause massive labor
disruptions and rising inequality and
has called for policies that prevent
this from happening. I read that in
Business Insider.
>> So, have they given any idea of what the
policies should look like?
>> No. Yeah, that's the problem. I mean, if
AI can make everything much more
efficient and get rid of people for most
jobs, or have a person assisted by AI
doing many, many people's work, it's not
obvious what to do about it.
>> There's universal basic income.
>> Give everybody money.
>> Yeah, I think that's a good start,
and it stops people starving. But for a
lot of people, their dignity is tied up
with their job. I mean, who you think
you are is tied up with you doing this
job, right?
>> Yeah.
>> And if we said, "We'll give you the same
money just to sit around," that would
impact your dignity.
>> You said something earlier about it
surpassing or being superior to human
intelligence. A lot of people, I think,
like to believe that AI is on a
computer and it's something you can just
turn off if you don't like it.
>> Well, let me tell you why I think it's superior.
>> Okay.
>> Um, it's digital. And because it's
digital, you can simulate a
neural network on one piece of hardware.
>> Yeah.
>> And you can simulate exactly the same
neural network on a different piece of hardware.
>> So you can have clones of the same intelligence.
Now you could get this one to go off and
look at one bit of the internet and this
other one to look at a different bit of
the internet. And while they're looking
at these different bits of the internet,
they can be syncing with each other. So
they keep their weights the same, the
connection strengths the same. Weights
are connection strengths.
>> Mhm.
>> So this one might look at something on
the internet and say, "Oh, I'd like to
increase the strength of this
connection a bit." And it can convey
that information to this one, so it can
increase the strength of that connection
a bit based on this one's experience.
>> And when you say the strength of the
connection, you're talking about learning.
>> That's learning. Yes. Learning consists
of saying, instead of this one giving 2.4
votes for whether that one should
turn on, we'll have this one give 2.5
votes for whether it should turn on.
>> And that will be a little bit of learning.
>> So these two different copies of the
same neural net
are getting different experiences.
They're looking at different data, but
they're sharing what they've learned by
averaging their weights together.
>> Mhm.
>> And they can do that averaging at scale;
you can average a trillion weights. When
you and I transfer information, we're
limited to the amount of information in
a sentence. And the amount of
information in a sentence is maybe 100
bits. It's very little information.
We're lucky if we're transferring like
10 bits a second.
>> These things are transferring trillions
of bits a second. So, they're billions
of times better than us at sharing information.
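The weight-syncing scheme Hinton describes can be sketched in a few lines. This is a toy illustration only (a four-weight linear model with made-up numbers, not anything from the interview): two identical clones train on different data, then share everything they learned by averaging their weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# The target the clones are trying to learn (invented for this sketch).
true_w = np.array([1.0, -2.0, 0.5, 3.0])

# Two clones of the same tiny model start with identical weights.
w_a = rng.normal(size=4)
w_b = w_a.copy()

def gradient_step(w, x, y, lr=0.1):
    """Nudge each 'connection strength' based on this copy's own data."""
    grad = x.T @ (x @ w - y) / len(y)
    return w - lr * grad

for step in range(200):
    # Each clone sees a *different* batch of data...
    xa = rng.normal(size=(8, 4))
    xb = rng.normal(size=(8, 4))
    w_a = gradient_step(w_a, xa, xa @ true_w)
    w_b = gradient_step(w_b, xb, xb @ true_w)
    # ...then they sync by averaging all their weights at once,
    # sharing what each copy learned in a single exchange.
    w_a = w_b = (w_a + w_b) / 2

# Both clones now hold the same weights, close to the target.
assert np.allclose(w_a, w_b)
print(np.round(w_a, 2))
```

Averaging a trillion real weights works the same way in principle: the bandwidth of one sync is the whole weight vector, not a sentence's worth of bits.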
And that's because they're digital, and
you can have two bits of hardware using
the connection strengths in exactly the
same way. We're analog and you can't do
that. Your brain's different from my
brain. And if I could see the connection
strengths between all your neurons, it
wouldn't do me any good, because my
neurons work slightly differently and
they're connected up slightly differently.
>> Mhm.
>> So when you die, all your knowledge dies
with you. When these things die... suppose
you take these two digital intelligences
that are clones of each other and you
destroy the hardware they run on. As
long as you've stored the connection
strengths somewhere, you can just build
new hardware that executes the same
instructions. So, it'll know how to use
those connection strengths, and you've
recreated that intelligence. So, they're
immortal. We've actually solved the
problem of immortality, but it's only
for digital things.
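The "immortality" point is essentially checkpointing: the intelligence just is its connection strengths, so storing them and loading them into fresh hardware recreates it exactly. A minimal sketch (the TinyNet class and its numbers are invented for illustration, not from any real framework):

```python
import json

class TinyNet:
    """A toy 'neural net' whose behavior is fully determined by its weights."""

    def __init__(self, weights):
        self.weights = list(weights)

    def forward(self, xs):
        # Weighted sum: each weight is a "vote strength" for the output.
        return sum(w * x for w, x in zip(self.weights, xs))

# Original "hardware" running the net.
original = TinyNet([0.5, -1.0, 2.0])

# Store the connection strengths somewhere durable...
saved = json.dumps(original.weights)

# ...destroy the hardware...
del original

# ...then build new hardware that uses the same strengths.
revived = TinyNet(json.loads(saved))
print(revived.forward([1.0, 1.0, 1.0]))  # identical behavior: 0.5 - 1.0 + 2.0 = 1.5
```

Nothing analogous exists for a brain: there is no way to read out your connection strengths, and they would not mean anything in someone else's neurons anyway.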
>> So, it will essentially know
everything that humans know, but more,
because it will learn new things.
>> It will learn new things. It would also
see all sorts of analogies that people
probably never saw.
So, for example, at the point when GPT-4
couldn't look on the web, I asked it,
"Why is a compost heap like an atom bomb?"
Off you go.
>> I have no idea.
>> Exactly. Excellent. That's exactly
what most people would say. It said,
"Well, the time scales are very
different and the energy scales are very
different." But then it went on to talk
about how a compost heap, as it gets hotter,
generates heat faster, and an atom bomb,
as it produces more neutrons, generates
neutrons faster.
>> And so they're both chain reactions, but
at very different time and energy scales.
And I believe GPT-4 had seen that during
its training.
It had understood the analogy between a
compost heap and an atom bomb. And the
reason I believe that is, if you've only
got a trillion connections (remember, you
have 100 trillion)
and you need to have thousands of times
more knowledge than a person, you need
to compress information into those
connections. And to compress
information, you need to see analogies
between different things. In other
words, it needs to see all the things
that are chain reactions, understand
the basic idea of a chain reaction, and
code the ways in which they're
different. And that's just a more
efficient way of coding things than
coding each of them separately.
>> So it's seen many, many analogies,
probably many analogies that people have
never seen. That's why I also think that
people who say these things will never
be creative are wrong. They're going to be much
more creative than us, because they're
going to see all sorts of analogies we
never saw. And a lot of creativity is
about seeing strange analogies.
>> If you love the Diary Of A CEO brand and you watch this
channel, please do me a huge favor:
become part of the 15% of the viewers on
this channel that have hit the subscribe
button. It helps us tremendously, and the
bigger the channel gets, the bigger the guests.