0:02 Are you at all hopeful that anything can
0:04 be done to slow down the pace and
0:05 acceleration of AI?
0:08 >> Okay, there's two issues. One is can you
0:09 slow it down? >> Yeah.
0:12 >> And the other is can you make it so it
0:14 will be safe in the end. It won't wipe
0:17 us all out. I don't believe we're going
0:18 to slow it down. >> Yeah.
0:20 >> And the reason I don't believe we're
0:21 going to slow it down is because there's
0:23 competition between countries and
0:25 competition between companies within a
0:27 country and all of that is making it go
0:30 faster and faster. And if the US slowed
0:33 it down, China wouldn't slow it down.
0:37 >> Does Ilya think it's possible to make AI safe?
0:41 >> I think he does. He won't tell me what his secret sauce is. I'm not sure how many people know what his secret sauce is. I think a lot of the investors don't know what his secret sauce is, but they've given him billions of dollars anyway because they have so much faith in Ilya, which isn't foolish. I mean, he was very important in AlexNet, which got object recognition working well. He was the main force behind things like GPT-2, which then led to ChatGPT. So I think having a lot of faith in Ilya is a very reasonable decision.
1:13 >> There's something quite haunting about the fact that the guy who made, and was the main force behind, GPT-2, which gave rise to this whole revolution, left the company because of safety reasons. He knows something that I don't know about what might happen next.
1:31 >> Well, now, I don't know the precise details, but I'm fairly sure the company had indicated that it would use a significant fraction of its resources, of the compute time, for doing safety research, and then it reduced that fraction. I think that's one of the things that happened.
1:48 >> Yeah, that was reported publicly. >> Yes.
1:50 >> Yeah.
1:52 >> We've gotten to the autonomous weapons
1:55 part of the risk framework.
1:58 >> Right. So the next one is joblessness.
2:00 >> Yeah. In the past, new technologies have
2:03 come in which didn't lead to
2:05 joblessness. New jobs were created. So
2:07 the classic example people use is
2:09 automatic teller machines. When
2:12 automatic teller machines came in, a lot
2:14 of bank tellers didn't lose their jobs.
2:15 They just got to do more interesting
2:19 things. But here, I think this is more
2:21 like when they got machines in the
2:25 industrial revolution. And
2:27 you can't have a job digging ditches now
2:29 because a machine can dig ditches much
2:31 better than you can.
2:33 >> And I think for mundane intellectual
2:36 labor, AI is just going to replace
2:40 everybody. Now, it may well be in the form of you have fewer people using AI assistants. So it's a combination of a person and an AI assistant now doing the work that 10 people could do previously.
2:52 >> People say that it will create new jobs though, so we'll be fine.
2:57 >> Yes. And that's been the case for other
2:58 technologies, but this is a very
3:00 different kind of technology. If it can
3:04 do all mundane human intellectual labor,
3:05 then what new jobs is it going to
3:08 create? You'd have to be very skilled to have a job that it couldn't just do. So, I don't think they're right. I think you can try and generalize from other technologies that have come in, like computers or automatic teller machines, but I think this is different.
3:23 >> People use this phrase. They say AI won't take your job; a human using AI will take your job.
3:29 >> Yes, I think that's true. But for many
3:31 jobs, that'll mean you need far fewer
3:34 people. My niece answers letters of
3:37 complaint to a health service. It used
3:39 to take her 25 minutes. She'd read the
3:41 complaint and she'd think how to reply
3:44 and she'd write a letter. And now she
3:49 just scans it into um a chatbot and it
3:51 writes the letter. She just checks the
3:53 letter. Occasionally she tells it to
3:55 revise it in some ways. The whole
3:58 process takes her five minutes. That
3:59 means she can answer five times as many
4:03 letters. And that means they need one-fifth as many people like her, because she can do the job that five of her used to do. Now, that will mean they need fewer people. Other jobs, like health care, are much more elastic. So, if you could make doctors five times as efficient, we could all have five times as much health care for the same price, and that would be great. There's almost no limit to how much health care people can absorb.
4:31 >> They always want more health care if there's no cost to it. There are jobs where you can make a person with an AI assistant much more efficient and it won't lead to fewer people because you'll
4:43 just have much more of that being done.
4:46 But most jobs I think are not like that.
4:47 >> Am I right in thinking the sort of
4:49 industrial revolution
4:52 played a role in replacing muscles?
4:53 >> Yes. Exactly.
4:55 >> And this revolution in AI replaces intelligence, the brain.
4:57 >> Yeah. So mundane intellectual labor is like having strong muscles, and it's not worth much anymore.
5:05 >> So, muscles have been replaced. Now intelligence is being replaced. >> Yeah.
5:10 >> So, what remains?
5:12 >> Maybe for a while some kinds of
5:14 creativity, but the whole idea of super intelligence is that nothing remains. Um,
5:18 these things will get to be better than
5:19 us at everything.
5:20 >> So, what do we end up doing in such a world?
5:24 >> Well, if they work for us, we
5:26 end up getting lots of goods and
5:29 services for not much effort.
5:32 >> Okay. But that sounds tempting and nice,
5:34 but I don't know. There's a cautionary tale about creating more and more ease for humans and it going badly.
5:42 >> Yes. And we need to figure out if we can
5:45 make it go well. So the nice
5:49 scenario is imagine a company with a CEO
5:52 who is very dumb, probably the son of
5:54 the former CEO.
5:56 >> And he has an executive assistant who's
6:00 very smart and he says, "I think we
6:02 should do this." And the executive
6:05 assistant makes it all work. The CEO
6:07 feels great. He doesn't understand that
6:10 he's not really in control. And in some sense, he is in control. He
6:13 suggests what the company should do. She
6:15 just makes it all work. Everything's
6:18 great. That's the good scenario.
6:19 >> And the bad scenario?
6:21 >> The bad scenario: she thinks, "Why do we need him?" >> Yeah.
6:27 >> I mean, in a world where we have super
6:28 intelligence, which you don't believe is
6:30 that far away.
6:32 >> Yeah, I think it might not be that far
6:34 away. It's very hard to predict, but I
6:36 think we might get it in like 20 years
6:37 or even less.
6:38 >> So, what's the difference between what we have now and super intelligence? Because it seems to be really intelligent to me when I use ChatGPT or Gemini or...
6:47 >> Okay, so AI is already better than us at a lot of things in particular areas. Like chess, for example: AI is so much better than us that people will never beat those things again. Maybe the occasional win, but basically they'll never be comparable again. Obviously the same in Go. In terms of the amount of knowledge they have, something like GPT-4 knows thousands of times more than you do. There are a few areas in which your knowledge is better than its, and in almost all areas it just knows more than you do.
7:22 >> In what areas am I better than it?
7:26 >> Probably in interviewing CEOs. You're probably better at that. You've got a lot of experience at it. You're a good interviewer. You know a lot about it. If you got GPT-4 to interview a CEO, it would probably do a worse job.
7:42 >> Okay, I'm trying to think if I agree with that statement. GPT-4, I think, for sure. >> Yeah.
7:47 >> Um, but I guess you could...
7:49 >> But it may not be long before...
7:50 >> Yeah, I guess you could train one on how I ask questions and what I do, and... >> Sure.
7:56 >> And if you took a general purpose sort of foundation model and then you trained it up on not just you but every interviewer you could find doing interviews like this... >> Mhm.
8:07 >> But especially you, it'll probably get to be quite good at doing your job, but probably not as good as you for a while.
8:14 >> Okay. So, there's a few areas left, and then super intelligence is when it's better than us at all things.
8:20 >> When it's much smarter than you, and at almost all things it's better than you. Yeah.
8:24 >> And you say that this might be a
8:26 decade away or so.
8:28 >> Yeah, it might be. It might be even
8:29 closer. Some people think it's even closer, and it might well be much further.
8:33 It might be 50 years away. That's still
8:37 a possibility. It might be that somehow
8:39 training on human data limits you to not
8:42 being much smarter than humans. My guess
8:43 is between 10 and 20 years we'll have
8:45 super intelligence. >> On this point of
8:47 joblessness, it's something that I've
8:48 been thinking a lot about in particular
8:50 because I started messing around with AI
8:52 agents and we released an episode on the
8:53 podcast actually this morning where we
8:56 had a debate about AI agents with the CEO of a big AI agent company and a few other people, and it was another moment where I had a eureka moment about what
9:06 the future might look like when I was
9:09 able in the interview to tell this agent
9:11 to order all of us drinks and then 5
9:12 minutes later in the interview you see
9:14 the guy show up with the drinks and I
9:16 didn't touch anything. I just told it to
9:18 order us drinks to the studio.
9:19 >> And you didn't tell it who you normally got your drinks from? It figured that out from the web?
9:23 >> Yeah, it figured it out because it went on Uber Eats. It has my data, I guess. And we put it on the screen in real time so everyone at home could see the agent going through the internet, picking the drinks, adding a tip for the driver, putting my address in, putting my credit card details in, and then the next thing you see is the drinks show up.
9:39 >> So that was one moment. And then the
9:41 other moment was when I used a tool
9:45 called Replit and I built software by
9:46 just telling the agent what I wanted.
9:48 >> Yes. It's amazing, right?
9:51 >> It's amazing and terrifying at the same time. >> Yes.
9:54 >> Because if it can build software like that, right? >> Yeah.
9:58 >> Remember that the AI, when it's training, is using code, and if it can modify its own code...
10:05 >> Then it gets quite scary, right? Because it can modify...
10:07 >> It can change itself in a way we can't
10:10 change ourselves. We can't change our
10:13 innate endowment, right?
10:14 There's nothing about itself that it
10:16 couldn't change.
10:18 >> On this point of joblessness, you have kids.
10:19 >> I do.
10:20 >> And they have kids?
10:21 >> No, they don't have kids. No grandkids yet.
10:24 >> What would you be saying to people about their career prospects in a world of super intelligence? What should we be thinking about?
10:30 >> Um, in the meantime, I'd say it's going to be a long time before it's as good at physical manipulation as us. >> Okay.
10:39 >> And so, a good bet would be to be a plumber, until the humanoid robots show up.
10:46 >> In such a world where there is mass joblessness, which is not something that just you predict; I've heard Sam Altman of OpenAI predict it, and many of the CEOs. Elon Musk, I watched an interview of him being asked this question, which I'll play on screen, and it's very rare that you see Elon Musk silent for 12 seconds or whatever it was, and then he basically says something about how he's actually living in suspended disbelief, i.e. he's basically just not thinking about it.
11:10 >> When you think about advising your children on a career with so much that is changing, what do you tell them?
11:30 >> Well, that is a tough question to answer. I would just say, you know, to sort of follow their heart in terms of what they find interesting to do or fulfilling to do. I mean, if I think about it too hard, frankly, it can be dispiriting and demotivating, because, I mean, I've put a lot of blood, sweat, and tears into building the companies, and then I'm like, wait, should I be doing this? Because if I'm sacrificing time with friends and family that I would prefer to, but then ultimately the AI can do all these things. Does that make sense? I don't know. To some extent I have to have a deliberate suspension of disbelief in order to remain motivated. So, work on things that you find interesting, fulfilling, and that contribute some good to the rest of society.
12:35 >> Yeah. With a lot of these threats, intellectually you can see the threat, but it's very hard to come to terms with it emotionally. >> Yeah.
12:44 >> I haven't come to terms with it
12:45 emotionally yet.
12:48 >> What do you mean by that?
12:52 >> I haven't come to terms with what the
12:53 development of super intelligence could
12:57 do to my children's future.
12:59 I'm okay. I'm 77.
13:02 I'm going to be out of here soon. But
13:05 for my children and my younger
13:08 friends, my nephews and nieces and their
13:13 children, um, I just don't like to think about what...
13:20 >> Why?
13:27 >> In what way?
13:30 >> Well, if it ever decided to take over, I
13:32 mean, it would need people for a while
13:35 to run the power stations until it
13:37 designed better analog machines to run
13:40 the power stations. There's so many ways
13:42 it could get rid of people, all of which
13:45 would of course be very nasty.
13:47 >> Is that part of the reason you do what you do now?
13:50 >> Yeah. I mean, I think we should be making a huge effort right now to try and figure out if we can develop it safely.
13:57 >> Are you concerned about the mid-term impact, potentially, on your nephews and your kids in terms of their jobs as well?
14:03 >> Yeah, I'm concerned about all that.
14:05 >> Are there any particular industries that
14:06 you think are most at risk? People talk
14:08 about the creative industries a lot and
14:10 sort of knowledge work. They talk about
14:12 lawyers and accountants and stuff like that.
14:13 >> Yeah. So, that's why I mentioned
14:15 plumbers. I think plumbers are less at risk.
14:17 >> Okay, I'm going to become a plumber.
14:19 >> Someone like a legal assistant, a paralegal?
14:23 >> Um, they're not going to be needed for very long.
14:25 >> And is there a wealth inequality issue
14:28 here that will arise from this?
14:30 >> Yeah, I think in a society which shared
14:34 out things fairly, if you get a big
14:36 increase in productivity, everybody
14:38 should be better off.
14:41 But if you can replace lots of people by AIs, then the people who get replaced will be worse off, and the company that supplies the AIs will be much better off, and the company that uses the AIs. So
14:58 it's going to increase the gap between
15:01 rich and poor. And we know that if you
15:03 look at that gap between rich and poor,
15:04 that basically tells you how nice the
15:07 society is. If you have a big gap, you
15:09 get very nasty societies in which people
15:12 live in walled communities and put other
15:16 people in mass jails. It's not good to
15:18 increase the gap between rich and poor.
15:19 >> The International Monetary Fund has
15:21 expressed profound concerns that
15:23 generative AI could cause massive labor
15:25 disruptions and rising inequality and
15:27 has called for policies that prevent
15:29 this from happening. I read that in Business Insider.
15:32 >> So, have they given any idea of what the policies should look like?
15:36 >> No. >> Yeah, that's the problem. I mean, if AI can make everything much more efficient and get rid of people for most jobs, or have a person assisted by AI doing many people's work, it's not
15:48 obvious what to do about it.
15:50 >> Is it universal basic income? Give everybody money?
15:54 >> Yeah, I think that's a good start and it stops people starving. But for a
16:00 lot of people, their dignity is tied up
16:02 with their job. I mean, who you think
16:04 you are is tied up with you doing this
16:05 job, right? >> Yeah.
16:08 >> And if we said, "We'll give you the same
16:11 money just to sit around," that would
16:13 impact your dignity.
16:15 >> You said something earlier about it
16:17 surpassing or being superior to human
16:19 intelligence. A lot of people, I think,
16:22 like to believe that AI is on a
16:23 computer and it's something you can just
16:25 turn off if you don't like it.
16:27 >> Well, let me tell you why I think it's superior.
16:28 >> Okay.
16:31 >> Um, it's digital. And because it's digital, you can simulate a neural network on one piece of hardware. >> Yeah.
16:39 >> And you can simulate exactly the same neural network on a different piece of hardware.
16:43 >> So you can have clones of the same intelligence.
16:48 Now you could get this one to go off and
16:50 look at one bit of the internet and this
16:51 other one to look at a different bit of
16:53 the internet. And while they're looking
16:55 at these different bits of the internet,
16:57 they can be syncing with each other. So
16:59 they keep their weights the same, the
17:00 connection strengths the same. Weights
17:01 are connection strengths. >> Mhm.
17:03 >> So this one might look at something on
17:05 the internet and say, "Oh, I'd like to increase the strength of this connection a bit." And it can convey
17:11 that information to this one. So it can
17:12 increase the strength of that connection
17:14 a bit based on this one's experience.
17:16 >> And when you say the strength of the connection, you're talking about learning?
17:19 >> That's learning, yes. Learning consists of saying: instead of this one giving 2.4 votes for whether that one should turn on, we'll have it give 2.5 votes for whether that one should turn on.
17:29 >> And that will be a little bit of learning.
17:32 >> So these two different copies of the
17:34 same neural net
17:36 are getting different experiences.
17:38 They're looking at different data, but
17:39 they're sharing what they've learned by
17:41 averaging their weights together. >> Mhm.
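What Hinton is describing, two clones syncing by averaging their connection strengths, can be sketched as a small weight-averaging routine. This is a minimal illustration, not anything from the interview: the NumPy arrays, sizes, learning rate, and made-up "gradients" are all assumptions standing in for real training.

```python
import numpy as np

rng = np.random.default_rng(0)

# One weight matrix stands in for the "connection strengths" of a network.
shared = rng.normal(size=(4, 4))

# Two identical clones: same starting weights on different hardware.
clone_a = shared.copy()
clone_b = shared.copy()

def local_update(weights, gradient, lr=0.1):
    # Each clone nudges its own connection strengths from its own experience
    # (e.g. a vote going from 2.4 to 2.5 is one such nudge).
    return weights - lr * gradient

# Pretend each clone saw a different "bit of the internet" and derived a
# different (here random, purely illustrative) gradient from it.
grad_a = rng.normal(size=(4, 4))
grad_b = rng.normal(size=(4, 4))
clone_a = local_update(clone_a, grad_a)
clone_b = local_update(clone_b, grad_b)

# Syncing: average all the weights, so both clones now carry both experiences.
averaged = (clone_a + clone_b) / 2.0
clone_a = averaged.copy()
clone_b = averaged.copy()

assert np.allclose(clone_a, clone_b)  # the clones are identical again
```

The averaging only works because the two copies are exact digital clones; as Hinton goes on to say, analog brains can't do this, since no two are wired the same way.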
17:44 >> And they can do that averaging with, like, a trillion weights. When you and I transfer information, we're limited to the amount of information in a sentence. And the amount of information in a sentence is maybe 100 bits. It's very little information. We're lucky if we're transferring like 10 bits a second. These things are transferring trillions of bits a second. So, they're billions of times better than us at sharing information.
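As rough arithmetic, the bandwidth gap being described looks like this; the figures are the ballpark numbers used in the conversation, not measurements.

```python
# Ballpark figures quoted in the conversation, not precise measurements.
human_bits_per_second = 10      # "we're lucky if we're transferring like 10 bits a second"
ai_bits_per_second = 1e12       # "transferring trillions of bits a second"

ratio = ai_bits_per_second / human_bits_per_second
print(f"about {ratio:.0e}x")    # ~1e11: hundreds of billions of times faster
```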
18:09 And that's because they're digital. And
18:11 you can have two bits of hardware using
18:13 the connection strengths in exactly the
18:15 same way. We're analog and you can't do
18:16 that. Your brain's different from my
18:19 brain. And if I could see the connection
18:21 strengths between all your neurons, it
18:22 wouldn't do me any good because my
18:24 neurons work slightly differently and
18:25 they're connected up slightly differently.
18:26 >> Mhm.
18:29 >> So when you die, all your knowledge dies
18:32 with you. When these things die, suppose
18:34 you take these two digital intelligences
18:36 that are clones of each other and you
18:38 destroy the hardware they run on. As
18:40 long as you've stored the connection
18:41 strengths somewhere, you can just build
18:44 new hardware that executes the same
18:47 instructions. So, it'll know how to use
18:48 those connection strengths and you've
18:51 recreated that intelligence. So, they're
18:53 immortal. We've actually solved the
18:55 problem of immortality, but it's only
18:57 for digital things.
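Mechanically, the "immortality" point is just serialization: the connection strengths are numbers, so storing them and loading them into new hardware that runs the same instructions recreates the same behaviour. A toy sketch under that assumption; the file name, array size, and the tiny stand-in network are invented for illustration.

```python
import numpy as np

# The "intelligence" here is fully described by its connection strengths.
rng = np.random.default_rng(1)
weights = rng.normal(size=(4, 4))

def network(weights, x):
    # A trivial stand-in for "hardware executing the same instructions".
    return np.tanh(weights @ x)

x = np.ones(4)
before = network(weights, x)

# "Destroy the hardware", but keep the connection strengths stored somewhere.
np.save("connection_strengths.npy", weights)
del weights

# New hardware, same instructions, same stored weights: the same network is back.
restored = np.load("connection_strengths.npy")
after = network(restored, x)

assert np.allclose(before, after)  # identical behaviour, so nothing was lost
```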
19:00 >> So, it will essentially know everything that humans know, but more, because it will learn new things.
19:06 >> It will learn new things. It would also
19:08 see all sorts of analogies that people
19:10 probably never saw.
19:14 So, for example, at the point when GPT-4 couldn't look on the web, I asked it, "Why is a compost heap like an atom bomb?"
19:22 Off you go.
19:23 >> I have no idea.
19:25 >> Exactly. Excellent. That's exactly what most people would say. It said, "Well, the time scales are very different and the energy scales are very different." But then it went on to talk about how a compost heap, as it gets hotter, generates heat faster, and an atom bomb, as it produces more neutrons, generates neutrons faster.
19:45 >> And so they're both chain reactions but
19:47 at very different time and energy scales. And I believe GPT-4 had seen that during
19:52 its training.
19:53 It had understood the analogy between a
19:55 compost heap and an atom bomb. And the reason I believe that is: if you've only got a trillion connections, remember you have 100 trillion, and you need to have thousands of times more knowledge than a person, then you need to compress information into those connections. And to compress
20:11 information, you need to see analogies
20:13 between different things. In other
20:15 words, it needs to see all the things
20:16 that are chain reactions and understand
20:18 the basic idea of a chain reaction and
20:20 code the ways in which they're
20:22 different. And that's just a more
20:24 efficient way of coding things than
20:26 coding each of them separately.
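The compost heap / atom bomb analogy can be made concrete: both are positive-feedback processes in which the growth rate is proportional to the current amount, so they share one underlying form and differ only in their constants. A rough sketch; the rate constants and time steps below are invented for illustration, not physical values.

```python
# Both systems follow the same rule: the rate of growth is proportional to
# the current amount (x' = k * x), which gives exponential growth. Only the
# rate constant k, and hence the time scale, differs.

def chain_reaction(x0, k, dt, steps):
    """Simulate x' = k * x with simple Euler steps."""
    x = x0
    history = [x]
    for _ in range(steps):
        x += k * x * dt
        history.append(x)
    return history

# Compost heap: heat output grows slowly (small k, time step of an hour).
compost = chain_reaction(x0=1.0, k=0.05, dt=1.0, steps=100)

# Atom bomb: neutron count grows over nanoseconds (huge k, tiny time step).
bomb = chain_reaction(x0=1.0, k=1e8, dt=1e-9, steps=100)

# Same curve shape, wildly different time and energy scales.
print(compost[-1], bomb[-1])
```

Coding the shared rule once and the differing constants separately is the kind of compression Hinton is describing.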
20:29 >> So it's seen many, many analogies, probably many analogies that people have never seen. That's why I also think that the people who say these things will never be creative are wrong. They're going to be much
20:38 more creative than us because they're
20:39 going to see all sorts of analogies we
20:41 never saw. And a lot of creativity is
20:44 about seeing strange analogies.
20:46 >> If you love the Diary Of A CEO brand and you watch this channel, please do me a huge favor: become part of the 15% of the viewers on this channel that have hit the subscribe button. It helps us tremendously, and the bigger the channel gets, the bigger the guests.