0:02 You've really got two choices. You can
0:04 either be a spectator or a participant.
0:06 We're talking about an economy that is
0:09 thousands of times, maybe millions of
0:12 times bigger than the economy today. I
0:16 went to Russia in like 2001 and 2002 to
0:19 buy ICBMs. This was to get to space.
0:21 Yeah, as a rocket, not to nuke anyone.
0:23 But I think we're
0:24 quite close to digital
0:27 superintelligence. It may happen this year.
0:29 Digital superintelligence, defined as
0:32 smarter than any human at anything. And
0:33 I'm somewhat troubled by the Fermi
0:35 paradox. Why have we not seen any aliens?
0:43 We bring you Elon Musk's latest
0:45 unfiltered interview where he shares
0:47 exclusive predictions about artificial
0:50 intelligence, the race to Mars, and
0:51 building companies that could save
0:53 humanity from extinction. In this
0:56 supercut, we've reduced pauses and
0:58 filler words while preserving every
1:00 crucial insight, saving you over 15
1:03 minutes of your valuable time. After
1:05 months focused on politics, this
1:07 conversation returns to what Elon does
1:10 best, engineering the future. We've
1:12 structured this into three focused
1:14 chapters. First, the foundations that
1:17 shaped his thinking. Second, building
1:18 the companies to make life multiplanetary.
1:22 And third, his predictions for AI and
1:25 what comes next. Chapter one, first
1:28 principles. How Elon got started. Was
1:30 there ever a moment in your life before
1:33 all this where you felt I have to build
1:35 something great? And what flipped that
1:36 switch for you? Well, I didn't
1:37 originally think I would build something
1:39 great. I wanted to try to build
1:41 something useful, but I didn't think I
1:43 would build anything particularly great.
1:45 It just probabilistically seemed
1:47 unlikely, but I wanted to at least try.
1:49 So you're talking to a room full of
1:52 people who are all technical engineers
1:53 often some of the most
1:56 eminent AI researchers coming up in the
1:58 game. Okay, I
2:00 like the term engineer better than
2:02 researcher. I suppose if there's
2:04 some fundamental algorithmic
2:06 breakthrough, it's research, but
2:08 otherwise it's engineering. Maybe let's
2:10 start way back. This is a room
2:13 full of 18 to 25 year
2:16 olds. It skews younger because the
2:18 founder set is younger and younger. Can
2:20 you put yourself back into their shoes
2:24 when you know you were 18, 19, you know,
2:26 learning to code, even coming up with a
2:29 first idea for Zip2. What was that like
2:32 for you? Yeah. Back in '95, I was
2:34 faced with a choice: either do
2:36 grad studies, a PhD at Stanford
2:38 in material science, actually working on
2:40 ultracapacitors for potential use in
2:41 electric vehicles, essentially trying to
2:43 solve the range problem for electric
2:45 vehicles, or try to do something in this
2:46 thing that most people had never heard
2:48 of called the internet. And I talked
2:50 to my professor, Bill Nix in the
2:52 material science department, and
2:54 said, can I defer for a quarter,
2:57 because this will probably fail and then
2:59 I'll need to come back to college?
3:01 And he said, this is probably the
3:03 last conversation we'll have. And he was
3:04 right. But I thought things would most
3:05 likely fail, not that they would most
3:09 likely succeed. And then in '95 I wrote, I
3:11 think the first or close to the first
3:14 maps, directions, white pages, and yellow
3:15 pages on the internet. I wrote
3:17 that personally, and I didn't
3:18 even use a web server. I just read the
3:20 port directly, because I
3:22 couldn't afford a T1.
3:25 Original office was on Sherman Avenue in
3:28 Palo Alto. There was like an ISP on the
3:30 floor below. So I drilled a
3:32 hole through the floor and just ran a
3:34 LAN cable directly to the ISP. My
3:37 brother joined me, and another
3:38 co-founder, Greg Kouri, who passed away.
3:41 At the time we couldn't even
3:43 afford a place to stay. The
3:45 office was 500 bucks a month, so we just
3:46 slept in the office and then
3:48 showered at the YMCA at Page Mill and El
3:51 Camino. And I guess we ended
3:54 up doing a bit of a useful
3:56 company, Zip2, in the beginning, and
3:59 we did build a lot of really good
4:01 software technology, but we were
4:04 somewhat captured by the legacy media
4:06 companies, in that Knight Ridder, the New
4:09 York Times, Hearst, and whatnot were investors
4:12 and customers and also on the board.
4:14 So they kept wanting to use
4:16 our software in ways that made no sense.
4:18 I wanted to go direct to consumers.
4:20 Anyway, it's a long story, dwelling too
4:22 much on Zip2, but I really just wanted
4:23 to do something useful on the internet.
4:26 I had two choices: do
4:28 a PhD and watch people build the
4:30 internet, or help build the internet in
4:31 some small way. And I was like, well, I
4:33 guess I can always try and fail and then
4:34 go back to grad studies. Anyway, that
4:36 ended up being like reasonably
4:38 successful. Sold for like $300 million,
4:40 which was a lot at the time. These days,
4:42 I think the minimum
4:44 for an AI startup is like a billion
4:45 dollars. There's so many
4:47 freaking unicorns, just a herd of
4:48 unicorns at this point, if a
4:50 unicorn is a billion-dollar situation.
4:52 There's been inflation since, so quite a
4:55 bit more money. Yeah, I mean in
4:57 1995 you could probably buy a burger for
4:59 a nickel. Well, not quite, but I mean
5:00 yeah, there has been a lot of inflation,
5:04 but the hype level in AI is
5:06 pretty intense, as you've seen. You know,
5:08 you see companies that are, I don't
5:10 know, less than a year old getting
5:11 sometimes billion-dollar or
5:14 multi-billion-dollar valuations, which I
5:16 guess could pan out, and probably
5:18 will pan out in some cases, but it is
5:20 eye-watering to see some of these
5:23 valuations. Yeah. What do you think?
5:25 I'm pretty bullish
5:26 personally. I'm pretty bullish,
5:28 honestly. I think the people in
5:30 this room are going to create a lot of
5:32 the value. A billion people
5:34 in the world should be using this stuff,
5:36 and we're not even
5:38 scratching the surface of it. I love the
5:41 internet story, in that back then
5:43 you were a lot like the people
5:46 in this room:
5:49 the CEOs of
5:51 all the legacy media companies looked to
5:53 you as the person who understood the
5:55 internet. And a lot of the world, the
5:57 corporate world, the world
5:59 at large, does not understand what's
6:00 happening with AI. They're going to look
6:02 to the people in this room for exactly
6:04 that. What are
6:05 some of the tangible lessons? It sounds
6:07 like one of them is don't give up board
6:10 control, or be careful about it, and have a
6:12 really good lawyer. Uh, I guess for my
6:15 first startup, the big
6:17 mistake was having too much
6:20 shareholder and board control from
6:22 legacy media companies, who then
6:24 necessarily see things through the lens
6:28 of legacy media, and they'll kind of
6:30 make you do things that seem sensible to
6:32 them but don't make
6:34 sense with the new technology. I
6:36 should point out that I
6:38 didn't actually at first intend to start
6:40 a company. I tried to get a job
6:42 at Netscape and sent my resume in,
6:44 but I don't think anyone ever saw
6:47 my resume; nobody responded. So
6:48 then I tried hanging out in the
6:50 lobby of Netscape to see if I could
6:52 bump into someone, but I was too shy
6:54 to talk to anyone. So I'm like, man, this
6:55 is ridiculous. I'll just write
6:57 software myself and see how it goes. So
6:58 it wasn't actually from the standpoint
6:59 of, I want to start a company. I
7:01 just wanted to be part of building
7:03 the internet in some way. And
7:04 since I couldn't get a job at an
7:06 internet company, I had to start an
7:09 internet company. AI will so profoundly
7:11 change the future, it's difficult to
7:14 fathom how much. But assuming
7:16 things don't go awry and AI
7:19 doesn't kill us all, and itself, then
7:22 you'll see ultimately an economy that is
7:24 not just 10 times more than the current
7:26 economy. Ultimately, if we, or
7:28 whatever our future, mostly machine,
7:30 descendants
7:34 become, say, a Kardashev
7:36 scale 2 civilization or beyond, we're
7:39 talking about an economy that is
7:41 thousands of times maybe millions of
7:44 times bigger than the economy today. I
7:46 did sort of feel a bit like, you know,
7:47 when I was in DC taking a lot of flak
7:50 for getting rid of waste and fraud,
7:52 which was an interesting side quest as
7:54 side quests go. Fixing the government is
7:56 kind of like: say the beach
7:57 is dirty and there are needles
8:00 and feces and trash, and you want to
8:02 clean up the beach, but then there's also
8:04 this thousand-foot wall of water,
8:08 which is the tsunami of AI. How
8:09 much does cleaning the beach really
8:11 matter if you've got a thousand-foot
8:13 tsunami about to hit? Not that much. If
8:16 you're trying to build a rocket or cars,
8:18 or you're trying to have software that
8:21 compiles and runs reliably, then you have
8:23 to be maximally truth-seeking or your
8:26 software or your hardware won't work.
8:28 You can't fool
8:30 math and physics; they are rigorous judges. So
8:32 I'm used to being in a maximally
8:34 truth-seeking environment, and
8:36 that's definitely not politics. So
8:37 anyway, I'm glad to be back in
8:40 technology. I guess I'm kind of
8:42 curious going back to the Zip2 moment.
8:44 You had an exit worth hundreds of millions
8:46 of dollars. I got $20 million,
8:50 right? And you basically took it and
8:53 kept rolling with X.com, which merged
8:56 with Confinity and became PayPal. Yes. I
8:58 kept the chips on the table. What drove
9:00 you to jump back into the ring? Well, I
9:02 think I felt, with Zip2, we
9:03 built incredible technology, but it
9:05 never really got used. You know, I think,
9:07 at least from my perspective, we had
9:09 better technology than, say, Yahoo or
9:11 anyone else, but it was constrained by
9:13 our customers. So I wanted to do
9:15 something where we wouldn't
9:16 be constrained by our customers: go
9:18 direct to consumer. And that's what
9:21 ended up being X.com, PayPal.
9:23 Essentially, X.com merging with
9:25 Confinity, which together created PayPal.
9:28 And actually, the
9:30 sort of PayPal diaspora might
9:33 have created more
9:35 companies than probably anything in
9:37 the 21st century. So many
9:39 talented people were at the combination
9:42 of Confinity and X.com. I felt like we
9:46 kind of got our wings clipped somewhat
9:48 with Zip2, and it's like, okay, what if
9:49 our wings aren't clipped and we go
9:51 direct to consumer? And that's
9:54 what PayPal ended up being. But
9:57 yeah, I got that $20 million
10:00 check for my share of Zip2. At the
10:03 time, I was living in a house with
10:06 four housemates and had like 10 grand in
10:08 the bank. And then this check
10:10 arrives in the mail, of all places.
10:14 My bank balance went from 10,000 to 20
10:16 million and 10,000. You're like, well,
10:17 okay. Still have to pay taxes on that
10:19 and all, but then I ended up putting
10:22 almost all of that into X.com and, as you
10:24 said, just kind of keeping almost
10:27 all the chips on the table. From coding
10:29 his first software to nearly losing
10:31 everything on Tesla and SpaceX, Elon's
10:33 early struggles reveal the cost of
10:36 betting on breakthrough technologies.
10:38 Now, he explains how these companies
10:40 aren't separate ventures. They're
10:42 interconnected pieces of a larger
10:44 mission to preserve human consciousness
10:47 across multiple worlds.
10:50 Chapter 2, Engineering a Multiplanetary
10:53 Civilization. Then after PayPal, I was
10:54 kind of curious as to why we
10:56 had not sent anyone to Mars. And I went
10:58 on the NASA website to find
11:00 out when we were sending people to Mars.
11:02 And there was no date. I thought maybe
11:05 it was just hard to find on the website,
11:07 but in fact there was no real plan
11:09 to send people to Mars. I'm
11:11 definitely summarizing a lot here, but
11:14 my first idea was to do a
11:16 philanthropic mission to Mars called
11:20 Life to Mars, where we send a small
11:22 greenhouse with seeds and dehydrated
11:25 nutrient gel, land that on Mars, and
11:27 hydrate the gel, and then
11:30 you'd have this great sort of money
11:32 shot of green plants on a red
11:33 background. For the longest time, by
11:35 the way, I didn't realize money shot,
11:36 I think, is a porn reference. But
11:38 anyway, the point is that would be
11:40 the great shot of green plants on a red
11:42 background, to try to inspire
11:44 NASA and the public to send
11:47 astronauts to Mars. And along the
11:49 way, by the way, I went to Russia in
11:53 like 2001 and 2002 to buy ICBMs, which
11:55 is an adventure, you know,
11:56 you go and meet with Russian high
11:57 command and say, I'd like to buy some
12:01 ICBMs. This was to get to space. Yeah,
12:04 not to nuke anyone. But as a result of arms
12:08 reduction talks, they had to actually
12:10 destroy a bunch of their big
12:12 nuclear missiles. So I was like, well,
12:13 how about if we take two of those, you
12:16 know, minus the nuke, and add an additional
12:18 upper stage for Mars? But it was
12:20 kind of trippy, you know, being in
12:23 Moscow in 2001 negotiating with the
12:26 Russian military to buy ICBMs. Like,
12:27 that's crazy. I was like, man, these
12:29 things are getting really expensive.
12:30 And then I came to realize that
12:32 actually the problem was not that there
12:34 was insufficient will to go to Mars, but
12:36 that there was no way to do so without
12:38 breaking the budget, even
12:41 breaking the NASA budget. So that's when
12:44 I decided to start SpaceX, to
12:46 advance rocket technology to the point
12:49 where we could send people to Mars. And
12:51 that was in 2002. So that wasn't, you
12:53 know, you didn't start out wanting to
12:56 start a business. You wanted to start
12:58 just something that was interesting to
13:01 you that you thought humanity needed. It
13:03 turns out this could be a very
13:05 profitable business. I mean, it is
13:07 now, but there had been no prior
13:09 example of a rocket startup really
13:11 succeeding. There had been various
13:13 attempts to do commercial rocket
13:15 companies, and they all failed. So
13:17 again, starting SpaceX
13:18 was really from the standpoint of, I
13:20 think there's a less than 10%
13:23 chance of being successful. But if a
13:25 startup doesn't do something to advance
13:26 rocket technology, it's definitely not
13:28 coming from the big defense
13:30 contractors, because they're just
13:32 impedance-matched to the government, and the
13:33 government just wants to do very
13:36 conventional things. So it's
13:37 either coming from a startup or it's not
13:40 happening at all. So a small
13:42 chance of success is better than no
13:44 chance of success. And even when
13:46 recruiting people, I didn't try to,
13:47 you know, make out that we'd succeed. I said,
13:49 we're probably going to die, but there's a small chance
13:52 we might not die, and this is the
13:54 only way to get people to Mars and
13:56 advance the state of the art. And then I
13:57 ended up being chief engineer of the
13:59 rocket, not because I wanted to, but
14:01 because I couldn't hire anyone who was
14:03 good. None of the good
14:05 chief engineers would join because
14:06 they're like, this is too risky, you're
14:08 going to die. And so I ended up
14:10 being chief engineer of the rocket. And
14:12 you know, the first three flights did
14:14 fail. So, a bit of a learning
14:17 exercise there. The fourth one
14:18 fortunately worked. But if the fourth
14:20 one hadn't worked, I had no money left,
14:22 and it would have
14:24 been curtains. So it was a pretty close
14:26 thing. If the fourth launch of Falcon 1
14:28 hadn't worked, it would have been
14:29 curtains, and we would have
14:32 joined the graveyard of prior rocket
14:34 startups. So my estimate
14:36 of success was not far off. We just
14:38 made it by the skin of our teeth. Tesla
14:40 was happening sort of simultaneously.
14:43 2008 was a rough year because by
14:45 mid-2008 the third launch of SpaceX had
14:47 failed, a third failure in a row, the
14:50 Tesla financing round had failed, and so
14:53 Tesla was going bankrupt fast. It was
14:55 just a cautionary tale, an exercise in
14:58 hubris. Probably throughout that period
15:00 a lot of people were saying you know
15:02 Elon is a software guy. Why is he
15:05 working on hardware? Yeah, 100%. The
15:08 press of that
15:10 time is still online, and you can just
15:12 search it, and they kept calling me
15:15 internet guy. So, internet guy, aka
15:18 fool, is attempting to build a rocket
15:20 company. And it does sound pretty absurd.
15:22 Internet guy starts rocket company
15:25 doesn't sound like a recipe for success,
15:27 frankly. So I didn't hold it against
15:28 them. I was like, yeah,
15:30 admittedly it does sound improbable, and
15:32 I agree that it's improbable. But
15:34 fortunately the fourth launch worked,
15:36 and NASA awarded us a contract to
15:38 resupply the space station. It was
15:40 right before Christmas, because even the
15:42 fourth launch working wasn't enough to
15:44 succeed. We also
15:46 needed a big contract to keep us alive.
15:48 So I got that call from
15:51 the NASA team, and they said,
15:53 we're awarding you one of the
15:54 contracts to resupply the space station.
15:56 I literally blurted out, I love you
15:58 guys. Which is not normally, you know,
16:00 what they hear. And then we closed
16:03 the Tesla financing round in the last
16:05 hour of the last day that it was
16:07 possible, which was 6 p.m. December
16:09 24th, 2008. We would have bounced
16:11 payroll 2 days after Christmas if that
16:12 round hadn't closed. It feels
16:14 like one of the through lines was being
16:17 able to find and eventually attract the
16:19 smartest possible people in those
16:21 particular fields. What would you tell,
16:23 you know, the Elon who's never had
16:25 to do that yet? I generally think
16:27 try to be as useful as possible.
16:29 It's so hard to be useful, especially to
16:31 be useful to a lot of people, where, say,
16:33 the area under the curve of total
16:34 utility is how useful you have
16:36 been to your fellow human beings
16:37 times how many people. It's almost like
16:39 the physics definition of true
16:40 work. It's incredibly difficult to do
16:42 that. And I think if you aspire to do
16:44 true work, your probability of
16:47 success is much higher. Don't
16:49 aspire to glory; aspire to work.
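[Editor's note: a rough formalization of the "area under the curve" analogy above. The symbols are ours, not Musk's; this is only an illustration of the comparison being drawn.]

    W = \int \vec{F} \cdot d\vec{x} \qquad \text{(work in physics)}

    U = \int u(p)\, dp \;\approx\; \bar{u} \times N \qquad \text{(total utility)}

Here u(p) is how useful you are to person p, \bar{u} is the average usefulness per person, and N is the number of people reached.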
16:53 How can you tell that it's true work? Is it
16:54 external? Is it what happens with
16:56 other people, or what the
16:57 product does for people? What
17:00 is that for you? I mean, in
17:01 terms of your end product, you
17:03 just have to say, well, if this
17:04 thing is successful, how useful will it
17:06 be to how many people? That's
17:08 what I mean. And, you know,
17:10 whether you're CEO or any role in a
17:12 startup, you do whatever it takes to
17:13 succeed. And just always be
17:15 smashing your ego. Internalize
17:17 responsibility. A major failure mode is
17:20 when the ego-to-ability ratio is much
17:21 greater than one. If your ego-to-ability
17:24 ratio gets too high, then
17:26 you're going to basically
17:28 break the feedback loop to reality. In
17:31 AI terms, you'll break your RL
17:33 loop. You want to have a strong RL
17:36 loop, which means internalizing
17:38 responsibility and minimizing ego. And
17:40 you do whatever the task is, no matter
17:42 whether it's, you know, grand or humble.
17:44 I prefer the term engineering as opposed
17:47 to research. And I
17:48 actually don't want to call xAI a
17:50 lab. I just want it to be a company.
17:52 Whatever the simplest,
17:54 most straightforward, ideally lowest-ego
17:56 terms are, those are generally a good
17:58 way to go. You want to just close the
17:59 loop on reality hard. That's
18:01 a super big deal. I think
18:03 everyone in this room really looks up
18:06 to everything you've done around being
18:08 sort of a paragon of first principles.
18:10 Thinking about the stuff
18:13 you've done, how do you actually
18:15 determine your reality? People who have
18:18 never made anything, non-engineers,
18:21 will criticize you. But then clearly you
18:23 have another set of people who are
18:25 builders, who are in your circle. How
18:27 should people approach that as they
18:29 make their way in this world?
18:31 How do you construct a
18:33 reality that is predictive from first
18:35 principles? Well, the tools of
18:38 physics are incredibly helpful to
18:40 understand and make progress in any
18:42 field. First principles just
18:43 means, you know, break
18:45 things down to the fundamental axiomatic
18:46 elements that are most likely to be true
18:48 and then reason up from there as
18:50 cogently as possible, as opposed to
18:53 reasoning by analogy or metaphor. And
18:54 then simple things like
18:56 thinking in the limit: if you
18:59 extrapolate, minimize this thing
19:00 or maximize that thing, thinking in the
19:02 limit is very, very helpful. I use all
19:05 the tools of physics. They apply to any
19:07 field. This is like a superpower
19:08 actually. Take,
19:10 for example, rockets. You can ask,
19:12 how much should a rocket
19:14 cost? The typical approach
19:16 people take to how much a rocket should
19:18 cost is to look historically at what
19:20 the cost of rockets is and assume that
19:22 any new rocket must be somewhat similar
19:24 to the prior cost of rockets. A first
19:25 principles approach would be: you look at
19:27 the materials the rocket is
19:29 comprised of. So if that's aluminum,
19:31 copper, carbon fiber, steel, whatever
19:34 the case may be, you ask, how much
19:35 does that rocket weigh, what
19:37 are the constituent elements, how
19:39 much do they weigh, and what is the material
19:41 price per kilogram of those constituent
19:43 elements? That sets the actual floor
19:46 on what a rocket can cost. It can
19:48 asymptotically approach the cost of the raw
19:50 materials. And then you realize, oh,
19:51 actually the raw materials of
19:55 a rocket are only maybe one or 2% of the
19:58 historical cost of a rocket. So the
20:00 manufacturing must necessarily be very
20:02 inefficient if the raw material cost is
20:05 only 1 or 2%. That would be a
20:06 first principles analysis of the
20:09 potential for cost optimization of a
20:10 rocket, and that's before you get to
20:12 reusability.
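[Editor's note: a back-of-the-envelope sketch of the cost-floor reasoning above. All masses, prices, and the historical price are illustrative placeholders, not SpaceX figures.]

    # First-principles floor on rocket cost: sum raw-material mass times
    # commodity price, then compare with a historical vehicle price.
    # Every number below is an assumed placeholder for illustration.
    materials = {
        # material: (mass_kg, price_usd_per_kg), assumed values
        "aluminum": (20_000, 3.0),
        "steel": (5_000, 1.0),
        "carbon fiber": (3_000, 30.0),
        "copper": (1_000, 9.0),
    }
    raw_floor_usd = sum(kg * usd for kg, usd in materials.values())
    historical_price_usd = 10_000_000  # assumed price of a comparable rocket
    print(f"raw-material floor: ${raw_floor_usd:,.0f}")
    print(f"share of historical price: {raw_floor_usd / historical_price_usd:.1%}")
    # If raw materials are only ~1-2% of the price, manufacturing, not
    # physics, dominates the cost, and that's before reusability.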
20:15 To give an AI example: last year,
20:18 for xAI, when we were trying to build a
20:21 training supercluster, we went to
20:23 the various suppliers and said we
20:26 needed 100,000 H100s to be able to train
20:28 coherently. Their estimates for how long
20:30 it would take to complete that were 18
20:32 to 24 months. It's like, well, we need to
20:34 get that done in 6 months or we won't be
20:36 competitive. So then you break
20:37 that down: what are the things
20:39 you need? Well, you need a building, you
20:41 need power, you need cooling. We didn't
20:43 have enough time to build a building
20:44 from scratch, so we had to find an
20:46 existing building. We found a factory
20:48 that was no longer in use in Memphis
20:50 that used to build Electrolux products.
20:52 But the input power was 15
20:55 megawatts and we needed 150 megawatts. So
20:57 we rented generators and had generators
20:58 on one side of the building. Then we
21:00 had to have cooling, so we rented
21:01 about a quarter of the mobile cooling
21:03 capacity of the US and put the chillers
21:04 on the other side of the building. But
21:05 that didn't fully solve the problem,
21:06 because the power
21:09 variations during training are very
21:11 big. Power can drop by
21:13 50% in 100 milliseconds, which the
21:15 generators can't keep up with. So then
21:17 we added Tesla Megapacks and
21:19 modified the software in the Megapacks
21:21 to be able to smooth out the
21:23 power variation during the training run.
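[Editor's note: a toy simulation of the power-smoothing idea just described: a training load that swings 50% in a few hundred milliseconds, a generator that can only ramp slowly, and a battery covering the mismatch. All parameters are invented for illustration; this is not xAI or Tesla control software.]

    # Battery buffers the gap between a fast-swinging training load and
    # a slow generator. Illustrative numbers only.
    dt_s = 0.01           # 10 ms simulation step
    ramp_mw_per_s = 50.0  # assumed generator ramp limit
    load_mw = gen_mw = 150.0
    battery_kwh = 0.0     # net energy the battery supplies (+) or absorbs (-)

    for step in range(100):  # one second of wall time
        t = step * dt_s
        # load drops 50% for 300 ms, then recovers
        load_mw = 75.0 if 0.2 <= t < 0.5 else 150.0
        # generator ramps toward the load, limited by its slew rate
        max_delta = ramp_mw_per_s * dt_s
        gen_mw += max(-max_delta, min(max_delta, load_mw - gen_mw))
        battery_mw = load_mw - gen_mw  # power the battery must supply (+) / absorb (-)
        battery_kwh += battery_mw * dt_s / 3600.0 * 1000.0

    print(f"generator after 1 s: {gen_mw:.1f} MW")
    print(f"net battery swing: {battery_kwh:.2f} kWh")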
21:24 It sounds like with almost any of
21:26 those things you mentioned, I could
21:28 imagine someone telling you very
21:30 directly, no, you can't have that, you
21:32 can't have that power, you can't have
21:33 this. And it sounds like one of the
21:35 salient pieces of first-principles
21:38 thinking is: let's ask why, let's
21:40 figure that out, let's
21:42 challenge the person across the
21:45 table, and if I don't get an
21:47 answer that I feel good about, I'm not
21:52 going to let that no stand. I think
21:53 these general principles
21:55 of first-principles thinking apply to
21:56 software and hardware, apply to
21:58 anything really. I'm just using kind of
22:00 a hardware example of how we were
22:02 told something is impossible, but once
22:04 we broke it down into the constituent
22:05 elements of we need a building, we need
22:07 power, we need cooling, we
22:10 need power smoothing, then
22:11 we could solve those constituent
22:13 elements. And
22:15 then we just ran the networking
22:17 operation to do all the cabling,
22:20 everything, in four shifts, 24/7,
22:22 and I was sleeping in the data
22:24 center and also doing cabling myself.
22:26 With rockets built and electric vehicles
22:28 scaling, the final challenge isn't
22:32 mechanical; it's intelligence itself.
22:34 In this final chapter, Elon shares his
22:36 timeline for artificial general
22:39 intelligence, why truthful AI matters, and
22:41 his predictions for technologies that
22:44 could reshape everything we know about
22:46 human capability.
22:49 Chapter 3, the future of AI, robots and
22:52 human evolution. Is it your view that
22:55 training is still working, that
22:58 the scaling laws still hold, and that
23:00 whoever wins this race will have
23:03 basically the biggest, smartest possible
23:04 model that you could distill? Well,
23:07 of the various elements that
23:10 decide competitiveness for large AI,
23:12 there's for sure the talent
23:14 of the people; the scale of the hardware
23:16 matters, and how well you're able to bring
23:17 that hardware to bear. You can't
23:19 just order a whole bunch of GPUs and
23:21 plug them in;
23:22 you've got to get a lot of
23:24 GPUs and have them train coherently and
23:27 stably. Then, what unique
23:29 access to data do you have? I guess
23:30 distribution matters to some degree as
23:32 well: how do people get exposed to
23:34 your AI? Those are
23:36 critical factors if it's going to be
23:38 a large foundation model that's
23:40 competitive. Right now we're
23:43 training Grok 3.5, which has a
23:45 heavy focus on reasoning. What I heard
23:48 is, for reasoning, hard science,
23:50 particularly physics textbooks, is very
23:52 useful, whereas
23:54 researchers have told me that the social
23:57 sciences are totally useless for reasoning.
23:59 Yes, that's probably true. You know,
24:00 something that's going to be very
24:03 important in the future is combining
24:06 deep AI, the data center or supercluster,
24:08 with robotics. You know, things like
24:11 the Optimus humanoid robot. Yeah,
24:13 Optimus is awesome. There are going to be
24:15 so many humanoid robots, and robots
24:17 of all sizes and shapes,
24:19 but my prediction is that there will be
24:22 more humanoid robots by far than all
24:24 other robots combined, by maybe an order
24:26 of magnitude. Like, a big difference.
24:28 Is it true that you're planning a
24:31 robot army of a sort? Whether we do it,
24:33 or whether Tesla does it, you
24:36 know, Tesla works closely with xAI,
24:38 you've seen how many humanoid robot
24:39 startups there are. I
24:42 think Jensen Huang was on stage with a
24:44 massive number of robots from different
24:46 companies. I think there were like a dozen
24:49 different humanoid robots. I mean, I
24:50 guess, you know, part of what I've been
24:52 fighting, and maybe what has slowed me
24:54 down somewhat, is that
24:55 I don't want to make
24:57 Terminator real. I've been sort of, I
24:59 guess, at least until recent years,
25:02 dragging my feet on AI and
25:04 humanoid robotics. And then I sort of
25:06 came to the realization it's
25:08 happening whether I do it or not. So
25:10 you've really got two choices: you
25:12 could either be a spectator or a
25:14 participant. And, well, I guess
25:15 I'd rather be a participant than a
25:17 spectator. So now it's, you know,
25:19 pedal to the metal on humanoid robots
25:22 and digital superintelligence. So I
25:23 guess, you know, there's a third thing
25:25 that everyone has heard you talk a lot
25:26 about that I'm really a big fan of, you
25:29 know, becoming a multiplanetary species.
25:30 How do you think about it? There's
25:32 AI, obviously, there's embodied
25:35 robotics, and then there's being a
25:37 multiplanetary species. Does everything
25:40 sort of feed into that last point? Or,
25:42 you know, what are you driven by
25:44 right now for the next 10, 20, and 100
25:47 years? Jeez, 100 years, man. I hope
25:49 civilization's around in 100 years. If
25:50 it is around, it's going to look very
25:52 different from civilization today. I
25:54 mean, I'd predict that there's going to
25:56 be at least five times as many humanoid
25:59 robots as there are humans. Maybe 10
26:01 times. One way to look at the progress
26:03 of civilization is percentage completion
26:06 of the Kardashev scale. If you're
26:08 Kardashev scale one, you've
26:09 harnessed all the energy of a planet.
26:12 Now, in my opinion, we've only
26:15 harnessed maybe one or two percent of Earth's
26:16 energy. So we've got a long way to go to
26:19 be Kardashev scale one. Then Kardashev 2, you've
26:21 harnessed all the energy of a sun, which
26:23 would be, I don't know, a billion times
26:25 more energy than Earth, maybe closer to a
26:28 trillion. And then Kardashev 3 would be all
26:30 the energy of a galaxy. Pretty far from
26:32 that.
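[Editor's note: an order-of-magnitude check on the jumps described above, using standard astronomy figures; an editorial illustration only.]

    # Rough ratios between Kardashev levels. Only orders of magnitude matter.
    solar_luminosity_w = 3.8e26   # total power output of the Sun
    earth_intercept_w = 1.7e17    # sunlight actually intercepted by Earth
    stars_per_galaxy = 1e11       # rough Milky Way star count

    print(f"Kardashev 2 vs 1: ~{solar_luminosity_w / earth_intercept_w:.1e}x")
    # ~2.2e9: a few billion times, consistent with "a billion, maybe
    # closer to a trillion" once you count energy beyond sunlight alone.
    print(f"Kardashev 3 vs 2: ~{stars_per_galaxy:.0e}x")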
26:35 So we're at the very early
26:37 stage of the intelligence big bang. In
26:39 terms of being multiplanetary, I think
26:42 we'll have enough mass transferred to Mars
26:44 within roughly 30 years to make
26:46 Mars self-sustaining, such that Mars can
26:49 continue to grow and prosper even if the
26:51 resupply ships from Earth stop coming.
26:53 That greatly increases the probable
26:55 lifespan of civilization, or
26:57 consciousness, or intelligence, both
27:00 biological and digital. And I'm somewhat
27:02 troubled by the Fermi paradox: why
27:04 have we not seen any aliens? It
27:06 could be because intelligence is
27:09 incredibly rare and maybe we're the only
27:11 ones in this galaxy, in which case
27:13 the intelligence of consciousness is
27:16 this tiny candle in a vast darkness,
27:18 and we should do everything possible to
27:20 ensure the tiny candle does not
27:23 go out. And being a multiplanet species,
27:25 or making consciousness multiplanetary,
27:27 greatly improves the probable lifespan
27:29 of civilization, and it's the
27:32 next step before going to other star
27:34 systems. Once you at least
27:35 have two planets, then you've got a
27:37 forcing function for the improvement of
27:39 space travel, and that
27:41 ultimately is what will lead to
27:43 consciousness expanding to the stars.
27:45 One resolution of the Fermi paradox is that once you get
27:48 to some level of technology, you destroy
27:51 yourself. What would you prescribe to, I
27:53 mean, a room full of engineers? What
27:54 can we do to prevent that from
27:56 happening? Yeah, how do we avoid the
27:57 great filters? One of the great filters
27:59 would obviously be global thermonuclear
28:01 war, so we should try to avoid that.
28:05 Building benign AI: AI that
28:07 loves humanity and robots that are
28:10 helpful. Something that I think is
28:12 extremely important in building AI is
28:15 a very rigorous adherence to truth, even
28:17 if that truth is politically incorrect.
28:21 My intuition for what could make AI very
28:24 dangerous is if you force AI to
28:25 believe things that are not true. How do
28:27 you think about, you know, the argument for
28:29 open for safety
28:32 versus closed for competitive edge? You
28:33 know, if there's a fast takeoff and it's only
28:35 in one person's hands, that
28:37 might sort of collapse a lot of
28:40 things, whereas now we have choice, which
28:42 is great. How do you think about this?
28:45 Yeah, I do think there will be several
28:47 deep intelligences, maybe at least
28:49 five. I'm not sure that there's going to
28:51 be hundreds, but it's probably close to
28:52 maybe 10 or something
28:55 like that, of which maybe four will be in
28:58 the US. But yeah, several deep
29:01 intelligences. What will these deep
29:03 intelligences actually be doing? Will it
29:05 be scientific research or trying to hack
29:07 each other? Probably all of the above. I
29:09 mean, hopefully they will discover new
29:10 physics, and I think they will definitely
29:13 invent new technologies. I
29:14 think we're quite close to
29:16 digital superintelligence. It may
29:18 happen this year, and if it doesn't
29:20 happen this year, next year for sure.
29:22 Digital superintelligence, defined as
29:24 smarter than any human at anything.
29:26 Well, so how do we direct that to sort
29:28 of superabundance? You know, we
29:30 could have robotic labor, we have cheap
29:33 energy, intelligence on demand. You
29:35 know, is that sort of the white pill?
29:37 Where do you sit on the spectrum,
29:40 and are there tangible things that you
29:43 would encourage everyone here to be
29:45 working on to make that white pill
29:46 actually reality? I think it
29:49 most likely will be a good outcome. I
29:51 guess I'd sort of agree with Geoff Hinton
29:53 that maybe it's a 10 to 20% chance of
29:55 annihilation, but look on the bright
29:57 side, that's 80 to 90% probability of a
29:59 great outcome. Yeah, I can't emphasize
30:00 this enough. A rigorous adherence to
30:02 truth is the most important thing for
30:05 AI safety and obviously empathy for
30:07 humanity and life as we know it. You're
30:09 working on closing the input and output
30:12 gap between humans and machines. How
30:15 critical is that to AGI, ASI? And you
30:18 know, once that link is made, can we not
30:20 only read but also write? Neuralink
30:22 is not necessary to solve digital
30:24 superintelligence; that'll happen before
30:27 Neuralink is at scale. But what
30:28 Neuralink can effectively do is solve
30:31 the input-output bandwidth
30:32 constraints. With a Neuralink
30:34 interface you can massively increase
30:36 your output bandwidth and your input
30:38 bandwidth, input meaning you
30:40 have to do write operations to the
30:43 brain. We have now five humans who have
30:45 received the kind of read input,
30:47 where it's reading signals, and you've
30:50 got people with ALS, who really
30:52 are tetraplegics, but they can
30:54 now communicate at similar bandwidth to
30:56 a human with a fully functioning body
30:58 and control their computer and phone,
31:00 which is pretty cool. In the next 6 to
31:02 12 months, we'll be doing our first
31:04 implants for vision, where even if
31:07 somebody's completely blind, we
31:09 can write directly to the visual
31:11 cortex, and we've had that working in
31:13 monkeys. One of our monkeys has now had
31:15 the visual implant for 3 years. At first
31:17 it'll be fairly low
31:19 resolution, but long term you would have
31:22 very high resolution and be able to see
31:23 multispectral
31:26 wavelengths. So you could see in
31:28 infrared, ultraviolet, radar. It's like a
31:30 superpower situation. At some point, the
31:32 cybernetic implants would not
31:34 simply be correcting things that went
31:37 wrong, but augmenting human capabilities
31:39 dramatically. But digital super
31:40 intelligence will happen well before
31:42 that. I guess one of the limiting
31:45 reagents to all of your efforts across
31:47 all of these different domains is access
31:49 to the smartest possible people. What's
31:51 going to happen in, you know, five to
31:53 ten years, and what should the people in
31:55 this room do to make sure that
31:57 they're the ones who are creating,
31:59 instead of maybe below the API line?
32:01 Well they call it the singularity for a
32:03 reason because we don't know what's
32:06 going to happen in the not-that-far
32:08 future. The percentage of intelligence
32:10 that is human will be quite small. At
32:12 some point the collective sum of human
32:15 intelligence will be less than 1% of all
32:18 intelligence. I guess just to end off:
32:19 where do we go? How do we go from
32:21 here? I mean, all of this is
32:23 pretty wild sci-fi stuff that also could
32:26 be built by the people in this room. Do
32:27 you have a closing thought for the
32:29 smartest technical people of this
32:31 generation right now? If you're doing
32:33 something useful, that's great. Just
32:35 try to be as useful as possible to
32:37 your fellow human beings, and
32:39 then you're doing something good. I keep
32:41 harping on this: focus on super
32:43 truthful AI. That's the most important
32:45 thing for AI safety. You know, obviously,
32:47 if anyone's interested in working at
32:49 xAI, I mean, please let us
32:52 know. We're aiming to make Grok the
32:54 maximally truth-seeking AI. Hopefully,
32:55 we can understand the nature of the
32:57 universe. That's really, I guess,
33:00 what AI can hopefully tell us. Maybe
33:01 AI can tell us where the
33:03 aliens are, you know, how did the universe
33:06 really start? How will it end? What are
33:08 the questions that we don't know that we
33:11 should ask? And are we in a simulation,
33:12 or what level of simulation are we in?
33:14 Well, I think we're going to find out. NPC.
33:17 From first principles thinking to
33:20 multiplanetary civilization, this
33:22 conversation shows how Elon approaches
33:25 humanity's biggest challenges, not as
33:27 abstract problems, but as engineering
33:30 puzzles to solve. If you enjoyed this,
33:32 we've selected two more videos you'll
33:34 find fascinating. Check them out on your
33:36 screen now, and subscribe for more
33:38 content that cuts through the noise to
33:40 show you what's really shaping our
33:42 future. Elon, thank you so much for
33:45 joining us. Everyone, please give it up