0:02 People who are really high up at the AI
0:07 labs, they say that we are being rebels,
0:10 foolish rebels, if we don't listen to
0:12 the AI. They truly say this. How
0:14 dare you rebel against God. Right.
0:16 Exactly. How dare you, you silly
0:19 rebel. Nick Bostrom is correct
0:21 that philosophy is on a deadline. He is
0:24 dead wrong about this
0:27 essential, maybe central, human good. Two
0:29 years from now, AGI is going to come
0:30 around the pipeline. You wouldn't be
0:32 doing what you're doing now.
0:33 Right. There are more pressing
0:36 questions to resolve. Yes. Wrong. People
0:38 right now believe that AI should tell
0:40 them what to do. There are Claude boys,
0:42 literally teenagers who call themselves
0:44 Claude Boys who will wake up and they
0:46 will do what Claude says and not
0:49 otherwise. And this, not EA, not
0:51 existential risk. This is the true...
0:58 One of my friends uses ChatGPT for
1:00 hours every day, not just as a search
1:03 engine, but as an operating system for
1:05 his life. He asks it where he should
1:07 eat, what he should text girls on
1:09 dating apps. He gets up every day and
1:12 has ChatGPT tell him what to do. My
1:14 friend does this not because he's
1:15 incompetent or stupid. He's one of the
1:18 smartest people I know. But because
1:20 ChatGPT already knows so much about him that
1:22 the advice is actually getting quite
1:25 good, the restaurants that it recommends,
1:27 for example, are already better than the
1:29 ones that he can find for himself. My
1:32 friend is not alone. Gen Z, Gen Alpha,
1:35 are increasingly using AI as a holistic
1:37 operating system onto which they offload
1:40 all of their decisions. And Brendan
1:42 McCord argues that this kind of
1:44 offloading is the real danger of AI that
1:46 no one is talking about. In this
1:48 interview, you'll learn why human
1:50 autonomy is important, how AI threatens
1:53 it, and how to harness the power of AI
1:56 without forming an unhealthy dependency.
1:58 My name is Jonathan Bi. Brendan and I run
2:00 Cosmos together to deliver educational
2:03 programs, fund research, and build AI
2:05 startups that enhance human flourishing.
2:07 Both of us have a background in
2:09 philosophy and engineering. And we
2:10 believe that it's important to combine
2:12 the two if we are to build actually good
2:15 AI systems. If you want to join our
2:17 ecosystem of philosopher builders, you
2:19 can find roles we're hiring for, events
2:20 we're hosting, and other ways to get
2:24 involved on jonathanb.com/cosmos.
2:31 So, Brendan, today we're going to talk
2:34 about autonomy, which is uh a risk, but
2:36 also an opportunity that most people who
2:38 are building an AI, even for their
2:40 entire lives, are not really focused on.
2:42 Um but first I want to clear the ground
2:44 and talk about what people are worried
2:46 about. So there are very mature and
2:48 developed schools of philosophy in the
2:50 valley. The accelerationists, the
2:52 effective altruists, uh the x-risk
2:54 people and you argue that they're all
2:56 missing something essential about AI
2:58 development. Why is that? Uh and give us
3:00 an overview of these schools.
3:03 So the two main tendencies that I found
3:05 within the AI community are the
3:07 existential pessimists and the
3:09 accelerationists. and they roughly map
3:11 to the extremes of despair and the
3:13 extreme of hope.
3:15 The existential pessimist is kind of
3:17 three philosophies in a trench coat.
3:20 It's rationalism, effective altruism,
3:22 long-termism. These are all distinct,
3:24 but they're the intellectual incumbent.
3:27 And the prescription in light of this
3:30 possibility is that we should pause
3:33 development, that we should centralize
3:36 control, that we should radically
3:38 remake society on the basis of risk
3:40 avoidance. So it's a claim, and then it's
3:44 a series of drastic and dystopian
3:47 prescriptions. You see some of the early
3:49 godfathers of AI, as they're called,
3:51 people like Geoffrey Hinton,
3:54 having this almost Oppenheimer-like
3:56 hubristic awe of their own creation,
3:58 saying, I wish I hadn't done it,
4:00 and, you know, that I brought this into
4:01 the world.
4:04 And, you know, the three schools as I
4:06 mentioned: effective altruism, rationalism,
4:08 long-termism. It's worth breaking it down
4:11 a little bit. So rationalism comes out of
4:15 the 2000s. Eliezer Yudkowsky and
4:17 Slate Star Codex were kind of the
4:19 originating sources of this, and they
4:22 claim to be focused on an idea of
4:25 perfecting human rationality, but
4:28 they have a very, very narrow commitment
4:30 to what rationality is. It's Bayesian
4:31 updating over a value function, or the
4:34 value function over outcomes. It's a
4:37 kind of rationality that Aristotle and
4:40 Kant and you know modern thinkers would
4:42 not recognize. So, it's a commitment to
4:44 a very myopic kind of probabilistic
4:47 control type of morality. And what I
4:50 find interesting about this is that the
4:52 school is committed to rationality. Yet,
4:55 it finds itself the most fertile
4:58 breeding ground for the extremes of hope
4:59 and despair. And there's some kind of
5:02 irony to that. You know, the next one is
5:04 long-termism. And Bostrom is probably
5:08 the source of long-termism. Long-termism
5:09 is worried about anything that could
5:10 affect the long-term outcomes for
5:12 humanity. That could be an asteroid. It
5:14 could be, you know, a pathogen. It
5:16 doesn't have to be AI. AI is
5:18 particularly important because its M.O. is
5:21 that of control. Bostrom advances
5:22 something called the orthogonality
5:25 thesis where you say something can
5:27 become very intelligent but on a totally
5:29 different axis. That does not imply that
5:32 they become more moral or that it
5:33 becomes more controllable. This is true
5:37 of humans as well. Um, and he worries
5:40 that a non-anthropomorphic final goal,
5:43 meaning create more paper clips, will
5:45 cause AI to pulp us all. So, this is a
5:48 concern that he has for the long-term
5:50 future of humanity based on this
5:53 orthogonality notion. And then the last
5:54 is effective altruism. And effective
5:56 altruism dominates the three when it
5:58 comes to the fundraising potential. It
6:00 is the most highly adopted academic
6:03 theory since Marxism. And what
6:05 effective altruism does is it
6:08 rationalizes certain moral intuitions,
6:11 and things that are not intuitive at all
6:13 it tries to pull in as well, things like,
6:16 you know, other-regarding behavior should
6:20 not factor in any notion of time or
6:23 place. So what that means is I
6:26 should care about my family as much as
6:29 someone who lives 17 generations hence
6:32 in Indonesia or even plausibly a shrimp
6:35 equally. A util is a util whether it is
6:38 yours or those other entities. That is a
6:42 very radical premise. Um it also
6:45 attempts as as you know these forms of
6:48 utilitarianism do to reduce moral
6:51 questions to a single currency such that
6:53 we can compute them and maximize them.
6:57 And so um I reject and I think most
6:59 thinking people do reject that morality
7:01 is a thing that works like that. In
7:03 other words, endemic to morality appears
7:06 to be this difficulty of making pretty
7:08 sharply heterogeneous trade-offs. Like,
7:12 how do you compare the moral
7:14 question entailed by some suffering over
7:18 here with familial love, with duty or
7:20 honor in battle? These are things that
7:21 aren't commensurable.
7:24 In utilitarianism, they are, and it's a
7:27 false, kind of scientistic way of viewing
7:28 the questions of morality that really
7:31 obviates what makes moral choice
7:33 difficult. The other thing I'll say here
7:37 is that um effective altruism
7:40 tries to take one's personal projects
7:43 and make them those of the universe,
7:45 to provide the calculational standpoint.
7:48 Rather than, you know, acting morally
7:50 in a way that inherently springs from me,
7:51 right, that's what we learn from
7:54 Aristotle and the Nicomachean Ethics, rather
7:56 than doing that, it subordinates that
7:58 impulse, that moral choice is an
8:00 individual thing, and tries to make it
8:01 the standpoint of the universe. It
8:03 subordinates what it means to act
8:06 morally. And this is why it is not
8:08 surprising to me to see such profound
8:10 moral failings in the EA community. In
8:13 other words, what is sexual assault,
8:14 what is the real issue
8:17 there, if you're doing cosmically
8:19 significant work? What are the practical
8:22 consequences of following this through
8:24 of people who are actually building AI
8:26 or regulating AI thinking in this way?
8:29 What are the practical consequences?
8:32 So, two that come to mind. One is that
8:34 we would look to
8:36 governance solutions that are profoundly
8:39 illiberal and that would lead to tyranny
8:42 um namely the creation of a world state.
8:46 So in the paper "The Vulnerable World
8:49 Hypothesis," Bostrom writes about the need,
8:51 therefore (you know, it's always a
8:53 conclusion of these radical premises), to
8:56 create a world state, where basically you
8:59 eliminate the coordination difficulty of
9:02 having greater than one actor. So even
9:04 US and China let alone a more
9:06 multi-polar you know world presents this
9:08 like game theoretic challenge of racing
9:10 and so forth. And so we have to get to
9:12 one. We have to get to one state, and of
9:14 course political philosophy has dealt
9:15 with the question of whether there
9:17 should be one state many times before.
9:20 There's a Kantian critique that has to
9:23 do with, you know, the challenges
9:26 of maintaining legitimacy versus
9:28 having to sort of squash dissent, and
9:30 there's the Strauss-Kojève debate, where Leo
9:32 Strauss argues that it would lead to a
9:35 universal, perpetual, homogeneous tyranny,
9:39 and many other concerns. So
9:41 that's one. The other is that if we
9:44 accept and hold tight those utilitarian
9:49 premises, then we might think that it is
9:52 uh acceptable to live a life with
9:53 artificial intelligence that is
9:57 profoundly uh nonhumanistic.
9:59 In other words, that we might um be
10:02 perfectly willing to be downstream of
10:05 AI, having it tell us what to do and
10:07 guide our life like an autocomplete.
10:11 I see. Um, one interesting thing I want
10:13 to double click on is you use the word
10:15 hubris to describe these schools, which
10:17 I think is fascinating because in
10:19 many ways these schools portray
10:21 themselves to be egoless, right? The
10:22 effective altruist says it's not
10:24 about my good, it's about altruism; or the
10:26 rationalist right it's not about my ego
10:28 or my selfish interest it's about doing
10:30 what's rational but I was reading um one
10:33 of these uh important blogs of the EA
10:35 community and it said the way you should
10:37 make decisions is think what would a
10:40 benevolent and omnipotent God do. And I
10:42 think it's it's kind of a power trip,
10:44 right, to to to think about it in that
10:45 way. And maybe to build off of what you
10:48 said about Oppenheimer, I think that to
10:51 confess one's guilt is at
10:54 once to claim credit for the sin. It's
10:56 to say, I am powerful enough to
10:58 potentially end the world. And there's
11:00 also a subtle ego trip there. Is that is
11:01 that what you meant by hubris?
11:04 Yeah. And I think it it's both the kind
11:07 of hubris that Bacon had in saying that
11:09 man would master nature, you know, at
11:10 the beginning of the scientific project
11:12 and the same kind of hubris that comes
11:15 through any serious philosophy project
11:17 where we try to find the one true answer
11:20 to all the things. Like there is a
11:23 hubris laden in that. And if you are the
11:26 type of person that can subjugate the
11:28 world with symbols, the mathematically
11:30 inclined among us, it is your moment, and
11:33 what you do has cosmic significance for
11:34 the future of the race.
11:36 Right now I will say the strands of
11:39 philosophy to which I am attracted tend
11:41 to take a different approach. They tend
11:44 to be more epistemically
11:47 humble. They tend to focus on the use of
11:49 reason to whittle down the claims of
11:52 reason as David Hume would say. In other
11:54 words, a critical rationalism and not a
11:55 constructive rationalism that tries to
11:57 sort of say this is the way the world
11:59 is. Tries to focus a little bit more on
12:00 what the world isn't. Right?
12:02 Um so that there is, you know,
12:04 philosophy is not uh not solely the
12:06 domain of the hubristic,
12:09 right? Uh one of my favorite lines from
12:11 your favorite book by Hayek,
12:14 which we'll talk about is something like
12:16 uh every political philosophy believes
12:19 other people are ignorant but liberals
12:20 also believe that they themselves are
12:22 ignorant. Right? And that's
12:24 the kind of difference there. Um many in
12:26 Silicon Valley have also reacted to
12:28 these dominant schools of philosophy
12:30 in the way that you have. Uh and so
12:32 accelerationism was born. Tell us about
12:33 that school and why you don't think
12:35 that's a satisfactory response. Right.
12:37 So the other end of the
12:38 extremes of hope and despair is
12:40 accelerationism,
12:43 and this school would have us unleash
12:45 the development of AI as an end in
12:48 itself. And that's a very important
12:50 idea. You know, you and I, I think,
12:52 probably hold technology to be a very
12:55 powerful means by which we flourish as
12:58 individuals. The accelerationist school
13:00 confuses technology as a means with
13:03 technology as the end and views humans
13:06 as being a kind of instrument in this
13:08 trajectory that is sweeping,
13:10 transcendent, inexorable, this
13:13 technological trajectory. It draws a lot
13:17 not from Bayesian thinking but from
13:20 thermodynamics. And so it wants to
13:23 portray humans as a kind of variable in
13:26 this thermodynamic equation in this um
13:28 broad project to be able to harness and
13:30 dissipate more energy or to be able to
13:33 climb the Kardashev scale, which
13:36 means to harness not just the energy of
13:38 the planet or the sun but of the galaxy
13:40 you know ratchet up more and more and
13:43 more. These are not humanistic goals.
13:45 And it's very important because one of
13:47 the conclusions that the orthodox
13:50 accelerationists reach is actually that
13:54 we can and should hasten the time to
13:55 passing the baton
13:57 from humans. We invite the doom to
14:00 something higher. Yeah. Exactly. And I
14:01 think with both schools, but certainly
14:03 with accelerationism, you see a
14:05 metaphysical impulse, you know, the the
14:07 the the religious impulse that humans
14:11 have long had. You see it conserved but
14:14 redirected towards the thermodynamic
14:17 god. In other words, they're both kind
14:20 of eschatological. You know, in one case,
14:22 we die and in one case we build
14:24 something higher and better and then are
14:28 transcended. And here's the upshot:
14:31 both of these schools are very
14:33 imaginative in a sense, right?
14:34 You cannot fault them for being very
14:36 imaginative about the possibilities
14:38 here. um whether it's paper clips or
14:41 thermodynamic god but on the one thing
14:44 that is most needful they have a lack of
14:46 imagination and what I mean by that is
14:48 on what it means to be human what the
14:49 human good is. Yeah.
14:53 One side views humans as a kind of
14:56 aggregate of suffering or pleasure. In
14:58 other words, we view the human good as
15:00 the avoidance of pain. I have news for
15:02 you. That is not the entirety of the
15:05 human good. The other side abandons the
15:06 idea of the human good entirely. In
15:08 other words, it's a non-humanistic
15:10 philosophy. And this is precisely where
15:14 the issue is, is that we need a positive
15:17 uh uh uh approach that is humanistic,
15:20 that is grounded in the real goods, not
15:22 reductionist to, you know, either
15:25 Bayesian or thermodynamic effect, but
15:27 really focus on the underlying human
15:30 goods. If you are as we are investing in
15:32 for example application uh level
15:34 companies that are using AI uh if you're
15:37 worried about solving problems around uh
15:39 autonomy um and we'll discuss what that
15:42 means you are implicitly making a bet
15:44 and saying that there
15:46 won't be a singularity or an AGI
15:49 within the next 5 to 10 years right
15:50 You're saying that there is a human
15:55 good that needs to be jealously defended,
15:56 that's what you're saying, and you're
15:58 saying that no one else will do it. We
15:59 have to do it.
16:01 So you're agnostic to
16:03 the speed of the AI development.
16:04 You're agnostic to the development
16:07 scenarios. You're simply saying whatever
16:10 path we go down, this is a human good
16:12 that I want to protect and I must build
16:14 in order to protect it.
16:15 Let me push you in one last direction
16:18 as a devil's advocate for these schools,
16:22 which is, Brendan, you say that you
16:24 are agnostic to the AI
16:27 development timeline, and the core
16:30 focus is figuring out how AI regardless
16:32 of development timeline can help these
16:34 human goods flourish and that's why for
16:36 example the the main questions we ask at
16:38 Cosmos are around autonomy which we'll
16:40 talk about soon decentralization and
16:42 truth-seeking. The devil's advocate is
16:47 to say but if you think AGI is around
16:49 the corner then all of your energy
16:51 should be focused on getting that right
16:54 and aligning it and it will be able to
16:56 answer these questions
16:58 much better than we can today. And so
17:00 this is what Bostrom said when I was
17:02 interviewing him, which is philosophy
17:04 has a deadline. And there are certain
17:06 philosophical questions that are more
17:09 urgent if you take that AGI is around
17:11 the corner kind of kind of idea. So even
17:13 though you claim you're agnostic, you're
17:14 still not agnostic, right? Because if I
17:16 told you, let's say I'm an Oracle, I
17:18 come from the future. Two years from
17:20 now, AGI is going to come around the
17:21 pipeline. You wouldn't be doing what
17:23 you're doing now, right?
17:24 These questions we can save for later.
17:25 There are more pressing questions to
17:26 resolve. Yes.
17:30 Wrong. Yeah. So I think, you know, for
17:33 example, when we get into
17:36 autonomy, that is a lived practice. In
17:37 other words that is something that
17:41 humans do right and so the thought that
17:43 we will build AGI and AGI will figure
17:46 out autonomy is nonsensical. It's a
17:49 category error. Like it can't figure out
17:51 something that must live within us. In
17:53 other words, we must self-develop. We
17:57 must self-direct. And so we can use AGI
17:59 instrumentally in that pursuit, but it
18:01 is not a thing to be figured out. It's a
18:03 practice. It's something to be lived. I see.
18:08 And so like I I also want to um clarify
18:10 that my agnosticism doesn't imply a
18:12 withdrawal. In other words, we are
18:14 building like we're building the future
18:15 we want to see.
18:17 Okay. Well, this is a perfect segue.
18:19 Let's talk about the risk that I don't
18:21 think anyone in the valley has really
18:23 focused on in the way that we have
18:26 autonomy. What is the risk as well as
18:27 opportunity as it relates to AI and
18:30 human autonomy? So when you think about
18:32 the greatest goods in your life, you
18:34 probably think about things like friends
18:36 and family and loved ones. You might
18:39 think about the pursuit of wisdom.
18:41 That's I would say one of your highest
18:43 highest goods. Creative endeavor, that
18:45 sort of thing, right? Um it's actually
18:47 eating but wisdom is number two. Yeah.
18:49 Yeah. Eating. Exactly. That, I
18:51 would say, would be one of the lowest that is
18:53 necessary for the highest. So this is an
18:54 interesting point about how the lowest
18:56 things in us are needed for the for the
18:59 highest as well. But these kinds of
19:01 goods, what's common to them, whatever
19:04 you hold to be your highest goods, is
19:08 that they cannot be obtained on a
19:11 platter. You know, AI can't give them,
19:13 except for eating, which
19:13 I'll hold to the side.
19:16 We'll buck in. Um, and they have to be
19:19 uh attained as the result of some kind
19:20 of self-motivated striving. In other
19:22 words, you had to get out there. You had
19:25 to try things. You had to enjoy it. You
19:26 had to experience it. You had to get
19:29 hurt by it. Being able to discover and
19:31 develop one's gifts, being able to
19:34 deliberate using reason and to line up
19:36 our actions and be able to pursue them.
19:38 So, this deliberative capacity for
19:40 self-direction, okay, is I think the
19:41 thing I want to call everyone's
19:44 attention to. This is autonomy. And
19:48 without this self-direction, we cease to
19:50 live fully human lives. We may act in
19:52 the world, but it isn't really our life
19:55 to live. And so I say this all so far at
19:57 the level of the individual. Like the
19:59 other piece of it is that it's very
20:01 important as a society, particularly a
20:03 democratic society, one that
20:06 self-governs. And this is because we
20:08 depend on individuals who can form
20:10 views, who can act on those views, who
20:14 can self-govern. Without that we lose
20:17 the greatest bulwark against despotism.
20:22 Right. So draw it out practically how uh
20:25 this current AI wave can harm or or
20:27 accelerate us being autonomous agents.
20:31 Okay. So the phrase that I would uh
20:34 stick in your mind is autocomplete for
20:38 life. What I mean by that is we use AI
20:41 systems, we obtain the incremental
20:44 convenience from them where we get not
20:45 just the next word in the sentence,
20:46 that's what everyone's familiar with of
20:48 autocomplete, but also the next
20:52 decision, the next, you know, uh uh job
20:55 recommendation, the next friend, the
20:56 next relationship, the next purpose. In
20:58 other words, we can sort of ladder up
21:01 what AI can do for us and feels very
21:03 harmless. It feels convenient and
21:06 probably useful, but it adds up. It it
21:09 causes a kind of erosion of choice. When
21:12 we offload, we can see at the level of
21:15 fMRI, but certainly we all recognize
21:18 this in our lived experience that we
21:22 atrophy. In other words, we um we
21:25 habitually offload in a way that causes
21:27 us to then lose the skill. you see this
21:30 like the fMRI stuff is like if you do a
21:31 lot of speed reading and not a lot of
21:34 deep reading you you lose some of the
21:36 ability to do that you know or if you do
21:37 a lot of calculator-based arithmetic
21:40 you lose the ability to do that now I
21:43 think an important point has to be made
21:44 like, why is this not just another
21:46 version of that, right? Google
21:48 Maps, right? I can't drive very well
21:50 without Google Maps. You're probably
21:51 the same, you actually live in New York, so
21:54 you probably can't drive, period. But
21:56 then the next question becomes:
21:59 why is AI in particular a problem for this?
22:00 Because this is a problem for all
22:02 technology, books and memory, right?
22:04 Like driving and being able to ride a
22:05 horse. Yeah. What what's
22:07 Well, actually, I think before we even
22:08 talk about that, I framed it as a
22:10 problem, but it's actually it's actually
22:12 a beautiful thing,
22:14 right? And you and I have talked about
22:16 the quote from Alfred North Whitehead
22:18 that the measure of civilizational
22:20 progress is the number of important
22:21 operations of thought we can perform
22:22 without thinking about them. Right?
22:25 It's a brilliant brilliant quote. There
22:27 are examples of this; my favorite
22:31 is Max Verstappen, the Formula 1 driver.
22:33 He's a kind of a prodigy, you know,
22:36 special driver. And when it's raining on
22:39 the course, he can talk to his pit wall,
22:43 his pit crew. And it's because he has
22:45 made it autonomic. Yeah.
22:48 He has done it so many times that he can
22:50 actually think about strategy. He's
22:51 talking about the tires. Yeah. You know,
22:53 he's going 220 mph. In fact, I remember
22:55 one of the races, I'm a big F1 fan. He was
22:57 watching the other drivers like on the
22:59 jumbotron and commenting on their races.
23:00 I mean, it's almost like he's just Yeah,
23:02 he's just like having a strategy thought
23:04 while everyone else is clinging on at
23:06 5Gs or whatever they drive at. And so
23:07 anyway, so it's a beautiful thing. It's
23:08 how we build the edifice of
23:09 civilization. It's how we do the higher
23:11 things, it's great. So it's a bit of a
23:12 paradox, right? On the one hand, it's
23:13 great. On the other hand, it's
23:16 problematic. Okay, so now we set sort of
23:18 the contours. So coming back to the key
23:20 question of why AI is a
23:22 special case. Yeah.
23:25 so one is you have to think about what
23:28 it is that you are offloading and thus
23:31 potentially eroding and in the case of
23:34 calculators it's calculation in the case
23:37 of maps it's like positioning in
23:39 space, yeah, navigation.
23:42 In the case of writing, it's
23:44 essentially memory, right, primarily
23:46 memory. And so these are the kinds of
23:47 categories.
23:51 Never before has it been possible to
23:54 offload and therefore atrophy our core
23:56 practical wisdom or our core
23:58 deliberation, I should say, that leads to
24:02 a kind of wisdom. So, you know, now you are
24:03 talking about something that is
24:06 necessary to self-direct that is
24:09 necessary for moral judgment right it is
24:11 necessary for us to decide what is good
24:14 for us and so it's a different kind of
24:16 thing that gets offloaded the other
24:17 question you have to ask:
24:19 okay now that we've pinpointed that this
24:21 is a very precious thing that we should
24:24 be careful to not offload then you have
24:26 to look at how pervasive is it likely to
24:29 be and AI is clearly something that can
24:32 scale. It's clearly something that can be
24:34 hyper-personalized.
24:38 Already 20% of human waking life is
24:40 mediated by algorithms.
24:42 Social media algorithms? Yeah,
24:44 social media algorithmic feeds, not
24:48 just LLMs but AI that um determines or
24:51 guides or shapes what information
24:52 reaches your mind, what thoughts
24:55 therefore form within them. And so the
24:56 scale is already very significant. But
24:58 what that means is that you might not
25:00 encounter different possibilities. In
25:02 other words, you might not realize that
25:05 there's something else out there if you
25:07 have been sort of epistemically narrowed
25:08 to a high degree. I think another
25:11 mechanism that's important is how do you
25:12 sort of pull out, how do you recover,
25:14 right? In the case of calculators, you
25:16 can just do kind of the inverse
25:18 operation and check it. In the case of a
25:19 lot of the things AI does for us, it's
25:21 very hard to check and it seems
25:23 authoritative. You know, it can answer
25:25 questions like, what is justice? Like, no
25:27 one knows, but, you know, it gives you
25:29 an answer to that kind of question. And if AI
25:31 seems authoritative and fast, the
25:33 computational cost of checking it is
25:36 very high, we don't check it. This is a
25:38 common automation issue. We don't check
25:42 it. So that fact combined with the
25:45 narrowing destroys the possibility of
25:47 error correction in the long term.
25:48 So so let me summarize the conversation
25:50 so far for our audience which is that
25:53 all technology, as it gives you a
25:55 superpower with the left hand, takes
25:57 something away through dependence with the
26:00 right hand. This trade-off is worth
26:02 it if what is taken away isn't so
26:05 central. What makes AI special, in your view,
26:09 it sounds like, is that because it's
26:11 the technology that's
26:14 most similar to humans, it can
26:16 potentially take away practical
26:18 deliberation. So it's the thing that you
26:21 lose. It's you know practical reason
26:23 versus navigation versus calculations
26:27 versus memory as well as the scope uh
26:29 with which you lose it because it can be
26:30 embed in everything. That's what makes
26:33 this extremely dangerous and the fact
26:36 that it is hard to recover because the
26:38 way you might audit that, the way you
26:40 might pull it out is through use of the
26:42 very same thing that it atrophies.
26:43 I see for example,
26:46 right? And so obviously I imagine people
26:47 are going to be curious about the
26:49 solution. What do we do about this? But
26:52 before we go there, uh I want to better
26:53 understand the concern by giving you a
26:54 thought experiment. Okay? And I I call
26:56 this thought experiment the
26:58 omniscient autocomplete. So, let's say
27:00 whatever practical question you give
27:02 it, it'll always give you the best
27:03 answer for what you should do. Should I marry
27:06 Sally? Should I marry Susan? Uh, and you
27:08 know, you know that it's the best
27:09 because historically it's been verified.
27:11 Okay. So you run backtests, and it
27:13 always gives the right answer. And every
27:14 little thing that you've tested and that
27:16 your friends have tested, it always
27:17 seems to have given you the right
27:19 answer. So, you're pretty
27:21 confident empirically that it's, you
27:23 know, it gives you always the right
27:25 practical answer. How would you use this system?
27:28 I want to accept the thought experiment,
27:32 but first I want to understand, is it
27:34 omniscient through time? Like, is
27:37 it an oracle? So, like, in the middle
27:39 of the way through an NBA season, we
27:41 don't know who wins the championship,
27:43 right? Does this omniscient oracle know
27:45 that? And I raise this because there
27:48 is a class of knowledge that can only be
27:50 that's not sort of like computationally
27:52 reducible. It's something that only is
27:54 generated through actually playing out
27:57 the thing you know right and this is for
28:01 example um how markets uh function is we
28:02 you know across many different variables
28:05 we generate knowledge uh it's knowledge
28:07 that is known to no one because it isn't
28:08 even in existence in the world so let's
28:11 say that it's not omniscient through time,
28:13 um but let's say
28:15 let me put it this way: the wisest human
28:18 ever, let's say Socrates, it would make
28:20 the decision that they would make.
28:22 Given imperfect information, date
28:25 Sally, date Susan, we don't know if
28:27 Sally or Susan has cancer, but given all
28:30 the information, it makes the
28:30 best possible answer. Yeah.
28:32 The reason I asked the question is
28:34 because
28:38 there is a mode of operation that is
28:42 suitable to exploiting knowledge then
28:42 known. Yeah.
28:44 In other words, like kind of what an
28:46 authoritarian government does, right?
28:49 there is quite another that is suited to
28:51 the maximal generation of knowledge,
28:54 right? And um for things that are not
28:56 known and won't be known except by sort
28:57 of playing it out, right?
29:00 And so if you lean too hard on the
29:03 former uh on the exploitation of the
29:05 knowledge then known, you kind of
29:07 deplete the stock, right? You cease to
29:09 generate the new. And I would argue that
29:12 the real goal from the consequentialist
29:16 frame is we should want systems that can
29:19 uh allow the anonymous individual to
29:22 achieve his or her unknown ends. And if
29:24 we want to do that, it is not by simply
29:26 exploiting the knowledge then known.
29:28 It is by maximally enticing the use of
29:29 knowledge.
29:30 Right? But the AI could
29:32 tell you to do that. So the AI could
29:34 say, you know, given what you know
29:36 now about Sally and Susan, go with
29:38 Susan, but be open to it and then ask
29:40 you questions while you're dating Susan,
29:41 you know, every time you ask it a
29:43 question, it will come out with the
29:46 best kind of practical
29:47 deliberation. Yeah.
29:48 So how would you use it? Would
29:50 you have like a pair of VR goggles that
29:52 it always tells you what to do? Would
29:53 you never consult it? Would you consult
29:57 it occasionally? Mhm. So I have a
29:59 four-year-old and a six-year-old and I
30:01 kind of am raising them with the idea
30:03 that this world is the world in which
30:04 they are entering. Right.
30:07 Okay. And so what I've done is on the
30:12 one hand try to utilize that oracle to
30:15 develop their skills. So that telos is a
30:18 self-development one. And so my daughter
30:21 will do math that AI could trivially
30:23 answer, but she will still do it. So AI
30:25 poses questions and she does the math
30:27 and this works pretty well. But then I
30:32 time-limit that very strictly, and
30:34 this is through a
30:36 curriculum at Alpha School, which, you
30:38 know, is based in Austin. So there's a
30:41 time delimiting. Then for the remainder
30:43 of the day there is an experiential
30:45 learning component that is completely
30:47 without this oracle. In other words she
30:49 goes outside she tries things in the
30:51 world. She learns to ride a bike for
30:52 five miles without stopping. She climbs
30:54 a rock wall. She speaks in front of a
30:56 hundred people. Okay. So, there's a
30:59 nonoracle component. There can be light
31:01 consultation for like if I want to learn
31:03 how to garden, how do I do that? That's
31:05 a consultation that I would rely on
31:08 the oracle to tell me how to do. The
31:11 last component is then um stimulating
31:15 through probably human discussion the
31:18 kind of characteristics and habits of
31:20 mind that are necessary to retain
31:22 self-direction in a world like that
31:24 because my biggest concern would not be
31:27 correctness. Correctness is solved but
31:29 it would be the concern around
31:31 enfeeblement, around not living a full
31:33 human life because I no longer
31:35 self-direct because I become a sheep.
31:37 And so what do I mean by that? Well, I
31:40 want to be able to cultivate a
31:44 reflective metacognition that says what
31:46 am I versus what is the pole of the
31:49 algorithm. In other words, if this is an
31:51 exosystem around me that I'm using quite
31:54 regularly, I sure as heck want to know
31:56 what I am, right? This is an extended
31:59 part of my mind um that I endorse, that
32:02 I transparently use, but I still need to
32:04 know what my boundary is so I don't get
32:05 lost in it, right?
32:09 We need to know how to think in
32:11 connection with machines that could do
32:12 the thinking for us.
32:15 Right? So let me ask you this. One way
32:18 to frame your answer is to say I will
32:20 consult your oracle but I always need to
32:22 make sure that if I follow its
32:24 directions I need to know the steps. I
32:26 need it to tell me the full reasons. Is
32:28 that is that fair?
32:31 Uh, I think it's more than that. I
32:33 need to be able to exercise my
32:35 deliberative capacity. In other words,
32:38 knowing the reasons. It is not enough to
32:40 just know them, right? I mean, this is
32:41 the Meno, right? You know, this is
32:45 the idea that we need to have the statues
32:47 of Daedalus, I think, you know, tied
32:50 down. And the only way we can uh avoid
32:53 having this thought run away is through
32:55 working through it, giving an account,
32:58 as Plato would say. And so um I I need
33:01 not just to know the reasons but I also
33:03 need to be able to work through them
33:04 um myself
33:06 if that makes sense. Yeah. So, and
33:08 the telos is really important as well,
33:11 because there's a lot you can do. Like,
33:14 Rousseau wrote in the Emile about how to
33:16 tutor, how to raise an autonomous
33:19 child, right, and this gives us Montessori
33:23 in the end. But one thing he
33:25 does there is he configures the
33:28 environment for the boy in a way that is
33:30 very I don't know paternalistic like
33:32 controlling that environment but there's
33:36 a progressive uh letting go because the
33:39 end goal is self-development. With an AI
33:40 system, there's no end goal of
33:42 self-development. You know, that oracle
33:44 doesn't really care about that. That
33:47 oracle would just as soon have you be
33:50 perpetually dependent and that you have
33:52 a habit of passivity and you just do
33:53 what it tells you to do, right?
33:56 So, it's crucial that you set that goal, right,
33:59 of self-development and not of, you
34:00 know, unthinking dependence.
34:04 Yeah. So, um, I agree with much of what
34:07 you said. One being that the shape of a
34:09 good life is to be self-directed, right?
34:11 So, even if you're
34:13 making all the right choices, dating
34:15 Sally and not Susan, if you're not
34:16 feeling in the driver's seat, if you
34:19 live your entire life like that, that in
34:21 it in itself, the form of that, even
34:23 beyond the content of your decisions,
34:25 robs you of the good life, ipso facto,
34:26 right? That that's what you're getting
34:29 at. So, I would definitely not use this
34:32 Oracle in like VR glasses mode where
34:33 like it would always just tell me to
34:34 raise your left hand, raise your right
34:37 hand. And I would gladly
34:40 trade off suboptimal decisions for the
34:42 fact that I'm making the decisions. Yeah.
34:45 But here's why I want to
34:48 challenge you. What if the Oracle is so
34:51 advanced that we can't even understand
34:54 its deliberations? And before this
34:55 sounds too dystopian, I'll draw an
34:59 analogy to religion, right? Um Dante
35:02 when he goes to paradise and he asks the
35:04 eagle of justice, why is the poor
35:06 virtuous pagan who lived before Christ,
35:08 who never even had a chance to see
35:08 Christ, why does he deserve to go to
35:10 hell?
35:12 The eagle of justice says, none of your
35:13 damn business. Yeah.
35:15 And in the next canto, the eagle of
35:17 justice says, I'm the eagle of justice.
35:20 I don't even know. That's God. Okay, I
35:22 just delight in this. These are almost
35:24 the exact words: I delight in following
35:27 God's will. The structure of faith as I
35:31 take it is that you uh try to validate
35:34 it as much as as you can. You know, is
35:35 it plausible that Jesus rose from the
35:37 dead? Are are the accounts accurate? But
35:40 after you validate the oracle, the
35:42 religion, the god to your rational
35:46 faculties, you as a leap of faith are
35:49 willing to um take actions even if you
35:51 cannot see the full reasons of those actions.
35:56 So if you now transplant that to a kind
35:59 of AI oracle, I imagine you wouldn't be
36:01 comfortable with that. And so my
36:03 question would be, do you think faith
36:05 and
36:06 religion is just a deficient way of
36:08 living the human life even now
36:10 that's a big question so the most
36:12 beautiful part of the Bible for me is
36:15 the end of Job, the Book of Job, where,
36:17 much the same as the eagle of justice,
36:18 you have,
36:20 yeah, where were you when I was building
36:21 the cosmos, right, is what God says,
36:23 yeah, exactly, and these harms have
36:24 befallen Job, and it's this beautiful
36:27 poetic explanation of the limits of
36:30 human um capacity to understand God. Now
36:36 in that case, God looks to us
36:38 from the standpoint of some form of
36:41 self-development. In other words, like
36:44 um we have a relationship with God in
36:47 which we develop, and
36:49 that doesn't exist in the case of the
36:52 omniscient AI. And so it's much
36:54 more likely to form a kind of passive
36:56 relationship. I think it's
36:58 a close call, because we
37:02 do sort of like try to give ourselves up
37:03 to Jesus.
37:04 My life is not my own. This is what they
37:05 say. Right.
37:07 Right. But I think the fact that
37:10 like, um, there is some reciprocal expectation,
37:13 not to say we're equal to God, but like
37:16 there's some reciprocity sets a slightly
37:18 different frame. We also choose in a
37:19 real sense, right,
37:24 to uh engage in religion and like to to
37:27 sort of legislate upon ourselves that
37:30 ultimate question. And my concern with
37:32 the habituation mode of AI is we may
37:34 cease to choose like we may unknowingly
37:37 bind ourselves to a life of dependence
37:39 where we no longer are choosers.
37:41 Right? I see. I mean, it's not just
37:43 religion, right? What I'm trying to
37:44 highlight is that there's an entire
37:47 sphere of human activity that, I mean,
37:49 maybe you'll consider it all to be
37:51 deficient, but many serious
37:54 philosophical traditions don't, that has
37:57 as an epistemic mode following the
38:00 advice of an authority
38:04 whose legitimacy, kindness, accuracy, and
38:06 truth you have empirical reason to
38:08 believe in, even if you can't understand
38:10 the specific piece of advice, right?
38:12 Faith is one of them I gave. Yeah,
38:14 maybe the military is another one,
38:16 right? Where you don't know the full
38:17 reason that you are being given
38:20 orders. Uh arranged marriage might be
38:21 another one.
38:23 Uh, parental relationships, that's a
38:23 bit different because you're a kid there.
38:25 But do you see, like, what I'm trying
38:27 to highlight that there's an entire
38:30 sphere of human activity where you don't
38:32 trust and reason yourself on the advice
38:34 itself. Yeah. But you evaluate the
38:37 deliberator. You value the the advice
38:39 giver. Well, so I think you know were we
38:41 to be first principal reasoners about
38:43 everything, I think chaos would ensue. I
38:45 think we actually have to unthinkingly
38:47 accept quite a lot and not just to be
38:49 religious but to function in society. I
38:50 think we do that.
38:52 Um that is also the way in which I think
38:55 we should relate to tradition. In other
38:57 words, we should not use reason to
38:59 create tradition or to scrap
39:01 tradition anew. This is where I think
39:02 Mill gets into some trouble and where
39:05 Hayek's Burkean reverence comes into
39:07 play in a better way. Okay. So there's a
39:09 conservative strand that I very much um
39:11 embrace as it pertains to the epistemic
39:14 value of tradition. But um so that's one
39:16 point is like I totally agree with that.
39:19 The point about the military,
39:22 right? I was in the military. I think
39:24 Kant has a good framework for thinking
39:26 about the the laws that we give
39:28 ourselves. I mean he's talking about the
39:29 moral law and the categorical
39:31 imperative, but more broadly he's
39:33 talking about the idea that we can
39:34 restrict ourselves.
39:36 Yeah. We can be autonomous and not
39:40 heteronomous insofar as we rationally
39:41 choose to put something on ourselves. So
39:44 if I choose to join the military
39:47 and then in the military no longer get
39:49 to choose, I have to follow orders.
39:50 That's completely okay. That's fine.
39:52 That's autonomous. And in fact,
39:55 I can exit the military. I can hold the
39:57 officer above me to account through
39:59 court-martial. And so it seems to
40:03 matter very much whether we choose
40:06 to willingly sacrifice our autonomy.
40:07 I'm trying to think of other things like
40:10 you know we we choose to do jury duty.
40:12 We choose to do a lot of things that you
40:13 know sacrifice that.
40:16 Yeah. So just to be clear, not talking
40:19 about AI yet, talking about, like, human
40:22 interaction pre-AI. It sounds like you're
40:25 fine with evaluating
40:27 the legitimacy, let's call it, of an
40:29 authority, outsourcing partial
40:32 deliberation to that authority, meaning
40:34 uh following certain kinds of orders
40:35 from that authority without
40:37 understanding the full reasons if you
40:41 maintain the ability to evaluate it if
40:43 not fully and the ability to exit.
40:45 Right? Therefore, let's just transplant
40:47 that exact structure that you talked
40:49 about the military onto AI. Would you be
40:50 comfortable with that? Again, this is
40:52 why I set it up to be the
40:54 omniscient autocomplete, right? The
40:56 oracle that, given every test you've
40:58 thrown at it, has given you the best
41:02 practical decision. So, in this case,
41:04 would you be comfortable it just telling
41:06 you date Sally or Susan, and it says date
41:08 Sally and you say explain to me the
41:10 reasons and it explains to you some
41:11 reasons, but you still can't understand
41:13 the full picture. Just like the eagle of
41:15 justice, would you be okay outsourcing
41:17 decisions like that to it?
41:24 because like your reason, right, is kind
41:25 of perfected. Your reason is as good as
41:28 it can get. But again, I'm just assuming
41:29 that there's a limitation to human
41:32 reason, and that AI potentially can be
41:35 higher. And so if you're as
41:38 good as you can get reason-wise and you
41:41 have the ability to exit,
41:43 are you okay following that order?
41:46 I think, on a case-by-case
41:48 basis, I would be willing to do it. I
41:49 would hesitate a little bit on
41:52 marrying Sally. Um, one because I'm
41:54 married to Adrien, but um, no, but the
41:57 other is there's a um there's a critique
42:00 of utilitarianism
42:03 that Bernard Williams brings. Um, it's
42:05 called One Thought Too Many. And he
42:07 essentially poses this scenario in which
42:09 you have um, someone about to drown in a
42:12 rescue situation. And what he says is
42:14 that though the philosophers might want
42:18 you to do a calculation, you know, um
42:20 what you should do is go with your moral
42:22 intuition about that. You should just
42:23 act, you know, in other words, you
42:25 shouldn't actually run a calculation of
42:26 any sort.
42:28 And while in this case I'm not running a
42:31 calculation, I'm just kind of deferring,
42:33 I think on questions of love, yeah,
42:36 I think it would be one
42:39 thought too many to have an external
42:40 agent.
42:42 But this is how arranged marriages
42:44 in all the premodern societies
42:47 have worked. And okay, maybe if
42:48 love is the tripping factor, let's
42:49 say starting a company. Yeah.
42:51 Like, should I start company A or
42:53 company B? Let's set love aside because
42:55 I understand it's more subjective.
42:56 Would you be comfortable outsourcing
42:58 that decision if you were able to
42:59 evaluate the oracle in all the ways that
43:01 that we described?
43:04 Yeah. I mean and and maybe it's um
43:08 semantic but I would outsource and um
43:12 give consideration to that question and
43:14 you know if I accept the thought
43:15 experiment maybe I need to give very
43:17 little consideration because it's like
43:19 already answering for me not just in
43:21 general like it's not just saying what
43:23 is the most profitable you know
43:24 opportunity here but it's actually
43:27 answering at the level of me I think I
43:29 would be comfortable using it I mean I
43:30 would want
43:32 a system of safeguards in place and
43:34 provided that was in place, right, I
43:37 would take advantage and in that way I
43:39 would be competitive with others that
43:40 were that were doing the same,
43:42 right? And this is actually very helpful
43:44 this thought experiment teasing it out
43:46 that the safeguards are you want to make
43:48 sure your reason
43:50 goes as high as it can go, right, that
43:52 it's constantly in use,
43:54 that you're
43:57 as practically sharp as you
43:59 can be, trained by the AI. You want
44:01 the ability to exit and maybe compare
44:02 systems. You want different points of
44:04 view. But if those conditions are
44:06 satisfied and again I'm not suggesting
44:08 that Christian faith is like this,
44:10 you're willing to make the leap of faith
44:13 and outsource certain decisions.
44:15 Yeah. The thing I own is the means
44:18 hierarchy towards human flourishing,
44:19 in a way, and the deliberation
44:22 therein. In other words, I can use tools
44:26 as an instrument to attain my goals, but
44:28 I don't want my goals to be set for me.
44:30 And so if one of my goals is to um start
44:33 a company, I can use AI to help me
44:36 determine what that company should be,
44:38 but I don't want it to set the goal for
44:39 me. In other words, I don't want to be a
44:41 blank canvas and just say, "What should
44:44 my life be?" Right? However, let's say
44:47 you had it as one of your goals. Uh I
44:51 want to build a company to uh let's say
44:52 just make a lot of money, right? And I
44:56 think you and I both agree that a purely
44:59 mercantile life is not the best life.
45:02 Wouldn't it be better if the AI I mean
45:04 didn't force you to add another end
45:08 but force you into a journey such that
45:10 you discovered the end of being
45:11 mission-driven and helping others.
45:13 Does that make sense? So, it's not the
45:15 AI. The AI isn't saying, "No, no, here's
45:16 the real end you should go
45:19 for." The AI is, but the AI is also not
45:21 optimizing on your end that you told it
45:22 to optimize on.
45:23 Does that make
45:24 sense? Wouldn't that be better?
45:26 Yeah, it does. I mean, it's a kind of
45:29 like adult version of the tutor in Emile,
45:31 honestly because you're sort of setting
45:34 a configuration for this child, in my
45:37 case, me, to do development. Now, that
45:39 presupposes that the AI cares about my
45:41 development, right? It also assumes that
45:43 the AI understands development in the
45:45 way that I understand it, which is to
45:47 say a process of self-direction that I
45:50 do. And even then, I think there is this
45:53 nagging question of that has been
45:55 externally computed. I am now the agent
45:57 of the AI. I'm the agent of an AI that
46:00 appears to be highly benevolent and very
46:03 focused on a kind of Humboldtian,
46:04 Millian vision of human
46:07 flourishing. But the AI in your thought
46:10 experiment is determining my end for me.
46:12 It's saying that I should be developed
46:15 in this way.
46:18 Yes. But crucially, it sets you up on a
46:21 journey such that
46:24 you take on that end yourself. Does that
46:26 make sense? Yeah. So it's not like
46:27 beating you over the head saying, like,
46:29 money-making bad, money-making bad. It's
46:32 for example uh going through an IPO uh
46:33 and then giving you a business decision
46:34 that makes you lose a lot of money and
46:36 realize the relative
46:38 worthlessness of money.
46:42 Yeah. I think a good analogy here is the
46:43 um it's parenting, right? Parenting.
46:45 Well, another good analogy is the state.
46:49 So like do you want to have a system of
46:53 laws? Do you want to have a state that
46:56 views its raison d'être as being
46:58 developmental like that tries to
47:01 maximally endow you with autonomy? Or do
47:03 you want to have a state that is more
47:07 like a night watchman and preserve space
47:09 in which you can experiment and try
47:12 things and learn yourself? The latter
47:15 tends to have far fewer risks of
47:17 paternalism, right? because it genuinely
47:20 is not the role of the state in this
47:24 case to uh inculcate this habit in me.
47:27 And I I think that's a foundational
47:28 position I would take. In this case,
47:32 we're saying could AI fill that like
47:35 very benevolent paternalistic role.
47:36 I'm not asking could it fill it. I'm
47:39 saying let's assume it could. Would you
47:41 be happy to have it? Right. Right.
47:43 Because obviously, you know, but neither
47:46 you nor I believe that current AI can do
47:48 this effectively at all. Yeah. And maybe
47:50 even in our lifetimes, we won't have
47:51 this. I'm asking you the philosophical question.
47:54 If you take the premise
47:56 that it can do this, would you be okay
47:59 with it? I.e. shaping your ends in a way
48:02 that you wouldn't agree with now, but in
48:04 some sense is the right answer to the
48:06 human good, because you also
48:07 don't want to go completely relativistic
48:08 and say there's no human good
48:10 whatsoever, right? It's it's what
48:11 whatever ends I want to have now.
48:14 Yeah, I tentatively accept it insofar
48:19 as the AI was setting up the
48:21 maximum space for me to have this
48:23 Siddhartha-like journey or whatever, you
48:25 know this like developmental journey. If
48:28 AI was scaffolding that for me, then
48:29 it would be highly consistent with uh
48:31 the idea of human flourishing and
48:34 especially if I'm um directing the AI to
48:36 do so. In the case where AI is doing
48:38 this kind of surreptitiously, I'm a
48:40 little less enthused. But I think I'm
48:43 trying to experiment with that as a
48:45 completely unrealistic framework,
48:47 but a thought experiment that could
48:50 support...
48:51 you don't have a philosophical problem
48:52 with it, right? You have engineering
48:53 doubts or something.
48:56 I have practical problems with it. Um
48:57 beyond just engineering. But I think
49:00 the philosophical premise, like,
49:01 I tentatively accept.
49:04 I see. So obviously uh none of our AI
49:06 systems today are anywhere close to this
49:08 omniscient autocomplete. Um what do you
49:10 think our conversation and your thinking
49:13 around autonomy has to offer engineers
49:15 today and builders today in building
49:17 systems that support and enhance autonomy?
49:21 Well, I want to say that the mere fact
49:23 that AI can't do the omniscient
49:27 autocomplete thing is only part
49:29 of the story. And I think this really
49:32 it's a really interesting um notion that
49:34 there is on the one hand the kind of
49:36 like epistemic question of like can it
49:38 do it right we agree it can't do it the
49:42 other question is why might we feel as
49:45 though it can or we should want it to
49:46 have that role in our life and that's a
49:48 psychological question. So there's kind
49:49 of the epistemic one and the
49:50 psychological one.
49:52 I'll make it really tangible here with
49:54 an example. So, a year or two ago, there
49:57 was a 42-year-old guy named Victor
49:59 Miller in Cheyenne, Wyoming who ran for mayor.
50:03 And what made his mayoral bid unique
50:06 is that he ran as the meat avatar,
50:09 essentially, of ChatGPT. In other words,
50:11 his pitch was, I'm going to run for
50:12 mayor, but I'm going to turn around
50:14 every question that I get asked, I'm
50:15 going to tell ChatGPT.
50:16 Like the omniscient autocomplete that
50:17 I suggested.
50:20 Yeah, exactly. He didn't win, but it's
50:22 interesting for a few reasons. One is
50:24 that it's potentially prophetic. Like we
50:26 may have AI playing a major role in
50:27 ruling, right?
50:29 That's one reason why it's interesting.
50:31 The other reason it's interesting is
50:33 that he thought that this was a good idea.
50:35 And I don't think he was making an
50:37 epistemic claim. Like I don't think he
50:39 was deeply analyzing what AI could and
50:42 couldn't do. I think he believed that it
50:44 was a good idea. And this is again a
50:46 psychological question. Like in this
50:49 case, we want to believe that all the
50:51 blood and treasure we spill on politics
50:54 can be solved by a ruler that has access
50:55 to truth that's authoritative, that's
50:58 seemingly impartial, neutral, right? And
50:59 so I think we really have to keep in
51:02 mind that like your scenario is far out,
51:04 but it almost doesn't matter. Like
51:07 people right now believe that AI should
51:09 tell them what to do. There are Claude
51:12 boys. There are literally teenagers who
51:14 call themselves Claude boys who will
51:16 wake up and they will do what Claude
51:18 says and not otherwise. Really?
51:20 Really? So they So
51:21 what do they do? Are they are their
51:22 lives good or?
51:26 Uh no. But I mean on what basis, right?
51:27 I mean it gets back to this
51:29 philosophical question, right? Are their
51:31 lives good in so far as they make fewer
51:34 dumb teenage errors? Probably actually.
51:35 And the point is like this is a great
51:37 litmus test for like do you think this
51:38 is a good idea?
51:41 Yeah. Right. I have been in closed door
51:43 you know room like in rooms with people
51:46 who are really high up at the AI labs
51:48 they tend to be effective altruists and
51:52 they say that we are being rebels
51:55 foolish rebels if we don't listen to the
51:58 AI they they truly say this and it is because
52:00 well it's religious language right we're
52:02 we're like rebellion, like the
52:04 fallen angel right it's I'm like how
52:06 dare you rebel against God right
52:08 exactly how dare you you silly you silly
52:11 rebel And that is because they have a
52:14 view that the the things that we do
52:16 should be viewed purely through the lens
52:19 of of uh through a consequential lens.
52:20 In other words, they don't have a thick
52:22 notion of what it means to be a human.
52:24 And if you don't have that, why not take
52:26 the optimal path, whatever that means. Right.
52:28 Right. This is a major issue.
52:30 And I think both what you and I are
52:32 suggesting is
52:35 the optimal path ceases to be optimal
52:38 when you only think about optimality.
52:40 Like like like when you only care about
52:42 date Sally or date Susan when you don't
52:45 care about the autonomy the agency the
52:47 self that is willing that if you give
52:49 that up your life is ipso facto going to
52:51 be bad right
52:53 what is the point of optimizing a life
52:55 that ceases to be your life to live
52:58 right yeah but in the counter and this
52:59 is what I was trying to show with with a
53:02 thought experiment is that
53:05 AI if it does become and it's clearly
53:07 not there yet this omniscient
53:11 uh uh uh practical reasoner, it
53:15 potentially can help us direct our own
53:17 lives better.
53:19 Yeah. Yeah. Right. So that's you're
53:19 willing to grant it.
53:21 Yeah. And well, so I think the mechanism
53:23 why where I would be much more
53:27 wholesomely uh uh embracing of this is,
53:31 you know, if we built into AI the goal
53:35 of um uh fostering better
53:36 self-direction, right? Right?
53:37 And this is a little bit of this is kind
53:39 of where we were getting to in the
53:41 scenario, but like how do you do that,
53:44 right? So one way you could do it is you
53:46 kind of reject the idea that AI should
53:48 be an answer machine ever and always,
53:49 right? That's what it kind of is today.
53:50 We ask it a question. Doesn't matter
53:52 what question we ask, math question or
53:55 what is justice, just here's an answer,
53:57 right? In reality, the way we think
54:00 about questions is we have a question.
54:02 It invites other questions. There's kind
54:04 of a ball of questions around the
54:06 central question. And then we have to
54:08 kind of do a navigation of these things.
54:11 We have to balance that search for you
54:12 know broad understanding with a need to
54:14 act in the world. So we have to reach
54:16 some kind of an equilibrium that
54:20 balances you know both desires. Okay. If
54:24 you had an AI system that could guide
54:26 that that could spur that could raise
54:28 questions that could help you make
54:30 judgments that would get right at the
54:33 core of what um self-direction and
54:35 deliberation entails. Yeah.
54:37 And I think there's no reason why, oh,
54:39 we can't do such a thing right now, right?
54:42 But the difference is that tends to open
54:44 up the possibility for self-direction.
54:45 It raises questions. It doesn't close
54:47 off and just give you an answer. In
54:49 other words, the problematic use case is
54:51 the autocomplete, right? Like we should
54:52 be very concerned when anything
54:55 approximates autocomplete. When it when
54:57 AI is used instead as a provocateur,
54:59 instead as a like a raiser of questions
55:02 and as a helpful tool for deliberation, then
55:03 yeah,
55:05 you know, that's wonderful. An example
55:07 here is uh I'm talking with uh some of
55:08 my friends who are like junior
55:11 consultants or or or investment bankers
55:14 and they're just using it uh to replace
55:16 their own work, right? And that's the
55:17 kind of problematic case or or you know
55:19 the kid who uses GPT to write his own
55:21 philosophical essay. The way I'm using
55:23 GPT right now is that I'm using as a
55:25 live tutor to kind of read the text
55:27 together with me and and ask different
55:29 questions. And so that's the two
55:30 different paths you want to put in front
55:33 of us. And this not EA, not existential
55:35 risk. This is the true challenge that's
55:35 ahead of
55:37 us. Although I will say I'm not sure
55:39 that what analysts do at consulting
55:41 firms is that valuable. And so they confidence,
55:43 right? And so like if they can make a
55:46 PowerPoint, great. But I think ideally
55:48 what would happen is that that freed up
55:50 space for them to do more, you know,
55:52 things whether that's higher level
55:54 strategic thinking or uh reading
55:56 philosophy in their spare time. But the
55:58 point is that that goes back to the
56:00 benefits of offloading. There are some
56:01 things that are very good to offload and
56:03 automate. Um but we should be very
56:06 careful not to offload and automate the core
56:09 deliberative capacities, right? And uh
56:12 hopefully our audience can see why the
56:15 answer even if AGI is around the corner
56:18 to just focus on alignment and building
56:20 AGI and for AGI to sort of focus on on
56:23 these problems of autonomy is short-sighted
56:26 is because by definition we are the ones
56:27 that need to be doing this or or another
56:30 way to frame it is what alignment means
56:32 in our kind of philosophical framework
56:35 is to build AGI that enhances autonomy.
56:36 Yeah. Right. And so that well that's why
56:38 even if AGI is around the corner, we
56:40 have to be building AI with this
56:42 fundamental tension of autonomy and
56:43 dependency in view.
56:46 Yeah, Nick Bostrom is correct that
56:48 philosophy is on a deadline. He is dead
56:51 wrong about the role of philosophy in
56:53 thinking about the deeper conception of
56:55 human good that AI needs to you know uphold.
56:57 So we talked about the existing
56:59 landscape of philosophical schools. We
57:01 talked about what you actually think
57:03 matters which is the autonomy question
57:05 that people are overlooking. uh and how
57:08 it plays out in AI. But now I want to
57:11 dive deeper into autonomy itself. Okay.
57:12 So you described in your writing
57:15 autonomy as the central good. Okay. Be a
57:17 bit more precise here. What do you mean
57:18 by that? Is it sufficient for the good
57:21 life? Is it necessary for the good life?
57:23 Um would you ever trade it off for other
57:24 goods? Tell us about that.
57:27 Yeah. So I think it is necessary to live
57:29 an autonomous life. It's something that
57:31 I think develops like a muscle. So I
57:32 think we try things out. We self-direct
57:34 maybe badly. Like certainly as kids we
57:36 do that really badly. We don't even
57:38 select our own projects essentially and
57:39 then we develop and we get better and
57:42 better and better and as we do that it
57:44 becomes more and more an important
57:46 contributor to our happiness. In other
57:48 words, we value it more. It becomes more
57:50 and more central to how we think about
57:53 the what pleasure even consists in. And
57:55 so it's this developmental thing that
57:58 sort of happens through our life. And
58:00 that for me is why it makes it the
58:01 central good because it is the thing
58:05 that unlocks our ability to know our own
58:07 selves, our own gifts, to develop those
58:10 gifts and to use those gifts to live the
58:11 life that we want to live.
58:13 Right? However, it's not sufficient for
58:16 a good life because one can imagine uh
58:18 this is another edge case where the the
58:21 fully autonomous man or woman who is
58:22 able to go through life like this but
58:24 just fails in all of his or her
58:26 endeavors, right? if all family dies
58:28 like this is the Aristotle example in
58:29 the Nicomachean Ethics about Priam
58:32 so so this is not the only good
58:34 I agree right yeah I agree it's not it
58:36 has a relationship to other human goods
58:38 but is not the only good and I agree
58:39 that it's not sufficient I also don't
58:42 think that autonomy um presupposes
58:45 choosing well and so I think that is
58:47 actually a consequence of having
58:50 autonomy that you can choose that you
58:52 have to allow people to choose very
58:55 badly in other words people choose a
58:58 self-directed path that is very harmful
59:00 to them and you have to let them and so
59:03 in other words like uh you know it
59:05 is not a prescription I say
59:07 causally efficacious because I think it
59:10 on balance tends to lead to happiness
59:11 but it certainly is not it also comes
59:13 with the weight of responsibility I mean
59:16 there's a lot to be said about the the
59:19 the burden that one feels when one can
59:20 freely choose
59:22 yeah well I I want to push you on on
59:24 that point because uh autonomy as you
59:26 described It is one of my like as I live
59:28 life like one of my central goods which
59:30 is why I'm doing this here and not not
59:33 in a more structured setting.
59:34 But I was really surprised when I
59:38 entered the workforce that most people
59:40 don't seem to like it. Like when I
59:43 started managing people out of college,
59:45 I structured their work environment how
59:46 I would like to be structured, which is
59:47 this is our goal. I'm going to explain
59:49 to you the reasons why we're going after
59:51 this goal. You choose how you you get
59:53 there. You just need to get there.
59:56 Yeah. They mo many people hate that and
59:57 they they won't say I want less
59:59 autonomy. They won't frame it there.
60:01 They would usually frame it that I want more structure in my life. So they want
60:05 to be told what to do. They want Yeah. And so so how do you reconcile that? Is
60:09 it just people like you and me who value this and it's subjective or Yeah. This
60:12 is one of the most worrisome threads that my wife will raise with me because
60:16 she'll be like, "Brendan, it may be the case that you are just
60:20 like an outlier that you care about this in a way that other people don't."
60:24 Intuitively, meaning you may be like rationalizing it, but you may actually
60:28 just care about it deeply. Intuitively, in the way that you might like
60:30 chocolate, that doesn't mean everyone else likes chocolate.
60:32 So, we have lots of individual variation. I totally allow for that that
60:36 people would like it more or less based on their, you know, sociobiological
60:41 kind of path. Um and I think what I mean by that is that it could be like almost
60:46 epiphenomenal uh that somebody has certain genetics and they have certain
60:51 disposition predisposition. Okay. But I gen generally think that it's much more
60:56 determined by the by the conditions in which we live and by the way in which
61:00 we're habituated as a consequence. The transition that I'll mention here is
61:05 that in um aristocracy, one of the things that one of the
61:10 benefits was that people knew their station. They knew their role. Um I'm
61:14 not advocating for that. Obviously, I'm not advocating for a kind of hierarchical
61:18 system like that. But that was one of the silver linings, right? In America,
61:24 when Alexis de Tocqueville came, he observed a generalized anxiety. He called it
61:28 inquiétude. It's a word in French for like anxiety without an object, a
61:32 particular object. And why he thought it existed was because there was no one to
61:36 tell you what to do, you know, like you you had to make your own way. And so you
61:41 look to the majority to tell you what to do, right? You look to the state. Like
61:44 these are the kind of pathologies of democracy is that you sort of fill the
61:46 democracy is that you sort of fill the void and it appeases you especially if
61:48 void and it appeases you especially if you take religion away. By the way,
61:49 you take religion away. By the way, religion grounds you and it minimizes
61:51 religion grounds you and it minimizes this.
61:52 Well, it's the ultimate form of telling you what to do, right?
61:55 And it grounds you and the family life can ground you as well. Lots of things
61:59 can ground you, but absent those things, you kind of drift and float. In America,
62:03 it still was the case that people were very self-directed. You're, you know,
62:08 the classic example of this is the Jeffersonian ideal, which kind of like
62:12 thought of autonomy as being the farming life, right? It's like you grow up on a
62:17 farm. My wife's family has a ranch. They're ranchers. And um it really is
62:20 the case that like you are very self-directed. So, I get it. And I
62:26 wonder what the consequence of being a nation of farmers. Like almost
62:31 all were essentially entrepreneurial farmers. Yeah. I mean
62:35 like 85% of free Americans were were that way and now we're a nation of
62:38 employees, right? It's given way to being you know
62:44 industrial revolution caused us to enter employment. We now are subservient to
62:50 process and to people in a way that we were not. And I say this to bring all
62:54 this together is that the conditions either the the regime like now we're in
62:58 a democracy, no one tells us what to do. That's very scary for people. But also
63:04 the fact that we we move to an industrial system in which people tell
63:08 us what to do constantly. Like can you take vacation? Like no, you know, not
63:12 today. Like these are kind of limits on your self-direction. And um and as a
63:18 result, we've now come to like it to want it less,
63:20 right? It's Stockholm syndrome almost. of sorts, right? Yeah.
63:23 You mentioned Stockholm and it makes me think of uh this uh uh case of Germany,
63:28 which I fully understand Stockholm is not in, just so people don't think I'm
63:32 an idiot, but um East and West Berlin were two radically different systems, as
63:35 different as it gets. You had the same family, same genetics split across the
63:39 wall, right? And the East German system under Soviet
63:44 control uh was habituated to follow orders. The West German was much more
63:48 like the general west and and less so. And then during COVID, like much more
63:53 recently, you had a very different response. You had much more obedience
63:57 among the people who had been habituated by the East German system.
63:59 Yeah. Why I say that is it appears that the
64:03 habituation is longlasting. Like you can grow up under a system in
64:07 which you're told what to do and then maybe for the rest of your life you are
64:11 inclined to do what you're told. Whereas if you grow up in Texas, um you may be
64:15 inclined for the rest of your life to not do what you're told.
64:18 Wait, wait, hold on. But that doesn't seem to be such a strong response for
64:23 the position that autonomy is the central good whether you appreciate it
64:26 or not. Right? Because what what you were saying is that nurture can greatly
64:30 change how people valued it or not. Right.
64:32 But I thought you would you would want to argue for the position that
64:35 regardless of what like whether you valued it or
64:38 not, it is valuable objectively. Yeah. I don't want to base the idea that
64:42 it is constitutive of a flourishing life on the idea that it is widely used in
64:50 practice or even that it is like um uh valued equally by all because again the
64:54 mechanism is one of habituation so it's not going to be valued equally by all
64:58 and I actually think having highly autonomous people is a total historical
65:00 anomaly. Oh yeah totally. Yeah. I mean, for most
65:04 of human life, we've either been in a culture with slaves or we've been in,
65:09 you know, kind of hierarchical cultures that really don't have the same
65:12 presuppositions. So, I think it's anomalous and it's precious,
65:16 but that's not why I think it's constitutive of a flourishing life. I I
65:22 simply think that like it is consistent with it is the way in which we discover
65:26 our nature, the way in which we express it. It is one of the things that humans
65:31 uniquely do is to use reason to guide action to develop ourselves and um and
65:36 it tends to also lead to happiness. Okay. So, so what is the what is the
65:41 reason like what is the reason that it is constitutive despite the fact that
65:45 people can be habituated out of desiring it? Yeah. Well, that's why it's because
65:50 we have a nature that the only way to discover our purpose or our highest end
65:55 is through this autonomous experimentation is through the
65:58 development that we do through self-direction. What about again I'm
66:00 going to go back to religion here because I think this is the this is the
66:05 counterpoint here. What about someone who surrenders themselves completely to
66:11 Jesus and claims that through that that they that they found their true selves
66:15 or I mean in the Buddhist case they surrender themselves completely to the
66:18 master become completely obedient to the master and through that they think that
66:21 that's the route through which their their development becomes
66:24 well I think in so far as they surrendered that is a pretty powerful
66:28 act of self-direction you know in like the making the choice I I do I do think
66:33 that is pro it's difficult to square the autonomy lens with say Islam that views
66:38 that views um you know the I mean submission it's in the name right
66:42 yeah as submission but um but here's what I'll say about that I think you
66:47 know I think one of the most critical points one can make about the dynamism
66:51 in the west is that it uh it is a tradition that has
66:59 been formed through different visions of the good life all of which allow for
67:04 some individual choice about what the good life is. And to to be to be more
67:07 specific on that, yeah,
67:12 one tradition, probably the earliest western tradition is from Homer. It's
67:16 from the Mesopotamian epics. It's from the Bronze Age, and it's the idea of the
67:23 heroic life, the life of of adventure. Achilles is a good good example of this.
67:32 The Greek response to that is a life of science and contemplation and philosophy
67:36 and things like that. That's a life that is to say a life in which Achilles is
67:39 replaced by Socrates like the contemplative life. And those are two
67:43 totally different you know visions of the good life. A third comes in the
67:48 Hebrew and biblical tradition in Jesus which is a life of pious devotion and
67:51 centers more around the family and other things. I would say none of these are
67:54 commensurable. They're all different visions of the good life, but they form
68:01 a kind of tension that one I would say the overlapping areas is some measure of
68:04 individual choice as to what the good life is. Like it's not the community's
68:09 job to tell the individual. The other thing I would say is that
68:11 even for the Christian tradition, even for the Christian tradition in so
68:15 far as we get to choose that path and we navigate that path towards Jesus, there
68:20 is a submission element of it. But again like the freedom to be unfree is I think
68:25 a a satisfactory use of freedom to to well because otherwise you wouldn't be
68:29 able to enter enter into contract right we bind ourselves and a lot of the you
68:34 know most important American figures grappled with these and synthesized them
68:38 in their own way like Lincoln read Shakespeare and the Bible for example
68:41 like he's reading about these things and forming his own sort of local synthesis
68:47 and so I just say that you know we would do well to preserve those
68:51 tensions. One of the things I worry about with AI is that the dominant
68:57 schools seek to sort of come in with an answer that's like no this is the one
69:00 true thing. It's actually those good life things those are quaint
69:05 really what it is is maximizing utils. Wait, but you're coming in here and
69:07 you're saying autonomy is the central good
69:11 precisely because it is the one that leaves open the possibility of deciding
69:15 the good life. In other words, it preserves the plurality. It preserves
69:19 that space for the individual. I think it's it's a critically important thing
69:23 because we could just say, well, isn't isn't it again isn't it just like
69:26 another vision of the good life. But it is very different to say that a vision
69:31 that preserves one's individual ability to question the ultimate question for
69:37 themselves. That is different from one that tries to reduce and replace and say
69:40 we have found the one true answer. Right? And just to be clear here, when
69:46 you say autonomy, is it a very uh uh thin kind of simplistic I'm the one
69:51 willing it, whether that's the right thing or the wrong thing, or it's
69:54 something more like Kant where I am willing the thing that also is in like
69:59 rational and good and um there's a subtlety there in because
70:06 so it's the former in the sense that I believe that one must deliberate but not
70:13 that they need to necessarily deliberate well. Nor am I stating that it is a
70:18 moral autonomy like I'm not suggesting that what autonomy means is to give
70:21 oneself the moral law which is to say the categorical imperative. That's not
70:24 what I'm saying. I'm saying that we must have a capacity for reasoned
70:29 self-direction. We can do it wrong but we we must you know preserve that space.
70:33 The the reason I say it's complicated is because I do think Kant actually enters
70:37 into it. In other words, I personally draw a freedom maximizing principle from
70:43 Kant. And this freedom maximizing principle is what gives people space to
70:47 be autonomous. These are two different conceptions of
70:51 right. I see. So it's not just the simple notion of like a whim. Like let's
70:56 say I have a whim to jump off the building right now. Yeah.
70:59 Because no deliberation went into that. Yeah. But it's also not the, you know,
71:04 if I really really deliberate, like it would be good to steal Brendan's wallet
71:07 right now. Yeah,
71:08 that would count. It would count.
71:10 It would count because it's even though I didn't deliberate well, I made a fair
71:14 attempt. And your response was, "If I do that enough and I and I continue
71:17 that enough and I and I continue deliberating, I
71:17 deliberating, I We have to make mistakes."
71:21 Yeah. Like that's a dumb thing for you to do to want to steal my wallet. It's
71:25 over there, by the way. But um but we have to make mistakes you know like I
71:29 think part and parcel of autonomy is doing dumb things and um you know it's
71:34 it's simply the wrong standard to suggest that we must always do things
71:37 rightly. Um let's say someone lives an autonomous
71:42 life. What else is missing to get to the good life or even the best life?
71:48 It's a good question. Um, one point that needs to be said is that what I've
71:52 outlined so far is fairly individualistic, but I think the way in
71:57 which we learn and experiment is profoundly social. And so we learn from
72:02 others, especially people who are, you know, above us in terms of
72:06 aspirationally, but not so far from us that we can't sort of learn. And so I
72:10 don't want to suggest that we're sort of like operating in in in a solipsistic
72:16 uh way or an isolated way. Um that's important to say. The other way I
72:20 think about this is like how do I want to educate my kids? I want them to be
72:24 autonomous but I also want them to be autonomous and virtuous. In other words
72:31 like I view the the role of the state as being a procedural, a formal role to
72:37 preserve freedom um to minimize coercion that sort of thing um to provide for
72:42 security. But I view the role of you know I view the good life as being
72:47 something more than autonomous and in fact like virtuous as well. Does that
72:50 make sense? Sometimes you need to trade off autonomy
72:55 and virtue. Right? One example would be well okay if I want I want want to jump
72:59 out this building right now uh after poor deliberation. uh there's a good
73:04 case where you should restrain and limit my autonomy to preserve let's say my
73:09 bodily function in order so that that I can be more autonomous in the future but
73:13 also potentially to be more virtuous or to not kill a person right if and so so
73:18 do you allow for a trade-off between autonomy and other goods and this
73:23 obviously a political question as well one extreme idea would be no paternal
73:27 paternalism is is allowed you can never interfere with autonomy like there are other
73:31 goods but autonomy can't be traded off for other goods.
73:35 Yeah. Yeah. I mean, in general, I don't think
73:39 that the gains to welfare from paternalism outweigh the losses to
73:44 autonomy. And so, I would take a very strong um position on
73:52 um uh not applying the tools of state in particular to paternalistically
73:58 uh deliver welfare gains. Um and I would apply that to things like uh you know
74:04 UBI for example um as well as many other many other areas but I think that um we
74:12 you know security is a is a good example of a kind of a of a vital interest as
74:17 Mill would say that it's not clear that autonomy has a lexical priority over
74:21 security like security seems preconditional to autonomy we have to
74:26 maintain security. So, so your answer is uh it's it doesn't have lexical priority
74:32 i.e. a trade-off is sometimes worth it. Yeah.
74:35 However, people overly value the benefits from uh from from welfare gains
74:40 and under value the dangers of even removing a little bit of us. That that
74:42 would be your answer. Yeah. And this is a fundamental conflict
74:46 because we value convenience quite a lot. I mean this is this gets back to
74:52 soft despotism from Tocqueville is that we welcome the incremental convenience and
74:57 from a state that is farseeing and mild you know that will offer this it's happy
75:00 to offer it we're happy to welcome it or or from AI right
75:04 or from AI and uh from any sort of you know neutral or centralizing force but
75:10 what do we give up well we give up the vigorous use of our own capacities we
75:14 become enfeebled we become like a flock of timid industrious animals I mean Tocqueville has
75:18 beautiful language around this and so this is a fundamental issue particularly
75:22 in democracy where again we otherwise have kind of inquiétude because we have no
75:26 one to tell us what to do. So I think like we staged then the critical issue
75:31 of our time which is we now have built something that can deliver the
75:35 incremental convenience that can offload our deliberation. We are going to
75:39 welcome it into our life. We're going to be tempted more than we've ever been
75:44 tempted before and we must find the resources within us to resist. Yeah, by
75:49 the way, this is uh as you know, I just uh interviewed uh the founder of Alpha
75:52 School and this is where you send your kids and I shadowed them for a
75:57 week uh and I was so excited by that project because I mean the so for our
76:02 audience the quick pitch is that you learn basically your entire K-12
76:05 learn basically your entire K- to2 curriculum instead of six eight hours a
76:06 curriculum instead of six eight hours a day with homework in two hours. You're
76:10 like okay that's interesting. But what I found when I talked to the kids was that
76:14 the greatest benefit was a fundamental change in their character
76:18 that I am capable that I am autonomous in this way
76:21 and which is a fundamental difference from how all the kids are being taught
76:24 today which is this kind of teacher in front of a classroom lecturing the kind
76:28 of Prussian model to create industrial slaves industrial like employees
76:32 essentially right and so so that like I I almost think
76:36 that they're underselling, underpitching what they offer: a
76:41 fundamental building of their own character and not just like
76:45 being able to to to cram. That's precisely right and I that's exactly the
76:50 benefit that I see in my own children when I went there to like talk about
76:53 entrepreneurship to eight and nine year olds and I was just so moved by the
76:58 extent to which the kids had individuated they were highly individual
77:04 and very autonomous and very um high agency as well and um and I fear I just
77:11 to make it kind of a a cautionary point um I fear a kind of divide
77:16 Right. In other words, I see the alpha model and I see the vision by the way is
77:20 to get that out there and I really hope it I really hope it can can scale um
77:25 massively. But I also understand that if one's
77:29 relationship with technology is one of passivity, one of dependence, one of
77:35 doom scrolling um then we become highly dependent from from the beginning
77:41 and we almost create two classes of people. We almost create one individual
77:48 who um for whom it is the best time in history to be a six-year-old and one
77:54 individual that is um on the path to become an NPC. Right.
77:57 And I I I think we we must avoid that. Yeah.
78:00 And in some sense compared to early America, we're already there with the
78:04 employee with the employee versus uh versus like gentleman farmership. So
78:08 what what you're saying is that there are a lot of uh things that don't seem
78:14 political which build the autonomous muscle or weaken the autonomous muscle
78:18 in civil society. That's right. Education being one of them, workplace
78:20 being another. Okay, we've talked a lot about the intrinsic
78:25 importance of autonomy, how it's constitutive to a good life.
78:28 I now want to move on to the extrinsic benefits of autonomy and why it's
78:32 important to defend autonomy. Not for why it's gonna make you live a good
78:35 life, but for a flourishing civilization. Okay. So, I know you've
78:38 been dying to talk about Hayek, so now's your chance.
78:43 Yeah. So, um I'm kind of a Hayek stan, but the reason is because I
78:48 think he's um desperately in need of being revived for the for the AI age. Um
78:56 where to start? I mean, he my favorite book of his is Constitution
79:02 of Liberty. And in that book he makes a consequentialist case for liberty. So he
79:06 says that um liberty is useful. And the reason he
79:12 does this is that when you argue for something and you start from axioms and
79:18 you deduce from axioms, you just invite the challenge, well I don't agree with
79:20 your axioms. So like I don't care if your deductive reasoning is good. I just
79:23 don't agree with your axioms. So he doesn't do that. He makes a
79:26 consequentialist argument. I say that because I don't think that that is the
79:30 only reason why Hayek thinks liberty is dear. But he makes a consequentialist
79:34 argument and what he concludes is that we should have a minimization of
79:38 coercion. What is coercion? Coercion for Hayek
79:44 is a kind of configuring of the decision space such that you do the bidding of
79:50 another because it is the lesser of two evils. Basically, you have your decision
79:55 space so so so configured by another that you no longer are uh taking action
80:01 on your own plans but you're taking actions on the plan of others. Um the
80:07 steps to get there are are many but essentially what he says is that liberty
80:12 is useful because it facilitates the use of knowledge in society and that
80:16 knowledge is what allows the anonymous person to attain their unknown ends. So
80:21 that's a consequentialist view. How does it do that? Well, what he says is that
80:27 most knowledge is practical. It's primordially practical. It's not the
80:31 explicit semantic knowledge that we write down. So like the knowledge in
80:34 science, what people usually think of as knowledge, he would say is the tip of
80:38 the iceberg or the the droplet of the wave that's above the ocean of knowledge.
80:42 What does he mean by knowledge being practical? He means the dispositions,
80:47 the habits that each of us has. The way an entrepreneur thinks about an
80:50 opportunity, the way a diplomat sizes up a room, the way we ride a bike, those
80:54 are all things that are sort of locked up and inside of us and they're either
81:01 inarticulated or inarticulable. And so we have this, it drives our
81:07 action, but we can't share it. The way in which we can share it, the best way
81:11 we can share it is through the market. We have a low bandwidth mechanism called
81:16 prices, money prices that allows us to share our knowledge because as we try to
81:18 share our knowledge because as we try to do things, as we formulate ends, try to
81:20 do things, as we formulate ends, try to achieve those ends, we release bits of
81:22 achieve those ends, we release bits of this knowledge. Of course, I never like
81:24 this knowledge. Of course, I never like excavate it and share it with you, but
81:26 excavate it and share it with you, but my action is colored by it. So, it
81:28 my action is colored by it. So, it releases this knowledge. And as this
81:30 releases this knowledge. And as this happens in parallel across the entire
81:32 happens in parallel across the entire world, we pursue our independent plans.
81:35 The market allows us to kind of equilibrate those plans. Preferences change constantly, so the equilibrium should be thought of as an asymptote, not a fixed Walrasian equilibrium as traditional economics would say. But it allows us to do that. We share knowledge, and this is what gives us a way to benefit from knowledge that we don't possess.
81:59 Part of Hayek's consequentialist argument for autonomy is, as described in the second chapter of the book, the creative powers of a free civilization. What do you make of the tremendous creative powers of unfree civilizations, as well as unfree peoples? Let me give you a few examples. The pyramids, the Great Wall, built basically by slaves. Virgil was writing propaganda, right? Pro-empire propaganda. Dostoevsky wrote in exile, imprisonment, great financial distress. Most religious traditions had these unquestionable truths. And as you know, early modern science and philosophy flourished under severe censorship and sometimes even persecution.
82:43 So I think that if you have as your goal to exploit the existing stock of knowledge, then unfree societies can do that. In other words, if you want to demonstrate what command and control can do, you build the pyramids. But to discover new production methods, for example, you need the undirected experimentation and the spontaneous order that arise through free societies. In other words, it's a totally different question to ask how we deliver something at this moment in time based on the knowledge then known, versus how we bear, disseminate, and generate more knowledge in the world and make progress.
83:30 But surely you're underappreciating the difficulty and innovation in building, say, the first pyramid, or starting to build the Great Wall, or, another example from Chinese antiquity, being able to divert rivers so that they don't flood certain areas. These are things they didn't know how to do before, right?
83:49 Well, I think you can set a known goal, you want to create a pyramid, and then there are a lot of technical obstacles we need to get through. Or we want to go to space, and we need to do it with alloys that haven't yet been invented, to quote JFK at Rice University. This is the pinnacle of the top-down model: we can set an audacious goal, and we can often realize it if we have the right structures in place.
84:14 What I think free societies do is secure a kind of adaptation to the future. So as future conditions change, people doing lots of experiments in parallel create variants, create solutions that just sort of bubble up. That's one thing. They are also the best way to grow the stock of knowledge in general. Yes, the creation of new alloys was probably accelerated by JFK's push, but in general science works through a kind of republic of science, to quote Michael Polanyi, where you have distributed science. No one is setting the direction. There's no House of Bensalem from the New Atlantis, no one saying what science should do. It's just a loosely connected republic of people all trying things and experimenting.
85:02 Right. So your response to the seeming counterexample of early modern science, like Galileo surviving and flourishing under persecution, is to say that there is a republic that is free among the scientists, or relatively so. Right? That's what you would be forced to say.
85:17 I'm also saying that there's a tension where Galileo, Dostoevsky, they have to carve out pockets of freedom in very bad conditions in order to do their work. And the Inquisition with Galileo, or the geneticists under Lysenko, are good counterexamples where you have authoritarian control that for non-scientific reasons wants to shut them down, and succeeds to some degree in those cases.
85:41 Right. So your response there is essentially a counterfactual response: Galileo did what he did surviving persecution, but what if we had a parallel Europe that was free at the time? Think about how much better that would be. Right?
85:57 I think that's brilliantly put. Yeah, you look at the examples where great art was created amidst terrible conditions, but what you don't consider is what the parallel universe would have looked like.
86:09 Yeah. Okay, well, I'm glad I trapped you there, because I set up a little trap for you. Because America, which I think any reasonable person would say is the most free society in this Hayekian sense, not totally free, but the most free nation in human history, America clearly is amazing in its entrepreneurial innovations and its creativity there. But where is America's Virgil? Where is America's Shakespeare? We've had 300 years, man. And even in the hard sciences, America is okay at practical, applied science, but someone like Einstein, well, he was in America, but he obviously came from a different... you see what I'm trying to say. The counterfactual response would be a lot more compelling if America became this land of infinite creativity, with great works being written left and right, and yet the only place where America seems to extend creativity is the economic sphere.
87:09 I think it's a good point. I want to say that we are not without achievements in each area, right? There is an American novel, there is a Faulkner kind of thing. But I take your point that we have relative greatness in other ways. And the question is whether something about our system tends to squash the other kinds of greatness that we would see. It's a little reminiscent of Tocqueville's concern, obviously, because he thinks that democracy can create a kind of mediocrity, and particularly a mediocrity of desire, of aspiration.
87:45 He writes about the American merchant captain. Actually, I think this is a very funny and not well-known thing: he looks at where honor still exists, that very aristocratic virtue, all but lost to the world, and he says it exists among these merchant captains in America, who, when they do something and save somebody else, say, I won't accept payment, because it's a captain's role not to accept it. So he's saying it exists, but I think what he says is that we need to keep the memory of those high aspirations alive. And so I would say that I probably fault not the free society but the system of education that doesn't cultivate this kind of desire, this highest desire, right? In other words, your example about the Prussian system of industrial education, the kind of sameness that it breeds, that sort of thing I think is one proximal cause.
88:41 And then, I don't think I disagree that capitalism tends to produce the kind of person who has material desires. I'm a little bit ambivalent on this, because on the one hand I probably agree with Friedman that markets are a consequence of freedom. In other words, we truck, barter, and exchange.
89:05 That wasn't Friedman, that was Smith.
89:07 But if we're left free to do it, markets kind of arise. Now, institutions play a big role. But if that's true, then on the one hand markets are just kind of a product of freedom. On the other hand, they clearly shape the way we see the world,
89:23 normative ends. Yeah.
89:24 Yeah. And all things do this; the availability of clocks shapes the way we think about time.
89:31 Yeah.
89:31 These are unavoidable.
89:32 Yeah. Well, I actually prepared a quote from one of my favorite passages from Tocqueville that kind of gives his answer about why he said that America would never produce her own Pascal. And I think he's been right so far, right? Who's the best American philosopher so far? John Dewey, probably. Not a Pascal. I quote you Tocqueville: If Pascal had had in mind only some great source of profit, this is what you're saying about the markets, or had been motivated only by self-glory, I cannot think he would have been able, as he was, to gather, as he did, all the powers of his intellect for a deeper discovery of the most hidden secrets of the creator. When I observe him tearing his soul away, so to speak, from the concerns of life, to devote it entirely to this research, and severing prematurely the ties which bind his soul to his body, to die of old age before his 40th year, I stand aghast and realize that no ordinary cause can produce such extraordinary effects.
90:27 So again, I think Hayek's consequentialist argument from creativity and freedom, which I understand is not his only argument, would be a lot more compelling if in the American system you had all these great artists and great creators and great writers in addition to the great entrepreneurs that America clearly does have. But that seems to be the only domain in which American creativity expresses itself notably in human history.
90:58 Mhm. Yeah. I think this is a very tough line of inquiry. Was Pascal wealthy?
91:04 Uh, yeah, I think he might have been a gentleman scholar, kind of.
91:09 Yeah. Because I think Tocqueville also shares this. It's a controversial point to make, but the multigenerational wealth, primogeniture, the estate law, basically the idea of whether or not a country breaks up estates or passes them to the firstborn son, has a big role in the kinds of goals one can pursue, because...
91:32 He says this exactly.
91:34 And so, I say it's controversial because people don't like to talk about the literal elite, and estate law, in that way. But I think Tocqueville is right that to the extent that we break up estates, we give people a starting point that makes them very hungry, but especially hungry for material things. Whereas if you're born into wealth, which I was not, to be clear, then you have a different set of ideas. You're kind of blasé about it, and you either become lazy or you pursue different ends that are higher. But I think Tocqueville thinks it's a useful experiment to have.
92:19 So what you need to concede, maybe, is the full creativity of the market, or something like that. I mean, obviously the market itself is creative in the entrepreneurial sphere, but you can still preserve your point about liberty itself being important for creativity; just different kinds of liberty are needed, something like that.
92:41 Yeah. I also will say, in defense of entrepreneurs, that some of them have very great desires about humanity that would rival the desires of the most laudable aristocrats of old. In other words, while I do think honor and those questions have diminished, I do think that people get into the game for reasons like that; it's just that the expression would be totally different from what Tocqueville is thinking of.
93:08 So, I want to move on to the last part of our interview. We talked about the what, what you want to achieve, mostly about autonomy as it relates to AI. We talked about why that's important, the intrinsic and the extrinsic reasons. And now I want to move on to the how, and it's in this idea that you suggested of the philosopher builder. So what is a philosopher builder?
93:32 So the philosopher builder is a new kind of technologist. It's a technologist who contemplates very deeply the alternate ends of technology and also has the skill to build that in the world. When you think of the philosopher builder, you should think of Benjamin Franklin, whom everyone will know as one of America's founding fathers, as the face on the $100 bill. What a lot of people don't know is that Franklin is an engineer of a very high caliber. He invents the lightning rod. He invents the bifocal lens. He coins positive and negative charge in electricity. He's also a philosopher. He lives by a 13-virtues idea and creates this thing called the Junto for mutual evaluation, mutual discussion.
94:14 When Franklin is at his best, when he really brings to life this idea of the philosopher builder, is when he's translating a philosophical idea into a practical innovation in the world. He's taking the idea that, for example, knowledge should live outside the scope of authority, outside of the church and the state, and he's translating that into the world through the first lending library, or the first network of independent publishers in America. That is the essence that we want to capture: the idea that you're thinking about these philosophical ideas and you're translating them into real-world innovation. So today it's never been more necessary to have that.
94:56 But I think the institutions we have are failing to produce the archetype. Most universities, I would say, produce pretty narrow technicians or conforming ideologues; I think that's by and large an accurate descriptor. Most tech companies produce people who are very good at building and thinking about the means but who are not thinking about the ends beyond customer use-case satisfaction. And then think tanks create theorists who don't tend to build. So that's what I'm focused on.
95:27 The inspiration I draw is that there have been moments when institutions really rose to the challenge. Institutions that come to mind: Cambridge during the industrial revolution took a lot of mathematicians and turned them into the engineers that powered it. MIT during World War II, the Rad Lab in particular, took physicists, made them into inventors, and helped the war effort. And then Chicago, more recently, took economists and made them into reformers who freed markets across five continents. So we can do it. When an institution has that as its purpose and acts urgently, we can do it. That is what we're doing at Cosmos: creating that new kind of technologist.
96:12 And in some ways it's reflective of your own story, right? Because you started off on the building side and then got into philosophy later on in life. So tell us that story.
96:22 Yeah, I mean it even goes back a little further. My mom was an educator; she taught special-needs kids for 36 years, and she brought us up in what people call a virtue culture. So Aristotelian, an idea that things like courage and honor mattered. And this was very effective, and it is what caused my sister, my brother, and me to all go into the military. I was a submarine officer, my brother was as well, and my sister was the lead medical person when we defeated ISIS in the battle of Mosul. That was a natural expression of this desire for public good that she and my father had inculcated in us.
96:57 I then went to MIT, joined the military, went to Harvard Business School. So I had this kind of classic STEM-and-business track. And it was only after selling two AI companies and having my second of two kids, my son who's four, that I really was hit with the big perennial human questions: what is the good life? What do you do with the rest of your career? What do you model for the little humans? That's a profound thing that happens when you have kids: you realize that you're on the hook. And I didn't have answers, and I was very dissatisfied with the depth. I had cocktail-party-level answers.
97:34 And so I started to read, and I had a mentor named Michael Strong who gave me the gift of a lifetime, a 17-page reading list that started with the ancients and went up to the Enlightenment, the American founding, contemporary debates. It changed my life. It transformed me. It made me more interesting to myself, and it totally changed the trajectory. That's how I shifted from being an entrepreneur to, well, I'm still an entrepreneur, but to being a philosopher.
98:02 Entrepreneur.
98:03 Sure. Philosopher.
98:04 There we go.
98:04 Exactly.
98:05 You said that you didn't grow up in wealth. And in some sense what's quite crazy about your entrepreneurial journey is how quickly the exits came. Right? It was a span of what, 18 months or something like that, in which there were something like $400 million in exits. What was coming into that much wealth in that short period of time like? How did that manifest for you?
98:25 So, I had people around me that played a big role in this. I saw examples of lives that I didn't want to live. And these are people that I'm in some cases friends with, so I don't want to be too specific here, but I saw examples of lives that I did not want to live. And I also saw examples of lives I did. I had a friend of mine who is part of a philanthropic network of young people, in our 30s and 40s, who are doing pretty serious philanthropic work. And I saw him do something that I felt was transformational at a young age. I remember the moment when I heard this, he had given, you know, $5 million or something to this amazing cause. And I just talked to Adrian. I thought, we could do something like that. We could do something big, and the counterfactual of not doing it felt huge. In other words, not having the benefits of not just money but time and effort and talent applied, and not compounding that, felt like a huge missed opportunity. So I really credit it to being able to surround myself with examples of what I thought bold action looked like, and I chose very deliberately the path that I wanted to emulate.
99:43 I see. Well, let's go back to the philosopher builder archetype, because I think about it as fleshing out a third option in Platonic political philosophy. Obviously Plato separates his polis into three. There's the ruling class; there's the army, right, the military; and then there's the producers, the merchants, the builders essentially. And Plato obviously advises the rulers to be philosophers: this is the philosopher king. I think we see at the end of the Roman Republic the philosopher general. These are people like Cicero, like Caesar, who was very learned; he was composing tracts about anomaly and analogy in Gaul while arrows were flying in his face. And now you're suggesting that the third class, which to Plato was in some sense the lowest class, the builders, should be philosophers. Why is that?
100:38 Well, yes. I mean, I think it draws some inspiration from those archetypes but differs in one really important way. The inspiration is the belief that you can have not a unity of contemplation and power, wisdom and power, as you see in the philosopher king, but a unity of the wisdom and the contemplation that gets you there with the ability to create, to create worlds, to build.
101:05 What I think makes the distinction clear is actually the Greek concept of order, which divides into two distinct words. There is taxis and there is cosmos. Taxis is the order that we impose on the world from the top down. It's taxonomy, right? Or, more generally, a kind of top-down order. And then the alternative is the bottom-up emergent order that is cosmos. Obviously I've embraced this insofar as we're named the Cosmos Institute. But what it means in practice is that we're not looking for one individual who has a kind of blueprint, whom we look to to rescue us in difficult times and who can implement that plan from the top. That's the philosopher king. We are looking for a much more bottom-up, distributed approach where people may have slices of truth, slices of the solution, and are working in their corner of the world to project that vision forward. That's the cosmos approach, and the philosopher builder approach. It's a different archetype, much more like Franklin, much more distributed, and one that I think is necessary for the current moment.
102:22 Right. So taking the insight from Plato about the importance of joining worldly activity and philosophy and contemplation, but, given everything you said about autonomy, transplanting that into an autonomous, decentralized way.
102:35 Yes.
102:36 And hence you find the philosopher in the third class and not in the ruling class. I was going to give a very different answer to why this is important today.
102:43 Please.
102:45 Which is that technology... you can argue in some sense that in Plato's time ruling and political power was the dominant pole, whereas today I think it's the market, as well as technology, that is the dominant pole of the three. Obviously you see how technology is dominant over politics, given everything we've described so far about how technology can form political citizens to be more autonomous or less autonomous, and you can also see with things like Anduril or Palantir how AI of course is a dual-use technology. You see what I'm trying to say, right? It's important for the philosopher builder not just because of what you said but because technology now supersedes the other two spheres.
103:31 Yeah. And it becomes the driving logic of the two spheres.
103:33 Absolutely. And it's a break with the ancient idea that politics is architectonic: the regime sets the frame for what kind of technology could even be done, right? I am persuaded by that. But as you sort of look at where we are in 2025,
103:50 it's the opposite.
103:51 It's flipped a bit. And so you could talk about it being upstream, but I like your idea that technology is kind of architectonic. If you think about who the best reformers are, the most capable reformers, it's not people like John Stuart Mill, whom I love, but people like Elon Musk. And I don't say that to endorse
104:13 everything he stands for.
104:14 That's not what I'm saying. I'm saying his position as a profoundly capable builder gives him enormous leverage on what we thought of as the political questions of old.
104:26 But again, I just want to tease out the mechanism, because one part of it is him being able to use his money, for example to fund Trump. But I think Alpha School is the better example here: the technology of AI, when it applies to education, might be a much more powerful political tool for liberty than anything that can possibly be done with the government today.
104:48 Yeah.
104:50 Precisely because technology is going through all of our lives in this way.
104:53 Yeah. That's right. And the Alpha School example also gives color on the interdependencies, because if you don't have a political order that is capable of sustaining that kind of innovative school model, you at the very least have... it's hampered.
105:12 Right, so you're saying it is the dominant pole, but it's not the only pole, and it is constrained by the others. One can respond to this and say: that may be true, but all the other waves of technology, whether it's the printing press, industrial, certainly nuclear, web, PC, would also have benefited from philosopher builders. Why is there something specific about AI that makes this more urgent?
105:38 I think that gets back to autonomy. But also, more generally, I think there have been epochal moments in science and tech that have made us question what it means to be human. I think of Galileo, or Copernicus, and Darwin. I think about pushing us out of the center of the universe, putting us among the animals. Those were major reorienting moments in human life as a consequence of technological insight and breakthrough. AI is similar. The age of Turing brings this question of what it means to be human in a world in which we are, or plausibly soon will be, no longer the most intelligent being. That's one kind of broad impetus, broad strokes. But because AI substitutes for this essential, maybe central, human good, it becomes a very philosophical technology. It operates through language. It has a semantic interface with humans and has a mediating effect between us and the world, especially the world of words and information, right?
106:54 Whereas it's just as important, maybe more so, you could easily make the argument, to get nuclear right.
107:00 Right. To get the game theory of nuclear right.
107:01 But it doesn't raise as many interesting philosophical questions and problems as AI does, because of how humanlike it is. And it's not manipulable in the semantic way, but rather in the numeric way.
107:16 Yeah. I also think there's an argument to be made that AI is plausibly the end of the modern technological project. In other words, it is plausibly a technology that can create other technologies or other scientific breakthroughs. We are barely scratching the surface there. But if you go back to Bacon and the beginning of this whole modern scientific project, the thought of a technology that could discover other breakthroughs would have been held in a special category.
107:43 offer you a few critiques and see how
107:45 you respond to this philosopher builder
107:46 ideal. And the first one is a capitalist
107:49 critique, and that's to say, look, it's
107:52 redundant. We don't need our builders to
107:54 aim towards any higher normative end
107:57 other than profit, because this is one of
107:59 the key tenets of classical
108:01 liberal thinking and market thinking
108:03 about capitalism: that the invisible hand
108:05 of the market will turn private vice
108:08 into public virtue. And so it seems
108:11 like the urgency with which you
108:15 recommend the philosopher builder kind
108:16 of undermines that piece.
108:18 Well, I think the market does turn
108:19 private vice into public virtue, by and
108:21 large. I also think it permits private
108:23 virtue. In other words, the market
108:25 permits you to act in a way that is
108:28 yours, you know, as it is chosen,
108:32 and you can do that insofar as you
108:34 offer value to others. But what I would
108:36 say here is, you need to think about
108:38 the two poles of how an
108:41 entrepreneur is seen to act in a
108:44 capitalist system. One is Friedman, who
108:47 I think occupies the most narrow,
108:50 farthest pole. 1970: what is the corporate
108:53 social responsibility of business? He
108:55 writes this article, and it
108:57 says that the role of an
108:58 entrepreneur is to seek
109:01 shareholder value, to deliver that. And
109:03 the other end of the extreme is ESG, I
109:06 would say, where you have a kind of
109:10 unitary goal, a United Nations goal for
109:13 example, to do something that we
109:16 view as being laudable, and then
109:18 entrepreneurs are seen as the execution
109:20 instrument for that goal. And I would say
109:22 both are troubling in a sense. I'm much
109:25 closer to the Friedman view, but they
109:27 both position the entrepreneur in a
109:31 low-agency way, insofar as on the one
109:34 hand we're supposedly very
109:37 limited in what we can do, we just think
109:39 very financially about the shareholder
109:41 value, and on the other we're seen
109:43 as the agent of another who's
109:45 implementing this, you know, this
109:47 goal. And I would say there's an
109:49 alternative, and the alternative is
109:51 the entrepreneur as a pioneer, the
109:53 entrepreneur who sets the norms and
109:56 builds that future. And there is simply
109:58 nothing that restricts that. One needs to
110:00 be creative. I would argue that's what
110:02 entrepreneurs do today in many cases.
110:06 But I would say one needs to be creative
110:08 because the customer may
110:14 or may not care about your underlying
110:16 philosophy. And so I'll give you an
110:18 example: if we want to think
110:19 about autonomy, there's a lot of stuff
110:20 that we could do architecturally.
110:23 But if we consider the case: the Bloomberg
110:25 terminal, for anyone who's not in finance,
110:27 is a kind of mainstay of finance, and the
110:31 use of it is about improving decision
110:34 quality. That's the point. When you
110:37 use social media, often the point is to
110:40 kind of check out. Like, that's the job to
110:42 be done: you had a hard day and
110:43 you just want to scroll. And so you need
110:45 to think about that. You need to think
110:47 about your business model as well. Are
110:49 you paying money in a subscription, or
110:51 are you being subsidized because the
110:53 model is ad-driven? These are really
110:55 important design questions. But insofar
110:58 as opportunities do exist that
111:00 resemble the Bloomberg terminal, where
111:01 you're improving decision quality and
111:03 getting paid for it on a subscription
111:04 basis, there's an entirely consistent
111:07 pro-capitalist way to build autonomy-
111:10 producing tools.
111:11 Right. Right. So when I say the
111:15 philosopher builder undermines a key
111:16 tenet of capitalism,
111:18 I'm not taking the capitalist position
111:21 to be that you can't aim towards
111:24 normative goals. But I'm taking it to
111:27 say it's not necessary for the
111:29 positive benefits to ensue. And yet what
111:31 I hear you saying is, if we don't have
111:33 philosophers who are builders, then
111:36 entrepreneurs driven by the profit
111:38 mechanism can build systems that just
111:41 turn people into automatons. You see
111:44 the difference I'm trying to
111:45 draw.
111:45 And I would also say: one is, profit pools
111:47 are incredibly important. They tell us
111:48 what to go figure out about. They're
111:50 things that people value based
111:52 on their preference ordering,
111:54 so to speak. But also, someone
111:57 needs to care about preserving the
111:59 institutions that make free markets
112:01 possible. Someone needs to care about
112:02 preserving the habits of mind that do
112:04 that. And also, markets, while
112:09 incredible, are a means by which we attain
112:13 human flourishing. And so someone also
112:14 needs to care about that. And those
112:16 are things that are entirely consistent
112:18 with free markets. It's a bit of a
112:20 thicker conception of liberalism than, I
112:22 would say, you know, Friedman would
112:24 hold. I grant that. But I think it's
112:26 critical that we consider those
112:29 other dimensions.
112:30 Right? So now let me push you from the
112:32 exact opposite direction. Right? That
112:33 was the pro-capitalist critique. This
112:34 is the anti-capitalist critique. And I
112:36 think OpenAI is a good example.
112:37 Which is, let's say there is an
112:39 entrepreneur who is motivated by more
112:41 than the profit motive.
112:43 There are two kinds of pressures upon
112:46 him or her that might deter their genuine
112:50 desire to do this. Number one is, when a
112:54 company scales and brings on investors
112:56 and other shareholders, they just might
112:58 need to optimize shareholder value,
113:00 regardless of what the entrepreneur,
113:01 what the philosopher builder, wants. And
113:04 number two is competitive pressures. You
113:06 might build a social media platform that
113:08 genuinely helps people, that gives them
113:10 Socratic dialogues, but people might
113:12 just gravitate towards the addictive
113:15 one, and then the profit incentive drives
113:17 growth towards that.
113:19 Yeah, I would add another, which
113:21 is that for early-stage companies, which
113:23 I think a lot about, investors, the
113:26 incremental financing: if you fund your
113:29 company at a seed stage and you have a
113:32 real mission behind it, but then the
113:34 Series A investor doesn't care, they just
113:37 reduce you to a metric because they're
113:38 chasing DPI, they are not
113:42 going to be kind if you sort of
113:46 deviate from a growth plan. And so
113:48 this is a big design
113:51 question for how you bring these
113:53 companies into the world. You need to
113:54 have a group of investors, capital, that
113:58 is
113:58 aligned
113:59 aligned philosophically. And this is a
114:01 big reason why we started Cosmos
114:03 Holdings, which is a complementary
114:05 portion of this that focuses on venture
114:07 creation and venture backing, is because
114:09 you need to have aligned capital to do
114:11 this. What you can't insulate from is
114:13 the customer. In other words, right,
114:15 you can provide insulation in terms of
114:17 the capital and have very principled
114:19 capital, but you can't insulate from the
114:21 customer, which is why you need to kind
114:22 of be inspired by the Bloomberg model,
114:24 which is funny to say because in 2025
114:26 it's not exactly the greatest, you know,
114:28 user interface or whatever. They
114:29 beat their competitors, though,
114:32 because they've chosen a part of the
114:34 market where it's real mutual benefit,
114:37 like they're really offering benefit for
114:38 their customers. You have to build in
114:40 those kinds of pockets and directions.
114:42 What about the other pockets? Well, so I
114:45 think once you demonstrate
114:47 that, I think the costs and the design
114:49 patterns will come down, and I think you
114:51 will be able to kind of infect the other
114:53 regions of the world, right?
114:54 I'm not utopian in thinking, but
114:56 I do think there's a path, there's a
114:58 trajectory, there's a way to enter into
115:00 this market, right?
115:01 And then improve and learn about this in
115:04 such a way that we can kind of cross-
115:06 apply our lessons into the more
115:07 difficult areas. I also think that
115:09 there's something to be said about, like,
115:11 there's a great deal of entrenchment
115:13 when you have one business model, like an
115:15 ad-driven model. That is, this entrenchment
115:19 is not felt by new challengers. In other
115:21 words, if you set up your business to
115:23 monetize entirely differently, then you
115:26 don't bind yourself in the way that
115:28 Google has bound itself.
115:29 Like Substack, for example.
115:30 Substack is a great example. Yeah.
115:32 Exactly. And so this is a challenge for
115:34 incumbents. I think if I were an
115:35 incumbent trying to drive change there,
115:38 it would be hard. And that's why a lot
115:39 of the top researchers that we know are
115:41 people who worked at big tech
115:44 platforms, who built something that was
115:46 incredibly, I think, potent for, you know,
115:49 promoting human flourishing. It went
115:51 against the business model for them. The
115:54 right answer is you need to either build
115:56 it, you know, in academia and open-
115:58 source it, or you need to start a startup
116:00 and find a business model that can take
116:03 that into the world. Let me give you
116:06 a different line of critique. And
116:07 I'll begin by quoting part of an
116:10 essay that we are soon to publish: "The
116:12 way to acquire more stable views is,
116:14 almost paradoxically, more inquiry." Okay,
116:17 you're explaining why it's important for
116:18 the builder to philosophize. "In Plato's
116:21 Meno, Socrates describes this with
116:23 reference to the legend of the statues
116:24 of Daedalus, which were said to run about
116:26 if not tied down. The idea being that
116:29 opinion is made valuable via inquiry,
116:31 which helps to ground our knowledge and
116:33 holds it more stably in place. Inquiry
116:35 improves our convictions even as it
116:37 replaces them." You're talking here about
116:40 the importance of the builder, many of
116:42 whom in Silicon Valley are just
116:44 unthinking in their building,
116:46 philosophizing, and that one of the benefits
116:48 is that it grants them more conviction.
116:50 However, many of the Socratic
116:54 dialogues end in aporia, where no
117:02 answer to the "what is X?" question is arrived at. And even
117:04 stronger, for some of them, the
117:06 interlocutor seems to be made worse,
117:09 more puzzled, more angry, more
117:10 humiliated through the conversation. So,
117:14 and of course, there's an entire school
117:15 that's been founded from Socratic
117:17 aporetics,
117:20 right?
117:20 And so, why are you so certain that kind
117:23 of philosophical questioning is going to
117:25 lead to more certainty for builders who
117:27 are even considering taking this path?
117:29 Yeah, it's an interesting question. So
117:34 the kind of thing that we want to do is
117:37 to inculcate a habit of mind. We want
117:40 people who inquire, who think deeply
117:42 about the alternate possibilities, who
117:44 understand that philosophy, the love of
117:47 wisdom and the pursuit of wisdom, is a
117:50 kind of quest for knowledge but never an
117:52 attainment, and who are satisfied to some
117:55 degree by simply knowing more about what
117:58 they don't know. In other words, we can
118:00 look to... I mean, a lot of the most
118:03 inspirational philosophy for me is
118:05 people who have tried to
118:08 demarcate the limits of reason, what we
118:11 can know. Yeah. Exactly. Kant, Hume, many
118:15 others. Hayek. And so I think that's a
118:17 perfectly acceptable place to land in
118:20 philosophy. That requires one to
118:23 have a constitution that isn't, like,
118:25 pathologically certainty-seeking. But if
118:28 that's true, then we would rather have
118:30 questioners who are constantly
118:32 questioning, which by the way is a habit
118:34 that I think is very consistent with
118:36 company building. It's like you're
118:37 asking questions, you're deeply curious,
118:39 you're fostering that. It's just you're
118:41 asking more capacious questions, more
118:44 expansive questions that get at what
118:46 the technology is actually going to do
118:49 for what matters, which is human
118:50 flourishing,
118:51 right? What do you think is the ideal
118:54 path to train these philosopher
118:56 builders? Is it to take builders such as
118:58 yourself and then try to teach them
119:00 philosophy when they're ready, or is it
119:02 to take the best philosophers and
119:04 teach them how to build?
119:06 Good question. So my focus is to try
119:09 primarily to take the builders who have
119:13 this kind of sense that they want to
119:16 help humans, just like I did, but it's
119:19 untutored, just like mine was, and then be
119:22 able to give them tools to really try
119:25 to derive a set of principles or ideas.
119:28 The method I think must be a combination
119:30 of education, and that education must
119:33 blend the kind of textual and technical,
119:36 meaning if we want to think about
119:38 collective intelligence, we might read
119:40 Mill on how to correct collective error,
119:42 but we should also understand research
119:45 at the very front, the very forward
119:47 parts of the frontier, like what
119:49 Midjourney is doing on collective
119:51 intelligence. That's one part of it. The
119:53 other part is practice. It's action. It's
119:56 translation. So you can think about
119:58 translation in two senses. One is how do
120:00 you create prototypes that test ideas,
120:03 and for that, what we do at Cosmos is we
120:07 back individuals to build
120:10 projects in 30, 60, 90 days with micro-
120:13 grants, inspired by Tyler Cowen, inspired
120:15 by Emergent Ventures.
120:17 The other method is to do deeper
120:19 research. And so for some questions that
120:22 deal with, like, the heart of what is
120:24 inquiry, or what does it mean for
120:27 a machine to promote virtue? These
120:30 are big questions that I think
120:31 prototypes would be valuable for. But
120:33 there's a much deeper kind of question,
120:35 right? And so for that we helped to, you
120:38 know, facilitate this by setting up an
120:41 AI lab at the University of Oxford
120:43 called the Human-Centered AI Lab, where
120:45 you have some of the top philosophers.
120:48 Philipp Koralus leads this. He's a very
120:50 unique philosopher who thinks about
120:52 reason. But you combine that with people
120:55 who are fresh out of OpenAI, Anthropic,
120:57 DeepMind, that kind of thing, and they're
120:59 building systems and doing research that
121:02 I don't think could be done anywhere
121:03 else. I don't think it would be done
121:07 within academia in a traditional way. So you
121:09 kind of put that alchemical set of
121:13 philosophers and technologists together
121:15 and you do incredible research. Then, as
121:17 I mentioned, the last part of it, which I
121:19 still view as part of this journey
121:21 of transformation, is you kind of
121:23 graduate and you build a company. That's
121:25 what makes this so distinctive:
121:28 that, you know, we want to take ideas
121:30 and then scale them out into the world.
121:32 And the way to do that is through
121:34 markets, is through entrepreneurship.
121:36 And so that's where Cosmos Holdings
121:37 comes in.
121:38 Well, it's really interesting, because
121:40 most of the people we interview in this
121:42 series, the really successful
121:43 philosopher builders, all went the
121:44 opposite direction.
121:45 They all studied philosophy. Reid
121:47 Hoffman, Peter Thiel, Colin Moran,
121:49 Marcus Rue, and then they were pulled
121:52 into the real world. Yeah. I'm not
121:54 saying there's a necessary tension
121:54 there. I just think it's very
121:56 interesting. Yeah,
121:57 absolutely. Well, I mean, I think it
121:59 hits people at the right moment for
122:00 them. So, I don't mean to dismiss people
122:02 who come at it from the other direction.
122:04 We're very welcoming of that. I know for
122:06 me, I took one philosophy class at MIT.
122:08 I didn't like it that much.
122:09 Yeah.
122:10 And then it had to be the case that I
122:12 found it in my, you know, mid-30s, after
122:15 selling the companies, after having the
122:16 kids. It took hold. So, it's very unique,
122:19 the journey that each individual takes.
122:20 And, uh, same for me. It could just be
122:22 a matter of months or even a single year,
122:24 where for me, I took Columbia's Great
122:27 Books core curriculum after I failed the
122:29 company, dropped out, and went back.
122:32 And that was life-changing.
122:35 Yeah.
122:36 If I were forced to take it in my freshman
122:37 fall, I think I would have hated it.
122:40 Yeah.
122:40 Because I didn't realize the importance.
122:42 So yeah. Let me end this interview
122:44 with a final challenge, and it is a
122:46 challenge from an unexpected friend:
122:48 Hayek. Okay.
122:50 This is The Constitution of Liberty:
122:52 "Coercion, however, cannot be altogether
122:54 avoided because the only way to prevent
122:57 it is by the threat of coercion. Free
123:00 society has met this problem by
123:02 conferring the monopoly of coercion on
123:04 the state and by attempting to limit
123:06 this power of the state to instances
123:08 where it is required to prevent coercion
123:10 by private persons." So for our audience,
123:13 what Hayek is saying here is that if you
123:15 don't want
123:17 individuals, private citizens, to coerce
123:19 each other, you need to have the
123:21 ultimate form of coercion, or at least a
123:23 singular form of coercion, which is the
123:25 monopoly of violence of the state
123:26 guarding these rules.
123:29 Might there be an uncomfortable
123:31 structure as it relates to building
123:32 non-coercive AI, in the sense that right
123:35 now, with Cosmos and philosopher
123:37 builders, you're going for the
123:38 non-coercive approach to build that,
123:41 right? You're saying, "Let's give
123:42 people these fast grants. Let's build
123:44 these companies.
123:46 Let's let markets do their trick."
123:49 But, you know, if someone is about to
123:52 build an AI and push it on the masses
123:54 that is going to turn them into
123:56 automatons, might we need a coercive
124:00 measure to ensure that coercion does not
124:02 happen, either through regulation or
124:04 other means? So what Hayek is
124:07 identifying there, I think you could
124:09 call the paradox of government, which is
124:11 to say, you've got to have a government
124:13 that is sufficiently powerful
124:16 to defend liberty.
124:19 That invites huge difficulty if you
124:22 grant a government that much power, that
124:25 monopoly on violence, as you said. But
124:26 it's essential that we live
124:30 with this paradox. As it concerns
124:32 regulation, I would call to mind three
124:36 tests. One is the test of: is
124:41 it consistent with
124:43 the rule of law? In other words, with
124:46 the idea that each law needs to
124:52 be general, abstract, and prospective.
124:55 That's the first. And so there are
124:57 lots of ways in which regulation can
124:59 violate that and be commands to specific
125:01 groups and things like that, that violate
125:03 the generality, so on and so forth. So
125:06 that's a baseline test that again draws
125:08 from Hayek. The second is: is the regulation
125:12 something that is made based on
125:15 knowledge that we have no reason to
125:17 believe that we possess? In other words,
125:20 if we had made regulation at the
125:21 beginning of the internet era, would we
125:23 have gotten it right? I can tell you we
125:24 would not have. You know, I can tell
125:26 you that we would have been in profound
125:28 ignorance about what was to come. We
125:29 would have gotten it wrong. And this
125:32 gets at the idea of the dominant
125:35 paradigm, which is ex ante: trying to
125:38 regulate before some hypothesized
125:42 harm occurs, versus the much more
125:45 adaptive, incremental, evolutionary
125:46 approach, which is ex post, through the
125:49 common law. So ex post adjudication,
125:52 I think, should be favored on an
125:54 epistemological basis, because we can't
125:56 make such predictions.
125:58 Thirdly, any intervention that we might
126:01 have should be evaluated not just on the
126:04 basis of the local cost-benefit. In
126:06 other words, if we intervene here,
126:08 and we intervene here, and we intervene
126:10 here, then there is this proximal harm that we can
126:12 foresee. We may not have
126:14 experienced it, because again we're not
126:15 taking the ex post approach, but if we
126:18 can foresee it and articulate it, is it
126:21 worth it? And the cost-benefit
126:23 analysis that, really, I think we should
126:25 do in that case is: don't just look at
126:29 the harm that you're trying to solve
126:31 proximally; look at the overall harm to
126:34 the system. And so if you buy the
126:37 idea that the system is just, especially
126:42 because it tends to allow the
126:45 anonymous individual to achieve his or
126:47 her unknown ends, then you have to be
126:50 committed to some idea that we care
126:52 about the way in which knowledge
126:55 propagates through the system and gets
126:56 generated through the system. And in
126:58 fact, our interventions tend to harm
127:01 that crucial attribute of
127:04 spontaneous order. And so
127:07 that is something that needs to go on
127:09 the other side of the ledger, right?
127:12 And this is a good place to end it,
127:14 because it mirrors what you said about
127:17 the trade-off function of autonomy in
127:19 the good life, which is that it's not a
127:21 lexical priority. It's not like no
127:23 regulation whatsoever. It's simply that
127:25 people underestimate the cost of doing
127:28 business, so to speak, right? Of
127:30 intervening. All right. Thank you,
127:31 Brendan. Thank you for a fascinating
127:33 conversation.
127:33 Thank you.
127:36 Thanks for watching my interview. If you
127:38 like these kinds of discussions, I think
127:39 you'd fit in great with the ecosystem
127:41 we're building at Cosmos. We deliver
127:43 educational programs, fund research,
127:45 invest in AI startups, and believe that
127:47 philosophy is critical to building
127:48 better technology. If you want to join
127:51 our ecosystem of philosopher builders,
127:52 you can find roles we're hiring for,
127:54 events we're hosting, and other ways to
127:56 get involved on jonathanb.com/cosmos.
127:59 Thank you.