YouTube Transcript:
This Terrifying Ideology has Infiltrated AI Labs | AI Expert Brendan McCord
People who are really high up at the AI labs say that we are being rebels, foolish rebels, if we don't listen to the AI. They truly say this.
How dare you rebel against God. Right.
Exactly. How dare you, you silly rebel. Nick Bostrom is correct that philosophy is on a deadline. He is dead wrong about this essential, maybe central, human good.
Two years from now, AGI is going to come around the pipeline. You wouldn't be doing what you're doing now, right? There are more pressing questions to resolve.
Yes. Wrong. People right now believe that AI should tell them what to do. There are Claude Boys, literally teenagers who call themselves Claude Boys, who will wake up and do what Claude says and not otherwise. And this, not EA, not existential risk, this is the true danger.
One of my friends uses ChatGPT for hours every day, not just as a search engine, but as an operating system for his life. He asks it where he should eat, what he should text girls on dating apps. He gets up every day and has ChatGPT tell him what to do. My friend does this not because he's incompetent or stupid. He's one of the smartest people I know. But because ChatGPT already knows so much about him that the advice is actually getting quite good. The restaurants that it recommends, for example, are already better than the ones that he can find for himself. My friend is not alone. Gen Z and Gen Alpha are increasingly using AI as a holistic operating system onto which they offload all of their decisions. And Brendan
McCord argues that this kind of
offloading is the real danger of AI that
no one is talking about. In this
interview, you'll learn why human
autonomy is important, how AI threatens
it, and how to harness the power of AI
without forming an unhealthy dependency.
My name is Jonathan B. Brendan and I run Cosmos together to deliver educational programs, fund research, and build AI startups that enhance human flourishing. Both of us have a background in philosophy and engineering, and we believe that it's important to combine the two if we are to build actually good AI systems. If you want to join our ecosystem of philosopher-builders, you can find roles we're hiring for, events we're hosting, and other ways to get involved at jonathanb.com/cosmos.
So, Brendan, today we're going to talk about autonomy, which is a risk, but also an opportunity, that most people who are building AI, even for their entire lives, are not really focused on. But first I want to clear the ground and talk about what people are worried about. There are very mature and developed schools of philosophy in the Valley: the accelerationists, the effective altruists, the x-risk people. And you argue that they're all missing something essential about AI development. Why is that? And give us an overview of these schools.
So the two main tendencies that I found within the AI community are the existential pessimists and the accelerationists. And they roughly map to the extremes of despair and hope.
The existential pessimist is kind of three philosophies in a trench coat: rationalism, effective altruism, long-termism. These are all distinct, but together they're the intellectual incumbent.
And the prescription in light of this possibility is that we should pause development, that we should centralize control, that we should radically remake society on the basis of risk avoidance. So it's a claim and then a series of drastic and dystopian prescriptions. You see some of the early godfathers of AI, as they're called, people like Geoffrey Hinton, having this almost Oppenheimer-like hubristic awe of their own creation, saying, I wish I hadn't done it, and, you know, that I brought this into the world.
And, you know, the three schools, as I mentioned: effective altruism, rationalism, long-termism. It's worth breaking it down a little bit. So rationalism comes out of the 2000s. Eliezer Yudkowsky and Slate Star Codex were kind of the originating sources of this, and they claim to be focused on an idea of perfecting human rationality, but they have a very narrow commitment to what rationality is. It's Bayesian updating over a value function over outcomes. It's a kind of rationality that Aristotle and Kant and, you know, modern thinkers would not recognize. So it's a commitment to a very myopic, probabilistic-control type of morality. And what I find interesting about this is that the school is committed to rationality, yet it finds itself the most fertile breeding ground for the extremes of hope and despair. And there's some kind of irony to that. You know, the next one is
long-termism. And Bostrom is probably the source of long-termism. Long-termism is worried about anything that could affect the long-term outcomes for humanity. That could be an asteroid. It could be, you know, a pathogen. It doesn't have to be AI. AI is particularly important because its M.O. is that of control. Bostrom advances something called the orthogonality thesis, where you say something can become very intelligent, but on a totally different axis. That does not imply that it becomes more moral or that it becomes more controllable. This is true of humans as well. And he worries that a non-anthropomorphic final goal, meaning create more paper clips, will cause AI to pulp us all. So this is a concern that he has for the long-term future of humanity based on this orthogonality notion. And then the last
is effective altruism. And effective altruism dominates the three when it comes to fundraising potential. It is the most highly adopted academic theory since Marxism. And what effective altruism does is rationalize certain moral intuitions, and it tries to pull in things that are not intuitive at all as well, like the idea that other-regarding behavior should not factor in any notion of time or place. So what that means is I should care about my family as much as someone who lives 17 generations hence in Indonesia, or even, plausibly, a shrimp, equally. A util is a util, whether it is yours or those other entities'. That is a very radical premise. It also attempts, as these forms of utilitarianism do, to reduce moral questions to a single currency, such that we can compute them and maximize them.
And so I reject, and I think most thinking people do reject, that morality is a thing that works like that. In other words, endemic to morality appears to be this difficulty of making pretty sharply heterogeneous trade-offs. Like, how do you compare the moral question entailed by some suffering over here with familial love, with duty, or honor in battle? These are things that are not commensurable. In utilitarianism, they are, and it's a false, kind of scientistic way of viewing the questions of morality that really obviates what makes moral choice difficult. The other thing I'll say here is that effective altruism tries to take one's personal projects and make them those of the universe, to provide the calculational standpoint. Rather than, you know, acting morally in a way that inherently springs from me, right, that's what we learn from Aristotle and the Nicomachean Ethics, rather than doing that, it subordinates that impulse, that moral choice is an individual thing, and tries to make it the standpoint of the universe. It subordinates what it means to act morally. And this is why it is not surprising to me to see such profound moral failings in the EA community. In other words, what is sexual assault, what is the real issue there, if you're doing cosmically significant work?
What are the practical consequences of following this through, of people who are actually building AI or regulating AI thinking in this way?
So, two that come to mind. One is that we would look to governance solutions that are profoundly illiberal and that would lead to tyranny, namely the creation of a world state. So in the paper "The Vulnerable World Hypothesis," Bostrom writes about the need, therefore (it's always a conclusion of these radical premises), to create a world state where you basically eliminate the coordination difficulty of having greater than one actor. So even US and China, let alone a more multipolar world, presents this game-theoretic challenge of racing and so forth. And so we have to get to one state. And of course, political philosophy has dealt with the question of whether there should be one state many times before. There's a Kantian critique that has to do with the challenges of maintaining legitimacy versus having to sort of squash dissent, and there's the Strauss-Kojève debate, where Leo Strauss argues that it would lead to a universal, perpetual, homogeneous tyranny, and many other concerns. So that's one. The other is that if we accept and hold tight those utilitarian premises, then we might think that it is acceptable to live a life with artificial intelligence that is profoundly non-humanistic. In other words, that we might be perfectly willing to be downstream of AI, having it tell us what to do and guide our life like an autocomplete.
I see. One interesting thing I want to double-click on is that you use the word hubris to describe these schools, which I think is fascinating, because in many ways these schools portray themselves to be egoless, right? The effective altruist says it's not about my good, it's about altruism. Or the rationalist: it's not about my ego or my selfish interest, it's about doing what's rational. But I was reading one of the important blogs of the EA community, and it said the way you should make decisions is to think: what would a benevolent and omnipotent God do? And I think it's kind of a power trip, right, to think about it in that way. And maybe to build off of what you said about Oppenheimer, I think that to claim confession of one's guilt is at once to claim credit for the sin. It's to say, I am powerful enough to potentially end the world. And there's also a subtle ego trip there. Is that what you meant by hubris?
Yeah. And I think it's both the kind of hubris that Bacon had in saying that man would master nature, you know, at the beginning of the scientific project, and the same kind of hubris that comes through any serious philosophy project where we try to find the one true answer to all the things. Like, there is a hubris laden in that. And if you are the type of person that can subjugate the world with symbols, the mathematically inclined among us, it is your moment, and what you do has cosmic significance for the future of the race.
Now, I will say the strands of philosophy to which I am attracted tend to take a different approach. They tend to be more epistemically humble. They tend to focus on the use of reason to whittle down the claims of reason, as David Hume would say. In other words, a critical rationalism, and not a constructive rationalism that tries to sort of say this is the way the world is. It tries to focus a little bit more on what the world isn't. Right? So, you know, philosophy is not solely the domain of the hubristic.
Right. One of my favorite lines from your favorite book by Hayek, which we'll talk about, is something like: every political philosophy believes other people are ignorant, but liberals also believe that they themselves are ignorant. Right? And that's the kind of difference there. Many in Silicon Valley have also reacted to these dominant schools of philosophy in the way that you have, and so accelerationism was born. Tell us about that school and why you don't think that's a satisfactory response.
Right. So the the other end of the
extremes of hope and despair accelerationism
accelerationism
and this school would have us unleash
the development of AI as an end in
itself. And that's a very important
idea. You know, you and I, I think,
probably hold technology to be a very
powerful means by which we flourish as
individuals. The accelerationist school
confuses technology as a means with
technology as the end and views humans
as being a kind of instrument in this
trajectory that is sweeping,
transcendent, inexurable, this
technological trajectory. It draws a lot
not from basian thinking but from
thermodynamics. And so it wants to
portray humans as a kind of variable in
this thermodynamic equation in this um
broad project to be able to harness and
dissipate more energy or to be able to
uh climb the cardartesev scale which
means to harness not just the energy of
the planet or the sun but of the galaxy
you know ratchet up more and more and
more. These are not humanistic goals.
And it's very important, because one of the conclusions that the orthodox accelerationists reach is actually that we can and should hasten the time to passing the baton from humans.
We invite the doom to something higher.
Yeah. Exactly. And I think with both schools, but certainly with accelerationism, you see a metaphysical impulse, you know, the religious impulse that humans have long had. You see it conserved but redirected towards the thermodynamic god. In other words, they're both kind of eschatological. You know, in one case we die, and in one case we build something higher and better and then are transcended. And here's the upshot: both of these schools are very imaginative in a sense, right? You cannot fault them for being very imaginative about the possibilities here, whether it's paper clips or the thermodynamic god. But on the one thing that is most needful they have a lack of imagination, and what I mean by that is on what it means to be human, what the human good is. Yeah.
One side views humans as a kind of aggregate of suffering or pleasure. In other words, we view the human good as the avoidance of pain. I have news for you. That is not the entirety of the human good. The other side abandons the idea of the human good entirely. In other words, it's a non-humanistic philosophy. And this is precisely where the issue is: we need a positive approach that is humanistic, that is grounded in the real goods, not reductionist to, you know, either Bayesianism or thermodynamics, but really focused on the underlying human goods.
If you are, as we are, investing in, for example, application-level companies that are using AI, if you're worried about solving problems around autonomy, and we'll discuss what that means, you are implicitly making a bet and saying that there won't be a singularity or an AGI within the next 5 to 10 years, right?
You're saying that there is a human good that needs to be jealously defended. That's what you're saying. And you're saying that no one else will do it. We have to do it.
So you're agnostic to the speed of AI development. You're agnostic to the development scenarios. You're simply saying: whatever path we go down, this is a human good that I want to protect, and I must build in order to protect it.
Let me push you in one last direction as a devil's advocate for these schools, which is: Brendan, you say that you are agnostic to the AI development timeline, and the core focus is figuring out how AI, regardless of development timeline, can help these human goods flourish. And that's why, for example, the main questions we ask at Cosmos are around autonomy, which we'll talk about soon, decentralization, and truth-seeking. The devil's advocate is to say: but if you think AGI is around the corner, then all of your energy should be focused on getting that right and aligning it, and it will be able to answer these questions much better than we can today. And this is what Bostrom said when I was interviewing him, which is: philosophy has a deadline. And there are certain philosophical questions that are more urgent if you take that AGI-is-around-the-corner kind of idea. So even though you claim you're agnostic, you're still not agnostic, right? Because if I told you, let's say I'm an oracle, I come from the future: in two years from now, AGI is going to come around the pipeline. You wouldn't be doing what you're doing now, right? These questions we can save for later. There are more pressing questions to resolve. Yes.
Wrong. Yeah. So I think, you know, for example, when we get into autonomy, that is a lived practice. In other words, that is something that humans do, right? And so the thought that we will build AGI and AGI will figure out autonomy is nonsensical. It's a category error. Like, it can't figure out something that must live within us. In other words, we must self-develop, we must self-direct. And so we can use AGI instrumentally in that pursuit, but it is not a thing to be figured out. It's a practice. It's something to be lived.
I see.
And so I also want to clarify that my agnosticism doesn't imply a withdrawal. In other words, we are building the future we want to see.
Okay. Well, this is a perfect segue. Let's talk about the risk that I don't think anyone in the Valley has really focused on in the way that we have: autonomy. What is the risk, as well as opportunity, as it relates to AI and human autonomy?
So when you think about the greatest goods in your life, you probably think about things like friends and family and loved ones. You might think about the pursuit of wisdom. That's, I would say, one of your highest goods. Creative endeavor, that sort of thing, right?
It's actually eating, but wisdom is number two.
Yeah. Eating. Exactly. That, I would say, would be one of the lowest that is necessary for the highest. So this is an interesting point about how the lowest things in us are needed for the highest as well. But these kinds of goods, whatever you hold to be your highest goods, what's common to them is that they cannot be obtained on a platter. You know, AI can't give them to you, except for eating, which I'll hold to the side.
We'll buck in. And they have to be attained as the result of some kind of self-motivated striving. In other words, you had to get out there. You had to try things. You had to enjoy it. You had to experience it. You had to get hurt by it. Being able to discover and develop one's gifts, being able to deliberate using reason and to line up our actions and be able to pursue them. So this deliberative capacity for self-direction is, I think, the thing I want to call everyone's attention to. This is autonomy. And without this self-direction, we cease to live fully human lives. We may act in the world, but it isn't really our life to live. And I say all this so far at the level of the individual. The other piece of it is that it's very important as a society, particularly a democratic society, one that self-governs. And this is because we depend on individuals who can form views, who can act on those views, who can self-govern. Without that, we lose the greatest bulwark against despotism.
Right. So draw it out practically: how can this current AI wave harm or accelerate us being autonomous agents?
Okay. So the phrase that I would stick in your mind is autocomplete for life. What I mean by that is we use AI systems, we obtain the incremental convenience from them, where we get not just the next word in the sentence, that's the autocomplete everyone's familiar with, but also the next decision, the next job recommendation, the next friend, the next relationship, the next purpose. In other words, we can sort of ladder up what AI can do for us, and it feels very harmless. It feels convenient and probably useful, but it adds up. It causes a kind of erosion of choice. When we offload, we can see at the level of fMRI, but certainly we all recognize this in our lived experience, that we atrophy. In other words, we habitually offload in a way that causes us to then lose the skill. You see this, like the fMRI stuff is, if you do a lot of speed reading and not a lot of deep reading, you lose some of the ability to do that, you know. Or if you do a lot of calculator-based arithmetic, you lose the ability to do that. Now I think an important point has to be made: why is this not just another version of that, right? Google Maps, right? I can't drive very well without Google Maps. You're probably the same. Actually, you live in New York, so you probably can't drive, period. So then the next question becomes: why is AI in particular a problem for this?
Because this is a problem for all technology. Books and memory, right? Or driving and being able to ride a horse. Yeah. What's
Well, actually, I think before we even talk about that: I framed it as a problem, but it's actually a beautiful thing, right? And you and I have talked about the quote from Alfred North Whitehead, that the measure of civilizational progress is the number of important operations of thought we can perform without thinking about them. Right?
It's a brilliant, brilliant quote.
There are examples of this. My favorite is Max Verstappen, the Formula 1 driver. He's a kind of prodigy, you know, a special driver. And when it's raining on the course, he can talk to his pit wall, his pit crew. And it's because he has made it autonomic. Yeah. He has done it so many times that he can actually think about strategy. He's talking about the tires, you know, while he's going 220 miles an hour. In fact, I remember one of the races, I'm a big F1 fan, he was watching the other drivers on the jumbotron and commenting on their races. I mean, it's almost like he's just having a strategy thought while everyone else is clinging on at 5 Gs or whatever they drive at. And so anyway, it's a beautiful thing. It's how we build the edifice of civilization. It's how we do the higher things. It's great. So it's a bit of a paradox, right? On the one hand, it's great. On the other hand, it's problematic. Okay, so now we've set the contours. So coming back to the key question of why AI is a special case.
Yeah.
So one is you have to think about what it is that you are offloading and thus potentially eroding. In the case of calculators, it's calculation. In the case of maps, it's positioning in space, navigation. In the case of writing, it's essentially memory, primarily memory. And so these are the kinds of categories.
Never before has it been possible to offload, and therefore atrophy, our core practical wisdom, or our core deliberation, I should say, that leads to a kind of wisdom. So, you know, now you are talking about something that is necessary to self-direct, that is necessary for moral judgment, that is necessary for us to decide what is good for us. And so it's a different kind of thing that gets offloaded. The other question you have to ask, okay, now that we've pinpointed that this is a very precious thing that we should be careful not to offload, then you have to look at how pervasive it is likely to be. And AI is clearly something that can scale. It's clearly something that can be hyperpersonalized.
Already, 20% of human waking life is mediated by algorithms.
Social media algorithms, yeah.
Social media, algorithmic feeds, not just LLMs, but AI that determines or guides or shapes what information reaches your mind, what thoughts therefore form within it. And so the scale is already very significant. But what that means is that you might not encounter different possibilities. In other words, you might not realize that there's something else out there if you have been sort of epistemically narrowed to a high degree. I think another mechanism that's important is: how do you pull out, how do you recover, right? In the case of calculators, you can just do kind of the inverse operation and check it. In the case of a lot of the things AI does for us, it's very hard to check, and it seems authoritative. You know, it can answer questions like, what is justice? Like, no one knows, but, you know, you give it that kind of question. And if AI seems authoritative and fast, and the computational cost of checking it is very high, we don't check it. This is a common automation issue. We don't check it. So that fact, combined with the narrowing, destroys the possibility of error correction in the long term.
So let me summarize the conversation so far for our audience, which is that all technology, as it gives you a superpower with its left hand, takes something away, through dependence, with its right hand. This trade-off is worth it if what is taken away isn't so central. What makes AI special, in your view, it sounds like, is that because it's the technology that's most similar to humans, it can potentially take away practical deliberation. So it's the thing that you lose, practical reason, versus navigation versus calculation versus memory, as well as the scope with which you lose it, because it can be embedded in everything. That's what makes this extremely dangerous, along with the fact that it is hard to recover, because the way you might audit it, the way you might pull out, is through use of the very same thing that it atrophies.
I see. Right? And so obviously, I imagine people are going to be curious about the solution. What do we do about this? But before we go there, I want to better understand the concern by giving you a thought experiment. Okay? I call this thought experiment the omniscient autocomplete. So, let's say, whatever practical question you give it, it'll always give you the best answer for what to do. Should I marry Sally? Should I marry Susan? And you know that it's the best because historically it's been verified. Okay. So you run backtests, and it always gives the right answer. And every little thing that you've tested, and that your friends have tested, it always seems to have given you the right answer. So you're pretty confident, empirically, that it always gives you the right practical answer. How would you use this system?
I want to accept the thought experiment, but first I want to understand: is it omniscient through time? Like, is it an oracle that, in the middle of the way through an NBA season, when we don't know who wins the championship, right, does this omniscient oracle know that? And I raise this because there is a class of knowledge that is not sort of computationally reducible. It's something that is only generated through actually playing out the thing, you know, right? And this is, for example, how markets function: across many different variables we generate knowledge, knowledge that is known to no one, because it isn't even in existence in the world.
So let's say that it's not omniscient through time. Let me put it this way: the wisest human ever, let's say Socrates, it would make the decision that they would make. Given imperfect information, date Sally, date Susan, we don't know if Sally or Susan has cancer, but given all the information, it makes the best possible answer. Yeah.
The reason I asked the question is because there is a mode of operation that is suitable to exploiting knowledge then known. Yeah.
In other words, like kind of what an authoritarian government does, right? There is quite another that is suited to the maximal generation of knowledge, right? For things that are not known, and won't be known except by sort of playing it out, right?
And so if you lean too hard on the former, on the exploitation of the knowledge then known, you kind of deplete the stock, right? You cease to generate the new. And I would argue that the real goal, from the consequentialist frame, is that we should want systems that allow the anonymous individual to achieve his or her unknown ends. And if we want to do that, it is not by simply exploiting the knowledge then known. It is by maximally eliciting the use of knowledge.
Right. But the AI could tell you to do that. So the AI could say, you know, given what you know now about Sally and Susan, go with Susan, but be open to it, and then ask you questions while you're dating Susan. You know, every time you ask it a question, it will come out with the best kind of practical deliberation. Yeah. So how would you use it? Would you have, like, a pair of VR goggles so that it always tells you what to do? Would you never consult it? Would you consult it occasionally?
Mhm. So I have a four-year-old and a six-year-old, and I kind of am raising them with the idea that this is the world they are entering. Right.
Okay. And so what I've done is, on the one hand, try to utilize that oracle to develop their skills. So that telos is a self-development one. And so my daughter will do math that AI could trivially answer, but she will still do it. So AI poses questions and she does the math, and this works pretty well. But then I time-limit that very strictly, and this is through a curriculum at Alpha School, which, you know, is based in Austin. So there's a time delimiting. Then, for the remainder of the day, there is an experiential learning component that is completely without this oracle. In other words, she goes outside, she tries things in the world. She learns to ride a bike for five miles without stopping. She climbs a rock wall. She speaks in front of a hundred people. Okay. So there's a non-oracle component. There can be light consultation, like, if I want to learn how to garden, how do I do that? That's a consultation where I would rely on the oracle to tell me how to do it. The last component is then stimulating, through probably human discussion, the kind of characteristics and habits of mind that are necessary to retain self-direction in a world like that. Because my biggest concern would not be correctness. Correctness is solved. It would be the concern around enfeeblement, around not living a full human life because I no longer self-direct, because I become a sheep.
And so what do I mean by that? Well, I want to be able to cultivate a reflective metacognition that asks: what am I, versus what is the pull of the algorithm? In other words, if this is an exo-system around me that I'm using quite regularly, I sure as heck want to know what I am, right? This is an extended part of my mind that I endorse, that I transparently use, but I still need to know what my boundary is so I don't get lost in it, right? We need to know how to think in connection with machines that could do the thinking for us.
Right? So let me ask you this. One way to frame your answer is to say: I will consult your oracle, but I always need to make sure that if I follow its directions, I need to know the steps. I need it to tell me the full reasons. Is that fair?
I think it's more than that. I need to be able to exercise my deliberative capacity. In other words, knowing the reasons, it is not enough to just know them, right? I mean, this is the Meno, right? You know, this is the idea that we need to have the statues of Daedalus, I think, you know, tied down. And the only way we can avoid having this thought run away is through working through it, giving an account, as Plato would say. And so I need not just to know the reasons, but I also need to be able to work through them myself,
if that makes sense. Yeah. And the telos is really important as well, because there's a lot you can do. Like, Rousseau wrote in Emile about how to tutor, how to raise an autonomous child, right, and this gives us Montessori in the end. But one thing he does there is he configures the environment for the boy in a way that is very, I don't know, paternalistic, controlling that environment, but there's a progressive letting go, because the end goal is self-development. With an AI system, there's no end goal of self-development. You know, that oracle doesn't really care about that. That oracle would just as soon you be perpetually dependent, that you have a habit of passivity and you just do what it tells you to do, right?
So it's crucial that you set that goal, right, of self-development and not of, you know, unthinking dependence.
Yeah. So, um, I agree with much of what you said. One being that the shape of a good life is to be self-directed, right? So even if you're making all the right choices, dating Sally and not Susan, if you're not feeling in the driver's seat, if you live your entire life like that, that in itself, the form of it, even beyond the content of your decisions, robs you of the good life, ipso facto, right? That's what you're getting at. So I would definitely not use this oracle in, like, VR-glasses mode, where it would always just tell me raise your left hand, raise your right hand. And I would gladly trade off suboptimal decisions for the fact that I'm making the decisions. Yeah.
But here's why I want to challenge you. What if the oracle is so advanced that we can't even understand its deliberations? And before this sounds too dystopian, I'll draw an analogy to religion, right? Dante, when he goes to Paradise, asks the eagle of justice: why does the poor virtuous pagan who lived before Christ, who never even had a chance to see Christ, deserve to go to hell?
The eagle of justice says, none of your damn business. Yeah.
And in the next canto, the eagle of justice says, I'm the eagle of justice; I don't even know. That's God. Okay. These are almost his exact words: I delight in following God's will. The structure of faith, as I take it, is that you try to validate it as much as you can. You know, is it plausible that Jesus rose from the dead? Are the accounts accurate? But after you validate the oracle, the religion, the god, to your rational faculties, you, as a leap of faith, are willing to take actions even if you cannot see the full reasons for those actions.
So if you now transplant that to a kind of AI oracle, I imagine you wouldn't be comfortable with that. And so my question would be: do you think faith and religion is just a deficient way of living the human life, even now?
That's a big question. The most beautiful part of the Bible for me is the end of the Book of Job, where, much the same as the eagle of justice, you have...
Yeah, where were you when I was building the cosmos, right? That's what God says.
Yeah, exactly. And these harms have befallen Job, and it's this beautiful poetic explanation of the limits of human capacity to understand God. Now, in that case, God relates to us from the standpoint of some form of self-development. In other words, we have a relationship with God in which we develop. That doesn't exist in the case of the omniscient AI. And so it's much more likely to form a kind of passive relationship. I think it's a close call, because we do sort of try to give ourselves up to Jesus.
My life is not my own. This is what they say. Right.
Right. But I think the fact that there is some reciprocal expectation, not to say we're equal to God, but some reciprocity, sets a slightly different frame. We also choose, in a real sense, right,
to engage in religion, and to sort of legislate upon ourselves that ultimate question. And my concern with the habituation mode of AI is that we may cease to choose. We may unknowingly bind ourselves to a life of dependence where we are no longer choosers.
Right? I see. I mean, it's not just religion, right? What I'm trying to highlight is that there's an entire sphere of human activity, and maybe you'll consider it all to be deficient, but many serious philosophical traditions don't, that has as an epistemic mode following the advice of an authority whose legitimacy, kindness, accuracy, and truth you have empirical reason to believe in, even if you can't understand this specific piece of advice, right?
Faith is one example I gave. Yeah, maybe the military is another one, right? Where you don't know the full reason that you are being given orders. An arranged marriage might be another one. Parental relationships... that's a bit different, because you're a kid there.
But do you see what I'm trying to highlight? That there's an entire sphere of human activity where you don't trust and reason about the advice itself. Yeah. But you evaluate the deliberator. You evaluate the advice giver.
Well, so I think, you know, were we to be first-principles reasoners about everything, chaos would ensue. I think we actually have to unthinkingly accept quite a lot, not just to be religious but to function in society. I think we do that.
That is also the way in which I think we should relate to tradition. In other words, we should not use reason to create tradition or to scrap tradition anew. This is where I think Mill gets into some trouble and where Hayek's Burkean reverence comes into play in a better way. Okay. So there's a conservative strand that I very much embrace as it pertains to the epistemic value of tradition. So that's one point: I totally agree with that.
The point about the military, right? I was in the military. I think Kant has a good framework for thinking about the laws that we give ourselves. I mean, he's talking about the moral law and the categorical imperative, but more broadly he's talking about the idea that we can restrict ourselves. Yeah. We can be autonomous and not heteronomous insofar as we rationally choose to put something on ourselves. So if I choose to join the military, and then in the military no longer get to choose, I have to follow orders, that's completely okay. That's fine. That's autonomous. And in fact, I can exit the military. I can hold the officer above me to account through court-martial. And so it seems to matter very much whether we choose to willingly sacrifice our autonomy. I'm trying to think of other examples. You know, we choose to do jury duty. We choose to do a lot of things that sacrifice it.
Yeah. So just to be clear, we're not talking about AI yet; we're talking about human interaction, pre-AI. It sounds like you're fine with evaluating the legitimacy, let's call it, of an authority and outsourcing partial deliberation to that authority, meaning following certain kinds of orders from that authority without understanding the full reasons, if you maintain the ability to evaluate it, even if not fully, and the ability to exit.
Right? Therefore, let's just transplant that exact structure you talked about with the military onto AI. Would you be comfortable with that? Again, this is why I set it up as the omniscient autocomplete, right? The oracle that, on every test you've thrown at it, has given you the best practical decision. So in this case, would you be comfortable with it just telling you to date Sally or Susan? It says date Sally, and you say, explain the reasons to me, and it explains some reasons, but you still can't understand the full picture. Just like the eagle of justice. Would you be okay outsourcing decisions like that to it? Because your reason, right, is kind of perfected. Your reason is as good as it can get. But again, I'm just assuming that there's a limitation to human reason and that AI can potentially be higher. And so if you're as good as you can get reason-wise and you have the ability to exit, are you okay following that order?
I think, on a case-by-case basis, I would be willing to do it. I hesitate a little bit on marrying Sally. One, because I'm married to Adrien. But no, the other is there's a critique of utilitarianism that Bernard Williams brings. It's called One Thought Too Many. He essentially poses a scenario in which someone is about to drown in a rescue situation. And what he says is that though the philosophers might want you to do a calculation, what you should do is go with your moral intuition. You should just act. In other words, you shouldn't actually run a calculation of any sort. And while in this case I'm not running a calculation, I'm just kind of deferring, I think on questions of love, yeah, it would be one thought too many to have an external agent.
But this is how arranged marriages in all the premodern societies have worked. Okay, and maybe if love is the tripping factor, let's say starting a company. Yeah. Like, should I start company A or company B? Let's set love aside, because I understand it's more subjective. Would you be comfortable outsourcing that decision if you were able to evaluate the oracle in all the ways that we described?
Yeah. I mean, maybe it's semantic, but I would outsource, giving consideration to that question. And, you know, if I accept the thought experiment, maybe I need to give very little consideration, because it's already answering for me, not just in general. It's not just saying what is the most profitable opportunity here; it's actually answering at the level of me. I think I would be comfortable using it. I mean, I would want a system of safeguards in place, and provided that was in place, right, I would take advantage, and in that way I would be competitive with others that were doing the same,
right? And this thought experiment is actually very helpful, teasing out that the safeguards are: you want to make sure your reason goes as high as it can go, right? It's constantly in use; you're as practically sharp as you can be, trained by the AI. You want the ability to exit, and maybe to compare systems. You want different points of view. But if those conditions are satisfied, and again I'm not suggesting that Christian faith is like this, you're willing to make the leap of faith and outsource certain decisions.
Yeah. Yeah. The thing I own is the means hierarchy toward human flourishing, in a way, and the deliberation therein. In other words, I can use tools as instruments to attain my goals, but I don't want my goals to be set for me. And so if one of my goals is to start a company, I can use AI to help me determine what that company should be, but I don't want it to set the goal for me. In other words, I don't want to be a blank canvas and just say, "What should my life be?"
Right? However, let's say you had as one of your goals: I want to build a company to, let's say, just make a lot of money, right? And I think you and I both agree that a purely mercantile life is not the best life. Wouldn't it be better if the AI didn't force you to adopt another end, but forced you into a journey such that you discovered the end of being mission-driven and helping others? Does that make sense? So it's not that the AI is saying, "No, no, here's the real end you should go for." But the AI is also not optimizing on the end that you told it to optimize on. Does that make sense? Wouldn't that be better?
Yeah, it does. I mean, it's a kind of adult version of the tutor in Emile, honestly, because you're setting a configuration for this child, in my case me, to develop. Now, that presupposes that the AI cares about my development, right? It also assumes that the AI understands development in the way that I understand it, which is to say a process of self-direction that I do. And even then, I think there is this nagging question: that has been externally computed. I am now the agent of the AI. I'm the agent of an AI that appears to be highly benevolent and very focused on a kind of Humboldtian, Millian vision of human flourishing. But the AI in your thought experiment is determining my end for me. It's saying that I should be developed in this way.
Yes. But crucially, it sets you up on a journey such that you take on that end yourself. Does that make sense? It's not beating you over the head, saying money-making bad, money-making bad. It's, for example, taking you through an IPO and then giving you a business decision that makes you lose a lot of money and realize the relative worthlessness of money.
Yeah. A good analogy here is parenting, right? Parenting. Well, another good analogy is the state. Do you want a system of laws, a state that views its role as developmental, that tries to maximally endow you with autonomy? Or do you want a state that is more like a night watchman, that preserves space in which you can experiment and try things and learn for yourself? The latter tends to have far fewer risks of paternalism, right? Because it genuinely is not the role of the state, in this case, to inculcate this habit in me. And I think that's a foundational position I would take. In this case, we're asking: could AI fill that very benevolent, paternalistic role?
I'm not asking could it fill it. I'm saying, let's assume it could. Would you be happy to have it? Right. Because obviously neither you nor I believe that current AI can do this effectively at all. Yeah. And maybe even in our lifetimes we won't have this. I'm asking you the philosophical question. If you take the premise that it can do this, would you be okay with it? I.e., shaping your ends in a way that you wouldn't agree with now, but that in some sense is the right answer to the human good? Because you also don't want to go completely relativistic and say there's no human good whatsoever, right? That it's just whatever ends I want to have now.
Yeah, I tentatively accept it, insofar as the AI was setting up the maximum space for me to have this Siddhartha-like journey, you know, this developmental journey. If AI was scaffolding that for me, then it would be highly consistent with the idea of human flourishing, especially if I'm directing the AI to do so. In the case where AI is doing this kind of surreptitiously, I'm a little less enthused. But I'm trying to engage with that as a completely unrealistic framework, a thought experiment that could be supported.
So you don't have a philosophical problem with it, right? You have engineering doubts or something.
I have practical problems with it, beyond just engineering. But the philosophical premise, I think I tentatively accept.
I see. So obviously none of our AI systems today are anywhere close to this omniscient autocomplete. What do you think our conversation and your thinking around autonomy have to offer engineers and builders today in building systems that support and enhance autonomy?
Well, I want to say that the mere fact that AI can't do the omniscient autocomplete thing is only part of the story. And this is a really interesting notion: there is, on the one hand, the kind of epistemic question of whether it can do it, and we agree it can't. The other question is why we might feel as though it can, or why we might want it to have that role in our life. That's a psychological question. So there's the epistemic one and the psychological one.
I'll make it really tangible with an example. A year or two ago, there was a 42-year-old guy named Victor Miller in Cheyenne, Wyoming, who ran for mayor.
And what made his mayoral bid unique is that he ran essentially as the meat avatar of ChatGPT. In other words, his pitch was: I'm going to run for mayor, but I'm going to turn around every question that I get asked and put it to ChatGPT.
Like the omniscient autocomplete that I suggested.
Yeah, exactly. He didn't win, but it's interesting for a few reasons. One is that it's potentially prophetic. We may have AI playing a major role in ruling, right? That's one reason it's interesting. The other reason is that he thought this was a good idea.
And I don't think he was making an epistemic claim. I don't think he was deeply analyzing what AI could and couldn't do. I think he believed that it was a good idea. And this is, again, a psychological question. In this case, we want to believe that all the blood and treasure we spill on politics can be solved by a ruler that has access to truth, that's authoritative, that's seemingly impartial, neutral, right? And so I think we really have to keep in mind that your scenario is far out, but it almost doesn't matter. People right now believe that AI should tell them what to do. There are Claude boys, literally teenagers who call themselves Claude boys, who will wake up and do what Claude says and not otherwise.
Really? So what do they do? Are their lives good, or...?
Uh, no. But I mean, on what basis, right? It gets back to this philosophical question. Are their lives good insofar as they make fewer dumb teenage errors? Probably, actually. And the point is, this is a great litmus test for: do you think this is a good idea?
Yeah. Right. I have been in closed-door rooms with people who are really high up at the AI labs. They tend to be effective altruists, and they say that we are being rebels, foolish rebels, if we don't listen to the AI. They truly say this. And it is because...
Well, it's religious language, right? Rebellion, like the fallen angel. It's, how dare you rebel against God, right?
Exactly. How dare you, you silly rebel. And that is because they have a view that the things we do should be viewed purely through a consequentialist lens. In other words, they don't have a thick notion of what it means to be a human. And if you don't have that, why not take the optimal path, whatever that means? Right.
This is a major issue.
And I think what you and I are both suggesting is that the optimal path ceases to be optimal when you only think about optimality. When you only care about date Sally or date Susan, when you don't care about the autonomy, the agency, the self that is willing, if you give that up, your life is ipso facto going to be bad, right? What is the point of optimizing a life that ceases to be your life to live? Right. Yeah. But the counter, and this is what I was trying to show with the thought experiment, is that AI, if it does become this omniscient practical reasoner, and it's clearly not there yet, can potentially help us direct our own lives better.
Yeah. Yeah. Right. So you're willing to grant that.
Yeah. And so I think the mechanism where I would be much more wholesomely embracing of this is, you know, if we built into AI the goal of fostering better self-direction, right?
And this is a little bit of where we were getting to in the scenario, but how do you do that, right? One way you could do it is to reject the idea that AI should be an answer machine always and everywhere. That's what it kind of is today. We ask it a question, and it doesn't matter what question we ask, a math question or what is justice: here's an answer, right? In reality, the way we think about questions is that we have a question, and it invites other questions. There's kind of a ball of questions around the central question. And then we have to navigate these things. We have to balance that search for broad understanding with a need to act in the world. So we have to reach some kind of equilibrium that balances both desires. Okay. If you had an AI system that could guide that, that could spur, that could raise questions, that could help you make judgments, that would get right at the core of what self-direction and deliberation entail. Yeah.
And I think there's no reason why we can't do such a thing right now, right? But the difference is that it tends to open up the possibility for self-direction. It raises questions. It doesn't close things off and just give you an answer. In other words, the problematic use case is the autocomplete, right? We should be very concerned when anything approximates autocomplete. When AI is used instead as a provocateur, as a raiser of questions and a helpful tool for deliberation, then yeah,
you know, that's wonderful. An example here: I'm talking with some of my friends who are junior consultants or investment bankers, and they're just using it to replace their own work, right? That's the kind of problematic case. Or, you know, the kid who uses GPT to write his own philosophical essay. The way I'm using GPT right now is as a live tutor, to kind of read the text together with me and ask different questions. And so those are the two different paths you want to put in front of us. And this, not EA, not existential risk, this is the true challenge that's ahead of us.
Although I will say, I'm not sure that what analysts do at consulting firms is that valuable. So if they can make a PowerPoint, great. But I think ideally what would happen is that that frees up space for them to do more, whether that's higher-level strategic thinking or reading philosophy in their spare time. The point is that this goes back to the benefits of offloading. There are some things that are very good to offload and automate. But we should be very careful about offloading and automating the core deliberative capacities, right? And hopefully our audience can see why the answer, even if AGI is around the corner, of just focusing on alignment and building AGI and having AGI sort out these problems of autonomy, is short-sighted: because by definition we are the ones that need to do this. Or, another way to frame it, what alignment means in our kind of philosophical framework is to build AGI that enhances autonomy.
Yeah. Right. And that's why, even if AGI is around the corner, we have to be building AI with this fundamental tension of autonomy and dependency in view.
Yeah. Nick Bostrom is correct that philosophy is on a deadline. He is dead wrong about the role of philosophy in thinking about the deeper conception of the human good that AI needs to, you know, uphold.
So we talked about the existing landscape of philosophical schools. We talked about what you actually think matters, which is the autonomy question that people are overlooking, and how it plays out in AI. But now I want to dive deeper into autonomy itself. Okay. You described autonomy in your writing as the central good. Be a bit more precise here. What do you mean by that? Is it sufficient for the good life? Is it necessary for the good life? Would you ever trade it off for other goods? Tell us about that.
Yeah. So I think it is necessary to live an autonomous life. It's something that develops like a muscle. We try things out. We self-direct, maybe badly; certainly as kids we do it really badly. We don't even select our own projects, essentially. And then we develop and we get better and better, and as we do, it becomes a more and more important contributor to our happiness. In other words, we value it more. It becomes more and more central to how we think about what pleasure even consists in. And so it's this developmental thing that happens through our life. And that, for me, is what makes it the central good: it is the thing that unlocks our ability to know our own selves, our own gifts, to develop those gifts, and to use them to live the life that we want to live.
Right? However, it's not sufficient for a good life, because one can imagine, and this is another edge case, the fully autonomous man or woman who is able to go through life like this but just fails in all of his or her endeavors, right? If all his family dies. This is the Aristotle example in the Nicomachean Ethics, about Priam. So this is not the only good.
I agree, right. Yeah, I agree it's not the only good. It has a relationship to other human goods, but it is not the only good, and I agree that it's not sufficient. I also don't think that autonomy presupposes choosing well. That is actually a consequence of having autonomy: you have to allow people to choose very badly. In other words, people choose a self-directed path that is very harmful to them, and you have to let them. So it is not a prescription. I'd say it is causally efficacious, because on balance I think it tends to lead to happiness, but it certainly is not guaranteed. It also comes with the weight of responsibility. I mean, there's a lot to be said about the burden that one feels when one can freely choose.
Yeah, well, I want to push you on that point, because autonomy as you described it is, as I live life, one of my central goods, which is why I'm doing this here and not in a more structured setting. But I was really surprised when I entered the workforce that most people don't seem to like it. When I started managing people out of college, I structured their work environment how I would like to be structured, which is: this is our goal, I'm going to explain to you the reasons why we're going after this goal, and you choose how you get there. You just need to get there.
Yeah. Many people hate that, and they won't say, I want less autonomy. They won't frame it that way. They would usually frame it as: I want more structure in my life. So they want to be told what to do. They want... Yeah.
And so how do you reconcile that? Is it just people like you and me who value this, and it's subjective, or...?
Yeah. This is one of the most worrisome threads that my wife will raise with me, because she'll be like, "Brendan, it may be the case that you are just an outlier, that you care about this in a way that other people don't."
Intuitively, meaning you may be rationalizing it, but you may actually just care about it deeply. Intuitively, in the way that you might like chocolate; that doesn't mean everyone else likes chocolate.
So, we have lots of individual variation. I totally allow for the possibility that people would like it more or less based on their, you know, sociobiological kind of path. And what I mean by that is that it could be almost epiphenomenal, that somebody has certain genetics and a certain predisposition. Okay. But I generally think that it's much more determined by the conditions in which we live and by the way in which we're habituated as a consequence. The transition I'll mention here is that in aristocracy, one of the benefits was that people knew their station. They knew their role. I'm not advocating for that; obviously, I'm not advocating for a kind of hierarchical system like that. But that was one of the silver linings, right? In America, when Alexis de Tocqueville came, he observed a generalized anxiety. He called it inquiétude. It's a French word for anxiety without a particular object. And why he thought it existed was because there was no one to tell you what to do. You had to make your own way. And so you look to the majority to tell you what to do, right? You look to the state. These are the pathologies of democracy: you sort of fill the void, and it appeases you, especially if you take religion away. By the way, religion grounds you, and it minimizes this.
Well, it's the ultimate form of telling you what to do, right?
And it grounds you, and family life can ground you as well. Lots of things can ground you, but absent those things, you kind of drift and float. In America, it still was the case that people were very self-directed. The classic example of this is the Jeffersonian ideal, which thought of autonomy as being the farming life, right? You grow up on a farm. My wife's family has a ranch. They're ranchers. And it really is the case that you are very self-directed. So, I get it. And I wonder about the consequence of being a nation of farmers. Almost all of them were essentially entrepreneurial farmers. Yeah. I mean, like 85% of free Americans were that way, and now we're a nation of employees, right? That's given way: the industrial revolution caused us to enter employment. We are now subservient to process and to people in a way that we were not. And I say this to bring it all together: it's the conditions. Either the regime, like now we're in a democracy and no one tells us what to do, which is very scary for people. But also the fact that we moved to an industrial system in which people tell us what to do constantly. Like, can you take vacation? No, you know, not today. These are kinds of limits on your self-direction. And as a result, we've now come to like it less, to want it less, right?
It's Stockholm syndrome of sorts, right? Yeah.
of sorts, right? Yeah. You mentioned Stockholm and it makes me
You mentioned Stockholm and it makes me think of uh this uh uh case of Germany,
think of uh this uh uh case of Germany, which I fully understand is not in
which I fully understand is not in Germany, just so people don't think I'm
Germany, just so people don't think I'm an idiot, but um East and West Berlin
an idiot, but um East and West Berlin were two radically different systems, as
were two radically different systems, as different as it gets. You had the same
different as it gets. You had the same family, same genetics split across the
family, same genetics split across the wall, right? And
wall, right? And the East German system under Soviet
the East German system under Soviet control uh was habituated to follow
control uh was habituated to follow orders. The West German was much more
orders. The West German was much more like the general west and and less so.
like the general west and and less so. And then during COVID, like much more
And then during COVID, like much more recently, you had a very different
recently, you had a very different response. You had much more obedience
response. You had much more obedience among the people who had been habituated
among the people who had been habituated by the East German system.
by the East German system. Yeah.
Yeah. Why I say that is it appears that the
Why I say that is it appears that the habituation is longlasting.
habituation is longlasting. Like you can grow up under a system in
Like you can grow up under a system in which you're told what to do and then
which you're told what to do and then maybe for the rest of your life you are
maybe for the rest of your life you are inclined to do what you're told. Whereas
inclined to do what you're told. Whereas if you grow up in Texas, um you may be
if you grow up in Texas, um you may be inclined for the rest of your life to
inclined for the rest of your life to not do what you're told.
not do what you're told. Wait, wait, hold on. But that doesn't
Wait, wait, hold on. But that doesn't seem to be such a strong response for
seem to be such a strong response for the position that autonomy is the
the position that autonomy is the central good whether you appreciate it
central good whether you appreciate it or not. Right? Because what what you
or not. Right? Because what what you were saying is that nurture can greatly
were saying is that nurture can greatly change how people valued it or not.
change how people valued it or not. Right.
Right. But I thought you would you would want
But I thought you would you would want to argue for the position that
to argue for the position that regardless
regardless of what like whether you valued it or
of what like whether you valued it or not, it is valuable objectively.
Yeah. I don't want to base the idea that it is constitutive of a flourishing life on the idea that it is widely practiced, or even that it is valued equally by all, because again, the mechanism is one of habituation, so it's not going to be valued equally by all. I actually think having highly autonomous people is a total historical anomaly.
Oh yeah, totally. For most of human history, we've either been in cultures with slaves or in hierarchical cultures that really don't have the same presuppositions.
So I think it's anomalous and it's precious, but that's not why I think it's constitutive of a flourishing life. I simply think it is the way in which we discover our nature and the way in which we express it. One of the things humans uniquely do is use reason to guide action and develop ourselves, and it tends to lead to happiness as well.
Okay. So what is the reason that it is constitutive, despite the fact that people can be habituated out of desiring it?
Well, it's because we have a nature, and the only way to discover our purpose, our highest end, is through this autonomous experimentation, through the development that we do through self-direction.
Again, I'm going to go back to religion here, because I think this is the counterpoint. What about someone who surrenders themselves completely to Jesus and claims that through that they found their true selves? Or, in the Buddhist case, someone who surrenders completely to the master, becomes completely obedient to the master, and thinks that's the route through which their development comes?
Well, I think insofar as they surrendered, that is a pretty powerful act of self-direction, in making the choice. I do think it's difficult to square the autonomy lens with, say, Islam, which views...
I mean, submission is in the name, right?
Yeah, as submission. But here's what I'll say about that. I think one of the most critical points one can make about the dynamism of the West is that it is a tradition formed through different visions of the good life, all of which allow for some individual choice about what the good life is. To be more specific on that:
Yeah.
One tradition, probably the earliest Western tradition, comes from Homer, from the Mesopotamian epics, from the Bronze Age, and it's the idea of the heroic life, the life of adventure. Achilles is a good example of this. The Greek response to that is a life of science and contemplation and philosophy, which is to say a life in which Achilles is replaced by Socrates: the contemplative life. Those are two totally different visions of the good life. A third comes in the Hebrew and biblical tradition, in Jesus, which is a life of pious devotion and centers more around the family and other things. I would say none of these are commensurable. They're all different visions of the good life, but they form a kind of tension, and I would say the overlapping area is some measure of individual choice as to what the good life is. It's not the community's job to tell the individual. The other thing I would say is that even for the Christian tradition, insofar as we get to choose that path and navigate that path toward Jesus, there is a submission element to it. But the freedom to be unfree is, I think, a satisfactory use of freedom.
Well, because otherwise you wouldn't be able to enter into contracts, right? We bind ourselves.
And a lot of the most important American figures grappled with these visions and synthesized them in their own way. Lincoln read Shakespeare and the Bible, for example; he's reading about these things and forming his own local synthesis. So I just say that we would do well to preserve those tensions. One of the things I worry about with AI is that the dominant schools seek to come in with an answer: no, this is the one true thing. Those good-life visions are quaint; really what it is is maximizing utils.
Wait, but you're coming in here and you're saying autonomy is the central good.
Precisely because it is the one that leaves open the possibility of deciding the good life. In other words, it preserves the plurality; it preserves that space for the individual. I think that's critically important, because we could just say, well, isn't it just another vision of the good life? But a vision that preserves one's individual ability to question the ultimate question for themselves is very different from one that tries to reduce and replace and say we have found the one true answer.
Right. And just to be clear here, when you say autonomy, is it a very thin, simplistic kind, where I'm the one willing it, whether that's the right thing or the wrong thing? Or is it something more like Kant, where I am willing the thing that is also rational and good?
There's a subtlety there. It's the former, in the sense that I believe one must deliberate, but not that one must necessarily deliberate well. Nor am I stating that it is moral autonomy: I'm not suggesting that what autonomy means is to give oneself the moral law, which is to say the categorical imperative. That's not what I'm saying. I'm saying that we must have a capacity for reasoned self-direction. We can do it wrong, but we must preserve that space. The reason I say it's complicated is that I do think Kant actually enters into it. In other words, I personally draw a freedom-maximizing principle from Kant, and this freedom-maximizing principle is what gives people space to be autonomous. Those are two different conceptions.
Right, I see. So it's not just the simple notion of a whim. Let's say I have a whim to jump off the building right now.
Yeah. Because no deliberation went into that.
But it's also not the case that if I really, really deliberate, like "it would be good to steal Brendan's wallet right now"...
Yeah, that would count.
It would count.
It would count because, even though I didn't deliberate well, I made a fair attempt. And your response was: if I do that enough and I continue deliberating, I...
We have to make mistakes.
Yeah. That's a dumb thing for you to do, to want to steal my wallet. It's over there, by the way. But we have to make mistakes. I think part and parcel of autonomy is doing dumb things, and it's simply the wrong standard to suggest that we must always do things rightly.
Let's say someone lives an autonomous life. What else is missing to get to the good life, or even the best life?
It's a good question. One point that needs to be made is that what I've outlined so far is fairly individualistic, but I think the way in which we learn and experiment is profoundly social. We learn from others, especially people who are aspirationally above us, but not so far above us that we can't learn from them. So I don't want to suggest that we're operating in a solipsistic or isolated way. That's important to say. The other way I think about this is: how do I want to educate my kids? I want them to be autonomous, but I also want them to be autonomous and virtuous. In other words, I view the role of the state as a procedural, formal one: to preserve freedom, to minimize coercion, to provide for security, that sort of thing. But I view the good life as being something more than autonomous; in fact, virtuous as well. Does that make sense?
Sometimes you need to trade off autonomy and virtue, right? One example: say I want to jump out of this building right now, after poor deliberation. There's a good case that you should restrain and limit my autonomy to preserve, let's say, my bodily function, so that I can be more autonomous in the future, but also potentially so that I can be more virtuous, or not kill a person. So do you allow for a trade-off between autonomy and other goods? This is obviously a political question as well. One extreme position would be that no paternalism is allowed, that you can never interfere with autonomy: there are other goods, but autonomy can't be traded off for them.
Yeah. In general, I don't think that the gains to welfare from paternalism outweigh the losses to autonomy. So I would take a very strong position against applying the tools of the state, in particular, to paternalistically deliver welfare gains. I would apply that to things like UBI, for example, as well as many other areas. But security is a good example of a vital interest, as Mill would say. It's not clear that autonomy has a lexical priority over security; security seems preconditional to autonomy. We have to maintain security.
So your answer is that it doesn't have lexical priority, i.e., a trade-off is sometimes worth it.
Yeah.
However, people overvalue the benefits from welfare gains and undervalue the dangers of removing even a little bit of autonomy. That would be your answer.
Yeah. And this is a fundamental conflict, because we value convenience quite a lot. This gets back to soft despotism in Tocqueville: we welcome the incremental convenience from a state that is far-seeing and mild. It's happy to offer it; we're happy to welcome it.
Or from AI, right?
Or from AI, or from any sort of neutral or centralizing force. But what do we give up? We give up the vigorous use of our own capacities. We become enfeebled; we become like a flock of timid, industrious animals. Tocqueville had beautiful language around this. And so this is a fundamental issue, particularly in a democracy, where again we otherwise have that kind of inquiétude because we have no one to tell us what to do. So I think we've now staged the critical issue of our time: we have built something that can deliver the incremental convenience, that can offload our deliberation. We are going to welcome it into our lives. We're going to be tempted more than we've ever been tempted before, and we must find the resources within us to resist.
Yeah. By the way, as you know, I just interviewed the founder of Alpha School, which is where you send your kids, and I shadowed them for a week. I was so excited by that project. For our audience, the quick pitch is that you learn basically your entire K-12 curriculum in two hours a day, instead of six to eight hours a day plus homework. You're like, okay, that's interesting. But what I found when I talked to the kids was that the greatest benefit was a fundamental change in their character: I am capable, I am autonomous in this way. Which is a fundamental difference from how all the kids are being taught today, this teacher-in-front-of-a-classroom lecturing, the Prussian model built to create industrial employees, essentially.
Right.
And so I almost think they're underselling, underpitching what they offer: a fundamental building of the kids' own character, not just being able to cram.
That's precisely right, and that's exactly the benefit that I see in my own children. When I went there to talk about entrepreneurship to eight- and nine-year-olds, I was just so moved by the extent to which the kids had individuated. They were highly individual, very autonomous, and very high-agency as well. And, just to make a cautionary point, I fear a kind of divide.
Right.
In other words, I see the Alpha model, and the vision, by the way, is to get it out there, and I really hope it can scale massively.
massively. But I also understand that if one's
But I also understand that if one's relationship with technology is one of
relationship with technology is one of passivity, one of dependence, one of
passivity, one of dependence, one of doom scrolling um then we become highly
doom scrolling um then we become highly dependent from from the beginning
dependent from from the beginning and we almost create two classes of
and we almost create two classes of people. We almost create one individual
people. We almost create one individual who um for whom it is the best time in
who um for whom it is the best time in history to be a six-year-old and one
history to be a six-year-old and one individual that is um on the path to
individual that is um on the path to become an NPC. Right.
become an NPC. Right. And I I I think we we must avoid that.
And I I I think we we must avoid that. Yeah.
Yeah. And in some sense compared to early
And in some sense compared to early America, we're already there with the
America, we're already there with the employee with the employee versus uh
employee with the employee versus uh versus like gentleman farmership. So
versus like gentleman farmership. So what what you're saying is that there
what what you're saying is that there are a lot of uh things that don't seem
are a lot of uh things that don't seem political which build the autonomous
political which build the autonomous muscle or weaken the autonomous muscle
muscle or weaken the autonomous muscle in civil society. That's right.
in civil society. That's right. Education being one of them, workplace
Education being one of them, workplace being another. Okay,
Okay, we've talked a lot about the intrinsic importance of autonomy, how it's constitutive of a good life. I now want to move on to the extrinsic benefits of autonomy and why it's important to defend it, not for why it's going to make you live a good life, but for a flourishing civilization. Okay. So, I know you've been dying to talk about Hayek, so now's your chance.
Yeah. So, I'm kind of a Hayek stan, and the reason is that I think he's desperately in need of being revived for the AI age. Where to start? My favorite book of his is The Constitution of Liberty, and in that book he makes a consequentialist case for liberty. He says that liberty is useful. The reason he does this is that when you argue for something by starting from axioms and deducing from them, you just invite the challenge: well, I don't agree with your axioms. I don't care if your deductive reasoning is good; I just don't agree with your axioms. So he doesn't do that. He makes a consequentialist argument. I say that because I don't think that's the only reason Hayek holds liberty dear. But he makes a consequentialist argument, and what he concludes is that we should have a minimization of coercion.
What is coercion? Coercion, for Hayek, is a kind of configuring of the decision space such that you do the bidding of another because it is the lesser of two evils. Basically, your decision space is so configured by another that you are no longer taking action on your own plans but on the plans of others. The steps to get there are many, but essentially what he says is that liberty is useful because it facilitates the use of knowledge in society, and that knowledge is what allows the anonymous person to attain their unknown ends. So that's a consequentialist view. How does it do that? Well, he says that most knowledge is practical, primordially practical. It's not the explicit semantic knowledge that we write down. The knowledge in science, what people usually think of as knowledge, he would say is the tip of the iceberg, the droplet of the wave above the ocean of knowledge. What does he mean by knowledge being practical? He means the dispositions and habits that each of us has: the way an entrepreneur thinks about an opportunity, the way a diplomat sizes up a room, the way we ride a bike. Those are all things that are locked up inside of us, and they're either inarticulated or inarticulable.
And so we have this knowledge; it drives our action, but we can't share it. The best way we can share it is through the market. We have a low-bandwidth mechanism called prices, money prices, that allows us to share our knowledge, because as we formulate ends and try to achieve them, we release bits of this knowledge. Of course, I never excavate it and hand it to you directly, but my action is colored by it, so action releases this knowledge. And as this happens in parallel across the entire world, we pursue our independent plans, and the market allows us to equilibrate those plans. Preferences change constantly, so the equilibrium should be thought of as an asymptote, not a fixed Walrasian equilibrium as traditional economics would have it. But it allows us to do that. We share knowledge, and this is what gives us a way to benefit from knowledge that we don't possess.
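[Editor's note: the mechanism described here can be sketched as a toy simulation. Nothing below comes from the conversation; the demand rule, the adjustment rate, and the valuations are invented for illustration, and the model is a bare Walrasian-style tatonnement, not Hayek's own formalism.]

```python
# Toy sketch of a single-good market where each trader's behavior depends on
# a private valuation no one else can observe ("tacit knowledge"). The only
# shared signal is the price, adjusted in response to excess demand.

def excess_demand(price, valuations):
    # Each trader demands one unit if their private valuation exceeds the
    # price and supplies one unit otherwise; nobody announces their valuation.
    buyers = sum(1 for v in valuations if v > price)
    sellers = len(valuations) - buyers
    return buyers - sellers

def tatonnement(valuations, price=0.1, rate=0.01, steps=2000):
    # Raise the price on excess demand, lower it on excess supply. The price
    # approaches the clearing level asymptotically; it is never "fixed",
    # because it tracks whatever the private valuations currently are.
    for _ in range(steps):
        price += rate * excess_demand(price, valuations)
    return price

valuations = [0.1, 0.3, 0.4, 0.6, 0.8, 0.9]  # private, never shared directly
p = tatonnement(valuations)
# p settles in the band where buyers and sellers balance (near 0.4 here),
# even though no trader ever disclosed their valuation.
```

The point the toy makes is the one above: a clearing price emerges from valuations that are never announced, and since those valuations can change at any time, the price tracks an asymptote rather than resting at a fixed equilibrium.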
Part of Hayek's consequentialist argument for autonomy, as described in the second chapter of the book, is the creative powers of a free civilization. What do you make of the tremendous creative powers of unfree civilizations, as well as unfree peoples? Let me give you a few examples. The pyramids and the Great Wall were built by what were basically slaves. Virgil was writing propaganda, right? Pro-empire propaganda. Dostoevsky wrote in exile, imprisonment, and great financial distress. Most religious traditions had their unquestionable truths. And as you know, early modern science and philosophy flourished under severe censorship and sometimes even persecution.
So I think that if you have as your goal to exploit the existing stock of knowledge, then unfree societies can do that. In other words, if you want to demonstrate what command and control can do, you build the pyramids. But to discover new production methods, for example, you need the undirected experimentation and the spontaneous order that arise in free societies. In other words, it's a totally different question to ask how we deliver something at this moment in time based on the knowledge then known, versus how we bear, disseminate, and generate more knowledge in the world and make progress.
But surely you're underappreciating the difficulty and innovation in building the first pyramid, or starting to build the Great Wall, or, from Chinese antiquity, being able to divert rivers so that they don't flood certain areas. These are things they didn't know how to do before.
not just you know these are things that they they didn't know how to do before
they they didn't know how to do before right well so I think you can um set as
right well so I think you can um set as a goal like you have a known goal you
a goal like you have a known goal you want to create a pyramid and then we
want to create a pyramid and then we have a lot of technical obstacles that
have a lot of technical obstacles that we need to get or um we want to go to
we need to get or um we want to go to space and we need to do it with alloys
space and we need to do it with alloys that haven't yet been invented to grow
that haven't yet been invented to grow JFK and Rice University.
JFK and Rice University. This is the kind of like top down uh
This is the kind of like top down uh it's like the pinnacle of the top down
it's like the pinnacle of the top down model is that we can set an audacious
model is that we can set an audacious goal and we can we can often realize it
goal and we can we can often realize it if we have the right structures in
if we have the right structures in place. What I think free societies do is
place. What I think free societies do is they secure a kind of like adaptation to
they secure a kind of like adaptation to the future. So as future conditions
the future. So as future conditions change, people doing lots of experiments
change, people doing lots of experiments in parallel create um varants create
in parallel create um varants create solutions that just sort of bubble up.
solutions that just sort of bubble up. Right? That's one thing. They also are
Right? That's one thing. They also are the best way to grow the stock of
the best way to grow the stock of knowledge in general. Like yes, you
knowledge in general. Like yes, you know, the creating creation of new
know, the creating creation of new alloys was probably accelerated by JFK's
alloys was probably accelerated by JFK's push, but in general, science works
push, but in general, science works through, you know, a kind of republic of
through, you know, a kind of republic of science, you know, like to quote Michael
science, you know, like to quote Michael Palani, where you have distributed
Palani, where you have distributed science. No one is setting the
science. No one is setting the direction. There's no house of Ben Salem
direction. There's no house of Ben Salem from, you know, um, the new Atlantis.
from, you know, um, the new Atlantis. There's no one saying what what science
There's no one saying what what science should do. It's just a a republic that's
should do. It's just a a republic that's loosely connected of people all trying
loosely connected of people all trying things and experimenting.
Right. So your response to the seeming counterexample of early modern science, like Galileo surviving and flourishing under persecution, is to say that there is a republic among the scientists that is free, or relatively so. Right? That's what you would be forced to say.

I'm also saying that there's a tension: Galileo, Dostoevsky, they had to carve out pockets of freedom in very bad conditions in order to do their work. And the Inquisition with Galileo, or the geneticists under Lysenko, are good counterexamples, where you have authoritarian control that for non-scientific reasons wants to shut them down, and succeeds to some degree in those cases.
Right. So your response there is essentially a counterfactual one: Galileo did what he did while surviving persecution, but what if we had had a parallel Europe that was free at the time? Think about how much better that would be. Right?

I think that's brilliantly put, yeah. You look at the examples where great art was created amidst terrible conditions, but what you don't consider is what the parallel universe would have looked like.
Yeah. Okay, well, I'm glad I trapped you there, because I set up a little trap for you. America, which I think any reasonable person would say is the most free society in this Hayekian sense, not totally free, but the most free nation in human history, is clearly amazing in its entrepreneurial innovations and its creativity there. But where is America's Virgil? Where is America's Shakespeare? We've had 300 years, man. And even in the hard sciences, America is okay at practical, applied science, but Einstein, well, he ended up in America, but he obviously came out of a different system. You see what I'm trying to say: the counterfactual response would be a lot more compelling if America had become this land of infinite creativity, with great works being written left and right, and yet the only place where America seems to extend creativity is the economic sphere.
I think it's a good point. I want to say that we are not without achievements in each area, right? There is an American novel; there is a Faulkner kind of thing. But I take your point that we have relative greatness in other ways. And the question is whether something about our system tends to squash the other kinds of greatness we would otherwise see. It's a little reminiscent of Tocqueville's concern, obviously, because he thinks that democracy can create a kind of mediocrity, particularly a mediocrity of desire, of aspiration. He writes about the American merchant captain, actually, and I think this is a very funny and not well-known thing: he looks at where honor, that very aristocratic virtue, all but lost to the world, still exists. He says it exists among these merchant captains in America who, when they save somebody, say, I won't accept payment, because it's a captain's role not to accept it. So it exists. But I think what he says is that we need to keep the memory of high aspirations alive. And so I would say that I probably fault not the free society but the system of education that doesn't cultivate this kind of desire, this highest kind of desire, right?
In other words, your example about the Prussian system of industrial education, the kind of sameness it breeds, that sort of thing I think is one proximal cause. And then, I mean, I don't think I disagree that capitalism tends to produce the kind of person who has material desires. I'm a little ambivalent about this, because on the one hand, I probably agree with Friedman that markets are a consequence of freedom. In other words, we truck, barter, and exchange.

That wasn't Friedman, that was Smith.

But if we're left free to do it, markets kind of arise. Now, institutions play a big role. But if that's true, then on the one hand, markets are just a product of freedom. On the other hand, they clearly shape the way we see the world.

Normative ends. Yeah.

Yeah. But all things do this. The availability of clocks shapes the way we think about time.

Yeah.

These are unavoidable.
Yeah. Well, I actually prepared a quote from one of my favorite passages from Tocqueville that gives his answer about why he said America would never produce her own Pascal. And I think he's been right so far, right? Who is the best American philosopher so far? John Dewey, probably, not a Pascal. I quote you Tocqueville: "If Pascal had had in mind only some great source of profit", this is what you're saying about the markets, "or had been motivated only by self-glory, I cannot think he would have been able, as he was, to gather, as he did, all the powers of his intellect for a deeper discovery of the most hidden secrets of the creator. When I observe him tearing his soul away, so to speak, from the concerns of life, to devote it entirely to this research, and severing prematurely the ties which bind his soul to his body, to die of old age before his fortieth year, I stand aghast and realize that no ordinary cause can produce such extraordinary effects." So again, I think Hayek's consequentialist argument from creativity and freedom, which I understand is not his only argument, would be a lot more compelling if in the American system you had all these great artists and great creators and great writers in addition to the great entrepreneurs that America clearly does have. But that seems to be the only domain in which American creativity expresses itself notably in human history.
Mhm. Yeah, I think this is a very tough line of inquiry. Was Pascal wealthy?

I think he might have been a gentleman scholar, kind of.

Yeah. Because I think Tocqueville also shares this, and it's a controversial point to make, but multigenerational wealth, primogeniture, estate law, basically whether a country breaks up estates or passes them to the firstborn son, has a big role in the kinds of goals one can pursue.

He says this exactly.

And so, I say it's controversial because people don't like to talk about the literal elite and estate law in that way. But I think Tocqueville is right that to the extent that we break up estates, we give people a starting point that makes them very hungry, but especially hungry for material things. Whereas if you're born into wealth, which I was not, to be clear, then you have a different set of ideas: you're kind of blasé about it, and you either become lazy or you pursue different, higher ends. But I think Tocqueville thinks it's a useful experiment to have.

So what you need to concede, maybe, is the full creativity of the market, or something like that. Obviously the market itself is creative in the entrepreneurial sphere, but you can still preserve your point about liberty itself being important for creativity; just different kinds of liberty are needed, something like that.
Yeah. I'll also say, in defense of entrepreneurs, that some of them have very great desires for humanity that would rival the desires of the most laudable aristocrats of old. In other words, while I do think honor and those questions have diminished, I do think people get into the game for reasons like that; it's just that the expression is totally different from what Tocqueville is thinking of.

So, I want to move on to the last part of our interview. We talked about the what: what you want to achieve, mostly autonomy as it relates to AI. We talked about why that's important, the intrinsic and the extrinsic reasons. Now I want to move on to the how, and it's in this idea that you suggested of the philosopher-builder. So what is a philosopher-builder?
So the philosopher-builder is a new kind of technologist: one who contemplates very deeply the alternate ends of technology and also has the skill to build them in the world. When you think of the philosopher-builder, you should think of Benjamin Franklin, whom everyone knows as one of America's founding fathers, as the face on the $100 bill. What a lot of people don't know is that Franklin was an engineer of a very high caliber. He invented the lightning rod. He invented the bifocal lens. He coined positive and negative charge in electricity. He was also a philosopher. He lived by his thirteen virtues and created the Junto for mutual evaluation and discussion.

When Franklin is at his best, when he really brings to life this idea of the philosopher-builder, is when he's translating a philosophical idea into a practical innovation in the world. He's taking the idea that, for example, knowledge should live outside the scope of authority, outside the church and the state, and translating it into the world through the first lending library or the first network of independent publishers in America. That is the essence we want to capture: the idea that you're thinking about these philosophical ideas and translating them into real-world innovation. Today it has never been more necessary to have that, but I think the institutions we have are failing to produce the archetype. Most universities, I would say, produce pretty narrow technicians or conforming ideologues; I think that's by and large an accurate descriptor. Most tech companies produce people who are very good at building and thinking about means, but who are not thinking about ends beyond customer use-case satisfaction. And think tanks create theorists who don't tend to build. So that's what I'm focused on. The inspiration I draw is that there have been moments when institutions really rose to the challenge. Institutions that come to mind are Cambridge during the industrial
Cambridge during the industrial revolution took a lot of mathematicians
revolution took a lot of mathematicians and turned them into the engineers that
and turned them into the engineers that powered that. Um MIT during World War
powered that. Um MIT during World War II, the Rad Lab in particular took
II, the Rad Lab in particular took physicists, made them into inventors,
physicists, made them into inventors, help the war. And then Chicago more
help the war. And then Chicago more recently um took economists and made
recently um took economists and made them into uh reformers who freed markets
them into uh reformers who freed markets across five continents. So we can do it
across five continents. So we can do it like when a mission has that as their or
like when a mission has that as their or when an institution has that as their
when an institution has that as their purpose and acts urgently we can do it
purpose and acts urgently we can do it that is what we're doing in cosmos is
that is what we're doing in cosmos is creating that new kind of technologies
creating that new kind of technologies and in some ways it's reflective of your
and in some ways it's reflective of your own story right because you you started
own story right because you you started off on the building side uh and then you
off on the building side uh and then you got into philosophy uh later on in life
got into philosophy uh later on in life so tell us that story
Yeah, I mean, it even goes back a little further. My mom was an educator; she taught special-needs kids for 36 years, and she brought us up in what people call a virtue culture, an Aristotelian idea that things like courage and honor mattered. And this was very effective, and it is what caused my sister, my brother, and me to all go into the military. I was a submarine officer; my brother was as well. My sister was the lead medical person when we defeated ISIS in the battle of Mosul. That was a natural expression of this desire for public good that she and my father had inculcated in us. I then went to MIT, joined the military, went to Harvard Business School. So I had this kind of classic STEM-and-business track. And it was only after selling two AI companies and having my second of two kids, my son who's four, that I was really hit with these big perennial human questions: What is the good life? What do you do with the rest of your career? What do you model for the little humans? That's a profound thing that happens when you have kids: you realize that you're on the hook. And I didn't have answers, and I was very dissatisfied with the depth I had; I had cocktail-party-level answers. So I started to read, and I had a mentor named Michael Strong who gave me the gift of a lifetime: a 17-page reading list that started with the ancients' philosophy and went up through the Enlightenment, the American founding, and contemporary debates. It changed my life. It transformed me. It made me more interesting to myself, and it totally changed my trajectory. That's how I shifted from being an entrepreneur to, well, I'm still an entrepreneur, but to being a philosopher.

Entrepreneur.

Sure. Philosopher.

There we go.

Exactly.
Um, you said that you didn't grow up in wealth. And in some sense what's quite crazy about your entrepreneurial journey is how quickly the exits came, right? It was a span of what, 18 months or something like that, with something like $400 million in exits. What was coming into that much wealth in that short a period of time like? How did it manifest for you?

So, I had people around me who played a big role in this. I saw examples of lives that I didn't want to live. And these are people I'm in some cases friends with, so I don't want to be too specific here, but I saw examples of lives that I did not want to live, and I also saw examples of lives I did. I had a friend who is part of a philanthropic network of young people, people in our 30s and 40s who are doing pretty serious philanthropic work. And I saw him do something that I felt was transformational at a young age. I remember the moment when I heard this: he had given, you know, $5 million or something to this amazing cause. And I talked to Adrian, and I thought, you know, we could do something like that; we could do something big, and the counterfactual of not doing it felt huge. In other words, not having the benefits of not just money but time and effort and talent applied, and not compounding that, felt like a huge missed opportunity. So I really credit it to being able to surround myself with examples of what I thought bold action looked like, and I chose very deliberately the path that I wanted to emulate.
I see. Well, let's go back to the philosopher-builder archetype, because I think about it as fleshing out a third option in Platonic political philosophy. Obviously, Plato separates his polis into three: there's the ruling class, there's the army, the military, and then there are the producers, the merchants, the builders essentially. And Plato famously advises the rulers to be philosophers; this is the philosopher-king. I think we see at the end of the Roman Republic the philosopher-general: people like Cicero, like Caesar, who was very learned; he was composing tracts about anomaly and analogy in Gaul while arrows were flying in his face. And now what you're suggesting is that the third class, which to Plato was in some sense the lowest class, the builders, should be philosophers. Why is that?

Well, yes. I mean, I think it draws some inspiration from those archetypes but differs in one really important way. The inspiration is the belief that you can have a unity, not of contemplation and power, of wisdom and power, as you see in the philosopher-king, but of the wisdom and contemplation that get you there with the ability to create, to create worlds, to build. What makes the distinction clear is actually the Greek concept of order, which divides into two distinct words: there is taxis and there is cosmos. Taxis is the order that we impose on the world from the top down. It's taxonomy, right? Or, more generally, a kind of top-down order. And the alternative is the bottom-up, emergent order that is cosmos. Obviously I've embraced this insofar as we're named the Cosmos Institute. But what it means in practice is that we're not looking for one individual who has a kind of blueprint, whom we look to to rescue us in difficult times, and who can implement that plan from the top. That's the philosopher-king. We are looking for a much more bottom-up, distributed approach, where people may have slices of truth, slices of the solution, and are working in their corner of the world to project that vision forward. That's the cosmos approach and the philosopher-builder approach. It's a different archetype, much more like Franklin, much more distributed, and one that I think is,

right,

you know, necessary for the current moment.
Right. So you're taking the insight from Plato about the importance of joining worldly activity with philosophy and contemplation, but, given everything you said about autonomy, transplanting it into an autonomous, decentralized form.

Yes.

And hence you find the philosopher in the third class and not in the ruling class. I was going to give a very different answer as to why this is important today.

Please.
Which is that, regarding technology, you can argue in some sense that in Plato's time, ruling and political power was the dominant pole, whereas today I think it's the market, as well as technology, that is the dominant pole of the three. You see how technology is dominant over politics given everything we've described so far about how technology can form political citizens to be more autonomous or less autonomous, and you can also see with things like Anduril or Palantir how AI is of course a dual-use technology. You see what I'm trying to say here, right? It's important for the philosopher-builder not just because of what you said, but because technology now supersedes the other two spheres.

Yeah. And it becomes the driving logic of the two spheres.

Absolutely. And it's a break with the ancient idea that politics is architectonic, that the regime sets the frame for what kind of technology could even be done, right? I am persuaded by that. But as you look at where we are in 2025,

it's the opposite.

It's flipped a bit. And so, yeah, you could talk about technology being upstream, but I like your idea that technology is kind of architectonic. If you think about who the best reformers are, the most capable reformers, it's not people like John Stuart Mill, whom I love, but people like Elon Musk. And I don't say that to endorse

everything he stands for.

That's not what I'm saying. I'm saying his position as a profoundly capable builder gives him enormous leverage on what we thought of as the political questions of old.

But again, I just want to tease out the mechanism, because one part of it is him being able to use his money, for example, to fund Trump. But I think Alpha School is the better example here: the technology of AI, when it's applied to education, might be a much more powerful political tool for liberty than, you know, anything that can possibly be done with the government today.

Yeah. Yeah. Precisely because technology runs through all of our lives in this way.

Yeah, that's right. And the Alpha School example also gives color on the interdependencies, because if you don't have a political order that is capable of sustaining that kind of innovative school model, then at the very least it's hampered.

Right. So you're saying it is the dominant pole, but it's not the only pole; it is constrained by the others. One can respond to this and say that what we're saying may be true, but all the other waves of technology, whether the printing press, the industrial revolution, certainly nuclear, the web, the PC, would also have benefited from philosopher-builders. Why is there something specific about AI that makes this more urgent?
I think that gets back to autonomy. But also, more generally, I think there have been epochal moments in science and tech that have made us question what it means to be human. I think of Galileo, or Copernicus, and Darwin: pushing us out of the center of the universe, putting us among the animals. Those were major reorienting moments in human life as a consequence of technological insight and breakthrough. AI is similar. The age of Turing brings this question of what it means to be human in a world in which we are no longer the most intelligent being, or at least plausibly are ceasing to be. So that's one kind of broad impetus, in broad strokes. But because AI substitutes for this essential, maybe central, human good, it becomes a very philosophical technology. It operates through language. It has a semantic interface with humans, and it has a mediating effect between us and the world, especially the world of words and information. Whereas, although it's just as important, maybe more so, you could easily make the argument, to get nuclear right, to get the game theory of nuclear right, nuclear doesn't raise as many interesting philosophical questions and problems as AI does, because of how humanlike AI is. And it's not manipulable in a semantic way, but rather in a numeric way.

Yeah. I also think there's an argument to be made that AI is plausibly the end of the modern technological project. In other words, it is plausibly a technology that can create other technologies or create other scientific breakthroughs. We are barely scratching the surface there. But if you go back to Bacon and the beginning of this whole modern scientific project, the thought of a technology that could discover other breakthroughs would have been held in a special category.

So I want to offer you a few critiques and see how you respond to this philosopher-builder ideal. The first is a capitalist critique, which says: look, it's redundant. We don't need our builders to aim at any higher normative end than profit, because one of the key tenets of classical liberal and market thinking about capitalism is that the invisible hand of the market will turn private vice into public virtue. And so it seems like the urgency with which you recommend the philosopher-builder kind of undermines that piece.
Well, I think the market does turn private vice into public virtue, by and large. I also think it permits private virtue. In other words, the market permits you to act in a way that is yours, as it is chosen, and you can do that insofar as you offer value to others. But what I would say here is that you need to think about the two poles of how an entrepreneur is seen to act in a capitalist system. One is Friedman, who I think occupies the narrowest, farthest pole: his 1970 article on what the social responsibility of business is. He writes this article, and it says that the role of an entrepreneur is to seek shareholder value and to deliver that. The other end of the extreme, I would say, is ESG, where you have a kind of unitary goal, a United Nations goal for example, to do something that we view as laudable, and then entrepreneurs are seen as the execution instrument for that goal. I would say both are troubling in a sense. I'm much closer to the Friedman view, but they both position the entrepreneur in a low-agency way, insofar as on the one hand we're supposedly very limited in what we can do, we just think very financially about shareholder value, and on the other we're seen as the agent of another, implementing this goal. And I would say there's an alternative: the entrepreneur as a pioneer, the entrepreneur who sets the norms and builds that future. There is simply nothing that restricts that. One needs to be creative; I would argue that's what entrepreneurs do today in many cases. But one needs to be creative because the customer may or may not care about your underlying philosophy. I'll give you an example. If we want to think about autonomy, there's a lot we could do architecturally. Consider the case of the Bloomberg terminal, which, for anyone who's not in finance, is a kind of mainstay of finance, and the use of it is about improving decision quality. That's the point. When you use social media, often the point is to kind of check out; that's the job to be done: you had a hard day and you just want to scroll. So you need to think about that. You need to think about your business model as well. Are you paying money for a subscription, or are you being subsidized because the model is ad-driven? These are really important design questions. But insofar as opportunities do exist that resemble the Bloomberg terminal, where you're improving decision quality and getting paid for it on a subscription basis, there's an entirely consistent pro-capitalist way to build autonomy-producing tools.
Right. So when I say the philosopher-builder undermines a key tenet of capitalism, I'm not taking the capitalist position to be that you can't aim toward normative goals. I'm taking it to say that aiming at them isn't necessary for the positive benefits to ensue. And yet what I hear you saying is that if we don't have philosophers who are builders, then entrepreneurs driven by the profit mechanism can build systems that just turn people into automatons. You see the difference I'm trying to draw.
draw. And I also So one is like profit pools
And I also So one is like profit pools are incredibly important. They tell us
are incredibly important. They tell us what to go figure out about. They're
what to go figure out about. They're they're things that people value based
they're things that people value based on their preference preference ordering
on their preference preference ordering sort of so to speak. But also someone
sort of so to speak. But also someone needs to care about preserving the
needs to care about preserving the institutions that make free markets
institutions that make free markets possible. Someone needs to care about
possible. Someone needs to care about preserving the habits of mind that do
preserving the habits of mind that do that. And also markets are while
that. And also markets are while incredible a means by which we attain
incredible a means by which we attain human flourishing. And so someone also
human flourishing. And so someone also needs to care about that. And so those
needs to care about that. And so those are things that are entirely consistent
are things that are entirely consistent with free markets. It's a bit of a
with free markets. It's a bit of a thicker conception of liberalism than I
thicker conception of liberalism than I would say you know Freriedman would
would say you know Freriedman would hold. I grant that. But I think it's
hold. I grant that. But I think it's critical that we we we we consider those
critical that we we we we consider those other dimensions.
Right. So now let me push you from the exact opposite direction. That was the pro-capitalist critique; this is the anti-capitalist critique, and I think OpenAI is a good example. Let's say there is an entrepreneur who is motivated by more than the profit motive. There are two kinds of pressures on him or her that might deter that genuine desire. Number one: when a company scales and brings on investors and other shareholders, it just might need to optimize shareholder value, regardless of what the entrepreneur, the philosopher-builder, wants. Number two is competitive pressure. You might build a social media platform that genuinely helps people, that gives them Socratic dialogues, but people might just gravitate toward the addictive one, and then the profit incentive drives growth toward that.
Yeah, and I would add another, which matters a lot for the early-stage companies I think about: the incremental financing from investors. If you fund your company at the seed stage with a real mission behind it, but the Series A investor doesn't care, they just reduce you to a metric because they're chasing DPI, and they are not going to be kind if you deviate from a growth plan. So this is a big design question for how you bring these companies into the world. You need a group of investors, capital, that is aligned philosophically, and this is a big reason we started Cosmos Holdings, the complementary portion of this that focuses on venture creation and venture backing: you need aligned capital to do this. What you can't insulate from is the customer. In other words, you can provide insulation on the capital side by having very principled capital, but you can't insulate from the customer, which is why you need to be inspired by the Bloomberg model, which is funny to say, because in 2025 it's not exactly the greatest user interface. They beat their competitors, though, because they've chosen a part of the market with real mutual benefit; they're genuinely offering value to their customers. You have to build in those kinds of pockets and directions.

What about the other pockets?

Well, I think once you demonstrate that, the costs will come down and the design patterns will spread, and I think you'll be able to infect the other regions of the world, right?
I'm not utopian in my thinking, but I do think there's a path, a trajectory, a way to enter this market, and then to improve and learn in such a way that we can cross-apply our lessons to the more difficult areas. I also think there's something to be said about entrenchment: when you have one business model, like an ad-driven model, that entrenchment is not felt by new challengers. In other words, if you set up your business to monetize entirely differently, you don't bind yourself in the way that Google has bound itself.

Like Substack, for example.

Substack is a great example. Exactly. And so this is a challenge for incumbents; if I were an incumbent trying to drive change there, it would be hard. That's why a lot of the top researchers we know are people who worked at big tech platforms and built something that was, I think, incredibly potent for promoting human flourishing, but it went against the business model. The right answer is that you need to either build it in academia and open-source it, or start a startup and find a business model that can take it into the world.
that into the world. Let me give you uh a different line of critique. Um and
a different line of critique. Um and I'll begin by quoting uh part of an
I'll begin by quoting uh part of an essay that we are soon to publish. The
essay that we are soon to publish. The way to acquire more stable views is
way to acquire more stable views is almost paradoxically more inquiry. Okay,
almost paradoxically more inquiry. Okay, you're explaining why it's important for
you're explaining why it's important for the builder to philosophize. In Plato's
the builder to philosophize. In Plato's Mino, Socrates describes this with
Mino, Socrates describes this with reference to the legend of the statues
reference to the legend of the statues of Datalus which were said to run about
of Datalus which were said to run about if not tied down. The idea being that
if not tied down. The idea being that opinion is made valuable via inquiry
opinion is made valuable via inquiry which helps to ground our knowledge and
which helps to ground our knowledge and holds it more stably in place. Inquiry
holds it more stably in place. Inquiry improves our convictions even as it
improves our convictions even as it replaces them. You're talking about here
replaces them. You're talking about here the importance of the builder, many of
the importance of the builder, many of whom in Silicon Valley are just uh
whom in Silicon Valley are just uh unthinking in their building of
unthinking in their building of philosophizing that one of the benefits
philosophizing that one of the benefits is that it grants them more conviction.
is that it grants them more conviction. However, many of the of the uh Socratic
However, many of the of the uh Socratic dialogues end in appar
um uh uh sort of answer to the what is X question is arrived at. And even
question is arrived at. And even stronger for some of them, the
stronger for some of them, the interlocular seems to be made worse,
interlocular seems to be made worse, more puzzled, more angry, more
more puzzled, more angry, more humiliated through the conversation. So,
humiliated through the conversation. So, and of course, there's an entire school
and of course, there's an entire school that's been founded from Socratic
that's been founded from Socratic apparcics,
apparcics, right?
right? And so, why are you so certain that kind
And so, why are you so certain that kind of philosophical questioning is going to
of philosophical questioning is going to lead to more certainty for builders who
lead to more certainty for builders who are even considering to take this path?
Yeah, it's an interesting question. The kind of thing we want to do is inculcate a habit of mind. We want people who inquire, who think deeply about the alternative possibilities, who understand that philosophy, the love and pursuit of wisdom, is a kind of quest for knowledge but never an attainment, and who are satisfied to some degree simply by knowing more about what they don't know. In other words, a lot of the most inspirational philosophy for me is from people who have tried to demarcate the limits of reason, what we can know. Yeah, exactly: Kant, Hume, many others, Hayek. And so I think that's a perfectly acceptable place to land in philosophy; it requires a constitution that isn't pathologically certainty-seeking. But if that's true, then we would rather have questioners who are constantly questioning, which, by the way, is a habit I think is very consistent with company building. You're asking questions, you're deeply curious, you're fostering that. You're just asking more capacious, more expansive questions, questions that get at what the technology is actually going to do for what matters, which is human flourishing.
Right. What do you think is the ideal path to train these philosopher-builders? Is it to take builders such as yourself and then try to teach them philosophy when they're ready, or is it to take the best philosophers and teach them how to build?
Good question. My focus is primarily to take the builders who have this sense that they want to help humans, just like I did, but whose sense is untutored, just like mine was, and then give them tools to really try to derive a set of principles or ideas. The method, I think, must be a combination of education and practice. The education must blend the textual and the technical: if we want to think about collective intelligence, we might read Mill on how to correct collective error, but we should also understand research at the very forward parts of the frontier, like what Midjourney is doing on collective intelligence. That's one part. The other part is practice; it's action; it's translation. You can think about translation in two senses. One is: how do you create prototypes that test ideas? For that, what we do at Cosmos is back individuals to build projects in 30, 60, or 90 days with micro-grants, inspired by Tyler Cowen and Emergent Ventures.

The other method is deeper research. Some questions deal with the heart of what inquiry is, or what it means for a machine to promote virtue. These are big questions that prototypes would be valuable for, but there's a much deeper kind of question there, right? And so for that we helped facilitate things by setting up an AI lab at the University of Oxford called the Human-Centered AI Lab. You have some of the top philosophers; Philipp Koralus, a very unique philosopher who thinks about reason, leads it. You combine that with people who are fresh out of OpenAI, Anthropic, DeepMind, that kind of thing, and they're building systems and doing research that I don't think could be done anywhere else; I don't think it would be done within academia in a traditional way. You put that alchemical mix of philosophers and technologists together and you do incredible research. Then, as I mentioned, the last part, which I still view as part of this journey of transformation, is that you graduate and you build a company. That's what makes this so distinctive: we want to take ideas and scale them out into the world, and the way to do that is through markets, through entrepreneurship. That's where Cosmos Holdings comes in.
Well, it's really interesting, because most of the people we interview in this series, the really successful philosopher-builders, all went the opposite direction. They all studied philosophy: Reid Hoffman, Peter Thiel, Colin Moran, Marcus Rue. And then they were pulled into the real world. I'm not saying there's a necessary tension there; I just think it's very interesting.
interesting. Yeah, absolutely. Well, I mean, I think it
absolutely. Well, I mean, I think it hits people at the right moment for
hits people at the right moment for them. So, I don't mean to dismiss people
them. So, I don't mean to dismiss people who come at it from the other direction.
who come at it from the other direction. We're very welcoming of that. I know for
We're very welcoming of that. I know for me, I took one philosophy class at MIT.
me, I took one philosophy class at MIT. I didn't like it that much.
I didn't like it that much. Yeah.
Yeah. And then it had to be the case that I
And then it had to be the case that I found it in my, you know, mid30s after
found it in my, you know, mid30s after selling the companies, after having the
selling the companies, after having the kids. It took hold. So, it's very unique
kids. It took hold. So, it's very unique the journey that each individual
the journey that each individual and uh same for me. it could just be be
and uh same for me. it could just be be a matter of months or even a single year
a matter of months or even a single year where for me uh I took Colombia's Great
where for me uh I took Colombia's Great Books core curriculum after I filled the
Books core curriculum after I filled the company, dropped out and went back. Um
company, dropped out and went back. Um and uh that was life-changing.
and uh that was life-changing. Yeah.
Yeah. If I were forced to take in my freshman
If I were forced to take in my freshman fall, I think I would have hated it.
fall, I think I would have hated it. Yeah.
Yeah. Because I didn't realize the importance.
Because I didn't realize the importance. So yeah. Let me end this interview with a final challenge, and it is a challenge from an unsuspected friend: Hayek. This is from The Constitution of Liberty: "Coercion, however, cannot be altogether avoided because the only way to prevent it is by the threat of coercion. Free society has met this problem by conferring the monopoly of coercion on the state and by attempting to limit this power of the state to instances where it is required to prevent coercion by private persons." So for our audience, what Hayek is saying here is that if you don't want individuals, private citizens, to coerce each other, you need the ultimate form of coercion, or at least a singular form of coercion, which is the state's monopoly on violence guarding these rules. Might there be an uncomfortable structure as it relates to building non-coercive AI? Right now, with Cosmos and philosopher-builders, you're going for the non-coercive approach to building that: let's give people these fast grants, let's build these companies, let's let markets do their trick. But if someone is about to build an AI and push it on the masses in a way that will turn them into automatons, might we need a coercive measure, through regulation or other means, to ensure that coercion does not happen?
happen either through regulation or other means. So what Hayek is
other means. So what Hayek is identifying there, I think you could
identifying there, I think you could call the paradox of government, which is
call the paradox of government, which is to say you've got to have a government
to say you've got to have a government that is sufficiently powerful to defend
that is sufficiently powerful to defend to defend liberty. Um
to defend liberty. Um that invites uh huge difficulty if you
that invites uh huge difficulty if you grant a government that much power that
grant a government that much power that monopoly and violence as you said. But
monopoly and violence as you said. But it's it's it's essential um that we live
it's it's it's essential um that we live with this paradox as it concerns
with this paradox as it concerns regulation. I would call to mind three
regulation. I would call to mind three three tests. Um one is the test of is
three tests. Um one is the test of is that consistent is it consistent with
that consistent is it consistent with the rule of law. In other words, with
the rule of law. In other words, with the idea that each um each law needs to
the idea that each um each law needs to be general, abstract and perspective.
be general, abstract and perspective. That's f the first. And so there are
That's f the first. And so there are lots of ways in which regulation can can
lots of ways in which regulation can can violate that and be commands to specific
violate that and be commands to specific groups and things like that that violate
groups and things like that that violate the generality so on and so forth. So
the generality so on and so forth. So that's a baseline test that again draws
that's a baseline test that again draws from K. The second is is the regulation
from K. The second is is the regulation something that is made based on
something that is made based on knowledge that we have no reason to
knowledge that we have no reason to believe that we possess. In other words,
believe that we possess. In other words, if we had made regulation at the
if we had made regulation at the beginning of the internet era, would we
beginning of the internet era, would we have gotten it right? I can tell you we
have gotten it right? I can tell you we would not have. You know, I I can tell
would not have. You know, I I can tell you that we would have been in profound
you that we would have been in profound ignorance about what was to come. We
ignorance about what was to come. We would have gotten it wrong. And this
would have gotten it wrong. And this gets at the idea of um the dominant
gets at the idea of um the dominant paradigm which is ex anti trying to
paradigm which is ex anti trying to regulate before some hypo hypothesized
regulate before some hypo hypothesized harm occurs versus the much more
harm occurs versus the much more adaptive incremental evolutionary
adaptive incremental evolutionary approach which is expost through the
approach which is expost through the common law. So expost adjud adjudication
common law. So expost adjud adjudication I think should be favored on an
I think should be favored on an epistemological basis because we can't
epistemological basis because we can't make such predictions.
make such predictions. Thirdly any intervention that we might
Thirdly any intervention that we might have should be evaluated not just on the
have should be evaluated not just on the basis of the local cost benefit. In
basis of the local cost benefit. In other words if we if we intervene here
other words if we if we intervene here and we intervene here and we intervene
and we intervene here and we intervene here then this proximal harm that we can
here then this proximal harm that we can that we can foresee. We may not have
that we can foresee. We may not have experienced it because again we're not
experienced it because again we're not taking the ex post approach but if we
taking the ex post approach but if we can foresee it and articulate it is it
can foresee it and articulate it is it sort of worth it and the cost benefit
sort of worth it and the cost benefit analysis that really I think we should
analysis that really I think we should do in that case is don't just look at
do in that case is don't just look at the harm that you're trying to solve
the harm that you're trying to solve proximally look at the overall harm to
proximally look at the overall harm to the system and so if you if you buy the
the system and so if you if you buy the idea that the system is just especially
idea that the system is just especially because that tends to allow the
because that tends to allow the anonymous individual to achieve his or
anonymous individual to achieve his or her unknown ends. Then you have to be
her unknown ends. Then you have to be committed to some idea that we care
committed to some idea that we care about the way in which knowledge
about the way in which knowledge propagates through the system and gets
propagates through the system and gets generated through the system. And in
generated through the system. And in fact, our interventions tend to harm
fact, our interventions tend to harm that crucial crucial attribute of
that crucial crucial attribute of spontaneous order. And so
spontaneous order. And so that is something that needs to go on
that is something that needs to go on the other side of the ledger, right?
the other side of the ledger, right? And this is a good place to end it
And this is a good place to end it because it mirrors what you said about
because it mirrors what you said about the trade-off function of autonomy in
the trade-off function of autonomy in the good life, which is that it's not a
the good life, which is that it's not a lexical priority. It's not like no
lexical priority. It's not like no regulation whatsoever. It's that simply
regulation whatsoever. It's that simply people underestimate the cost of doing
people underestimate the cost of doing business, so to speak, right? Of
business, so to speak, right? Of intervening. All right. Thank you,
intervening. All right. Thank you, Brendon. Thank you for a fascinating
Brendon. Thank you for a fascinating conversation.
conversation. Thank you.
Thanks for watching my interview. If you like these kinds of discussions, I think you'd fit in great with the ecosystem we're building at Cosmos. We deliver educational programs, fund research, invest in AI startups, and believe that philosophy is critical to building better technology. If you want to join our ecosystem of philosopher-builders, you can find roles we're hiring for, events we're hosting, and other ways to get involved at jonathanb.com/cosmos. Thank you.