YouTube Transcript: The 300-Year-Old Physics Mistake No One Noticed
Summary
Core Theme
This content features an interview with Professor John Norton, who challenges fundamental assumptions in physics, particularly regarding determinism in Newtonian mechanics, the nature of causation, and the interpretation of Landauer's principle. The discussion also touches upon Einstein's contributions to quantum theory and the philosophy of thought experiments.
Transcript
That it produced so much fuss was somehow shocking. This literature has been teetering
on the edge of nonsense for a hundred years. Professor John Norton of the University of
Pittsburgh has spent decades systematically dismantling sacred assumptions of physics.
Norton's dome, for instance, demonstrates fundamental indeterminism in Newtonian physics
itself. Now, you may be thinking of quantum uncertainty, but I'm talking about classical
physics, which is breaking down in terms of unique predictivity. Beyond determinism,
Norton critiques notions of causation itself. Physicists routinely invoke causal language, but
what if causation isn't fundamental? Even further, Norton's critique extends to thermodynamics.
Landauer's principle, for instance, has guided decades of research into computing limits,
and some even use it as the physical basis of Wheeler's It from Bit. Norton demonstrates this
principle misunderstands thermodynamics and entropy, both of which we talk about
in extensive detail. We then cap it off with Einstein's contributions to old quantum theory,
and Einstein's disagreements with the new quantum theory. A special thank you to The
Economist for sponsoring this video. I thought that The Economist was just something CEOs read
to stay up to date on world trends, and that's true. However, that's not only true. What I've
found more than useful is their coverage of math, of physics, of philosophy, of AI,
especially how something is perceived by countries and how it impacts markets. Among
weekly global affairs magazines, The Economist is praised for its non-partisan reporting and being
fact-driven. This is something that's extremely important to me. It's something that I appreciate.
I personally love their coverage of other topics that aren't just news-based as well. For instance,
The Economist had an interview with some of the people behind DeepSeek, the week DeepSeek
launched. No one else had that. The Economist has a fantastic article on the recent DESI dark energy
data, and it surpasses, in my opinion, Scientific American's coverage. The Economist's commitment
to rigorous journalism means that you get a clear picture of the world's most significant
developments. It covers culture, finance and economics, business, international affairs,
Britain, of course, Europe, the Middle East, Africa, China, Asia, the Americas, and yes,
the USA. Whether it's the latest in scientific innovation or the shifting landscape of global
politics, The Economist provides comprehensive coverage that goes beyond the headlines. If you're
passionate about expanding your knowledge and gaining a deeper understanding of the forces that
shape our world, I highly recommend subscribing to The Economist. It's an investment into your
intellectual growth, one that you won't regret. I don't regret it. As a listener of TOE, you get
a special 20% off discount. Now you can enjoy The Economist and all it has to offer for less. Head
over to their website, www.economist.com/TOE, to get started. Make sure to use that link, that's economist.com/TOE, to get that discount. Thanks for tuning in, and now, back to the exploration of
the mysteries of the universe, with John Norton. All right. Professor John Norton, you're a legend
in the physics scene and the philosophy of physics scene, so it's an honor to be with you here. Oh,
thank you very much. It's very kind of you. You're known for Norton's dome, for indeterminism and
systematizing the material theory of induction, your views on thought experiments, the history of Einstein,
and disproving Landauer's principle. We'll attempt to get to all of these today. Now,
before we get to these, let's pick one. Norton's dome. Why don't you tell me, how did you arrive at
that construction? What were you trying to show? Were you trying to be contentious? Were you trying
to disprove a colleague? Did something just not make sense? Walk me through leading to Norton's
dome. It was actually completely trivial, and that it produced so much fuss was somehow shocking. So,
here's the background. In the late 1980s, my colleague John Earman wrote a book,
A Primer on Determinism, in which he pointed out that indeterminism was actually rampant throughout
physics. And one of the places where it's quite rampant is in Newtonian physics, when you have
systems with infinitely many degrees of freedom. So, if you have infinitely many masses bouncing
around in various ways, their behavior is going to be generically indeterministic. So, John and
I were teaching a graduate seminar on causation and determinism. And I think that afternoon or
the next day, I was committed to giving a section on determinism, and I was going to present the
idea that Newtonian physics is generically indeterministic when you have infinitely
many degrees of freedom. Well, what about the case of finitely many degrees of freedom? I was
going to say, well, when you only have finitely many degrees of freedom, then you just always
get determinism, everything's fine. And then I thought, I'll be saying that in front of a bunch
of smart graduate students. You know what's going to happen next. So, I said, I'd better
have a look to see if there are counterexamples. So, you know, a Lipschitz condition guarantees
unique solutions for differential equations. I looked up standard counterexamples to a
Lipschitz condition. I took one of those standard counterexamples and said, how do you realize it
physically? And the answer was quite simple. You have this dome shape, a very particular shape.
You would put a mass point at the top that can move frictionlessly. And the conditions violate
a Lipschitz condition. And so, the particle can spontaneously set itself into motion. And the mathematics is very simple. It's two or three lines. And there it is.
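For reference, here is a quick sketch of those two or three lines, with constants absorbed into the units (my summary of the standard dome construction, not a quotation from the interview). The dome's height drops with distance r along the surface from the apex as h(r) = (2/3) r^(3/2), and Newton's second law along the surface then gives

d²r/dt² = √r.

One solution is r(t) = 0 for all t: the mass sits at the apex forever. But for any time T, r(t) = (t - T)⁴/144 for t ≥ T, with r = 0 before T, is also a solution: the mass sits for an arbitrary period and then spontaneously slides off, and nothing in the initial conditions fixes T.

So, I used that in teaching. The students didn't seem terribly impressed. I was writing a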
paper on causation at the time. And I wanted to point out that the idea that Newtonian physics has
always been deterministic was actually a mistake because the theory itself is not intrinsically
deterministic. So, I included the dome in section three. And almost immediately, I started getting
emails from people correcting my mistake. And I realized, oh, there's something more going on
here. That's the story. So, what's the something more that's going on? What's going on is that the
idea that Newtonian physics is deterministic is so deeply entrenched in the psyche of many physicists
that it somehow seems that I'm some sort of an apostate if I say anything otherwise,
that I must have made a mistake and they have an obligation to discover what the mistake is. And
that was the character. They weren't hostile, the responses that I was getting. They were all very
friendly, but friendly of the form of, dear Professor Norton, I saw your analysis of this
dome. I just want to point out, you're making a terrible mistake here. And then something follows,
which never works. Let's make this clear for people. So, there are different types
of continuity. Usually we'll say that a function is continuous, but there are various types like
absolute continuity, uniform, and then there's Lipschitz, which then is used in ODE classes
to show that there are unique solutions. Now, if you remove this Lipschitz continuity condition,
then you get non-unique solutions. So, multiple solutions. And I'm not sure; I believe violating Lipschitz is necessary, but not sufficient, for non-uniqueness. I think that's right: Lipschitz continuity is sufficient for uniqueness, but not necessary. And you can find very simple systems that violate the condition of Lipschitz continuity; the mathematics of the dome is just a very simple example. If the first derivative of a function varies with the square root of the function, that already violates the Lipschitz condition at the origin, where the function has value zero. I mean, it's as simple as that. And that's the example instantiated in the dome. I went to the second derivative so I could use Newton's F = ma, but I think it already happens with the first derivative. Just dx/dt = √x: solve that with x = 0 at t = 0, and you already have non-unique solutions. I think, going from memory, that works.
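To see the non-uniqueness concretely, here is a minimal sketch (my illustration, not from the interview): both x(t) = 0 and x(t) = t²/4 satisfy dx/dt = √x with x(0) = 0, so the initial condition fails to pick out a unique future.

```python
# Non-uniqueness of dx/dt = sqrt(x) with x(0) = 0, where the right-hand
# side fails to be Lipschitz at x = 0. Two exact solutions:
#   x1(t) = 0        (nothing ever happens)
#   x2(t) = t**2/4   (the system spontaneously sets off at t = 0)
import math

def residual(x, dxdt):
    """Deviation of a candidate (x, x') pair from satisfying x' = sqrt(x)."""
    return abs(dxdt - math.sqrt(x))

for t in [0.0, 0.5, 1.0, 2.0]:
    x1, dx1 = 0.0, 0.0              # trivial solution and its derivative
    x2, dx2 = t**2 / 4, t / 2       # nontrivial solution and its derivative
    print(f"t={t}: residual(x1)={residual(x1, dx1):.1e}, "
          f"residual(x2)={residual(x2, dx2):.1e}")  # both vanish
```

Both residuals are zero at every time, so the same initial value problem has two futures. Okay, so then what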
are people supposed to imagine as a consequence of this? Are you saying that Newtonian physics
thus needs to assume Lipschitz in order to prove this uniqueness? And thus, if you're trying to say
Newtonian physics is deterministic, you're already inserting that determinism. You're not concluding it. That's exactly right. Whether a particular Newtonian system is deterministic or
not is something to be discovered, not stipulated. And I'll mention again the important case,
if you have infinitely many systems interacting, then you get indeterminism generically. Well,
why does this matter? Well, it's going to matter in the infinite case when you look at something
like the thermodynamic limit. So this is a case that I've calculated. We'd like to think of a very simple Newtonian model for a crystal: it consists of a whole bunch of mass points that are connected
together by springs. And they're thermally agitated, and so they're wobbling about. Now,
the idea is that as the number of mass points gets larger, as this lattice gets larger and larger,
its behavior becomes closer and closer to a system that is going to behave thermodynamically
in the ways that we expect. You're going to get the Boltzmann distribution coming out, and so on. But you need to look at the very large lattice. So it's standard to say,
if you take the infinite limit, that's when you get thermodynamics back. Well, you have to be
very careful about how you take that infinite limit. If taking the infinite limit just means
I will consider crystal lattices of arbitrarily large size, always finite, but arbitrarily large
in size, then the sequence of lattices that you're considering will eventually stabilize out to have
nice thermodynamic properties. But if you mean I'm going to consider an infinite lattice and then
investigate its properties, you'll discover that the lattice dynamics have become indeterministic.
I've not kept this secret. It's in a paper I wrote, published in 2011, 2012, called
Approximation and Idealization. That's one of the main points of the paper. It just says, be careful
taking infinite limits. You can really get into trouble. So there are other types of continuity
as well. So the underlying space is continuous. So the function itself is continuous, and the
function operates on a domain, and that domain is spacetime. Now, I know we're dealing in Newtonian
physics, so maybe not spacetime, but it doesn't matter. We say some manifold. Now, is it your
contention that the manifold itself is also going to ultimately be discontinuous? Do you have an
intuition there that it's going to be discretized, or do you think that you can zoom in all the way
and it looks like Rn? Well, the example of the dome, the dome surface is an ordinary Euclidean
surface, and it does have a curvature singularity at the apex, but the curvature singularity is at a point, and that's nothing extraordinary in idealized Newtonian systems. Think about the sharp edge
of a tabletop. It's a horizontal and a vertical and they meet, and we don't have any trouble
shooting a particle across the horizontal surface. It then comes to the curvature singularity at the
edge and then shoots off in a parabolic arc. It's the standard sort of idealization that we
talk about. The singularity at the sharp edge of a tabletop is one order worse than the singularity at the apex of the dome. At the apex of the dome, it's a singularity in the curvature; at the sharp edge of a tabletop, it's a singularity in the tangents. The tangents jump discontinuously when you go over the edge. So many ordinary Newtonian systems are deterministic,
and we're entirely used to that. They always work out that way. Is it so surprising that, if we go to extreme cases that we don't normally look at in ordinary life, we end up with something a little different? The case of the dome is not something we could ever realize in real
life because it requires multiple violations of quantum mechanics. You know, you've got to put the
point... the mass point has to be located at rest exactly at the apex. You need to have a surface
that has exactly the right properties. The more interesting case is when you have infinitely
many masses. That's the sort of idealization that people will take more seriously. Why doesn't the
infinitely many masses also contradict quantum mechanics? Newtonian theory contradicts quantum mechanics in its foundations. So yes, it does, as does every Newtonian analysis. So I'm not sure
what's worrying you here. This, for me, has been the perpetual puzzle. I think the dome is just a
rather ordinary piece of Newtonian physics. There's nothing very special about it. It
just happens to have this odd property. But then some people I talk to just say, yeah, yeah, well,
what's the big deal? Other people rant saying there's something deeply troubling about this,
and I just don't know what that is. So it turns out that some Newtonian systems, in these cases rather exotic ones that could never be realized, are indeterministic. What are the implications for
quantum mechanics and relativity? I think that was behind one of your remarks. Not very much,
because they're different theories. Relativity theory and quantum mechanics are very different
theories. They turn out to be indeterministic in their own ways. In the case of quantum mechanics,
the standard interpretation is indeterministic. I think you know that's just the beginning of a
long discussion. If you're a Bohmian, you won't think that, but that's another story. In order
to realize determinism in relativity, you need a Cauchy surface. You need all the nice conditions.
If you don't have a Cauchy surface, then you can't even state the conditions of determinism, which are that the present fixes the future; but you don't have a present, so you can't have
any fixing. I think if there's a moral, the only moral is the following. Be careful about what you
assume about the world. Don't go into physics assuming antecedently that you have a wisdom
that transcends what the empirical science will tell you. If you want to see what goes wrong when you do otherwise, think about what happened when quantum mechanics appeared in
the mid-1920s. It became very apparent then that the theory was going to be indeterministic. Up
until then, everyone had simply assumed that for a system to be causally well-behaved,
it had to be deterministic. Then quantum mechanics comes along, and it's indeterministic,
and there was this tremendous outpouring of anxiety. "Causality is lost" was the plea. We would
now say determinism. They then said causality, but in retrospect, it was simply an artifact
of 19th-century thinking. In the 19th century, they had identified causation with determinism,
so for the world to be well-ordered causally, then the world had to be deterministic. Quantum
mechanics says it's not deterministic. Oh no, you know, we've lost an absolute fundamental. Well no,
you've just learned something new about the world. So let's talk about this graduate seminar then on
causation. Did anything else controversial come out, and what is causation? Well, as you know,
I've written fairly extensively about this. I have a particular critique of causation.
It is a critique of causal metaphysics. It is not a critique of causation per se. To be very
clear here, I am quite comfortable with the idea that things interact with each other and connect
with each other in all sorts of fascinating and interesting ways. Voltages drive electric
currents, and gradients of free energy produce thermodynamic effects, and so on and so on and so
on, all the way through here. You can go to any science and you find all sorts of claims of how
this causes that. My critique is the following. Causal metaphysics seeks to do something that
is antecedent to these empirical investigations. A causal metaphysician says we cannot talk about
causality empirically until we have sat down and done some conceptual work and figured out
what causation is. Once we have done that, then we understand what causation is, and then scientists
can come along and do the cleanup operation of figuring out how this causal principle that they
come up with is going to be instantiated in the particular sciences. The general run of a causal
metaphysician is saying, I know what causation is. It's this. Scientists, your job is just to
show me how that works in the world. That is just a completely failed enterprise. The difficulty is
that metaphysicians have not been able to come up with any principle of causation that has any
empirical content and that also succeeds in the world. We have thousands of years of failure at
that particular enterprise. I'm rejecting the causal metaphysicians project completely. The
question then becomes, what is the place of causal talk in science? Why is it so pervasive? Why
do scientists care about causation if there's no metaphysics underlying it? It's simply a matter of
labeling. What happens is we notice that there are all sorts of processes that we find comfortable to
describe causally. Take Einstein's famous A and B coefficient analysis of stimulated emission.
The idea is that if you have an excited atom in a radiation field where the frequency of radiation
is at the right frequency, that will stimulate an emission that will stimulate the excited atom to
drop back to a lower state. I would like to say that causes it to do so. I have no objection as
long as you realize you're just declaring how you intend to use a word. And so my general
claim is that when we have causal talk anywhere in science, it is actually a veiled definition. It
is simply someone who is saying, oh, I find it very convenient. I find it pragmatically
useful to describe this process using the word causation. What they are not doing, although they mightn't realize this, is saying, oh, I have discovered the instantiation of some deep metaphysical truth that lies antecedent, prior to any science. They haven't
discovered that at all. So what are the advantages in using causal language in various places? It
can almost immediately be psychologically helpful. It's very helpful when I think of Einstein's A and
B coefficient paper to say, oh, so the external radiation field is stimulating an emission. It is
causing an emission. And that's how lasers work. Right. And that's the way we think about lasers.
It's certainly very, very helpful. Otherwise you just have a bunch of equations which gives you
probabilities of various transitions, right? Or in the case of Jim Woodward's interventionist account
of causation, he says that a causal process arises when we have two variables that are
related by some connection, often probabilistic, but not necessarily if you read his account fairly
carefully. And if an intervention on one of them is associated with a change in the other,
right, then we have a causal relationship. I just regard that as a definition, but it's an
immensely useful definition because if you tell me that this causes that, I now know that if I
interfere on this, then I will produce an effect on that, right? So if you tell me that certain
medical interventions will improve the health of the population, then I've learned something
enormously useful. So if we abandon that causation is somehow fundamental or refers to something that
has an essence, then is there anything in fundamental physics that's lost? So for instance,
is there any theorem in quantum mechanics like Bell's theorem that then loses its power
because Bell's theorem implicitly has a notion of causality in it? I don't know. I don't think so,
no. I looked into this, I did an inventory of all the places where the term causation appears
in physics, and I found that almost invariably the term causation denoted one of two things. Either
it was talking directly about the fact that we're in a Minkowski spacetime, or at least a spacetime that has a light cone structure, right? And so we talk about the causal
structure of space time. We're actually talking about the light cone structure of space time. Or
the other one was that propagations of physical processes are confined to lie on or within the
light cone. And that seemed to exhaust almost all of the causal talk that I could find. I can't
swear that I picked up every single case, but that pretty much covered everything. Notice what you're
looking for here. You're looking for a sense of loss. Well, you never had it in the first place.
The effort of causal metaphysicians is to do a priori physics. If they're providing you some kind
of empirical fact about the world, they are trying to do it prior to experience. And if we've learned one thing from thousands of years of scientific investigation, it is that that really doesn't work. The world is far more creative than our imaginations. We always get into trouble when we try and guess ahead of time how things have to be, and a priori causal metaphysics is a striking example of that. This is not to say that we have lost some sense of how things connect together. Spacetime has
a light cone structure. Call it causal structure. That's fine with me. Ordinary propagations are confined to it. That's fine with me. Where's the loss? So there are other counterfactual accounts
of causation. Do you reject those? No, they're just definitions. Nothing wrong with that. This caused that, because if I hadn't done this, that wouldn't have happened. Fine. You've just told me how you plan to use a word. Hi everyone. Hope you're enjoying today's episode. If
you're hungry for deeper dives into physics, AI, consciousness, philosophy, along with my personal
reflections, you'll find it all on my Substack. Subscribers get first access to new episodes,
new posts as well, behind the scenes insights, and the chance to be a part of a thriving community
of like-minded pilgrims. By joining, you'll directly be supporting my work and helping keep
these conversations at the cutting edge. So click the link on screen here, hit subscribe, and let's
keep pushing the boundaries of knowledge together. Thank you. And enjoy the show. Just so you know,
if you're listening, it's C U R T J A I M U N G A L.org. CURTJAIMUNGAL.org. Okay. So let's get to
thought experiments. Sure. So what is the standard view on thought experiments? And then where do you
stand on that view? I don't know that there's a standard view, but I can tell you there's a longstanding debate. This goes back to the 1980s, when the literature on thought experiments exploded. There were essentially two extremes in our understanding of thought experiments. One extreme is a completely deflationary view that just says that thought experiments are ordinary argumentation. They don't do anything that ordinary argumentation cannot do; they just do it in a rather pretty and picturesque way. The other extreme says, no, there's something wrong with this, something more going on: there's some magical power that our capacity to do thought experiments realizes. And of course the question is to articulate what that magical power is. The clearest articulation came from my colleague, Jim Brown at Toronto. He said we can understand some thought experiments to be platonic in character: a really good thought experiment of just the right type literally opens the window onto Plato's heaven, where we can actually see the laws of nature. He supports that with the experience that we have with a good thought experiment. There's this wonderful aha moment when suddenly you see it, right? And that's the moment of platonic perception. He's wrong, of course. I've spoken to James, James Robert Brown. Yeah, I've spoken to him. So
one of the great thought experiments is this: how is it that we could tell a priori that things of different masses should fall at the same rate? We can't. Do you mean to tell me that Aristotle's account of the motion of bodies was false a priori? You could have a world in which you have a force needed to keep things moving. All right. Well, I'm not articulating my view. I'm articulating James's
view from when I interviewed him. If I'm recalling correctly, it went something like this. If heavy objects fall faster, then dropping, say, a heavy bag of marbles, comprising 300 marbles, next to a single marble means this single marble will fall slower. But then you look: the bag is just filled with many marbles, so those marbles each individually should be falling at a rate similar to, if not equivalent to, this marble, and then you have a contradiction. Thus they all must fall
at the same rate. That seems powerful. So tell me what your views are on that. It's very simple. Why were you convinced by what you just said? Why were you convinced that the single marble and the bag of marbles have to fall at the same rate? There was an argument there. Yeah, that's the thought experiment. It's an argument. That's all I'm saying. You just ran an argument. Yeah. But I thought thought experiments are arguments, no? Or am I saying a view that's controversial by saying that? You're agreeing
completely with me. What you didn't have is any extra piece that Jim would want, where Plato's world of forms somehow entered into things. You just look: that exchange that we had now is what happens all the time, right? Someone has a thought experiment. They run through the thought experiment. I'm listening to them go through the thought experiment and I'm hearing, okay, this is just an argument. It's a very simple, straightforward argument. You realize that this is not Jim's example; this is one of the great classics in the history of science. It goes back to Galileo. Galileo had, I think, a musket ball and a cannonball or something, and then connected them with a thread. But you know, you just ran an argument, and that's all thought experiments are. But they're picturesque. I mean, they're compelling because you get this lovely mental picture, and so it's easy for you to run through. But if it's simply, purely a picture, I don't think it has any compelling force. There has to be an argument there. So, for example, can I prove the possibility of a perpetual motion machine by imagining one, right? I visualize it. It's a big brass gadget. It's got valves and there's steam coming out and so on, and, oh, look, the wheel just keeps spinning and producing endless amounts of power, right? Just imagining it doesn't do anything, right? There has to be that argument there or you don't have a cogent thought experiment. And I say, that's all that's ever going on. Jim and I have been at this debate for 40 years now.
It's a little striking for me to say that. Yeah. Because what you're saying sounds sound and ordinary, so I don't understand what Jim would be objecting to. Because even with this articulation, my articulation of this argument is an argument. I'm saying, if there's this, then you have this, then you have that. There's a contradiction; therefore the premises can't be true. So it isn't just picture something and now you have it. You're in the same position as I am. This is my view. We talked about the dome earlier on. I don't understand why people are troubled by it. I articulate the argument view of thought experiments, and I'm thinking, well, that's kind of obvious. I wrote this paper, I think in 1986. I thought, well, this is a bit of a dud of a paper, you know, I'm just saying something that's so completely obvious,
but then you discover there are all these people who want to take issue with you and you're trying
to figure out why it's completely straightforward. Well Jim's a friend of yours and you've spoken to
him, as you've said, for decades now. Oh yeah, we get along. What do you think he
would say other than you have to connect to a platonic world? I imagine that's not his sole
point. Well, he runs lots of examples. I'd have to refresh my memory on his writing; I'm a little nervous about trying to channel him. But I think the thing (Jim, if you're watching, I apologize for getting it wrong), I think the thing for him is this moment of understanding that somehow seems to surpass just ordinary argumentation. He likes doing philosophy of mathematics, and he's got an example where you sum the numbers one, two, three, four, five: you've got a little stack of blocks, and you can look at the stack of blocks and suddenly you see, oh, it's going to be five times six divided by two. Right. You can just see the way that works, and you just suddenly see it instantly, without apparently having to think about it. Those are the sorts of examples he likes. I see. I just think they're arguments still because, you know, I say to him, well, I didn't see it. How does it work? And then he explains it to me
and then he gives me an argument. Okay. Yeah. This moment of understanding sounds similar to Penrose
when he's articulating the Lucas argument or his version of the Lucas argument with Gödel implying
that the mind isn't computational. I don't know that argument well enough. I know of it, but I don't know the details. Okay. So how about we get to something that you know inside and out:
Landauer's Principle. Why don't you outline what Landauer's Principle is and then what your precise
statement is, either that Landauer's Principle is false or it needs to be modified. The argument
that, or the project that Landauer had was a very practical one. One of the things that we
notice in computing devices is that they always produce heat, right? And that heat, of course,
is work that's been degraded. And so it is a cost for computation. It's been a longstanding
problem. We always need to cool down our computing processes. I don't know if you remember the Cray
computers going back many years, but they would sit, if I recall correctly, in vats of Freon
in order to, you know, so the generation of heat in a computing device is a big deal. The question
he was addressing is how far can we go before we have reached a limit beyond which we cannot go any
further? In other words, how far can we reduce the amount of heat that's being generated in
computing systems? The calculation can be done in terms of entropy. How much entropy is a
computing device creating? If you think of the device as sitting in an isothermal heat bath,
then the entropy creation is going to correspond to the heat passed to the environment divided by
the temperature of the heat bath. That'll give you a first pass at how much the entropy is. Now,
his argument, as embellished and developed by Charles Bennett, is that the logic of the
process being implemented determines the minimum amount of heat generation, right? And if the
process is logically reversible, something like a bit flip, right, then in principle
you can execute that with minimal heat generation, with minimal entropy creation. If, however, it is
a logically irreversible operation, the classic case being erasure, right, then necessarily
there's going to be a certain amount of heat generation that's going to correspond to the
Shannon information that you calculate for the two states. So if you've got two states, zero and one,
probability p and one minus p on the two of those, you calculate the Shannon entropy, multiply it by Boltzmann's k, right, and then you know how much entropy will be created when you erase it. Now,
what's wrong with that? What's wrong with that is just a very basic fact of the thermodynamics of
systems at the molecular scale. You cannot do anything at molecular scales without creating
entropy. So something as simple as a bit flip, you can't flip a bit without having some driving force
to push the bit from one state to another. So the very simplest case is you might have a charge and you want to move it from one location to the other. You'll only be able to move it from one location to the other if you have some kind of electric driving force that will push it. And what is that
driving force working against? Remember, we're at molecular scales, and at molecular scales, that
individual charge has its own thermal energy. It's bouncing around, right, and so you have to confine
it. And in the process of confining it, you compress it over to one particular part, right,
you're going to be doing work on it. That work is going to be lost as heat. This is an extremely
general result. This simply is Boltzmann's S = k log W. The best you can ever do for any process at
molecular scales, right, is to have a probability of success of completion. And Boltzmann's W tells
you the probability of success of completion, and the S associated with it tells you how much
entropy you're going to have to create. So if you don't confine the charge very much, right,
then it has its own thermal energy. It can jump out, right, and so you have a probability that the
charge is going to go back to the original state. But because you didn't confine it very much, you
haven't created much entropy. But if you confine it a lot, right, so you really force that charge
deeply into some kind of potential well, right, then you'll have a good probability of success,
but there'll be a lot of heat generated, a lot of entropy created. So the bottom line
is the following. The amount of heat that will be generated in molecular scale processes is not
determined by the logic. It's simply determined by the number of steps that you want to complete and
the probability of completion that you determine for each step. Again, this seems so elementary.
I've been arguing this for, I don't know, a dozen years now. I just don't understand why
the Landauer principle talk continues. If you're interested in the question of, yeah, what's the
minimum heat generation that you can have in any kind of molecular scale process, computational or
not, it doesn't matter. Ask how many steps are there in the process and what's the probability
of completion that I want for each step, and S = k log W will tell you the answer. And it's done.
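As a quick numerical check of this reasoning (my sketch; the function below and its reading of S = k log W as an odds ratio are reconstructions, not from the interview), the entropy created in driving one step to completion with success probability p comes out as about k ln(p/(1 - p)), which reproduces the figures quoted in a moment:

```python
# Sketch: entropy (in units of Boltzmann's k) needed to drive one
# molecular-scale step to completion with success probability p,
# reading S = k ln W with W as the odds ratio p / (1 - p).
import math

def entropy_cost_in_k(p):
    """Entropy created, in units of k, for success probability p."""
    return math.log(p / (1 - p))

print(f"one-bit erasure bound k ln 2 = {math.log(2):.2f} k")
for p in (0.75, 0.90, 0.95, 0.99):
    print(f"p = {p:.2f}: ~{entropy_cost_in_k(p):.2f} k per step")
# p = 0.95 gives ~2.94 k, matching the 'roughly 3k' figure mentioned below;
# every value above p = 2/3 already exceeds the k ln 2 erasure bound.
```

Note that this cost is per step and multiplies with the number of steps chained together, which is the point made next.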
So ordinarily in the calculation of Landauer's principle, they use a principle of indifference
to put 50-50% odds for the zero and the one, and you're saying that the probabilities need to be
physically dynamical? Yeah. My recollection is that in Landauer's original paper, he talked about computing systems and the frequency with which they might be in different states. But go ahead. So let me try and summarize. If you try and form a lower bound based on logic,
well, you shouldn't. You should look at the precise implementation or the procedure.
If you do this, you'd find that the minimum should be higher than k log 2. Yes, absolutely
higher than k log 2 if you just want a single process. I've done the calculations. I can't
remember what the exact numbers are now, but to get a really modest probability of completion, I don't know, 90% or something, I can't remember the exact numbers, you will certainly create more entropy than the k log 2. That's 0.69k, which is the one-bit erasure case. If I remember correctly,
if you want, I think, 95% probability of success, you create 3k of entropy, something like that,
but that's only one step. Remember, in a computing device, many, many, many steps, right? There isn't
just one step. You've got all these steps chained together and every single one of them is going to
be dissipative. This is just a completely basic fact of molecular scale physics. It doesn't take
massive, complicated, fancy derivations. The whole thing's done in two lines. You just write down S = k log W, or you can find different expressions of it. If you go into the Gibbs formalism, S = k log W will be expressed in terms of free energies, but they're all essentially the
same result. This is interesting. I'm currently writing an article. Maybe it's published already.
I'll place it on screen if it's already out. It's about this word in-principle,
in-principle arguments. My contention is that when most people just use that word, they use
it loosely and you need to scrutinize what kind of in-principle are they invoking? Are they referring
to epistemological modalities or nomological or metaphysical or logical possibility or something
else entirely? Even within these categories, there are frequent ambiguities and doubts.
What you're suggesting aligns with this. People invoke, in my interpretation of what you've said,
Landauer's principle. They're also employing, well, let's just idealize this scenario. Let's
say it's an in-principle argument, but then even in such cases, you have to be careful
and consider, okay, what are the practical implementations? Yeah, I'll say more than
that. It's an inconsistent application of the idea of in-principle. I think you know a bit of
the literature here. It goes back to Szilard's 1929 paper in which he introduced the Szilard
engine. This was a version of the Maxwell demon. The idea was that you had a one-molecule gas that
would bounce around inside a chamber. You would insert a partition, trapping the gas on one side,
and then you would isothermally expand it, thereby taking heat from the environment and converting
it into work. Now, the key thing to understand about that is that the phenomenon that Szilard
was looking at is a thermal fluctuation. This was the literature in which he was writing: thermal fluctuations, going back to Szilard and Einstein and Brownian motion and so on. The fundamental
question that was being asked is, if you look at thermal fluctuations, to what extent can they
reverse the second law of thermodynamics? If you look at Brownian motion, for example, think about
the Brownian motion in a fluid when the Brownian particle goes up and down. When it's going up,
heat from the environment is being converted into some microscopic notion of work because
it's being elevated in the gravitational field. Poincaré remarked that in this sort of system,
we see through our microscope a Maxwell demon in action. The question then became, is it possible
to accumulate all of these microscopic violations of the second law of thermodynamics in order to
produce a macroscopic violation of the second law of thermodynamics? That was a serious project that
was undertaken in the first decade of the 20th century. Smoluchowski came up with the answer,
and the answer is yes, you get fluctuations that you might try and exploit, but every time you try
and exploit those fluctuations, you will use other processes that have their own thermal
fluctuations that will reverse everything. This is the example of the Smoluchowski trapdoor. Let's
now go back to the Szilard engine. The single molecule bouncing backwards and forwards is a
case of a dramatic density fluctuation in a gas. It's the most extreme case. When you have larger
numbers of molecules, the fluctuations are very small. As you decrease the number, then
the fluctuations become large in relation to the total energy of the gas. Szilard's question was,
can we somehow exploit those fluctuations and add them up to get a violation of the second law? The
trouble is, when people analyze that, they don't account for all the fluctuations that are in the
apparatus that they're using. Think about the way the apparatus works. You start out with the gas,
the one molecule gas bouncing around. You put in a partition. The mere fact of putting in a partition
is itself a thermodynamic process. If that partition is very light, it's going to have its thermal energy of a half kT. You have to suppress that energy to get the damn thing to stick. That's going to be creating entropy. If you make it very massive, so that the half kT is not going to produce much motion, then you need friction to damp it so it stops moving and doesn't
bounce out. The short answer is, the analysis of the Szilard engine from Szilard's time up
to the present simply ignores the totality of fluctuation phenomena that have to be suppressed
in order to get the process to go through. To go back to your original point, it is a selective
and incorrect use of in-principle idealization. You're idealizing away half of the fluctuations,
but not the other half. And then you're claiming, then you're claiming a result. If you're going to,
if you're going to try and exploit fluctuations, you have to treat them consistently and look at
the fluctuations throughout the system. If you just pick one particular subset of the
fluctuations, you're going to get nonsense results. And that, and so, I mean, you,
you probably sense frustration in my voice. This literature has been teetering on the,
on the edge of nonsense for a hundred years. All right. This kind of selective treatment
of fluctuations is just disastrous. Of course, you get completely bogus results. The trouble
is that every time a formula, P log P appears, there's a tendency, a natural reaction that says,
oh, we have a P log P, that must be thermodynamic entropy. No, it need not be. The conditions for a
P log P to be associated with heat, in the way that Clausius says, requires that that P come
about in a very particular way. And the mere fact that I don't know whether I have a coin,
I put it in my pocket, I don't know whether it's heads up or tails up, that isn't, that isn't
the right way for there to be a thermodynamic entropy of K log two associated with the coin,
but that's the fallacy that's being committed over and over and over again. So there it is. There was even a Nature article that says that they've experimentally validated Landauer's principle. Oh,
my. Yeah. Again, they're doing exactly what they shouldn't be doing. I don't remember the details now, but what they showed is that you have a little tiny particle, a colloid or something, that's free to move around like a Brownian particle. And if you compress it, right, by moving a barrier in, and you do it slowly so you can get a reversible effect, then you pass heat of kT log two to the environment. Well, of course, this has been standard in thermodynamics for over a hundred and I don't know how many years. This is just the basic thermodynamics of ideal gases. I did a lot of work
on Einstein. It's abundantly clear in Einstein's work on Brownian motion that he understands this
perfectly well. It is quite fundamental. If that experiment had failed, right, then we
would have to rethink the thermodynamics of ideal gases. So what's wrong with the experiment? Well,
they just looked at how much heat gets generated when you compress what is in effect a one-molecule gas; it's actually a particle, but it's close enough to being a one-molecule gas in its degrees of freedom. If you want to say that we've now instantiated Landauer's principle, and that's the lower limit, well, that experiment doesn't show it. What about all
the entropy that was created in all the other bits of apparatus that were being used? All right. It's
a fluctuation phenomenon that you're looking at. What about all the fluctuations that were suppressed in order that you could move your partition inwards? All right. That's all got
to be part of the calculation or you simply don't have a result. And of course they didn't calculate
any of that. So, you know, I certainly accept the result that a two-to-one isothermal compression of a single-molecule ideal gas will pass kT log two of heat to the environment. The same thing will happen if you're in a fluid, right? You have a single Brownian particle, right? That Brownian particle is going to behave like a one-molecule gas. This was the brilliance, by the way, of Einstein's analysis of Brownian
motion. He realized that you could treat Brownian particles in the same way as you treat molecules.
It was a, it was a very beautiful analysis. Yeah. I do want to get to Einstein's views on old
quantum theory versus new quantum theory. We'll get to that shortly. So it sounds like what you're saying is the Nature article that I've shown, or maybe it'll be on screen again right now, is not validating Landauer's principle. This is something that was predicted before Einstein died, and Landauer came up with the principle in the 1960s or so. It's worse than that. It is an easy consequence of the standard thermodynamics of ideal gases. I mean,
it's undergraduate physics stuff. Okay. It's lovely that we've done the very particular experiment and seen the result. But boy, it had to be right. You know, if they had any other result coming out, and it wasn't the result of some kind of procedural error, it would have been traumatic for statistical physics, because it is so fundamental that if you just have a single component, like a one-molecule gas, and you compress it two to one isothermally, you're going to pass a kT log two of heat, reversibly, by the way. What if someone says, okay, so what
if Landauer's principle isn't the minimum bound? I mean, I can state Kurt's principle and set the minimum bound to zero, and then I'm still correct if you show that something's higher, because, hey, my minimum hasn't been violated. Remember, the idea is that we can
understand the minimum amount of heat generation in a computing device by looking at the logic,
right, of the processes being implemented. So if we want to minimize heat generation,
then what we need to do is look carefully at the logical processes and minimize any irreversibility in the logic. That is just mistaken and will mislead you. That's the wrong answer. The right answer is: what matters is how many steps you are expecting to complete, whether it's a computing system or any other system, and the degree of probability of completion. You need to understand that if you're serious about reducing the amount of heat, then paying attention to the logic being implemented in the computational device is really not going to help you. It's the number of steps that matters. The implementation matters massively. So is
something now allowed that we previously thought was disallowed because of this analysis, your
analysis, or is something now disallowed that was previously thought to be allowed? Like what is the
consequence, the practical consequence of this? The practical consequence is what I just said. If
you want to minimize the amount of heat generation in your computing systems, pay attention to how
many steps you've got and the probability of completion. That's what you should be looking at.
And you also believe that this distracts researchers from simpler, more general solutions
to Maxwell's demon, like Liouville's theorem. Oh, yeah. Yes, this is one of the papers that I wrote. I do my research. You did. Thank you. You know, the idea that notions of information and computation are going to help us understand why a Maxwell demon must fail, right, has so distracted everyone that we spend all our time arguing about it. So for a long time, John Earman and I wrote papers on this; then I wrote them by myself. I kept saying, no, no, no, these ideas aren't helping us. It doesn't work. We don't learn why a Maxwell demon must fail. We don't know that it must fail from these considerations. And we spent all our time thinking about that; we were just hugely distracted by it. Then one day I was sitting on the bus coming into the office, and I thought to myself, maybe I should ask the question: is a Maxwell demon possible? Forget about all this information stuff. And within five minutes, in the course of a short bus ride, I realized, oh God, the Liouville theorem just prohibits it. That's all. If you're assuming that the demon is to be implemented within classical physics, you can see with essentially no calculation at all that the Liouville theorem is going to block it. So I published that somewhere in a paper. And then after a while I thought, this is not a really decisive argument, because nothing at the scale that we're concerned about is actually classical; it's all quantum mechanical. And so I asked, is there an analog of the Liouville theorem in quantum mechanics? Yes, there's an analog in quantum mechanics, and you can run an exactly analogous argument. And so I've got another paper in which I show, in two columns, the classical analysis on one side and the quantum analysis on the other, and they just match up perfectly. So yeah, we know that a Maxwell demon is impossible insofar as those versions of the Liouville theorem apply. And that explains why, with all the nanoscale physics that we've been doing, no one's produced a Maxwell demon. And before we move on to Einstein's views, people are terribly interested in
entropy. And you mentioned a couple of different definitions of entropy like Boltzmann and Shannon.
So there are a variety of entropies. Can they be arranged such that one is a subset of the other,
like Boltzmann is a special case of Shannon, or are there entropy notions that are incompatible
and why are they all called entropy if they're incompatible? If there are indeed some that are incompatible, why don't you outline what entropy is supposed to be quantifying, and then the different definitions and their relations? The basic idea of thermodynamic entropy is articulated
well by Clausius in his early papers. I think it was 1865 or something; I don't remember the original paper's date. The idea is that entropy will tell you which processes will move forward spontaneously, which thermodynamic processes will move forward spontaneously. All right. Now, the notions of entropy that appear in thermodynamics adhere well to that. So Boltzmann's notion of entropy, S = k log W, is going to tell you which processes move forward. This was the rule that I told you before: if you want a process to advance, you want an end state that has a higher probability than the starting state, and S = k log W then tells you that the entropy of the end state is going to be greater than the entropy of the initial state. All right. And that notion of entropy, when you start to move into equilibrium systems, is going to mesh nicely with the notion that Clausius developed. In the Gibbs formalism, it's more complicated. In the Gibbs formalism, you can connect the Gibbs entropy, the P log P, with thermodynamic entropy by giving an analysis that both Gibbs gives and Einstein also gives in one of his early papers, where you look at a thermodynamically reversible process and you idealize it and you
can match up all the quantities. Then there's Shannon entropy. The sort of entropy that appears in information theory is a parameter for a probability distribution; that's what it is. It's a measure of how smoothed out, how uniform, the distribution is. The highest entropy arises when you have a uniform probability distribution, and as the distribution becomes more peaked, you're going to have a lower and lower entropy. It's just a different thing.
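To make that concrete, here is a minimal sketch of the standard formula (my illustration, not a computation from the interview): the Shannon entropy H = -Σ p·log p is maximal for a uniform distribution and falls toward zero as the distribution peaks.

```python
# Shannon entropy H(p) = -sum(p_i * log2(p_i)), in bits.
# Uniform distributions maximize H; peaked ones drive it toward zero.
import math

def shannon_entropy(probs):
    """Shannon entropy, in bits, of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # uniform: 2.0 bits (maximum)
print(shannon_entropy([0.70, 0.10, 0.10, 0.10]))  # peaked: ~1.36 bits
print(shannon_entropy([0.97, 0.01, 0.01, 0.01]))  # sharply peaked: ~0.24 bits
print(shannon_entropy([1.00, 0.00, 0.00, 0.00]))  # certain outcome: 0.0 bits
```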
I mean, there are connections. Take probabilities: there are many ways that probabilities appear in usage in the world. I don't know that I want to nestle one inside the other, but I'm kind of comfortable that Boltzmann's notion of entropy and the Gibbs notion of entropy and the Clausius notion all fit together very nicely. Now there are
complications because when you move into quantum mechanics, there's the von Neumann entropy and
there's a literature saying, well, this doesn't exactly match up. And I'll defer on that, because we're now getting into very delicate territory. The delicate territory is that we don't know how to interpret, or at least I don't know how to interpret, the density operators that appear in quantum statistical physics. When you give them a matrix form, when you have a nice diagonal with P's that add up to one, are they probabilities, or what are they? Right. And if you can't answer that question, then you don't really know what P log P is, which is going to be the entropy. So anyway, not my area. I defer to other people who write on this because I think we have walked directly into the measurement
problem in quantum mechanics here. So I don't think anyone really knows how to handle this,
other than pragmatically. Great. Well, we can turn this into your area by talking about
Einstein. Okay. So Einstein has some criticisms of new quantum mechanics and its statistical
interpretations. And then I believe you mentioned that Einstein's fundamental contributions to old
quantum theory have been forgotten because of these new criticisms. So firstly, let's talk about
the criticisms to get them out of the way, please. And then let's talk about his contributions to the
old theory. Well, I think his objections are very widely known. He simply did not believe in the indeterminism of quantum mechanics. He was arguing for some sort of a hidden variable theory. But the sort of hidden variable theory that he was arguing for, I don't think, is anything like the sort that we're thinking of. You might think now of the Bohm theory as a kind of hidden variable theory. Of course, Einstein was encouraging Bohm, but it's pretty clear that wasn't his theory. Einstein's hope was that his unified field theory would somehow return this hidden variable theory. And I think you know the basic layout of Einstein's unified field theory; the program was pretty straightforward. He'd found that you could represent gravity in the same structure as the inertial properties of space and time, right, in the metric field. I'm not using geometric language here because he didn't. If you're curious about that, I just wrote a long paper on this explaining it. Just a bit of a digression. Please.
uh, when you first get a, uh, a, um, uh, a class nowadays in general relativity
and you learn about the Schwarzschild metric and you learn about the Schwarzschild radius,
right. One of the first things you're told is, oh, but don't make the mistake of thinking that
that's a singularity. I know the formula blows up, but it's just a pure artifact of, of coordinates.
Don't make that mistake. And it's sort of, you know, you're warned it's a silly novice mistake,
but why, why is it talked about so much? Well, who made the mistake answer prior to about 1950,
everybody, right. Einstein was very clear that he regarded, uh, that he regarded the Schwarzschild
radius as singular. And he convinced everybody else with that as well. Now, when I say everybody
else, I don't mean, you know, I don't, I don't mean trivial figures. I mean, the world's greatest
mathematician of the time, Hilbert. And I mean, the world's greatest geometer of the time, Felix
Klein, they all agreed with him. All right. And Hermann Weyl, they, they, they all agreed what on
What on earth was going on? I do a lot of work in history; I'm fascinated by the history of physics. And I can only tell you very briefly what the answer is. There are multiple ways of treating general relativity mathematically. The geometrical approach that we now use is, I believe, the right approach and the correct way to do things, the one that gives us the best and most productive results, and I don't want to detract from that in any way. But Einstein disliked the geometric approach completely. He preferred a kind of algebraic, analytic approach, which depended on very particular expressions, their behaviors, and their transformation properties. And in the context of that approach, it makes sense that he would come to the conclusions he did. Now, he wasn't coming to those conclusions in ignorance of the possibility of another analysis. It was Lemaitre who had already discovered that you could transform away the Schwarzschild singularity, and Felix Klein had pointed out that the so-called mass horizon in the De Sitter spacetime could likewise be transformed away. Einstein knew all of that, but still, knowing it, he said: I don't like this geometrical approach; I don't take it seriously; we have to approach it analytically.
If you want to get a sense of how someone could possibly think that, look at the way Einstein's 1917 cosmology was introduced. He wanted a spherical geometry for space. So where does he get the line element for a spherical geometry? He says: imagine a four-dimensional Euclidean space with a three-dimensional sphere embedded inside it, and look at the geometry that is induced on the three-dimensional sphere — and bang, you get the nice line element. But now, do you take this geometrical picture of a four-dimensional space inducing a geometry on a three-dimensional space seriously? Do you really think there's a four-dimensional space there? No: all of this geometrical thinking is just confusing you. All that matters in the end is the line element in the space. So in correspondence with Reichenbach he called it, in German, a donkey bridge — a bridge of asses. In other words, it's an easy way for novices to learn things, but don't take it seriously. Oh, I think Dennett would call those intuition pumps. Yeah, maybe. So ass bridges were intuition pumps. Yes, I suppose that would fit, though he didn't use that expression, and I hate to speak for him.
Okay, so those are his objections. And the EPR thought experiment, I think, is transparently trying to argue that there's more to the system than standard quantum mechanics allows. It's in the title — is the quantum mechanical description complete, or something; I can't remember it exactly. And there's a criterion of reality: if you can predict with certainty the properties of some system without interfering with it, then the system has those properties. That's the EPR argument; everyone knows it. So what were his contributions to quantum mechanics? Really quite massive.
I think the major one was the light quantum of 1905, a completely extraordinary idea. When you look at Einstein's annus mirabilis, his year of miracles, 1905, everything he's doing there, excepting the light quantum, is a completion of nineteenth-century physics. We can just go down the list. His argument for the reality of atoms from Brownian motion completes the Maxwell-Boltzmann tradition of statistical physics, which had been well developed in the nineteenth century but was meeting a great deal of resistance because there were no new phenomena that needed atoms. If you understand special relativity, you realize that its basic content is implicit in Maxwell-Lorentz electrodynamics. Lorentz had discovered, in effect, the Lorentz group, mathematically articulated better by Poincaré, and once you have the Lorentz group and understand how to think about it, you realize there is in effect a kinematics of space and time in there. Einstein is excavating that and saying: look, there's a kinematics of space and time built into the big discovery of nineteenth-century electrodynamics. Even E = mc² is already there in special cases in electrodynamics. And amongst all of this, the huge discovery of the nineteenth century was the wave theory of light — that light waves are electromagnetic waves, the Maxwell-Lorentz theory. Then in 1905 Einstein says: no, wait a minute. In some thermodynamic sense, heat radiation has a particulate character.
Now, one of the things that has fascinated me for a long time is how Einstein made his discoveries. He didn't have anything that everyone else around him didn't also have. He basically had a pen, paper, and journals to read. He did very little experimentation, and wasn't terribly good at it. So what was different about how he came up with his discoveries? Well, the key thing about the results of 1905 is that he could see significance in empirical results that other people couldn't see. Let me give you the example of the light quantum.
If you try to understand what he did with the light quantum as a correction to electromagnetic theory, it's unintelligible. How could this possibly be? We have Young's two-slit experiment; we have all the massive successes of electrodynamics. How is it possible that Einstein can come along and say: no, wait a minute, there are particles there — and talk about the photoelectric effect as an example? How could it be? Well, you're not putting the discovery in the right context. The discovery lived in Einstein's work in thermodynamics in the years leading up to 1905. Einstein was already working in thermodynamics, trying to understand the molecular-scale properties of matter, and what he recognized was that the molecular-scale properties of matter get imprinted on its thermodynamic properties.
So the classic example, the simplest example, is this. If you have a system whose pressure, temperature, and volume conform to the ideal gas law, then you know that its molecular constitution consists of localized points of matter that bounce into each other but otherwise move independently. That's where PV = nRT comes from: you model the gas as a whole bunch of molecules that move independently of one another but bounce off the walls and into each other. The key thing is that PV = nRT at the thermodynamic scale is a signature of that constitution. This, by the way, is why osmotic pressure obeys the ideal gas law. When you first learn this in a statistical thermodynamics class, you wonder: why the hell should a dilute salt solution exert a pressure that follows the ideal gas law? Well, because the solution is dilute, the salt ions are moving around like independent molecules.
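(A worked number for the osmotic-pressure point — my illustration, with made-up figures. The van 't Hoff relation Π = icRT is just the ideal gas law applied to the dilute solute.)

```python
R = 8.314   # gas constant, J/(mol K)
T = 298.0   # room temperature, K
c = 100.0   # solute concentration, mol/m^3 (= 0.1 mol/L)
i = 2       # van 't Hoff factor: NaCl dissociates into two ions

# Van 't Hoff: the dilute solute behaves like an ideal gas at the same number
# density, so Pi = i * c * R * T, exactly as P = (n/V) * R * T for a gas.
Pi = i * c * R * T
print(f"osmotic pressure: {Pi:.0f} Pa (~{Pi / 101325:.1f} atm)")  # ~495500 Pa, ~4.9 atm
```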
Okay, so what does Einstein do? He's looking at the latest results on the thermodynamics of heat radiation, and what he recognizes in that thermodynamics is the same signature of a particulate constitution. In particular, he realizes that if you take the Planck distribution, which had been empirically established by the experiments of Lummer and Pringsheim in Berlin in 1900, and write the entropy as a function of the volume, you get that the entropy of high-frequency heat radiation varies with the logarithm of volume. (He's actually working in the Wien regime, so Lummer and Pringsheim don't really come into it, but never mind.) And that is the same as the ideal gas law. So Einstein says: look, here we have the thermodynamic fingerprint of a molecular constitution. And just as you can calculate the size of molecules once you know how big Boltzmann's constant is and you've got the ideal gas law, so you can calculate the size of the energy particles that are giving you S = k log W. What comes out is that the size of the little localized energy bundles depends on the frequency: it's what we now call Planck's constant times the frequency. That's the big argument. And he gives a very simple derivation of what we now call Boltzmann's principle — S = k log W is actually Einstein's principle; he calls it Boltzmann's principle in this paper. Then he shows it is instantiated here: when you add in the various conditions that apply, you get that the entropy goes with the logarithm of the volume.
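(In modern notation, the comparison runs like this — a standard textbook reconstruction of the 1905 argument, not a quotation from the interview.)

```latex
% Standard reconstruction (not verbatim from the interview):
% Wien-regime radiation of total energy E at frequency \nu:
S - S_0 = \frac{E}{h\nu}\, k \ln\frac{V}{V_0}
% Ideal gas of N independently moving particles:
S - S_0 = N k \ln\frac{V}{V_0}
% Matching the two gives N = E/(h\nu): the radiation carries its energy
% as if in N independent, localized bundles of size h\nu each.
```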
It is, I think, one of the most beautiful and most extraordinary of Einstein's contributions. There are many more; I'll just mention others that are important. The next thing that comes up is the following. He's established that there's a particulate character, but he's only established it by looking at the Wien regime of the blackbody spectrum. What happens if you look at the total spectrum, going all the way down to the lower-frequency end? Well, if you give a similar analysis of the thermal properties — in particular, if you look at the fluctuations of radiation pressure and of energy — you discover that the expression you derive for the fluctuations in heat radiation is the sum of two terms. One term has a particle character, the other has a wave character, and they are arithmetically added together. This is the origin of wave-particle duality; this is where it first appears that radiation has this dual wave and particle character.
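(The result being summarized is Einstein's 1909 fluctuation formula; this is its standard modern form — my gloss, not the interview's — for blackbody radiation of spectral energy density ρ(ν, T) in volume V and frequency band dν.)

```latex
% Einstein (1909), standard modern form:
\langle \varepsilon^2 \rangle
  = \underbrace{h\nu\,\rho}_{\text{particle term}} V\,d\nu
  + \underbrace{\frac{c^3}{8\pi\nu^2}\,\rho^2}_{\text{wave term}} V\,d\nu
% The first term is what a gas of independent quanta of energy h\nu would give;
% the second is what classical electromagnetic waves would give.
```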
And it keeps going like this. I'll just mention, of course, the A and B coefficients paper, the basis of lasers — that was 1916 and 1917 — and then Bose-Einstein statistics in the early 1920s. Now, the idea of the light quantum was greatly resisted. Bohr did not like it one bit, and it was regarded as a heterodoxy for twenty years, until the Compton effect. It was the Compton effect that finally drove home to physicists that Einstein's light quantum was in fact a good description of what was happening with heat radiation — with electromagnetic radiation in general.
So, Professor, why don't we end on the insights from your research into how Einstein thought differently from his peers? Because, as you mentioned, Einstein had access to the same data as his peers. What insights have you gleaned that can be applied by young researchers today, such that a young researcher can watch this and say: okay, I should do more of that? I think a lot of it is good fortune. So let me say a couple of things.
One thing that I don't think works is the following. There's this idea that you have to be young, in your twenties, to make a great discovery — that there's something about youth. What's happened is that we have a correlation, but not a causal connection. The process that seems to be at work is this: when a new science opens up, that's where the new discoveries are going to be made. The established figures are working on the old sciences they have put together, and they keep working on those. The new figures come along and ask: where is something new happening? Oh, it's over there. So they go work in the new science, and that's where the new discoveries are made — and that's why they're made more commonly by younger people. So don't feel bad that you're young and haven't made an Einstein discovery yet; it's got nothing to do with your age. But also don't feel bad that you're old. Yes, exactly.
In fact, this is one of the things I follow in my own research. It's a side point, but let me mention it before I get to the other point I wanted to make. Someone pointed out to me quite early in my career that when you enter a new field, most of the important novel ideas you'll have will come to you pretty early, and then you won't get much more. I think that's right. So how do you exploit that? Answer: you keep jumping around. Right, exactly. So if you've done your homework and looked at the sort of papers I publish, I'm all over the place. We've just looked at a few of the things I've done in philosophy of physics, but I've also written a whole bunch on inductive inference, and I'm writing a book on empiricism at the moment. It's all over the place, because every time I go into a new field I'll have a new thought, and if it's genuinely new, I'll publish it. So don't be afraid to jump around; this is one of the traps for young physicists.
This is why you should be a philosopher of physics and not a physicist. If you're a physicist, you're trapped by the need to keep grant money coming in, which means you have to develop an expertise of sufficient caliber to keep the grant money going, to keep your lab going, and to keep your graduate students going. You can't escape. A philosopher of physics is supported by teaching, and we can switch on a dime: I can change my mind tomorrow about what I'm working on and work on something else, and as long as I keep teaching my classes, I'm supported. Okay, now let's get back to Einstein.
What have we learned from Einstein? Einstein had a remarkable ability to look at empirical results and see significance in them that nobody else could see. I've already mentioned that with the light quantum: he could see the signature of distributed atoms there. In special relativity, he could see that the Lorentz group was actually a kinematics of space and time. All of this is empirically there in the theory; Lorentz and Poincaré fully understood the mathematics — they just didn't see it. This was Einstein's magical power, and he used it over and over again: he could read significance into experimental results. Things started to change with general relativity, though it had the same origin there. He recognized that the Galileo result — that all bodies fall with the same acceleration — had to be implemented exactly and perfectly.
Now, when people like Poincaré and Minkowski were relativizing theories of gravity, as they tried to do, they discovered that that law was broken in second-order quantities: you would get a (v/c)² dependency on the sideways velocity. Things moving with a sideways velocity would not fall at the same rate as something falling vertically straight down. This, Einstein tells us, just bothered him massively; he just didn't see that it could possibly be right. And you can think of other cases and understand why. Would it mean that a kinetic gas falls more slowly when it's hotter, because there's a lot of sideways molecular motion? Maybe, maybe not — it turns out not to be that simple. So what does Einstein do? He says: we need to construct a theory in which that result is preserved; it's that important. And how did he construct that theory? With the principle of equivalence.
If you have two bodies, one at rest and one moving inertially to the side, and you view them from a uniformly accelerating frame of reference, then the resting body will fall, and so will the body with the sideways motion, and they will remain at exactly the same altitude as each other.
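(The kinematics behind this is elementary — a sketch in my notation, not Einstein's. Both bodies are force-free in the inertial frame; pass to a frame accelerating upward at rate g.)

```latex
% Inertial frame: z(t) = z_0 + v_z t for either body, whatever its horizontal velocity.
% Frame accelerating upward at g: z'(t) = z(t) - \tfrac{1}{2} g t^2, hence
\ddot{z}'(t) = -g
% independently of the horizontal velocity, so the two bodies remain at the
% same altitude as each other at every instant.
```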
So Einstein says: that's the way a gravitational field has to be. All right, so let's ask: what sorts of theories of gravity come out of that? Since he's working in a Minkowski spacetime, he very rapidly — rapidly, hell, over four or five years — gets to the idea of a semi-Riemannian spacetime. He moves from a Minkowski spacetime to a semi-Riemannian spacetime. So that's the thing: there has to be a match. This is now the general moral: there has to be a match between the problems that are ripe for the picking and your particular talent and expertise.
Now, how does that work out with Einstein? Well, Einstein then moved on to his unified field theory, and he stopped using that facility. He started saying: I'm going to find the simplest possible rules we can have for physics. And from the mid-twenties onwards, while he was doggedly pursuing his unified field theory, he just never produced anything that we know actually works. He was no longer well matched to the problem. If you ask who was well matched to the problem: when quantum mechanics came along, it was just crazy. You had to be someone who could tolerate bizarre contradictions and manage with them. And who could do that better than anybody else? Answer: Niels Bohr. The Bohr theory of the atom of 1913 is just crazy. You come along and say: I'm just going to turn off electrodynamics; I'm just going to assume electrons can orbit without radiating. Completely crazy. But he had this ability to say: I know it's crazy, but — what's the quote? "Is it crazy enough?" I don't think that's Bohr; maybe it was Pauli or someone.
And that was terrific, because he could actually produce this theory, the Bohr-Sommerfeld theory of the atom, which led directly up to what happened in the 1920s. Of course, just as with Einstein, Bohr's facility for tolerating silliness and contradiction then became a massive liability, because he went on to produce this inchoate idea of complementarity, for which I don't think there's any precise sense, and he somehow managed to convince a whole generation of physicists to take this silly idea seriously. It took a long time for people to get past the incoherence of Bohr's ideas. And I can see you flinch there, because there's a sub-community in philosophy and physics who hang on to the idea that Bohr had some kind of deep and profound insight. We bifurcate: I'm clearly in the school that thinks no. I've no doubt that Bohr had strong, powerful intuitions that he could communicate to other people, but at their core they are incoherent.
Anyway, the moral is: if you're a young person starting out, just do the work on what interests you, and look for places where you can see further than other people can. That's your secret skill. When I talk to philosophers of science and we're trying to figure out where they should work, I often ask them this question: can you remember being in a discussion group where everyone gets tangled up over something, and you're sitting there thinking, I don't get it — it's perfectly clear and perfectly obvious what's going on; I can see straight through this? Ah, there's your magical power. The difficulty is that because you could see it so clearly, you think it's trivial and easy, and so you tend not to value it. Instead, you look at someone who can do something you absolutely can't do, and you're in amazement, and you want to be them. Big mistake. They're good at it; you aren't. Do the things that you're good at. Do the things where you see your way through clearly, faster than other people do. That's where you'll make the breakthroughs. Anyway, that's the advice I give people, and it's as good as what they paid me for it. So free advice is only as good as what you paid for it. I love that.
Okay — so most of the time we'll look at gymnasts and just be wowed, and think: I should do that, because that's difficult. But then there are other tasks. That's exactly right. It's taken me a long time, and I've worked hard on finding exactly where I have a skill. I'm not very good at the mathematics — I can do mathematics competently, but I don't have the sort of beautiful insight that a good mathematician has. But my background is chemical engineering. I can tolerate the kind of vagueness that engineers thrive in; I can survive when the situation is unclear. Do you thrive, not just survive, when the situation is unclear? Do you actually prefer it, and do better in it than in situations that are clearer? Oh, yes, absolutely. And I'll give you an example of that: I wrote a paper recently on the nature of thermodynamically reversible processes.
I think they're roundly misunderstood, all the way through, and it's not a question of mathematics. The mathematicians — Caratheodory, going back to the Göttingen group — gave a beautifully mathematized version, but they missed the essential point of what's really going on with thermodynamically reversible processes. One of the things chemical engineers have to be good at is thermodynamics, because the processes in chemical plants are all thermodynamic processes. So I was taught thermodynamics from scratch four times in my engineering degree, and it was only on the third time that I suddenly got it. I can still remember the moment when I realized: oh, hell, it's all about thermodynamically reversible processes. That's the key concept; if you don't get that, you don't get the subject. So let me just mention this to you; maybe it will be helpful.
A standard mistaken view amongst physicists is that a thermodynamically reversible process is just a really slow process. No. Here's a really slow process: take a balloon, inflate it, and put a tiny pinhole in it. That balloon will deflate as slowly as you like, just by making the hole small enough — but that is an irreversible expansion of the gas; that is an entropy increase. A thermodynamically reversible process has to be one in which you have a near-perfect balance of driving forces: the forces pushing the process forward have to be balanced almost perfectly, almost exactly, by the forces pushing it back. Now, that runs automatically into trouble. Notice I had to use weasel terms: "almost exactly," "almost perfectly." There's a reason for that. If the forces balance exactly, nothing happens: in a perfect equilibrium of all driving forces, no change occurs. So you have to have some sort of imbalance — and if you have an imbalance, you have an entropy-creating process. So how are we to think of these things? Well, there are ways of doing it, and that's what the paper is about; it also includes a historical survey of everything I could find that people have written on this. But that comes out of a kind of engineering thinking, which is how I made my peace with these ideas.
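(A minimal formula captures the bind just described — a standard result, with my choice of setup: a gas at pressure P expands by dV against an external pressure P_ext, held isothermal at temperature T by contact with a reservoir.)

```latex
% Entropy created in the gas-plus-reservoir system:
dS_{\mathrm{tot}} = \frac{P - P_{\mathrm{ext}}}{T}\, dV \;\ge\; 0
% Set P_ext = P exactly: no entropy is created, but nothing moves.
% Leave any imbalance P - P_ext > 0: the process runs, but creates entropy.
% Mere slowness does not make the imbalance vanish -- the pinholed balloon
% leaks slowly against a finite imbalance and so is strongly irreversible.
```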
This is interesting — you learned thermodynamics four times from scratch in order to truly get it. Great, because I was going to say something that relies on the number four. I wonder if this is a general rule, because it's common to hear that one has to learn quantum field theory four times from scratch before one groks it. I had applied that only to QFT, not to computer science or to statistical mechanics, but I'm wondering if maybe it's the case in general — and you seem to validate the thermodynamic case. Yeah, I think that's right. Now, what does it mean to learn something from scratch again? You could take one course, Thermodynamics One, and then take Thermodynamics Two the next year, where they reteach you the fundamentals. Or you could take Thermodynamics One, take a year off, and retake the same course. Tell us: what exactly does "from scratch" mean? I'll give you my experience with thermodynamics. Chemical engineers have an odd place in engineering, because we don't just do one sort of engineering.
We have to have command of all the different branches of engineering. In a chemical plant, I have to understand the chemical processes; I have to have some understanding of the mechanical engineering of the pressure vessels being used; and I have to have some understanding of the electrical systems being used. Chemical engineers are also often involved in finance, so we had courses in discounted cash flow and in operations research. You're torturing yourself — this is so messy. Yeah, we had to be jacks of all trades, and I enjoyed that immensely. So we went to different departments. We learned thermodynamics in physics, because you need to know physics, so you go to the physics department and learn thermodynamics there. Then you go to the engineering school, because you have to know the engineering, and they teach you thermodynamics as well. Then you go to the chemistry department — chemical engineers need to know chemistry — and they teach you thermodynamics there. And then you come back to chemical engineering, and they've got their own version.
Now, if you think across all of those different groups, they all have different ways of representing things. The way a physicist talks about thermodynamics will involve quasi-static processes, entropy, and so on. When you go to a chemistry department, the interesting thermodynamics is the thermodynamics of chemical reactions, so it's going to be things like fugacities: what is it that drives a chemical reaction forward? It's an increase of entropy — but how do you represent the entropy so that it's applicable to the chemical process? Or if you're in an engineering school, the thing that really matters is the efficiency of engines: what's the best efficiency you can get out of an Otto cycle in a gasoline engine? It's all thermodynamics, applied in many different ways all the way across the board. And the point is getting all of those different perspectives.
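(Since the engine-school question has a crisp closed-form answer, here it is — the standard air-standard Otto result, my addition: the ideal efficiency depends only on the compression ratio r and the heat-capacity ratio γ.)

```python
def otto_efficiency(r, gamma=1.4):
    """Ideal (air-standard) Otto cycle efficiency: eta = 1 - r**(1 - gamma)."""
    return 1.0 - r ** (1.0 - gamma)

# A typical gasoline-engine compression ratio of about 10:
print(f"eta = {otto_efficiency(10):.3f}")  # ~0.602; real engines do considerably worse
```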
Now, the thing about thermodynamics is that there's an intrinsic beauty to it, but also a massive incompleteness, because what thermodynamics actually talks about is never the complete theory. In addition to the basic thermodynamic concepts, you need a theory of the matter involved. You need to understand the mechanics of fluid flow; if you're doing thermodynamics of quantum systems, you need to understand the peculiar quantum mechanics of those particular systems. One of the questions I got interested in for a while is: what's the maximum efficiency of a solar cell? They are heat engines — they take in heat radiation and produce electricity — but it's very much a quantum mechanical process that's doing it. Or take Peltier junctions. Have you ever played with a Peltier junction? You connect it to a battery and put your hands on either side: one side gets hot, the other side gets cold. What's going on there? So there are many different ways in.
Now, I know only a little quantum field theory, but my impression is that it has a very similar character. There are basic ideas — you need to know the Hamiltonians or the Lagrangians — but then you might be looking, for example, at Feynman diagrams and scattering processes, or you might be looking at quark confinement, or you might go algebraic: you might have a course from one of the mathematicians, who will get you to read Streater and Wightman. But what you're doing is approaching the one phenomenon in the world with many different theoretical devices, and it's only when you get a grasp on how all of these bear down on it that you see the commonality. I think quantum field theory is an especially difficult case; it is justly reputed to be a very difficult theory to learn. Part of it is that you start trying to compute Feynman diagrams, and very quickly you realize you've got a lifetime of integrals ahead of you — and do you really want to get into that? And then you've only learned scattering theory. Then there's all this stuff about renormalization: what do I make of that? By the way, when I started studying the renormalization group, it looked more like engineering to me than anything I'd seen in fundamental physics before. It really got my engineering juices going. I thought: boy, that's how we do things in chemical engineering. Sorry.
Yes — well, I was going to say I very much like this idea of approaching something from multiple points of view in order to understand it. One analogy: take a cone. If the light shines from above, it just looks like a circle; from the side, it just looks like a triangle; from an oblique angle, it looks like an ice cream cone, with a little bit of a bulge. It takes you a while to understand the three-dimensional structure. Yeah — and all you have access to are the projections, so you move around. That also jibes with your previous answer, and with something I had thought as well: maybe it's not mere youth that enables creativity, but the entry into a new field that fosters innovation. Schrodinger was forty or fifty when he began contributing to biology. Maybe it's just that foray into the unfamiliar that enabled the contributions. Yeah.
I've noticed this in philosophy as well. People look at some major work of philosophy and say: but that answer to the problem is easy; I don't really understand what the fuss is. Well, the fuss is not the answer — it's the question. The creativity in philosophy is in framing things so that an analysis is possible. If you do that, you create a new field, and because you're the first person there, you can jump on what is likely the correct answer almost immediately, and so you kind of win the day. This is what I feel happened with the work I did on thought experiments. I got very insistent on arguing that there's an epistemic problem here: how is it possible for thought experiments to give us novel knowledge of the world? And I made that the framing — I call it the epistemic problem of thought experiments, or the empirical problem; I can't remember which of the two. Once you ask it very pointedly and are then very rigorous in giving an answer, it's easy. It's the obvious answer — but you got there first. People say: what's the big deal? Well, the big deal is that I knew the right question to ask. It's the same with causation: I knew the right question to ask. Of course, causal metaphysicians aren't happy with me, but that's their problem.
Is there any epistemic gain that can come from thought experiments that cannot come from formal deductions? Yes — though you've narrowed things down by saying "formal deductions." By argument, I mean something much looser and more general: informal argumentation, and that certainly includes inductive inference. You'll find a lot of inductive inference going on in some of the most famous thought experiments. You know Einstein's magnet-and-conductor thought experiment? I'll just say in the abstract what the point is: some of the key steps in thought experiments are inductive inferences. You produce an effect in a particular case, and then you say: and this is general. That's an inductive inference, where you generalize from the one case; but because the particular case is so compelling, people are willing to go along with it — which might be good, or might be bad. We saw it in Einstein's principle of equivalence. We have: all bodies fall the same way in the uniformly accelerating frame of reference, and that's a gravitational field. And then Einstein says: and everything else will go the same way as well. That's one hell of an inductive inference at that point. We've only got the effect for falling bodies; we haven't got it for light propagation. But it's going to work for light too, he says; it's going to work for everything. And then you happily generalize further: you say all gravity is like that — not just uniform acceleration, but gravitational fields that are inhomogeneous. There's a lot of inductive inference going on here.
Yes. Now, your work on material induction, if I recall correctly, is against this. It's more like saying there are local ways we can do induction, but you can't apply them globally; it's not as if there's a one-size-fits-all induction. Correct. This comes out of the fact that I'm a science lover. I love science; I love the history of science. And I want to be able to say that our best science is somehow privileged over other endeavors — and it is privileged, for empirical reasons: because it is well supported by the evidence, and the character of that support is inductive. But I did not find accounts of inductive inference in the philosophy of science literature that were able to sustain that result. Instead we find a fragmentation into many different accounts, and you kind of go doctor-shopping: you find some particular example, you want to say why it's a good use of evidence, and you shop around until you've found the account of inductive inference that fits it; then you slap it on. No — we would need a single account that applies everywhere. It took me a while to see this, but after a lot of probing, what I realized is that there are no universal rules of inductive inference that apply everywhere; that's the uniformity you're talking about. Rather, what you have are inductive systems that apply locally, and they are specifically warranted by facts. So I should give an example.
Okay, the simplest example — the one I use in chapter one of the book — is this. Marie Curie prepares a tenth of a gram of radium chloride; it's the only sample of radium chloride in any laboratory in the world in 1903. She looks at its crystallographic properties and declares that radium chloride has such-and-such crystallographic properties — I think monoclinic was the word, though I may have that wrong — and she says it's the same as barium chloride. Now, if you think about that in terms of other accounts of inductive inference, what would it be? It could be an enumerative induction: this A is B, therefore all A's are B. Boy, that's a bad form to use, because on almost every occasion when this A is B, all A's are not B. This sample of radium chloride was prepared by Marie Curie; that won't be true universally. This sample of radium chloride is in Paris; they won't all be in Paris. This sample of radium chloride is a tenth of a gram; they won't all be a tenth of a gram. Or: all swans are black, or all swans are white. So the idea that you can authorize that inference by appeal to a general rule just doesn't work. But she wasn't doing that. So why was she so secure in making the inference — so secure it was even unremarkable? Well, the answer is the factual investigation of the nature of crystals all the way through the nineteenth century. People had looked at what sorts of forms crystals can have; this was work in atomic theory and work in mathematics — it's one of the places where the theory of discrete finite groups got underway. And it turns out that if you build up lattices, they fall into one of six or seven families, depending on how you count them. So if you find a crystalline substance that falls into one of those families, then you know that further samples will fall into that same family, and so you can make the generalization. It is inductive, and it's a little bit risky, because there are some substances that are dimorphic or polymorphic, which means they have forms in multiple different families. The familiar case of polymorphism doesn't map on exactly here, but it's the case of carbon: it can be diamond or it can be graphite. And there are many other minerals like this. So what was justifying her inference was facts about crystalline substances, hard won through the course of the nineteenth century — very difficult facts to learn, because characterizing those families took a tremendous amount of work before it got regularized; think of Haüy's principle, named after one of the early crystallographers. So the fact plays the role of the principle. And the argument of the material theory of induction is that it's all like that.
Whenever someone makes an inductive inference, if it's cogent and you ask why it's an appropriate inference, the answer is going to come back to a fact. And this applies specifically to people using probabilistic inference inductively. If you're going to use probabilities, the way I put it is the following: there is no default that says that every time you're uncertain about something, you can responsibly represent the uncertainty by a probability. You can't do that. You have a positive obligation to demonstrate that a probabilistic representation is appropriate to the case at hand. So, for example, in population genetics, what you typically do is say that this particular instance has been randomly sampled from the population. So we can do DNA typing, and you want to say: yes, it's very, very probable that this perpetrator's blood matches the blood found at the site. I'm perfectly happy with those probabilities, but it is essential that the probabilities are anchored by some fact — and the fact is that we can treat the case as if the person had been randomly sampled. If that isn't the case — if you can't treat the suspect as randomly sampled from the population — then all bets are off. The sample might have been planted in some way; you can figure out all sorts of ways you could come unstuck. Now, what happens when you don't take this seriously is that you run into all sorts of silly arguments that don't work. Have you seen the simulation argument? Yes — the one that says that we are very probably in a simulation.
Tell me about that. That's a spectacular example of using probabilities without any factual grounding. It works as follows. We end up in a position where we convince ourselves, somehow, that there are very many possibilities for how our experience of the world might come about. (I think these arguments are already pretty shaky, but I'm looking at one particular fallacy.) We convince ourselves that there are vastly many ways our experiences could come about if we were computer simulations, and relatively few ways they could come about if the world truly is as it seems. Let's just take that as a starting point — I think it's already dubious that we get there. Now we ask: what's the next step? The next step is to say: we have no idea which is ours. And I would say you stop at that point. You have no idea which is ours. But no — the argument says: I'm going to represent my uncertainty by a probability. And when I represent my uncertainty by a probability, I find that the vast mass of the probability ends up on the computer simulation cases, and only a very small amount ends up on the other. What's the fallacy? The fallacy is that you have no factual grounding for that probability. You have just let it fall from the sky, and the result is simply an artifact of a misapplied inductive logic. It's as simple as that. It's an egregious fallacy, but you need something like a material theory to tell you so. If instead you say, I'm going to use the principle of indifference, and so I can use probabilities — well, you're going to be in big trouble, because the principle of indifference contradicts probabilities in cases of genuine and extreme ignorance, and this is a case of genuine and extreme ignorance.
Another simple case is with a die: color two of the faces blue and the rest red, and ask whether it's going to land red or blue. We're indifferent between the two colors, so it seems fifty-fifty — but that's not exactly right. This goes back to Keynes. You'll find it in Keynes — I think it's his A Treatise on Probability, from the early 1920s. He has all the classic examples there.
Professor, thank you for spending so long with me. It's been a blast. Well, thank you. I've
enjoyed talking to you. You've got a really wonderful podcast. There's something subtle;
you know the questions to ask. I've received several messages, emails,
and comments from professors saying that they recommend Theories of Everything to
their students, and that's fantastic. If you're a professor or lecturer and there's
a particular standout episode that your students can benefit from, please do share.
As always, feel free to contact me. A new update: I started a Substack.
Writings on there are currently about language and ill-defined concepts,
as well as some other mathematical details. Much more is being written there. This is content that
isn't anywhere else; it's not on Theories of Everything, it's not on Patreon. Also,
full transcripts will be placed there at some point in the future. Several people ask me,
"Hey Curt, you've spoken to so many people in the fields of theoretical physics, philosophy,
and consciousness. What are your thoughts?" While I remain impartial in interviews,
this Substack is a way to peer into my present deliberations on these topics.
Also, thank you to our partner, The Economist. Firstly, thank you for watching, thank you for
listening. If you haven't subscribed or clicked that like button, now is the time to do so. Why?
Because each subscribe, each like, helps YouTube push this content to more people like yourself,
plus it helps out Curt directly (aka me). I also found out last year that external links
count plenty toward the algorithm, which means that whenever you share on Twitter,
say on Facebook, or even on Reddit, etc., it shows YouTube, "Hey, people are talking
about this content outside of YouTube," which in turn greatly aids the distribution on YouTube.
Thirdly, you should know this podcast is on iTunes, it's on Spotify, it's on all of the
audio platforms. All you have to do is type in Theories of Everything and you'll find it.
Personally, I gain from re-watching lectures and podcasts. I also read in the comments that, "Hey,
TOE listeners also gain from replaying," so how about instead you re-listen on
those platforms like iTunes, Spotify, Google Podcasts, whichever podcast catcher you use.
And finally, if you'd like to support more conversations like this, more
content like this, then do consider visiting patreon.com/CURTJAIMUNGAL and donating with
whatever you like. There's also PayPal, there's also crypto, there's also just joining on YouTube.
Again, keep in mind it's support from the sponsors and you that allow me to work on TOE full time.
You also get early access to ad-free episodes, whether it's audio or video. It's audio in
the case of Patreon, video in the case of YouTube. For instance, this episode that
you're listening to right now was released a few days earlier. Every dollar helps far more
than you think. Either way, your viewership is generosity enough. Thank you so much.