The Physics of Euler's Formula | Laplace Transform Prelude | 3Blue1Brown | YouTubeToText
Video Summary
Core Theme
This video introduces the concept of exponential functions, particularly $e^{st}$, as fundamental building blocks for understanding differential equations and the upcoming Laplace transform. It visually demonstrates how the behavior of these functions, especially when the exponent $s$ is complex, relates to physical phenomena like oscillation and decay.
This is the first video in a trilogy aimed at demystifying the Laplace transform,
a powerful tool for studying differential equations.
Although we won't dig into the Laplace transform itself until the next two chapters,
everything that we cover here sets up the mental frameworks and the prerequisite
knowledge that make understanding that transform as easy as I know how.
This video is a lot more than just preamble, though.
It's a very fun lesson in its own right about how one of the most
famous equations in all of math enables a bizarre trick for
solving an equation that is used ubiquitously throughout physics.
The main characters throughout this chapter and the next two are exponential functions,
and I'm always going to be writing these as e^(st).
Here we think of t as being time and then s as just some number determining which
specific exponential we're talking about.
One of the big aims of this video is to motivate using physics why it's useful to
give s the freedom to take on not only real number values, but complex ones as well.
But wait a minute, what does it even mean to shove a complex number into an exponent?
Here I imagine there's a bit of a divide in the audience.
You see, there are some regular viewers of math videos online for whom the
specific case of plugging in π times i is a little bit clichéd by this point.
It is amply covered by many videos on YouTube, but on the other hand,
most students find this to be an understandably baffling notion.
Given how absolutely fundamental this is to everything that follows,
even if you fall into that first camp, I hope you'll agree that it's worth kicking
things off here by reviewing a very beautiful and visual way to understand what this
idea is all about.
The nice part is that what follows doubles as a gentle warm
up for visualizing and thinking about differential equations.
You start with the fact that e to the t is its own derivative.
And really, you should think of this as what defines the number e.
Exponentials with other bases will have derivatives that are proportional to themselves,
but e is the special number such that that proportionality constant is 1.
Now, very often in a calculus class, you visualize derivatives as slopes of graphs.
That's all well and good, but it's not the only way.
You should get in the habit of flexing your mind a bit more.
For example, let's say you think of this as telling you the
position of some point on the number line as a function of time.
Then what the derivative expression is telling you is that at every
moment the velocity vector must look identical to the position vector.
And more specifically, because you know that e^0 equals 1
(anything to the power 0 is 1), you also have an initial condition.
It's telling you you start at the number 1.
So at the very first moment, the velocity is also 1, meaning it's pointed to the right.
But the farther to the right the position gets, the faster it must move.
So even if you had never heard of the function e^t or exponential growth,
this property alone is enough to give you a very visceral feeling
for how it gives a value that grows and at an accelerating rate.
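That dynamic reading translates directly into a simulation. Here is a minimal sketch (the `simulate_growth` helper, step count, and sample rates are my own choices, not from the video): step a point forward so that its velocity always equals `rate` times its position, and you reproduce e^(rate·t).

```python
import math

def simulate_growth(rate=1.0, t_end=1.0, steps=100_000):
    """Integrate dx/dt = rate * x from x(0) = 1 with tiny Euler steps."""
    dt = t_end / steps
    x = 1.0
    for _ in range(steps):
        x += rate * x * dt   # velocity is always `rate` times position
    return x

print(simulate_growth())           # close to math.exp(1) ~ 2.71828
print(simulate_growth(rate=2.0))   # close to math.exp(2) ~ 7.38906
```

Passing `rate=-0.5` to the same helper produces the decaying case discussed a little later.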
But what if there was some constant in that exponent like e^(2t)?
Well, by the chain rule, the derivative is then two times the function itself.
And reading this dynamically, it's telling you the
velocity vector is always two times the position vector.
Again, the farther to the right the position gets, the faster it must move.
But this time the feeling is that that growth gets out of hand all the more quickly.
What if that constant was negative, say negative 0.5?
Well, once again, by the chain rule, the derivative of
this function is minus 0.5 times the function itself.
So at every moment in time, that velocity vector looks like a 180 degree
rotation of the position vector, but scaled down to be half its length.
This means you start moving to the left, but as you approach zero with a
smaller position vector, that velocity must get proportionally smaller.
So it approaches zero, but at an ever slowing pace.
This, of course, is exponential decay.
But now for the fun part, why we're here in the first place.
What if that constant was an imaginary number i, the square root of negative one?
Again, the chain rule tells us that the derivative of
this function is going to be i times the function itself.
Geometrically multiplying by i acts like a 90 degree rotation,
so this is telling you velocity always has to be perpendicular to position.
For anyone who's a little bit rusty or needs a quick review with complex numbers,
let's say you have an arbitrary complex value a + bi,
which you typically draw in the 2D plane like this.
The easiest way to think about multiplying
by i is to go component by component.
a times i ends up on this vertical imaginary line.
And then bi times i is b(i^2), which is -b.
This is where you actually use the defining property of i.
You'll notice each of those two individual components is rotated 90 degrees.
So the sum as a whole also has to get rotated 90 degrees.
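This component-by-component picture is easy to check numerically, since Python's complex numbers behave exactly this way (the sample value 3 + 2i is an arbitrary choice):

```python
import cmath, math

z = 3 + 2j
rotated = z * 1j                 # components: 3*i = 3i, and 2i*i = -2

# the magnitude is unchanged and the angle advances by exactly pi/2
angle_change = cmath.phase(rotated) - cmath.phase(z)
print(rotated, abs(rotated), angle_change)
```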
Looking back at the equation, where this time,
evidently we must be thinking about position as being in the complex plane.
Even if you had never heard of raising e to an imaginary number,
and even if it's not clear at first what that would actually mean,
the expression is telling you that this value has to move in such a way that
the velocity vector is always locked to be a 90 degree rotated copy of the
position vector.
The only motion that satisfies this criterion is rotation around a circle,
and you can be more specific: because the initial position is 1,
that velocity vector always has unit length.
So this tells you how fast you have to move.
Your point wanders around that circle in such a way that
it traces one unit of arc length for every unit of time.
For example, to get one of the most famous equations in all of math,
if you wait for π units of time, you end up precisely halfway around the circle.
This is why e^(πi) is -1.
One thing that's always worth emphasizing to any students initially
confused by this expression is just how misleading the notation is.
When you input a complex value, the expression really has very little
to do with repeated multiplication, and honestly,
not that much to do with the number e. The computation it refers to
is plugging the input into this infinite polynomial, the Taylor series for e^x.
It's actually very fun, I think, to take a moment to interpret the literal meaning
of plugging in something like π times i for each one of these polynomial terms.
As you take a higher power, each extra factor of i rotates you another 90 degrees,
and then the π^n / (n!) terms initially grow,
but then they shrink as that denominator takes over,
and you end up with this spiraling sum that converges to minus one.
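The spiraling sum is easy to reproduce. As a sketch with the standard library (the 30-term cutoff is my own choice), the partial sums of the Taylor series at x = iπ converge to the same value `cmath.exp` returns directly:

```python
import cmath, math

x = 1j * math.pi
partial_sum = sum(x**n / math.factorial(n) for n in range(30))

print(partial_sum)               # very nearly -1 + 0j
print(cmath.exp(1j * math.pi))   # the direct computation agrees
```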
Now, that said, it is not at all obvious, just looking at this infinite polynomial,
that if you change the value t in the expression e^(i*t),
you're going to end up walking around a unit circle.
This is why focusing on the property of being its own derivative
is a lot more helpful than focusing on the underlying computation.
In practice, people get used to using the expression e^x as a notational shorthand here,
and really think nothing of it.
Throughout this lesson, I'm going to be pulling up a complex plane representing
possible values for this number s. And for each point on this plane,
I want you to be able to think about the corresponding exponential function e^(st).
We were just talking through what happens when s is equal to i.
On the lower right I can show you how that output changes with time.
And then on the top right, I might show a graph where, in order to fit it on the screen,
I'll typically only graph the real part of that output with respect to time.
In this case, it looks like a cosine wave.
If s is a different imaginary number, something like two i,
it means you rotate at a different rate.
And on the one hand, this is obvious.
Throwing in a 2 in front of the time obviously moves you twice as fast.
But again, I think it's kind of fun to read what the
derivative expression is telling you dynamically.
In this case, you can read it as saying the velocity is always a copy of
the position vector rotated 90 degrees, but stretched to have a length of 2.
More generally, it's common to label this imaginary part with the Greek letter omega,
which describes the angular frequency of the motion.
In other words, how many radians of arc length
does it traverse around the circle per unit time?
This is just the imaginary axis.
But what about when s has both a real and an imaginary part?
Something like minus 0.5 plus i.
On the one hand, you can split up the exponential,
and this part here is telling you that the magnitude decays over time.
And then this part is telling you that there's rotation.
That's all well and good, but for fun, another way that you
can think about it is to continue with the previous intuition.
To understand what multiplication by any complex number looks like,
you can ask what combination of rotation and stretching would place
the vector at 1 onto this new value s. For this example here,
that would look like rotating a little over 90 degrees and stretching it out a bit.
So the derivative expression is telling you this is always
the relationship between the position and velocity vectors.
And I think geometrically, this gives you a very visceral
way to see why the motion must be spiraling inwards.
You'll sometimes hear engineers refer to this as the S-plane,
which essentially means you should think of each point on that plane as encoding
the entire function e^(st).
The imaginary part of s is always telling you how
rapidly the function oscillates and in which direction.
And then the real part of s is telling you whether the magnitude grows or shrinks.
Positive real parts corresponding to exponential growth,
negative real parts corresponding to exponential decay.
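That reading of the S-plane can be spot-checked numerically. A small sketch (the sample values of s and t are arbitrary): the magnitude of e^(st) is governed entirely by the real part of s.

```python
import cmath

def exp_st(s, t):
    """Evaluate the exponential e^(s*t) at time t."""
    return cmath.exp(s * t)

growing   = exp_st( 0.5 + 2j, 3.0)   # positive real part: magnitude grows
decaying  = exp_st(-0.5 + 2j, 3.0)   # negative real part: magnitude shrinks
pure_spin = exp_st( 2j,       3.0)   # purely imaginary: magnitude stays 1

print(abs(growing), abs(decaying), abs(pure_spin))
```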
This key equation, telling us that velocity is some modified version of position,
is basically a differential equation.
And for you and me, having just seen how an intuition for reading
off a differential equation like this can explain what complex exponents mean
(at least if you want e^x to retain its core property),
let's flip this around to get back to our core question and see
how a different set of differential equations can motivate why
you would ever care about complex exponents in the first place.
The easiest case to show you what I mean by this is a very central example
used all throughout physics, where you imagine having a mass on a spring.
We're going to describe the position of this mass as x, which will change over time.
The value 0 is going to correspond to the equilibrium position.
The derivative of this position versus time function,
of course, gives you the velocity of that mass.
And then the second derivative, the rate of change of the velocity,
gives you its acceleration.
The key feature of this spring setup is that the more you pull that spring to one side,
the more strongly it accelerates the mass towards that equilibrium position.
More specifically, we say that the force, which is equal to mass times acceleration
(that's Newton's second law), is often well approximated as negative k times the position.
K here is just some positive proportionality constant.
It tells you how strong the spring is.
And this whole equation is telling you force is proportional to position.
Now, the reason this example is used all throughout physics
is because there are lots of other situations where you
approximate a force as being proportional to some kind of offset.
Often it's not exactly that, but as a first order approximation,
it really helps you model what's going to happen.
It's also common to include a term here proportional to the velocity.
We call it a damping term.
This Greek letter mu is representing another positive coefficient.
And the negative sign is telling you the faster this mass is moving,
the stronger that damping force.
Maybe you think of it as friction, maybe you think of it as air resistance.
I remember as a physics student, always being bothered by the fact
that neither friction nor air resistance actually behave like this.
But the better way to view it is that, again, this is a first order
approximation of whatever the slowing forces might be on this mass.
The point is, we now have a differential equation.
The position over time is an unknown function,
but we know it has to adhere to this constraint.
And your physical intuition probably tells you loosely
what you expect the solution to look like here.
There's going to be some oscillation back and forth,
and then as it loses energy to whatever those damping effects are,
the amplitude of that oscillation is going to decay.
I'm going to go ahead and take this equation and move everything to
one side of it so that we're setting a bunch of stuff equal to zero.
As a quick reminder, with differential equations,
there's not one specific function that solves it, per se.
For different initial positions where this mass might be,
you're going to get distinct functions that also solve the equation.
And that's actually only one out of two free parameters that we can change here,
because all of these are solutions where the initial velocity is zero.
But you could also imagine that the mass starts out with some other non-zero velocity,
and every one of these combinations of an initial position and initial velocity
corresponds to a distinct function that also solves the equation.
So to solve this, really you're looking for a family of functions that solve it.
And preferably, you'd like some way to be able to narrow down which
member of that family solves it for your specific initial conditions.
So how do you solve it?
Well, there's this one very bizarre trick which I remember really bothered me when I was
a calculus student who first saw it, which is where you simply guess that the answer
looks like e^(st), where s is just some constant,
something that you're going to solve for.
The reason this really bothered me is that guessing and checking like this
sort of just feels like asking the student to know the answer ahead of time.
And also your physical intuition is telling you that an exponential
is probably not really how this mass on a spring behaves.
And yes, this is, frankly, unsystematic.
But the point I want to make is that a desire to make this
trick more systematic and more generalizable is going to be
one of the things that leads you and me to the Laplace transform.
Right here, let's just run forward and see what it gives us.
If that position versus time really did look like e^(st),
then when you take its derivative, you get the same function,
but by the chain rule, multiplied by s. The second derivative again looks
like the same function, but it's picked up another factor of s.
And then all of the other constants just kind of come along for the ride.
What's very nice here is that you can factor out that e^(st),
and now everything that depends on time is tied up in this term right here.
And moreover, exponentials will never equal zero.
So if this equation is going to be true, it means
that this part right here has to equal zero.
So what you're left with is a piece of algebra.
Solve this quadratic equation, one that looks kind of like a
mirror image of the original differential equation that we had.
The easiest case here is if we ignore that damping coefficient.
Basically, setting mu equal to zero, with a little bit of rearrangement and taking a square
root, what you find is that s is going to be plus or minus the square root of -k/m.
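In code, this undamped case amounts to checking that s = ±i·√(k/m) really kills the characteristic polynomial m·s² + k (the values of m and k below are arbitrary stand-ins):

```python
m, k = 2.0, 8.0
omega = (k / m) ** 0.5          # natural frequency sqrt(k/m)
s_plus, s_minus = 1j * omega, -1j * omega

# both roots satisfy m*s^2 + k = 0, so e^(st) solves the equation
print(m * s_plus**2 + k, m * s_minus**2 + k)
```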
Now, k and m are both positive numbers, so that means, whether you wanted it or not,
i, the square root of negative one, has now entered the game.
This square root of k/m term is something that I'm going to give the suggestive
shorthand name omega. And rolling back, remember what it is that s represents.
We were exploring the possibility that a solution to this equation looks like e^(st).
If we plug in these values for s, you now know what that means.
Plugging in a purely imaginary term like this
corresponds to oscillation in the complex plane.
Now, on the one hand, that is very weird because obviously our mass on
a spring needs a real valued solution, not these complex functions.
But on the other hand, the idea of oscillating kind of matches what you want to find.
And it matches it quantitatively too.
Imagine that you increase that value k, meaning you have a stronger spring.
Well, then omega goes up.
So that corresponds to faster oscillation.
And your physical intuition backs that up.
A stronger spring probably would give you faster oscillation.
Even still, the result, frankly, feels bizarre, if not obviously nonsense.
I mean, the position of the mass on a spring is clearly a real number.
And if you zoom out, really what's going on here is that we found for the pure
mathematical equation divorced from any physics,
there exists a complex valued function that solves it, namely e^(i * omega * t).
To connect this pure mathland answer to something that's actually physical,
you need to squeeze out a real valued solution from this.
And the animation on screen kind of gives you one indication of how you could do this.
You could just ignore the imaginary part, only consider the
real component of this solution that does actually work.
But a better way to think about it, which will line up with the overall
story I want to tell here that navigates towards Laplace transforms
is to add up the two distinct complex solutions that we just found.
When you add these rotating vectors tip to tail,
the result stays constrained to the real number line.
And in fact, the way that it oscillates on that number line over time looks
like the function two times the cosine of that same frequency term times t.
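You can verify this tip-to-tail cancellation directly; a short sketch (the frequency and sample times are arbitrary choices):

```python
import cmath, math

w = 2.0
for t in [0.0, 0.3, 1.0, 2.5]:
    total = cmath.exp(1j * w * t) + cmath.exp(-1j * w * t)
    # the imaginary parts cancel, leaving 2*cos(w*t) on the real line
    assert abs(total.imag) < 1e-12
    assert abs(total.real - 2 * math.cos(w * t)) < 1e-12

print("e^(iwt) + e^(-iwt) traces 2*cos(wt) on the real line")
```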
Now, the reason that you're allowed to just add two different solutions like
this to get another solution is based on a critical property of our equation.
It's what we call a linear equation, which means if you have two
distinct functions that solve it, then when you add up those functions,
that sum of the two functions also solves the differential equation.
And actually you have more flexibility than that.
If you scale each one of those functions by some constant and you add them up,
that scaled sum is also a solution of the equation.
Remember, in solving an equation like this, we're not
just looking for one function or even two functions.
We're looking for a family of a whole bunch of
solutions that will depend on the initial conditions.
In this case, when we tried our admittedly random-looking guess,
the math came back to us with two distinct functions.
Because the equation is linear, you can scale each one of those functions
by some constant, add them together, and get a valid solution to the equation.
And those scaling coefficients don't have to be real numbers.
Those could also be complex numbers, which in this case
affects the initial angle of each of those rotating vectors.
The family of all possible functions you can get by tuning these two
coefficients is the family of all possible solutions to the original equation.
And most of these solutions are complex valued functions.
But the real valued solutions are a special case of those.
And which one you want depends on the initial conditions.
For example, if the initial position is supposed to be 2 and the initial velocity is
supposed to be 0, then you get a valid answer by setting both of these coefficients
to be 1, basically meaning you're just adding the two solutions we found earlier.
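Pinning down those coefficients is a small linear system. As a sketch, writing x(t) = c1·e^(iwt) + c2·e^(-iwt) gives x(0) = c1 + c2 and x'(0) = iw(c1 − c2); the initial values 2 and 0 match the example above, and the frequency value is an arbitrary choice:

```python
w = 2.0                 # any nonzero frequency works here
x0, v0 = 2.0, 0.0       # initial position and initial velocity

# invert the 2x2 system for the two coefficients by hand
c1 = (x0 + v0 / (1j * w)) / 2
c2 = (x0 - v0 / (1j * w)) / 2

print(c1, c2)           # both come out to 1 for these initial conditions
```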
If the initial position is something different,
then you simply scale both those constants by the same amount.
Now, as presented so far, if this is supposed to be an example of why complex
exponents are a natural and desirable thing, some of you could rightfully complain:
This is all just needlessly complicated!
If the so-called strategy is to just guess some function with a free parameter,
it's not like it's hard to guess for this situation that a cosine or a sine would solve
the equation, and you could have that frequency term be the free parameter that you're
solving for.
What you would find if you knew to make this guess,
is that either cosine or sine can totally solve this equation,
as long as you set that frequency to be the square root of k over m. And then,
just as before, because this is a linear equation,
you can get the full family of solutions by scaling both of these and adding them
together.
And this is another valid way to describe the family of solutions.
Essentially, we're describing it with an alternate coordinate system.
And you could argue this is a way more sensible coordinate
system to use when we care about real solutions.
Because in this case, all the real solutions are what you get
simply by setting those scaling coefficients to be real numbers.
Isn't this just way more sensible?
Why complicate things with complex numbers?
The value of putting exponentials front and center makes
itself clear as soon as we try to generalize things.
So far, when we solved for s, we got these two different values
in the complex plane that are constrained to the imaginary line.
And as you change what the constants k and m look like,
you end up with different imaginary values for your solutions,
corresponding to distinct frequencies in the oscillation that you get.
But think about what it means if we reintroduce that damping coefficient mu,
setting it to something that's not equal to zero.
Well, in this case, solving the equation looks like applying the quadratic formula.
And you don't really need to dwell on the details of the algebra here.
I'm just going to go ahead and show you what it looks like if
I increase the value of that coefficient mu, and we see where
the two corresponding solutions for s land in the complex plane.
The salient feature is that they have not only
an imaginary but also a negative real component.
And just a few minutes ago, we talked all about what it looks like if you
exponentiate something with a negative real part and an imaginary part:
it both decays and oscillates, where the real part tells you how much it decays,
and the imaginary part tells you how much it oscillates.
In this case, what I'll do is graph for you the real component of that exponential,
and it kind of matches what you would expect of the spring.
One thing that's actually pretty fun here is how if you increase that damping
coefficient mu enough, eventually the solutions no longer have any imaginary part
and they only have a real component, meaning the solution just looks like decay.
And when this happens, you call the spring overdamped.
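The quadratic-formula picture is easy to sketch numerically (the `char_roots` helper and parameter values below are my own choices): `cmath.sqrt` keeps the roots complex while the spring is underdamped, and raising μ past 2√(km) makes both roots purely real.

```python
import cmath

def char_roots(m, mu, k):
    """Roots of the characteristic polynomial m*s**2 + mu*s + k."""
    disc = cmath.sqrt(mu**2 - 4 * m * k)
    return (-mu + disc) / (2 * m), (-mu - disc) / (2 * m)

light = char_roots(1.0, 0.5, 4.0)   # underdamped: decay plus oscillation
heavy = char_roots(1.0, 6.0, 4.0)   # overdamped: two real rates of decay

print(light)
print(heavy)
```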
This whole example is called the damped harmonic oscillator.
Like I said, it's very fundamental throughout physics,
so just understanding it in its own right is a worthy enough task.
But how far does this dumb little trick actually take us?
The straightforward way that you can generalize it is for any equation that looks
like this, where you're taking a bunch of higher order derivatives,
you're scaling each one by some constant, you add them all up,
and you set the result equal to zero.
In that case, everything we just did works essentially the same way.
If you substitute e^(st) for x, then all of these derivative terms look just like that,
but each one picks up an additional factor of s.
This lets you factor out all of the exponential parts,
leaving you with a certain polynomial in s that you want to equal zero.
One of the most fundamental facts in algebra, literally called the
fundamental theorem of algebra, is that polynomials can always be factored
into linear terms like this, exposing n roots to the equation,
as long as you give those roots the freedom to maybe take on complex number values.
So, for example, if this was some fifth degree equation,
your solutions might look something like this in the S-plane.
Just like the oscillator example, this is basically the math telling you, hey,
e^(st) can absolutely be a valid solution as long as you set s equal to one of these
values.
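Finding those values of s is just polynomial root-finding. Here is a sketch using NumPy (the fifth-degree coefficients are an arbitrary made-up example, not from the video):

```python
import numpy as np

# coefficients of a5*s^5 + a4*s^4 + ... + a0, highest degree first
coeffs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]

s_values = np.roots(coeffs)      # five points in the s-plane
print(s_values)

# each root makes the characteristic polynomial (essentially) zero
for s in s_values:
    assert abs(np.polyval(coeffs, s)) < 1e-8
```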
And just as before, this is a linear equation.
So you can find the family of all solutions by scaling
each one of these exponentials and adding them together.
All of these constants are like knobs and dials that you can tune to your heart's content.
They can be real or they can be complex, allowing them to
influence both the amplitude and the phase of each term.
The specific values will depend on your initial conditions.
I am glossing over a certain nuance when it comes to repeated roots,
but this is the general idea.
Unfortunately, most real world equations are not simple linear ones like this.
For example, the equation for a damped harmonic oscillator has actually come
up on this channel before in a video about optics, but it came with a twist.
We were studying why light appears to slow down in a medium like glass,
causing it to refract.
And the key question was to understand why this depends on the color of that light,
giving the effect of a prism.
Now, I'm not going to recount all the details here,
but what you need to know is that deep in that video,
we were modeling charges inside the material, like glass,
as little damped harmonic oscillators.
These little charges wiggling about some equilibrium position,
were being influenced by an external force, in this case,
an incoming light wave, which oscillated up and down as a sine wave.
And critically, the frequency of that incoming light would in general
have nothing to do with the natural resonant frequency of the oscillator.
So, in short, we were studying the same equation,
but with this added term that looks like a certain cosine expression.
Now, unlike the linear case, the family of solutions here does not
look as simple as a linear combination of exponentials where you
can freely tune all of those constants to your heart's content.
And this dumb trick of just guessing e^(st) certainly is not going to work.
However, everything that we've discussed here does
actually bring you a lot closer than you might expect.
The solutions in this more complicated case do happen to
look like a combination of four specific exponentials.
It's just that, unlike the linear case, you can't freely tune all of the coefficients.
It's a lot more constrained.
In fact, the whole substance of that prism example comes down to understanding
exactly how big these coefficients are as a function of that incoming light frequency.
This is a surprisingly common outcome where the solutions to some
differential equation that pops up in the real world looks like a
certain combination of exponentials, but with particular coefficients.
This ubiquity of exponentials is why engineers benefit from an intuitive understanding
of points on the S-plane and how they can encode growth, decay, and oscillation.
You can kind of think about these functions, e^(st) as being like the atoms of calculus.
What I mean by that is that complicated functions that
describe our world can often be broken up into these parts.
And as long as you give s the freedom to take on complex values,
breaking them up that way makes them simpler to understand and to study.
This becomes especially true if you allow for infinite combinations,
potentially over a continuum of values for s rather than some discrete set.
We're going to go deep with that idea, and it is hard to overstate how powerful it is.
The key question is, given some unknown function and a differential
equation describing it, even if you assume it can be broken up into
exponential parts like this, how do you actually find what those parts are?
Taking the forced harmonic oscillator, for example:
How would you know that the solution is built out of four specific exponentials?
How would you solve for the appropriate values of s in the exponents?
And how would you solve for the corresponding
coefficients for a particular initial condition?
There is a tool for this job, and as you may have guessed by this point,
it's something known as a Laplace Transform.
If you watched the earlier chapter about Fourier series,
a lot of what I'm saying here is probably ringing all kinds of bells:
imaginary exponentials describing a kind of rotation, and breaking up general functions
as a sum of those rotating exponentials.
And there is absolutely a connection here.
A big part of the story I want to tell is how this Laplace transform
we are building up to extends the notion of Fourier series and Fourier transforms,
applying to a much more general family of functions.
We're going to go into much more detail in the next two chapters,
but here's a high level preview.
When you use a Laplace transform to solve a differential equation,
it actually ends up looking remarkably similar to that dumb trick of substituting e^(st).
In the context of our dumb trick, the differential equation turned
into algebra basically because the act of taking a derivative is
the same as multiplication by s, at least for these specific functions.
That same thing happens when you use a Laplace transform,
and it's for essentially the same reason too.
What that operation does is translate functions into a new language where
these terms e^(st), the atoms of calculus, are the fundamental units.
Then again, the fact that differentiation in time looks like multiplication by
s for these terms means that in this new language,
derivatives start to look a lot like multiplication,
and differential equations start to look like algebra.
To see how exactly this transform is defined, how you can visualize what it's doing,
and how to use it to concretely solve a nonlinear equation,
come join me in the next chapter.
At the time I'm publishing this, an early view for that next
chapter is available on Patreon, and my plan is to incorporate
the feedback and get a finalized version out by next week.
See you then.