YouTube Transcript: But what is a Laplace Transform?
Summary
Core Theme
The Laplace transform is a powerful mathematical tool that converts functions, particularly those described by differential equations, into a new domain (the s-plane) where they can be analyzed more easily. Its primary function is to reveal the underlying exponential components of a function, which manifest as "poles" in the transformed representation.
Transcript
What you're looking at, a somewhat complicated diagram that you and I are
going to build up in this video, is a visualization unpacking the meaning
behind one of the most powerful tools used to study differential equations.
It's known as the Laplace transform.
This is one of those tools where, as a student,
you can learn how to use it to solve equations,
and yet be left completely in the dark about what it's actually doing.
Think about learning how to drive a car versus
learning how an internal combustion engine works.
Both are worthy pursuits, one is not necessarily better than the other,
and in fact driving is probably more practical.
But there is something deeply satisfying about popping
open the hood and understanding the mechanism inside.
Similarly, our main goal with this video is to pop the hood and show
you some of the beautiful math that awaits us inside this object.
And strictly speaking, a lot of what I want to show is
not necessary if your only goal is to solve equations.
That said, for the differential equation students among you,
I think the content here should make the stuff you have to memorize a lot more
memorable, and after dissecting this machine, studying it piece by piece,
you and I will take everything for a test drive and see what it looks like to solve a
concrete and very interesting differential equation.
Now before I just plop down the definition on the screen,
let's talk about what problem the Laplace transform is trying to solve.
We set up a lot of this in the previous chapter,
and there are two primary ideas worth restating here.
Number one, you need to understand exponential functions,
and I'm always going to be writing these as e^(st),
where t represents time and s is a number.
It determines what specific exponential we're talking about,
but very importantly for this topic, we're going to give s the freedom to take on
complex number values.
We covered this much more thoroughly in the previous chapter,
but the quick summary is that if s has an imaginary part,
the output of your function rotates in the complex plane as time ticks forward.
When the real part of s is negative, the magnitude is decaying towards zero over time,
but if it happens to be positive, that would mean the magnitude grows,
namely growing exponentially.
If any of that feels shaky or if you want to understand why it's true,
do watch the previous chapter, but from this point forward,
I'm assuming everybody is comfortable with this notion.
Why do we care, though?
Why the hubbub about these functions?
Well, it's because of the second thing you need to know,
which is that a lot of functions, especially those arising in physics,
can be expressed as combinations of exponential pieces.
There's one very friendly example that's going to be helpful to
return to repeatedly throughout this lesson, the cosine of t.
This undulating function can be broken up as a sum of two purely imaginary exponentials.
Basically, if you take e^(it), that gives you rotation counterclockwise,
and e^(-it) rotates the other way, so when you add these two together,
perhaps imagining adding two rotating vectors tip to tail,
the imaginary parts cancel each other out, and what you're left with is
something whose output remains locked to the real number line,
oscillating back and forth.
Now as it stands, this sum goes between negative two and positive two,
but a cosine only goes between negative one and one,
so finally you multiply everything by one half.
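This decomposition is easy to sanity-check numerically. Here's a quick Python sketch (my addition, not from the video) showing that the imaginary parts really do cancel:

```python
# Sanity check: cos(t) equals 0.5*e^(it) + 0.5*e^(-it).
# The two rotating terms are complex conjugates, so their sum is real.
import cmath, math

for t in [0.0, 0.7, 1.5, 3.1]:
    combo = 0.5 * cmath.exp(1j * t) + 0.5 * cmath.exp(-1j * t)
    print(t, combo.real, math.cos(t))  # the two columns match; combo.imag is ~0
```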
More complicated functions might break down into more exponential pieces.
For example, later in this lesson, you and I are going to dig into something
called the driven harmonic oscillator, which is basically a mass on a spring
that's influenced by some external force, in our case, one that oscillates over time.
As a spoiler, the solution to the equation describing that
ends up looking like a sum of four exponential pieces.
Two of those pieces oscillate and decay in a way that matches
the natural resonant frequency of the spring,
and then the other two oscillate in a way that matches the external force.
So for cases like this and many others, what we would like is some sort of tool,
some sort of mathematical machine where you can pump in a function,
or even a differential equation describing that function,
and that machine will somehow reveal for us what specific exponential pieces
that function breaks down into.
That is, it exposes what these values s in the exponent all are,
as well as what the corresponding coefficients are.
Now you might be wondering why exponential pieces, why not focus on something else.
The short answer is that for these functions, when you take a derivative,
it looks precisely the same as multiplying by some number, namely s.
What you'll see by the end is how this means that same machine that lets us dissect
functions into exponential pieces also allows us to turn differential equations into
algebra, essentially because everywhere you see a derivative,
it turns into multiplication by s.
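If you want to verify that key property symbolically, a one-line sympy check (again my addition) does it:

```python
# Differentiating e^(st) with respect to t is the same as multiplying by s.
from sympy import symbols, exp, diff, simplify

s, t = symbols('s t')
f = exp(s * t)
print(simplify(diff(f, t) - s * f))  # prints 0, so d/dt e^(st) = s * e^(st)
```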
Now in a context like this one where we are so focused on complex-valued exponentials,
engineers have a special name for this complex plane that I'm depicting on the left,
representing all possible values for that term s in the exponent.
They call it, well, the s-plane.
A helpful mental image is to think of each individual point on
the s-plane as encoding the entire exponential function e^(st).
For this picture, the graphs I'm showing only depict the real component of the output,
but keep in mind, each point is representing the full complex-valued function.
These graphs are enough for intuition though.
You'll notice how bigger imaginary parts correspond to faster oscillation,
and then as your eyes scan from left to right,
the real part reflects either decay or growth.
So with this as our goal, the machine that you and I are going to build up today,
the one that exposes how a function breaks into exponential pieces,
is, as you have no doubt guessed, the Laplace transform.
Now this word transform is a little funny, basically in the same way
that a function is something that takes in a number and spits out a new number,
we often use the word transform in math for a more meta operation that
takes in an entire function and spits out a new function.
In the case of the Laplace transform, a typical convention is to name our new function
with a capitalized version of whatever you use to name that original function.
And in this setting, where our original function takes in time as an input,
the new transformed function has a new different kind of input, a complex value, s.
I think it's helpful to quickly preview what this new function actually
does before we pull up the definition and start dissecting it.
Suppose your original function, f(t), really can be
broken down as a sum of several exponential pieces.
When you apply this Laplace transform, giving you a new function of this new variable s,
if you were to plot this new function over the s-plane,
in a way that I will explain in just a minute,
what you see are these sharp spikes above each value of s that corresponds
to one of those exponential pieces.
These spikes have a fancy name, they are called the poles of your function.
So even if you didn't know ahead of time how your function could be broken down as a
sum of exponentials, if you understand this transformed version,
and specifically if you understand the poles,
that can reveal for you what those exponential pieces are.
At this point you're probably itching to see how this thing is actually defined,
and in its full glory this is what that definition looks like,
which we can think of as two separate steps.
First, you multiply your function by the expression e^(-st), and then second,
you integrate that result over time from t equals zero to infinity.
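For reference, those two steps written out symbolically give the standard definition:

```latex
F(s) \;=\; \mathcal{L}\{f\}(s) \;=\; \int_{0}^{\infty} f(t)\, e^{-st}\, dt
```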
We'll talk all about that integral and its nuances in just a minute,
but for the moment focus on that inner expression.
This term s is the newly introduced parameter,
the one that is the input of our new transformed function.
As I said, it's a complex number.
Every time you see the letter s in this video, it's a complex number.
And the way I like to think about it is that you might imagine freely moving
around this value s on the s-plane, and it's kind of sniffing around to find
which specific exponential functions line up closely with our function f(t).
For example, let's suppose our function is the cosine of t,
which as we discussed just a minute ago, we already know can
be broken down as a sum of two exponentials, e^(it) and e^(-it).
I've already previewed the idea that when you plot the final result you should
see these spikes over the key values of s, which in this example would mean poles
above plus i and negative i, and you can already get a little intuition by taking
this example and substituting in the expanded expression for the cosine of t.
Looking at this expansion, I want you to take a
moment to think about the product of these two terms.
When you multiply two exponentials, the contents of those exponents add together,
so what you're left with is e^((i - s)t).
Let me go ahead and plot that value on the lower right,
keeping the s-plane on the upper right. Just like the exponentials we've already seen,
as time goes from zero to infinity, this oscillates and decays, or maybe it grows.
The specific shape depends on how we set that value s.
Now in this case, as you move around that value s,
there is one specific value causing uniquely boring behavior.
If you set s equal to i, then that term in the exponent becomes zero,
so the entire function just looks like e^0, which is stuck at the constant one.
And this is a key idea. For almost all values of s,
an exponential like this is going to change with time,
typically looking like some kind of spiral, but at the special value
that we're hunting for, in this case s = i, things instead get stuck at a constant.
So, stepping back, if you were a mathematician and you're trying to invent some
kind of machine that will detect the exponential pieces lurking inside a function,
for example, detecting that a cosine has an e^(it) and an e^(-it) lurking inside it,
you might ask yourself, is there something I can do that can detect when one
of the terms in a sum like this is secretly just a constant?
In essence, this is the role played by the integral that is wrapping everything
up in the full definition, integrating as time goes from zero to infinity.
And on the one hand, if you're comfortable with calculus,
you might be able to anticipate why this would result in some kind of sharp spike
over the desired values of s.
If you integrate a constant from zero all the way up to infinity, it blows up.
But there's actually a fair bit of nuance here.
To start, that function inside the integral takes on complex number values.
And this raises a natural question for those of us
curious to interpret things in a satisfying way.
How do you think about integrating a complex valued function?
If you're up for it, what I'd like to do with the next 10 minutes or so
is really dig in and dissect what an integral like this really means,
how to visualize it, and to explain why the plots that I'm showing
represent something slightly distinct from the literal meaning of this integral.
As with any new piece of math, it's best to start easy and work our way up in complexity.
So to kick things off, let's just ignore this function f(t)
itself and only focus on integrating the e^(-st) part.
And to warm up, before jumping into the complexity of it all,
let's just suppose that s is a real number, meaning this is a friendly,
real valued function that we can plot with a graph like normal.
Typically, when you first learn calculus, you learn to interpret
an integral as telling you the area under a graph like this.
For example, if you set the value s equal to 1,
and you go through the procedure for actually calculating this integral
(take an antiderivative, take the difference at the two bounds,
and because one bound is infinity, take the limiting value there),
the way it all works out is that this expression equals 1.
And you can interpret that as telling you that the area under this graph is 1,
which is kind of fun.
I guess that means that all the area under this infinite tail
is exactly enough to fill in the rest of this unit square.
And then if we reintroduce that value s and set it equal to something that's not 1,
the effect is to squish the graph in the horizontal direction.
That's always the effect if you multiply the input of a function by some constant.
And so the area under this graph, which started out as 1, must now be 1 divided by s.
The key thing I want you to remember here is how if we let s get smaller and
smaller and approach 0, then that area gets bigger and bigger,
actually approaching infinity, and approaching it quite quickly too.
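You can reproduce this warm-up result symbolically; here's a short sympy sketch (assuming a positive real s, as in this part of the discussion):

```python
# The area under e^(-st) from t = 0 to infinity, for positive real s.
from sympy import symbols, exp, integrate, oo

t = symbols('t')
s = symbols('s', positive=True)
print(integrate(exp(-s * t), (t, 0, oo)))  # 1/s, which blows up as s -> 0
```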
But graphs are not the only way to visualize functions,
and area is not the only way to understand integrals,
and you should get in the habit of flexing your mind a little bit more.
Let me show you another way to think about this that will generalize
more easily once we let that function take on complex number values.
Think about this integral just between 0 and 1, something with a unit length.
And I want you to imagine all this area under the graph as a pool of water,
which you let kind of slosh down until it becomes level.
The height of this pool is telling you the average value of that function between 0 and 1.
And then because the width of this pool is just 1,
then its area is the same thing as its height.
So this integral up top, when it's over a unit interval,
is telling you the average value of the function on that interval.
Similarly, the integral from 1 to 2 would be telling you the
average value the function takes over that interval from 1 to 2.
And then same deal as you keep integrating along a bunch of other unit intervals.
So then, if you want the integral from 0 out to infinity,
what you can think about is taking all of these average values on those intervals
and adding them all together.
This is something we can work with.
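As a quick numeric sketch of that picture (my own check, with s = 0.5 chosen arbitrarily), summing the average values over unit intervals really does recover the full integral:

```python
# Sum of unit-interval averages of e^(-st) versus the exact answer 1/s.
import numpy as np

s = 0.5
total = 0.0
for n in range(60):                    # 60 unit intervals; the tail is negligible
    ts = np.linspace(n, n + 1, 1000)
    total += np.mean(np.exp(-s * ts))  # average value on [n, n+1]
print(total, 1 / s)                    # both approximately 2.0
```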
Now let's look at the complex case.
The value s is going to be some complex number, and then the function e^(-st) cycles
and decays around the complex plane as you let time go from 0 up to infinity.
The specific way that it cycles and decays or grows depends on that value of s,
and we'll get a distinct path through the complex plane for each one.
Now if you want to integrate this function on a unit interval,
let's say from the values t equals 0 to t equals 1,
imagine taking a sample of all of the outputs in this range and then finding the
average, the center of mass for all those points.
That average value is the meaning of this integral,
which I will represent with a little arrow.
Actually, it'll be helpful if we put this integral in its own complex plane
down in the lower right, because we're about to start adding them all up.
If you let t range from 1 up to 2 and you do the same thing,
take the average value on that interval, represent it with an arrow,
and then you add that arrow to what we have on the lower right,
the resulting sum is basically telling you the integral from 0 all the way up to 2.
And then we repeat.
You take an average between 2 and 3, add that, average between 3 and 4,
add that, and just keep going on and on and on and on.
And the limiting point for this spiraling sum that you see is the value of the integral
from 0 to infinity of e^(-st), exactly the expression we're trying to understand.
And as we move around that input s, the resulting value of
this spiraling sum might wander around the complex plane.
As a quick sanity check, let me move that value of s over to the input 1.
So there's no oscillation because there's no imaginary part,
and you'll notice that all these little arrows stack up to end up on the number 1.
And this should make sense.
Back when we were interpreting the integral the more familiar way as an area
under a curve, we saw that this value, when s equals 1, works out to be 1.
And just as before, if I let s approach 0, getting smaller and smaller,
then the resulting integral gets bigger and bigger, rapidly approaching infinity.
In this new diagram, we can see how if s moves away from 0 in a different direction,
moving vertically, the resulting integral also gets smaller,
but for a much different reason.
All the oscillation in the function gives us more cancellation in that vector sum,
so the output gets closer to 0.
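Here's a numeric version of that spiraling sum for one complex value of s (a sketch with an arbitrarily chosen s; any s with positive real part behaves the same way):

```python
# For s with positive real part, the integral of e^(-st) from 0 to
# infinity converges, and it lands on 1/s.
import numpy as np

s = 0.3 + 1.0j                            # decays and oscillates
ts = np.linspace(0, 80, 200_000)          # far enough out that the tail is tiny
dt = ts[1] - ts[0]
integral = np.sum(np.exp(-s * ts)) * dt   # crude Riemann sum
print(integral)                           # approximately 0.275 - 0.917j
print(1 / s)                              # the same value
```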
Next what I want to do is plot this value.
So look at the point that our spiraling sum converges to.
I want you to think of it as a little vector in the complex plane,
something that has a magnitude and a direction.
And to get a little fancy, let's take that magnitude,
and we're going to plot it above the value s in the s-plane.
So as I change that value s, and it changes the resulting integral,
the magnitude of our output might grow or shrink, and as it does so,
we will plot the result over the s-plane.
And I'm just going to leave this on autopilot for a moment,
where s is going to wander around the plane.
And as it does so, take a moment to think about why we're getting the shapes that we see.
Basically, the bigger the imaginary part of s,
the more spiraling there is in the expression, meaning more cancellation,
so the magnitude of that output is smaller.
Here's what it looks like if I graph the full plot over many possible values of s.
The most obvious feature is how if s gets closer and closer to 0,
then the magnitude of that output gets bigger and bigger, which makes sense.
The small real part means it has slower decay,
and the small imaginary part means there's less cancellation.
Now, as it stands, I'm only graphing the magnitude of that output,
but of course it has more information than that, it has a direction too,
so if we associate every possible direction of that output with a unique color,
then one thing I could do is color the graph,
giving us a richer sense of what that output looks like.
The other thing you've probably already noticed about this plot is that it is
conspicuously not being drawn over values of s where the real part is negative.
And think about what those values actually mean.
When the real part of s is negative, then the function e^(-st) grows exponentially,
and this spiraling sum for the integral we have blows up, it does not converge.
So these values are not defined, at least for the moment they're not defined.
There's a fancy notion we'll get to shortly.
On the boundary, things get kind of interesting.
If s is purely imaginary, the value e^(-st) simply goes around in a circle,
purely rotating, neither growing nor decaying.
And as we play this game of averaging along various intervals and adding them together,
that vector sum we get in the lower right simply spirals around and around ad nauseam.
Now on the one hand, this also does not converge,
there is not a specific value that this approaches.
However, it's not too hard to make sense out of it.
If you let the value of s pick up even just a little bit of a positive real component,
then our function does decay, and our spiraling sum does approach a clear concrete value.
And then if you slowly take away that real part of s,
letting it approach the imaginary number line,
then that resulting integral on the lower right unambiguously approaches one clear value.
And you can see that on the plot too, there's clearly some value that it wants to take on.
And in fact, I can tell you precisely what value it wants to converge to.
Think back to the real valued case where we saw
that the integral is equal to 1 divided by s.
This is a purely analytic fact that remains true even when s is a complex number.
Maybe that's what you'd expect, but it's not at all obvious that
this should remain true in the more rich case of complex numbers.
To gut check for at least one example, we were just looking very
closely at what happens while s approaches the imaginary constant i.
And if you focus on what's happening in the lower right,
you'll see that the integral is approaching negative i.
And indeed, 1 divided by i is negative i.
So we have this nice and blessedly simple equation describing our integral,
but the funny thing about it is that this right hand side,
1 divided by s, is defined everywhere on the plane.
I can plot the result, this is what it looks like.
Now maybe you raise an eyebrow for s equals 0,
but almost everywhere this has an unambiguous value.
Now to be clear, the integral itself emphatically does not converge on the left half
of the plane, so in that region the equation, strictly speaking, makes no sense.
As an example, think about setting s equal to negative 1.
On the right hand side, 1 divided by negative 1 is negative 1,
but I think you'll agree that integrating e^t from 0 to infinity sure does not
look like negative 1.
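You can watch that breakdown happen numerically. In this sketch (my addition), the running integral at s = -1 just explodes, so the value -1 there comes only from the continuation we're about to discuss:

```python
# At s = -1 the integrand is e^t, and the running integral blows up
# like e^T rather than settling anywhere near 1/s = -1.
import numpy as np

for T in [5, 10, 20]:
    ts = np.linspace(0, T, 100_000)
    partial = np.sum(np.exp(ts)) * (ts[1] - ts[0])
    print(T, partial)  # roughly e^T - 1: no sign of convergence
```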
However, this brings us to a fascinating aspect of complex valued functions,
which is completely different from the world of real valued functions.
It's something known as analytic continuation.
It's a sense in which these nonsensical values beyond the domain of convergence can
nevertheless reflect useful meaning about the expression where it really does converge.
Although what follows is very firmly in the territory of more than you
need to know to drive the car, it is a beautiful piece of math and it's
the final puzzle piece to explain the plots that I'm drawing for you.
Here's the idea.
Suppose you have some function and it's defined only over a limited domain.
If this was a real valued function and you wanted to extend the definition to include
a bigger domain, you basically have infinitely many choices for how to do this.
Even if you add some constraint, say your function is smooth in the
sense that it has a derivative everywhere and you want your extension to also be smooth,
then you still have an infinity of choices.
It's kind of like a floppy bit of spaghetti.
Complex valued functions though turn out to be much more constrained.
If you have one defined only over a limited domain,
something like our integral that converges only over half the plane,
and if that function is nice in the sense of having a well-defined derivative,
and then if you want to extend the function in a way that keeps it nice,
again in the sense of having a derivative, then there's a nice little theorem
telling us that one of two things happens.
Either there is no way to extend it, or if there is a way, that way is unique.
That's very surprising.
You might think you have infinite choices, but you don't.
When this extension does exist, it has a fancy name.
We call it the analytic continuation of the original function.
And a very powerful theme throughout math is that you can sometimes discover
hidden information about a function by understanding its full extended version,
especially understanding the poles in that full extended version.
Over here in our context of studying Laplace transforms,
the relevance is that the actual integral defining this transform
typically only converges for half the plane when the real part of s is sufficiently big.
However, it can be very helpful to plot and to understand the
full extended version exposing all the poles of the function.
As I alluded to earlier, the poles are what expose
the exponential pieces that we're hunting for.
Looking back at this warm-up example that we've been focusing on,
the integral of e^(-st), as I said, only converges when the real part of
s is positive.
And on that half plane, it equals 1 divided by s.
1 divided by s is defined everywhere, and it is nice in the sense of having a derivative.
So we say this is the analytic continuation of our integral.
This function is the purest example of a pole.
You say it has a pole above s = 0, and a slightly less hand-wavy definition of what
I mean by this is that it looks approximately like dividing by 0 around that point.
It's not hard to see where the name comes from.
The plot looks kind of like a circus tent with a pole above that point.
Wonderful.
This is actually a very useful result.
This integral that we have now spent so much time on is effectively the Laplace
transform of one of the simplest possible functions, the constant function at 1.
That constant function transforms into 1 divided by s,
which you should see in your mind's eye as a pole above s = 0.
That might seem like a simple example, but almost for free,
we can squeeze out a much more general result,
which is that the transform of any exponential function also looks like a simple pole.
It's just going to be above some other value on the s-plane.
Here, let's take a moment to actually think it through.
What would happen if I asked you to pump in a function like, I don't know, e^(1.5t)?
Well, then the expression inside that integral combines,
and you get e^((1.5 - s)t), which is nearly identical to
everything we were just looking at, it's just that things are shifted by 1.5.
If you wanted, I could pull up that same visual,
which dissects and interprets the integral, pictured on the lower right, and again,
the function inside the integral is pictured on the upper right,
and we're playing the same game of adding up averages.
But nothing in this diagram has substantively changed,
it's essentially the same thing we were just looking at,
the only difference is that now, that special value where explosion happens and
we see a pole, is at s = 1.5 instead of s = 0.
Symbolically, a little bit of rearrangement shows that this key
integral is almost identical to the one we were just studying,
the only difference is that s has been replaced by s - 1.5.
So the new result that we can write down and circle
is that the transform of our exponential function looks like 1 divided by (s - 1.5).
And of course, there's nothing special about 1.5, I can replace this with any constant a.
And if there is only one fact that you remember from this video, let it be this one.
The Laplace transform of an exponential function, e^(at),
is a new function of s that has a simple pole over s = a.
This is that key idea I alluded to earlier, poles in the
transformed function expose exponential pieces of the original.
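sympy's built-in transform gives a quick independent check of this headline fact (a sketch; noconds=True just suppresses the convergence conditions):

```python
# The Laplace transform of e^(at) is 1/(s - a): a simple pole over s = a.
from sympy import symbols, exp, laplace_transform

t, s, a = symbols('t s a')
print(laplace_transform(exp(a * t), t, s, noconds=True))  # 1/(s - a)
```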
The final step to fleshing out that idea is to
convince ourselves that this works for combinations.
As an example, let's pull back in our good friend the cosine of t.
And here we have two options for how to study this,
we can think it through symbolically, and then after, for fun,
let's plug it into that same visual machine and see what it looks like.
Alright, so symbolically, as we discussed earlier,
a cosine can be broken up as 0.5 e^(it) + 0.5 e^(-it).
What you can do next from here is break this outer expression into two different pieces.
One that looks like half times the transform of e^(it),
and another which looks like half times the transform of e^(-it).
In the lingo, the way that you would phrase this is that the Laplace transform is linear,
meaning if you have a scaled sum of some stuff on the inside,
you can break everything apart to the same scaled sum of the transforms of those
inner parts.
In our case, because we just saw how to take the transform of simple exponential
functions, this whole expression here can be collapsed to look like 1 divided by (s - i),
a thing with a simple pole at i.
And then this whole expression can collapse to become 1 over (s + i),
a thing with a pole at negative i.
So when you read this whole expression, the sum of two fractions,
the image that should pop into your head is a plot with two different spikes above
i and negative i.
And in fact, if we have a little fun and we try plugging the expression cosine of t,
e^(-st), into that big complex integrating machine that we built up,
you do indeed see a plot that has poles above i and negative i.
This time, the diagram is notably more complicated, and if I'm honest with you,
the symbolic reasoning is probably the easier way to understand the final answer.
But you and I are here to have a little fun, aren't we?
Delving into each piston and valve of the machine that we're working with.
So let's see if we can take a minute or two to try
to make sense out of what exactly we're looking at.
Once again, on the upper right, I'm showing the function inside the integral,
the cosine of t times e^(-st).
But now it's a lot more of a chaotic squiggle.
To build a little intuition, let me set s equal to a small,
purely imaginary value, say 0.2 i.
On that plot in the upper right, I'll go ahead and add a vector that's just showing
the e^(-st) part, which for this small imaginary value of s simply rotates very slowly.
In the full function, that term gets multiplied by cosine,
so the path that it would trace out would oscillate back and forth,
giving us this nice flower petal pattern.
As before, the way we're visualizing the integral is by taking averages along various
unit intervals and then adding those together, like a big tip-to-tail vector sum.
In this case, the diagram of adding those vectors is actually quite nice.
It looks like going around and around in a little star pattern.
And strictly speaking, this does not converge.
What's being plotted up on the left is the analytic continuation.
If you wanted this to converge, you can add even just a little bit of a real
component to that value s, meaning that the function decays a bit as time
goes out to infinity, and even a little bit of decay will be enough to cause this sum,
down on the lower right, to converge to a clear, unambiguous value.
Now notice what happens as I increase the imaginary part,
and the frequency of our exponential gets closer to the frequency of the cosine.
What you get is more and more alignment, resulting in a bigger total integral.
In fact, when that imaginary part is 1, meaning the oscillation of the
exponential exactly lines up with the oscillation of the cosine,
then the path that it traces out remains entirely confined to the right side
of the plane, and the result is that the integral kind of gets jettisoned
out to the right.
From there, if I were to decrease the real part of s, meaning less and less decay,
then the integral gets closer and closer to infinity,
hence why we see a pole above that value.
And by the way, stepping back, for any of you who happened to watch the
video I did many years ago about the Fourier transform,
if all of this looks strikingly familiar, it's because it's basically the same thing.
When s is a purely imaginary number, the Laplace
transform is nearly identical to the Fourier transform.
It's not quite the same expression, the lower bound on our integral is zero,
not negative infinity, and there's varying conventions about the
constants in that exponent, but the essence is really the same.
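Side by side, under one common convention for the Fourier transform (conventions vary, as just noted), the comparison looks like this:

```latex
\mathcal{L}\{f\}(s) = \int_{0}^{\infty} f(t)\, e^{-st}\, dt
\qquad\text{vs.}\qquad
\hat{f}(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, dt
```

Setting s = i times omega in the first expression matches the second, apart from the lower bound on the integral.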
Now this relationship between Fourier transforms and Laplace transforms will play
a much bigger role in our story in a following chapter,
but right here I wanted to quickly highlight how, in a sense,
this Laplace transform is a generalization.
What it does is probe at how well a function lines up,
not just with purely imaginary exponentials, but with any exponential.
Now, looking back at our symbolic result for the Laplace transform of the cosine of t,
if you were to go and look this up, say in a big table of Laplace transforms,
this is actually not how it would look.
First of all, it's common and useful to consider a more general cosine
wave that has an arbitrary angular frequency omega on the inside.
The only change here is that everywhere you see an i, you replace it with omega times i.
So you can read that final expression as telling
you there are poles at omega i and negative omega i.
But even still, this is not what you would see in a table.
Let me go ahead and just run some algebra on autopilot
here that's going to combine those two terms.
And when all the dust settles, what you end up
with is s divided by s squared plus omega squared.
Now this is the expression you would actually see.
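sympy reproduces the table entry directly (a sketch assuming omega is a positive real, the usual convention for an angular frequency):

```python
# The transform of cos(omega*t) combines into s/(s^2 + omega^2).
from sympy import symbols, cos, laplace_transform

t, s = symbols('t s')
w = symbols('omega', positive=True)
print(laplace_transform(cos(w * t), t, s, noconds=True))  # s/(omega**2 + s**2)
```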
And in fact, the reason I bring it up is I want to talk about how
the entire logic of this example could flow the other way around.
Imagine you did not already know ahead of time that a cosine can be broken up as
a sum of two exponentials, but you were very savvy with integration by parts.
I won't walk through details here, but you can directly calculate this equality here,
essentially directly computing the definition for the transform.
From there, you could use a process that is fancifully called partial fraction
decomposition to break apart this fraction into two pieces that clearly expose the
poles at omega i and negative omega i, and that also expose those coefficients of
one half.
This in turn would be enough to tell you what the exponential pieces lurking inside are.
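Here's what that reverse flow looks like in sympy, for the omega = 1 case (a sketch; apart with a complex extension performs the partial fraction decomposition over the complex numbers):

```python
# Partial fractions expose the poles at i and -i and the coefficients of 1/2.
from sympy import symbols, I, apart, simplify

s = symbols('s')
F = s / (s**2 + 1)                     # the transform of cos(t)
decomposed = apart(F, s, extension=I)  # split over the complex numbers
print(decomposed)                      # 1/(2*(s - I)) + 1/(2*(s + I))
print(simplify(decomposed - F))        # 0, confirming the decomposition
```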
That flow of the logic is actually a lot more reflective
of what it feels like to use this transform in practice.
On that note, our next step is to take this machine for a test drive
and see what it looks like to solve an actual differential equation.
Now my original plan was to conclude this video with a worked example,
but looking at the time and considering this is all part of a series anyway,
it's probably a little better to give you the chance to stand up, stretch out,
reflect on everything, and let's put that in a follow-on chapter.
The key takeaway for this video is how when a function can be broken into exponential
pieces, the Laplace transform exposes what those pieces are as poles above the s-plane.
But I want you to know we are not done understanding its full generality.
Most functions cannot be expressed as discrete sums of exponentials like this.
Nevertheless, the transform offers a very powerful way to
express many many more functions as combinations of exponentials.
It's just that you combine over a continuous range, not a discrete one.
Turning back to that analogy of driving a car versus learning how an engine works,
there is a third, even deeper level of understanding,
which is knowing how to build a car for yourself.
In the final chapter of this sequence, I want to show you how you could
reinvent the Laplace transform from scratch, how it relates to Fourier
transforms and Fourier inversion, and how to think about it for a much
broader family of functions beyond these discrete sums of exponentials.
I'll see you there.