>> What's good you all.
Welcome back.
In this video, we are concluding our
introduction to eigenvalue problems
by completing the last step in our process
of mathematizing the coupled pendula problem.
Recall, in steps 1 through 4, we transformed the
coupled pendula problem: two pendula hanging
from the same stable support structure,
where the masses are moving back
and forth, connected by a spring.
We want to solve this problem to
produce function descriptions u1 and u2,
that will let us determine the
location of the center of mass
at any time t along the ruler,
so project those things down.
Now we're at the beginning of step 5.
In other words, at the end of steps 1 through 4,
we have this equivalent realization
of the coupled pendula problem.
Specifically, we have it in matrix form,
where we're taking a matrix M, called the mass
matrix, multiplying it by the second derivative
of the vector-valued displacement function, AKA
this is mass times acceleration in vector form.
And then we set that equal to negative K times
the displacement function that we so desire.
The actual values of each of the entries
of these matrices are spelled out here.
This matrix we're going to call
the stiffness matrix to correspond
with work that we've done previously.
This matrix is the mass matrix. It's
diagonal, and the diagonal elements are positive,
since the masses on the end of each
cord in the pendula are nonzero.
This matrix vector equation encodes
Newton's second law, specifically,
mass times acceleration is equal to
the net force acting on each object.
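Since everything downstream hangs off this matrix form, here is a minimal sketch of it in code. To be clear, the parameter values and the exact entries of K here are my assumptions (a common way to write the stiffness matrix for two equal pendula coupled by one spring), not necessarily the numbers on screen in the video.

```python
import numpy as np

# Assumed (hypothetical) physical parameters -- the video's actual
# numbers may differ.
m = 0.5   # mass of each bob, in kg
g = 9.81  # gravitational acceleration, in m/s^2
L = 0.3   # length of each pendulum cord, in m
k = 2.0   # spring constant, in N/m

# Mass matrix M: diagonal, each bob's mass on the diagonal.
M = np.array([[m, 0.0],
              [0.0, m]])

# Stiffness matrix K, in a common form for two pendula joined by a spring.
K = np.array([[m * g / L + k, -k],
              [-k, m * g / L + k]])

# The model "M u''(t) = -K u(t)" is Newton's second law, componentwise:
# mass times acceleration equals the net force on each bob.
```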
This is in higher dimensions because instead
of studying a scalar-valued phenomenon,
one with only one component,
we're studying two components
in the same system that are coupled together.
The net forces came from the analysis
that we did in the last video.
And in our work, we're searching for our unknown
and desired displacement functions u1 and u2
that solve this system, written
in this compact form.
When we created the system, we actually
factored out a negative of the stiffness matrix.
The reason that we did that is that it
allows us to bring all terms to one side:
this M u double dot of t plus
K u of t equals 0 is a matrix equation.
However, it has a very nice analog in
scalar space, which is the undamped,
unforced simple harmonic oscillator equation.
In this case, it is a two-component
equation, but it is identical
to the scalar version, specifically if we just
turn M, K, and u into scalar-valued objects.
That would be the undamped, unforced
simple harmonic oscillator for one mass.
In all of our work, we've claimed that we
can transform the coupled pendula problem
into a standard eigenvalue problem.
To do so, let's start with
Newton's second law.
Mass times acceleration is equal to
the net force acting on each mass.
Our goal in this work is to create
a standard eigenvalue problem,
not a generalized eigenvalue problem.
And thus we're going to assume
that the two masses are identical.
In other words, m1 is equal to m2.
The moment that we do that, why
don't we just call each mass m?
So m1 and m2 are identical.
That means this matrix is a
scalar multiple of the identity.
When we see this, we know that each of those
diagonal elements is nonzero, and thus positive,
because they measure masses, and there's
no such thing as a negative mass.
This is a diagonal matrix with identical
nonzero diagonal entries, so that
matrix must be invertible.
The statement that the matrix M is invertible
is identical to saying there exists an inverse
of M; specifically, we can multiply
both sides of that equation by M inverse.
M inverse times m, that goes to the identity.
Identity times the second derivative
of u leaves the second derivative of u.
On the right hand side, when we
multiply on the left by m inverse,
we get negative m inverse times k times
the desired displacement function.
Of course, one of the really nice things about
matrices and vectors is we can write them
in general form, or we can
specify entry by entry.
If I look at this matrix m, where
we've assumed both masses are the same,
something we can design in
the McCusker apparatus,
the inverse of that diagonal
matrix is a diagonal matrix
with the reciprocal elements on the diagonal.
In other words, 1 over m on the first diagonal
and 1 over m on the second main diagonal,
which implies that we have this acceleration
vector on the left hand side equal
to the negative m inverse
times k, k hasn't changed.
And then over here we have our displacement.
Of course, we can just get the product
of m inverse times k using
simple matrix multiplication.
In other words, take row 1 multiplied by column
1, row 1 multiplied by column 2, row 2 by
column 1, row 2 by column 2, and that would
give me the entry-by-entry definition.
And in fact, that's exactly
what we've done here.
When I multiply the first entry by 1 over
m, the ms cancel out in the first term,
and then I have k over m in the second term.
When I multiply 1 over m by the
entry, 1, 2, I get negative k over m.
And the same pattern happens in
the second row of that matrix,
which means that my coupled pendula
problem has been transformed into finding
displacement functions u1 and u2 such that the
second derivative of those functions is equal
to this matrix multiplied
by the functions themselves.
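To watch that row-by-column arithmetic happen, here is a small sketch in code. The parameter values and the form of K are assumptions (the same common coupled-pendula stiffness matrix); the point is only that inverting the diagonal mass matrix and multiplying puts g/L + k/m on the diagonal and -k/m off the diagonal.

```python
import numpy as np

# Hypothetical parameters: equal masses m, cord length L, gravity g,
# spring constant k.  The video's numbers may differ.
m, g, L, k = 0.5, 9.81, 0.3, 2.0

M = np.diag([m, m])
K = np.array([[m * g / L + k, -k],
              [-k, m * g / L + k]])

# The inverse of a diagonal matrix is diagonal, with reciprocals.
M_inv = np.diag([1.0 / m, 1.0 / m])

# Row-by-column product: the m's cancel in the g/L term, leaving k/m.
A = M_inv @ K

expected = np.array([[g / L + k / m, -k / m],
                     [-k / m, g / L + k / m]])
```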
And this is where we're going to
introduce some simplifying notation.
Specifically, we're going to say, hey,
this matrix is really kind of special.
Let's give it its own name.
Let's call that capital A as in
the standard eigenvalue problem.
We can restate our problem:
the second derivative
of our desired displacement function
is equal to negative A times u of t,
where A is M inverse times K. In other words,
when we're trying to figure out what the motion
of the pendulum is, here, we have this video
where we're looking at the center of the masses,
the two pendula are coupled,
and we're trying to figure
out how do these things displace over time.
We're claiming that if we can find functions,
u sub 1 and u sub 2, which are functions of t,
that give a concrete representation of the
displacement of these masses in this video,
that problem is equivalent to solving this
mathematical problem, which is finding functions
that satisfy the second derivative is equal
to negative A times u of t, where A is defined
by the inverse of the mass matrix times K. Before we go further
into the theory, I want to talk about some
of the engineering components or what
you might call real world considerations
that we're going to make here.
Specifically, in practice, we're
not trying to predict the behavior
of these masses for the rest of time.
In theory, we will get to a place that we can do
that in ideal considerations, but in practice,
we have some very specific timeframe that
we're going to focus in on the masses.
So for our case, when we're looking
at this video of mass dynamics,
this video is only 2 minutes and
56 seconds long, we might determine
that if we could match the behavior of some
subset of that video, some number of frames
in that entire video, we would
call that a successful model.
So the question that we asked ourselves
is, for this 2 minute and 56 second video,
we're going to observe the dynamics and
we're going to say for some start time,
and some n time, let's try to get those u
sub I, u sub 1 and u sub 2 of t functions
to match the identical behavior
observed in that timeframe.
So maybe we say, OK, let's start
our video processing at 32 seconds.
And then from 32 seconds all the way
to, I don't know, maybe a minute long.
So a minute and 32 seconds, we'll try
to match all the displacement behavior
of our particular masses in that time interval.
So in other words, we're going to limit our
focus: instead of saying for all of time,
we're going to say from 32 seconds in our
experiment to a minute and 32 seconds,
60 seconds of time.
Let's see if we can match the behavior
observed in the experiment video
with the model prediction on that timeframe.
The next thing we're going to do in the
practical problem of trying to match up theory
and data is actually to pick a reference
position or a reference time,
so that we can match the data that we
start with, with the observed phenomenon.
So specifically, we're going to say,
for some time within our observation
within the start time and the
end time that we're trying
to model, let's choose a reference time.
And we'll call that reference time t naught.
In engineering terms, we might
say that t naught is the zero position,
and measure from that point forward. But we're going
to keep our notation general enough
to just say it is a reference time.
It's some time between the start time of
observation and the end time of observation.
In inequalities, we would say the start
time is before the chosen reference time,
and that's before the chosen end time.
For example, our observation goes from
32 seconds to a minute and 32 seconds
and maybe we say to ourselves,
like OK, let's split the difference
and then just go to like
a minute and one second.
So we'll set our reference time to be the first
frame that happened a minute and one second
in to the video recording of these dynamics.
So that is now our reference time, which
indeed is in between the start time
of 32 seconds, and the end time of 92 seconds.
We're now further refining our problem.
So remember, the goal of the coupled pendula
problem is to find functions u1 and u2
that predict the location of the center of mass
at any time t within some observed timeframe.
So specifically, we would say, hey, I want to
know the exact location of these black dots
between 32 seconds and 92 seconds any time.
So I want to get that location modeled to a
pretty darn precise value within that timeframe.
The reason that we chose a reference time
t naught, in this case the first frame 61 seconds
into the video, was to use it as a reference point
for the behavior of the masses in time.
It allows us to say at that point in
time, we're going to say that we know
about both the exact location and the
velocities of both masses as they travel.
In other words, we're trying to
solve differential equations.
And a differential equation by nature
is an equation involving a derivative.
One of the things that's interesting about
derivatives is that they involve change.
So the whole idea of an initial value
problem or a boundary value problem is --
I know how something changes and if I have
some information about reference position,
sometime during my observation, then if
I know where it is at a specific time,
and I know how it changes, perhaps then
I can predict where it is at any time.
If I know where it is, at some point in
time, and I know precisely how it changes,
then I can predict the location
of those objects any time that I'd
like within my observation period.
So when we say that t naught is a reference,
what we mean is at the chosen timeframe,
let's say that we could actually
measure the exact position of each mass.
In other words, I could use tracker, for
example, to figure out the exact location
of the center of mass given
an imposed frame of reference,
and a calibration stick or a reference length.
If I had the positions, and I could figure
out where the equilibrium position is,
that gives me the exact displacement
at the chosen reference time.
Moreover, not only does this video give
us access to what we call position data,
through our tracker measurements, but
those data points can also be used
to calculate representation of what we call
initial velocity or reference velocity.
In other words, not only do we have u of t
naught, but we also have u dot of t naught,
the velocity of mass 1 at reference
time t naught and the velocity
of mass 2 at reference time t naught.
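As a sketch of how you might pull u of t naught and u dot of t naught out of frame-by-frame tracker data, here is one way to do it in code. The samples below are a made-up stand-in (a cosine at an assumed 30 frames per second), not real measurements: read the displacement straight off the reference frame, and estimate the velocity with a central finite difference.

```python
import numpy as np

# Made-up stand-in for exported tracker data: times in seconds,
# displacement of mass 1 in meters, sampled at an assumed 30 fps.
dt = 1.0 / 30.0
t = np.arange(60.0, 62.0, dt)
u1 = 0.05 * np.cos(4.0 * (t - 61.0))  # hypothetical motion

# Index of the frame closest to the reference time t0 = 61 s.
i0 = int(np.argmin(np.abs(t - 61.0)))

# Reference displacement: read directly off the data.
u1_t0 = u1[i0]

# Reference velocity: central finite difference around t0.
u1dot_t0 = (u1[i0 + 1] - u1[i0 - 1]) / (2.0 * dt)
```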
Underlying all of this, we've assumed that the
masses are actually changing position,
that the displacement data is nonzero.
That assumption is really important, because if
we had a mass system that was literally stuck
in equilibrium, the solution
would be trivial.
It would literally be that the displacement is
zero for the entire timeframe of my observation.
The only time this problem is interesting is if
one of those masses has nonzero displacement,
or nonzero velocity at some time, and then we
want to know what happens for the rest of time,
then we're making the claim that figuring
out functions that describe the way
that those masses move is equivalent
to solving this mathematical problem.
And one of the questions we
might ask ourselves is, well,
what do we know about the underlying system
that might help us get our head around how
to solve this problem mathematically?
This is where we can leverage all
that psychological and emotional pain
that we felt earlier in this lesson.
Specifically, we've spent
hours studying pendulums.
These systems are just sets
of coupled pendulums.
Moreover, when we put these systems in motion,
we're assuming small angle oscillations.
We've already seen that small angle oscillations
in pendula give rise to
this "cosine phenomenon".
Specifically, let's take a look at this system
right here; those masses are swinging back
and forth in the small angle approximation.
I can promise you that the angle
between the cord
and the vertical position is less than 7 degrees.
In other words, roughly 0.1 radians or less.
If I look at that behavior and graph the
displacement of those masses versus time,
we get something that looks like a cosine curve.
Let's put that in a way that's
a little easier to see.
So up here we have the vertical
position, which we call equilibrium.
Here we've got the red pendulum bob,
and we're going to map the distance
from the red pendulum bob to equilibrium.
In other words, we're going to
track the displacement over time.
Let's put this pendulum in
small angle oscillation.
And then what we'll do is let's map
that displacement over a time axis
and actually show what that function looks like.
So as we let that thing play out, notice that
over time, that behavior looks like a cosine.
The way that that thing oscillates back
and forth, back and forth, back and forth,
the displacement versus time
graph looks as if it's a cosine.
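If you want to check that claim for yourself, here is a sketch that integrates the full pendulum equation, theta double dot equals negative g over L times sine of theta, from a 0.1 radian start, and compares the result to the small-angle cosine prediction. The values of g and L are assumptions; the point is that the deviation from a pure cosine stays tiny.

```python
import numpy as np

g, L = 9.81, 0.3           # assumed values
theta0 = 0.1               # about 5.7 degrees: small-angle regime
omega = np.sqrt(g / L)     # small-angle angular frequency

dt, T_total = 1e-4, 2.0
n = int(T_total / dt)
theta, theta_dot = theta0, 0.0
thetas = np.empty(n)
for i in range(n):
    # Semi-implicit (symplectic) Euler step for theta'' = -(g/L) sin(theta).
    theta_dot += -(g / L) * np.sin(theta) * dt
    theta += theta_dot * dt
    thetas[i] = theta

ts = dt * np.arange(1, n + 1)
cosine_prediction = theta0 * np.cos(omega * ts)

# Maximum deviation from the pure cosine over 2 seconds of swinging.
max_err = np.max(np.abs(thetas - cosine_prediction))
```

For a 0.1 radian release, the displacement-versus-time curve is indistinguishable from a cosine by eye; the deviation only becomes noticeable at much larger release angles.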
And in fact, we have this really interesting
phenomenon when we couple the masses together.
So if we don't let the spring extend at all, we
think about the spring as not changing length.
Both of those pendulums kind of swing
freely as if they were not coupled at all.
That's called normal frequency
1 or normal mode 1.
Up on top, let's imagine that we
just turn those masses vertically,
and then we track the position; notice
that that thing indeed looks like a cosine.
Now, of course, if we're doing it so that
we turn the right down and the left up,
the positive direction goes in the downward
direction, so we've got to keep track of our references.
But because we know the dynamics of single
pendulums, we can start to get insights
into what we might expect for
the solution of our equation.
In fact, it's not only the case
that they can be completely coupled:
they could also be anti-coupled,
they could be antisymmetric,
and that's called normal
mode 2, or normal frequency 2.
That's what we see down here.
In that situation, where the displacement
of the right mass is the exact opposite,
at the exact same time, of the displacement of
the left mass, we get this cosine behavior:
one is the opposite of the other.
But indeed, it kind of corresponds with what
we would expect from a normal pendulum swing,
a complete cosine curve describing
how those displacements move.
That's what we see in both of those cases.
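Those two normal modes can also be read off numerically from the matrix A. In this sketch the parameter values and the form of A are assumptions (equal masses, the matrix worked out earlier in the lesson); with them, the in-phase mode comes out proportional to (1, 1) with lambda equal to g over L, the single-pendulum value, and the anti-phase mode proportional to (1, -1) with lambda equal to g over L plus 2k over m.

```python
import numpy as np

# Assumed parameters and the resulting matrix A = M^{-1} K.
m, g, L, k = 0.5, 9.81, 0.3, 2.0
A = np.array([[g / L + k / m, -k / m],
              [-k / m, g / L + k / m]])

# A is symmetric, so eigh applies; eigenvalues come back ascending.
lambdas, V = np.linalg.eigh(A)

# Mode 1 (in phase, spring never stretches): eigenvector ~ (1, 1).
# Mode 2 (anti-phase / antisymmetric):       eigenvector ~ (1, -1).
mode1, mode2 = V[:, 0], V[:, 1]
```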
In other words, when we're
trying to get insights into how
to solve the coupled differential equation, u
double dot of t is equal to negative A times u
of t where u is a 2 by 1 vector
encoding the displacements of each mass.
We might say, well, what if we
guessed each displacement u1
and u2 was some scalar multiple
times a cosine curve?
So this scalar is going to
be called the amplitude.
v1 is a constant.
It's a real number, a scalar.
That's the amplitude of the cosine
curve for the first displacement.
v2 is the amplitude of oscillation
for the second displacement.
Inside, we have what
we'll call the angular frequency.
That's the Greek letter omega.
And then we'll also say, oh yeah, by the
way, this cosine curve has to be pegged
to some reference time point in our measurement.
In other words, at some point, we actually
have to know what the displacement is,
and what the velocity is, so that
we can combine the differential
equation with an initial reference point.
If I know where I start,
and I know how I change,
perhaps then I know the behavior overall time.
In other words, we're going
to make the assumption,
let's assume that both displacement
functions can be written
as some scalar multiple times
the same cosine curve.
The scalar omega, it's not a w, it's kind of
a curly w, that's a lowercase Greek letter,
we read that out loud as omega, that's going
to tell me how my cosine curve changes in time.
And for those of you engineers, we
call that the angular frequency.
Notice that I can think about the angular
frequency as 2 pi divided by capital T. Normally,
if I just had the cosine function, we know
the period of the cosine function is 2 pi.
If we think about the period of one oscillation,
the amount of time that it takes the mass to go
from a single position, all the way through
the motion, back to that single position,
that's not going to be 2 pi in general,
and in fact, that's going to be a function
of the masses, the spring constants
and the length of the pendulum.
But we'll call that capital T. So one period
of motion is what we call one cycle of motion.
Let's assume that that takes T
seconds, which means after T seconds,
I will have traveled the entire period.
In other words, the angular frequency is the
regular unadulterated cosine 2 pi frequency
divided by the number of seconds that that mass
takes to travel one full cycle in its swing.
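As a tiny numeric sanity check of that relation, with a made-up period of 1.1 seconds (the real T depends on the masses, the spring constant, and the pendulum length):

```python
import math

T = 1.1                      # hypothetical period of one full cycle, in s
omega = 2 * math.pi / T      # angular frequency

# One full period after any reference time, the cosine returns to
# the same value, so the motion repeats.
val_at_t0 = math.cos(omega * 0.0)
val_one_period_later = math.cos(omega * T)
```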
Let's say that in a different
way, we're going to say, hey,
if we want to solve this vector
valued differential equation,
this coupled differential equation, let's assume
that the vector valued function u takes the
form v1 times cosine omega t minus t naught,
v2 times cosine omega t minus t
naught, they're the same frequencies,
they're the same inside function for cosine,
and then the amplitudes might change.
We're not sure about that yet.
But the point is, let's make the assumption
that each displacement function is
just a scalar multiple of cosine.
We know that cosine is a real-valued function,
and these scalar amplitudes are
real-valued constants.
Multiplication commutes, so
we can factor out the cosine;
at any time t,
the output of this function is a scalar.
So we're saying hey, let's
make a guess that the solution
to our differential equation
takes the form u of t is equal
to a scalar multiple cosine omega times
t minus t naught, the reference time,
times some constant vector v. The fact that
we've assumed that our masses are actually
in motion means that these
values v1 and v2 are not both 0.
In other words, this is not the zero function.
It's not like the masses just stay in place.
These things are actually moving.
In other words, we want to find a nonzero
vector v such that u is equal to cosine times
that nonzero vector in this form.
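Here is what that guess looks like in code, with placeholder numbers for omega, t naught, and v (none of these are the video's values): a single scalar cosine output scales the whole vector.

```python
import numpy as np

# Placeholder values for the ansatz ingredients.
omega = 5.7                  # hypothetical angular frequency, rad/s
t0 = 61.0                    # reference time, s
v = np.array([1.0, -0.4])    # nonzero amplitude vector (v1, v2)

def u(t):
    # Cosine ansatz: one scalar cosine value times the constant vector v.
    return np.cos(omega * (t - t0)) * v

# At t = t0 the cosine is 1, so u(t0) is the amplitude vector itself.
```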
This type of educated guessing
has a long tradition.
And in fact, it has its own name.
It's called the cosine ansatz.
For you viewers that speak German, perhaps
you want to translate ansatz for me.
I've already looked it up online, but it would
be nice to have a human being tell me instead.
The point is because we know
information about how pendula swing
in the small angle oscillation,
we're going to make the guess
that our displacement function is scalar
cosine function times some unknown v.
And then we have to pay the price and suffer.
What does that guess imply
about our original problem?
That's right.
Mathematicians and engineers guess all the time.
It's just that on the other side of the guess,
there's a lot of grunt work that goes into it.
So here we are, if we make the cosine
ansatz, if we assume this is true,
and we look back at our original
equation, what could we say?
But we know that the original equation u double
dot of t is equal to negative A times u of t
that involves a simple second derivative
of u of t on the left hand side.
On the right hand side, I just
have negative A, a matrix, times u of t.
The second derivative of a vector valued
function, remember from multivariable calculus,
when I take the second derivative
of a vector, that is equivalent
to taking the second derivative
of each component of that vector.
So the second derivative operator hits both
the first component and the second component.
On the right hand side, we actually have the
entry by entry definition of that matrix.
With that, when we look at
these individual entries,
perhaps now we can use this cosine ansatz
in its scalar form to figure out what
that means in terms of the matrix equation.
We'll do that by looking at both the left
hand side and the right hand side separately,
and we'll start by differentiating each
individual displacement and working out
the implications of that assumption.
The scalar form of our cosine
ansatz takes these forms.
Let's go ahead and take the second
derivative of either of these equations.
To take the second derivative
of that scalar equation,
we actually need to take the
derivative of the first derivative.
Let's start with the first derivative.
Assuming that the ith displacement
is some amplitude times a cosine
with an unknown frequency, when we take
the first derivative of that thing,
that's going to be the first derivative
of a scalar multiple times cosine,
we know in differentiation, the scalars come
out, so I can pull the scalar v sub i out of
that thing, and I'm just left
with the derivative of cosine.
When I take the derivative of
cosine of omega times t minus t naught,
this is with respect to the variable t. The
derivative of cosine is negative sine,
and then I have to take the derivative of the
inside; omega and t naught are both constants.
So that means the chain rule
allows me to bring an omega out.
And that implies that the first derivative
of the displacement is negative
omega times v sub i,
the amplitude of oscillation times
the sine of omega t minus t naught.
Remember, the goal is to look at
the left hand side of our equation
which involves second derivatives.
The second derivative is the
derivative of the first derivative.
This was the first derivative
we just calculated.
This is a differentiation operator with respect
to t; omega and v sub i are constant scalars.
We can pull constant scalars out because
derivatives are linear operators.
That leaves us with the derivative of the
sine function, but we know from calculus
that the derivative of sine is cosine.
And then based on the chain rule, we also have
to take the derivative of the inner function.
The derivative of a composition function is
the derivative of the outer function evaluated
at the inner function times the
derivative of the inner function.
So now I have cosine of omega times t
minus t naught, times the derivative
of the inside, which is this omega.
The two omegas combine: that becomes
omega squared times v sub i times cosine.
That negative came out because the first
derivative of cosine was negative sine,
it stayed negative because the
derivative of sine is positive cosine.
There's only one negative.
Notice then, however, that the second
derivative is a scalar multiple
of the displacement function itself.
This was because the cosine ansatz assumed
that the displacement was v sub
i times some cosine function.
That's the exact term that shows
up when we differentiate twice.
So the second derivative of each displacement
function is omega squared times the displacement
function with a negative out front.
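We can sanity-check that differentiation numerically. In this sketch, the values of omega, t naught, and v sub i are made up; the check is that a central finite-difference estimate of the second derivative matches negative omega squared times the function itself.

```python
import math

omega, t0, v_i = 5.7, 61.0, 0.05   # hypothetical values

def u_i(t):
    # Scalar cosine ansatz for the i-th displacement.
    return v_i * math.cos(omega * (t - t0))

def second_derivative(f, t, h=1e-5):
    # Central finite-difference approximation of f''(t).
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / (h * h)

t_sample = 61.37
lhs = second_derivative(u_i, t_sample)     # numerical u_i''
rhs = -omega**2 * u_i(t_sample)            # what the calculus predicts
```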
But under the assumption that our cosine
ansatz holds for both displacement functions,
that implies the second derivative
of u1 and the second derivative
of u2 are literally just negative
omega squared times u1 in that form
and then negative omega squared
times u2 in that form,
which means we can factor
out a negative omega squared.
We can commute, we can swap
the order of multiplications.
I have a cosine in the first
term and the same cosine in the second term,
along with the scalars v1 and v2,
so we pull out the shared coefficients.
I now have that second derivative
of my displacement vector valued function is
negative omega squared times cosine omega t
minus t naught times a nonzero vector
v. Focusing in on the right hand side,
everywhere we see u, we can
substitute with our cosine ansatz.
So negative A times u of t is equal to negative
A times the cosine omega t minus t naught times
nonzero vector v. But, of course, for each value
of t, the cosine outputs a real-valued scalar.
In other words, this is a scalar times a vector.
On the outside on the left, we
have a matrix multiplication.
But in matrix multiplication, if I have a matrix
times a scalar, times a vector, that comes out,
that's one of the properties of linearity,
which means negative A times u is actually
negative cosine omega t minus t naught times A
times unknown and desired nonzero vector v. When
we relate those two expressions to each other,
when we equate them, we see we're looking
for scalars omega and nonzero vectors v
that satisfy this new relation coming
out of the work that we've just done.
However, we want this to be true for all values
of t during our experimental observation.
This has to hold over all values of t in order
for us to predict the displacement of each mass.
Let's start with the easiest part
of this equation on both the left
and the right hand side, we see
that there is a negative sign.
Let's just divide each side by negative 1 and
that disappears, leaving us to search for values
of omega and nonzero vector v such that cosine
omega t minus t naught times omega squared times
v is equal to cosine omega t minus t
naught times A times v. We might be tempted
to divide each side by cosine, except that,
because it has to hold for all values of t,
it must also hold when cosine
is 0, and we can't divide by 0,
so that approach doesn't actually hold up here.
Instead, what we'll do is
bring the entire left hand side
over to the other side using subtraction.
Those are just the rules of algebra.
When we do so, we see the
term on the right hand side,
minus the term on the left hand
side is equal to the zero vector.
Both terms involve a scalar
valued output of the cosine.
So we can actually factor that
out of our equation and we're left
with cosine times quantity A times v
minus omega squared times v equal to zero,
but this is a scalar times
a 2 by 1 vector equal to 0.
The only way that a scalar times a
vector is equal to the zero vector is
if either the scalar is 0, or the vector is 0.
That's called the zero product
property for this vector space.
But this relation must hold for all values
of t, and cosine of omega times t minus t naught
is not always going to be 0.
Thus, in order for this product to be 0, it
cannot be this scalar doing the work, since it
is not 0 for all values of t. In other words, we're
going to focus our search
on the term A v minus omega squared times v.
But we know more than that, specifically,
we know that the vector v is nonzero, because
we assumed that the masses were moving,
that we don't have the trivial
solution to the problem,
which is both displacements have zero value.
There's actually some displacement that happens.
There's some movement of the mass.
Let's recap that in a different way.
In order to solve this problem under the
cosine ansatz, it will be true only
if we can find nonzero vectors v such that A
times v minus omega squared times v is equal
to the zero vector.
But if we bring this negative term over to the
other side, we're looking for nonzero vectors v
such that A times v is equal to
omega squared times v. Let's go ahead
and introduce some notation.
Let's call omega squared equal to some new
variable called lambda, lowercase Greek lambda.
So we're looking for nonzero vectors
v such that A times v is equal
to lambda times v, where
lambda is omega squared.
Ding, ding, ding, ding, ding.
This is a standard eigenvalue problem.
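And once it is a standard eigenvalue problem, a few lines of numpy solve it. A sketch with assumed parameter values (the video's numbers may differ): both eigenvalues come out positive, so omega equals the square root of lambda is a real angular frequency, and each pair satisfies A v equals lambda v.

```python
import numpy as np

# Assumed parameters and the matrix A = M^{-1} K from earlier steps.
m, g, L, k = 0.5, 9.81, 0.3, 2.0
A = np.array([[g / L + k / m, -k / m],
              [-k / m, g / L + k / m]])

# Standard eigenvalue problem: find lambda and nonzero v with A v = lambda v.
lambdas, V = np.linalg.eigh(A)

# Each eigenvalue is omega squared, so the angular frequencies are:
omegas = np.sqrt(lambdas)

# Check the defining relation for each eigenpair.
for lam, v in zip(lambdas, V.T):
    assert np.allclose(A @ v, lam * v)
```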
We have now transformed the problem of
predicting the location of the masses
at any time into a coupled differential
equation that looks like this.
And under certain assumptions, we're claiming
that is equivalent to analyzing the matrix A
to find nonzero vectors and scalars in
pair forms such that A times v is equal
to lambda times v. This is where I want to
join you for some tears of joy and sorrow,
maybe tears of joy out of my left side
and tears of sorrow out of my right side.
We've just spent the last n hours in this
lecture defining a coupled pendula problem,
and then translating that into
a standard eigenvalue problem
in the form A x equals lambda x. In other
words, the entire focus of our work was
to mathematize a real world context
into a standard eigenvalue problem.
We have just finished that.
My hope is that this gives you insight
into the type of work that it takes
to make linear algebra come alive.
Or, to say that another way: the type of
thinking, training, and privilege required
to turn things in the world around you
into a form that you would recognize
from the math classes that you take.
This is a big deal you all -- I wish
I could write you a certificate.
In future lessons, we're going to learn how
to solve the eigenvalue problems and what
that means for the corresponding
coupled pendula problem.
In other words, we're going to learn how to
complete the loop in this modeling process.
As a gift to my viewers, I offer a challenge:
we just used the cosine ansatz
to turn this differential equation
into this standard eigenvalue problem.
Please, viewers, use the sine ansatz.
So instead of assuming that the
displacement function is a scalar
multiple of cosine
times some nonzero unknown vector v,
assume the sine ansatz: u of t is equal
to sine of omega times t minus t naught, the same
omega, times some nonzero vector v. Show
that this sine ansatz leads to
the same standard eigenvalue problem.
I'm going to make a claim and I
could be lying through my teeth.
The only way you'll know is
if you check for yourself.
Proposition: suppose I have a nonzero vector v
that is an eigenvector of this matrix,
with associated eigenvalue lambda.
In other words, if a times v
is equal to lambda times v,
then if I look at the vector-valued function
u of t equals cosine of omega times quantity t minus
t naught, the entire function output multiplied
by the nonzero vector v, and then I have
a separate displacement function u
of t equal to sine of omega times t minus t naught,
the same angular frequency omega, times vector
v, where omega squared equals lambda.
Both of those displacement functions
solve the second order linear system,
u double dot of t is equal to negative
A times u of t. In other words,
information about the eigenvalues of
the matrix A gives us deep intuition
about solving the corresponding
coupled differential equations,
which gives us deep intuition
about the behavior, the positions,
the displacements of the masses
in the coupled pendula problem.
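If you would rather not take my word for that proposition, here is a numeric spot-check you can run. The parameter values are assumptions, as elsewhere in my sketches: take an eigenpair of A, set omega equal to the square root of lambda, and confirm that both the cosine and the sine forms leave essentially zero residual in u double dot plus A u.

```python
import numpy as np

# Assumed parameters and the matrix A = M^{-1} K.
m, g, L, k = 0.5, 9.81, 0.3, 2.0
A = np.array([[g / L + k / m, -k / m],
              [-k / m, g / L + k / m]])

lam, V = np.linalg.eigh(A)
v = V[:, 1]               # pick one eigenvector of A
omega = np.sqrt(lam[1])   # its angular frequency, omega = sqrt(lambda)
t0 = 61.0

def u_cos(t):
    return np.cos(omega * (t - t0)) * v

def u_sin(t):
    return np.sin(omega * (t - t0)) * v

def second_derivative(f, t, h=1e-4):
    # Central finite-difference approximation of f''(t), componentwise.
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / (h * h)

# Residual of u'' + A u at a sample time; should be essentially zero.
t_sample = 61.2
res_cos = np.max(np.abs(second_derivative(u_cos, t_sample) + A @ u_cos(t_sample)))
res_sin = np.max(np.abs(second_derivative(u_sin, t_sample) + A @ u_sin(t_sample)))
```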
In the next videos, we'll talk about how
to solve this type of eigenvalue problem
by leveraging our intuition and
knowledge about matrix algebra.
I'll see you in the next lesson.
