An eigenvector of a square matrix $A$ is a non-zero vector $v$ that, when the matrix multiplies it, yields a constant multiple of $v$, the latter multiplier being commonly denoted by $\lambda$. That is:
$$A v = \lambda v.$$
(Because this equation uses post-multiplication of the matrix by the vector $v$, it describes a right eigenvector.)
The number $\lambda$ is called the eigenvalue of $A$ corresponding to $v$.
If 2D space is visualized as a piece of cloth
being stretched by the matrix, the eigenvectors
would make up the line along the direction
the cloth is stretched in and the line of
cloth at the center of the stretching, whose
direction isn't changed by the stretching
either. The eigenvalues for the first line
would give the scale to which the cloth is
stretched, and for the second line the scale
to which it's compressed. A reflection may be viewed as stretching the line perpendicular to the axis of reflection to scale −1, while keeping the axis of reflection at scale 1. For 3D rotations, the eigenvectors lie along the axis of rotation, and since the scale of the axis is unchanged by the rotation, their eigenvalues are all 1.
In analytic geometry, for example, a three-coordinate
vector may be seen as an arrow in three-dimensional
space starting at the origin. In that case,
an eigenvector is an arrow whose direction
is either preserved or exactly reversed after
multiplication by $A$. The corresponding eigenvalue determines how the length of the arrow is changed by the operation, and whether its direction is reversed: it is reversed when the eigenvalue is negative and preserved when it is positive.
In abstract linear algebra, these concepts
are naturally extended to more general situations,
where the set of real scalar factors is replaced
by any field of scalars; the set of Cartesian
vectors is replaced by any vector space, and
matrix multiplication is replaced by any linear
operator that maps vectors to vectors. In
such cases, the "vector" in "eigenvector"
may be replaced by a more specific term, such
as "eigenfunction", "eigenmode", "eigenface",
or "eigenstate". Thus, for example, the exponential
function is an eigenfunction of the derivative
operator, , with eigenvalue , since its derivative
is .
The set of all eigenvectors of a matrix, each
paired with its corresponding eigenvalue,
is called the eigensystem of that matrix.
Any non-zero multiple of an eigenvector is also an eigenvector, with the same eigenvalue. An eigenspace of a matrix is the set of all eigenvectors with the same eigenvalue, together with the zero vector. An eigenbasis for $A$ is any basis for the space of all vectors that consists of linearly independent eigenvectors of $A$. Not every matrix has an eigenbasis, but every real symmetric matrix does.
The terms characteristic vector, characteristic
value, and characteristic space are also used
for these concepts. The prefix eigen- is adopted from the German word eigen, meaning "own", "unique to", "peculiar to", or "belonging to", in the sense of "idiosyncratic", in relation to the originating matrix.
Eigenvalues and eigenvectors have many applications
in both pure and applied mathematics. They
are used in matrix factorization, in quantum
mechanics, and in many other areas.
Definition
Eigenvectors and eigenvalues of a real matrix
In many contexts, a vector can be assumed
to be a list of real numbers, written vertically
with brackets around the entire list, such
as the vectors $u$ and $v$ below. Two vectors are said to be scalar multiples of each other (also called parallel or collinear) if they have the same number of coordinates, and if every coordinate of one vector is obtained by multiplying the corresponding coordinate of the other vector by the same number. For example, the vectors
$$u = \begin{bmatrix} 1 \\ 3 \\ 4 \end{bmatrix} \quad\text{and}\quad v = \begin{bmatrix} -20 \\ -60 \\ -80 \end{bmatrix}$$
are scalar multiples of each other, because each coordinate of $v$ is $-20$ times the corresponding coordinate of $u$.
A vector with three coordinates, like $u$ or $v$ above, may represent a point in three-dimensional space, relative to some Cartesian coordinate system. It helps to think of such a vector as the tip of an arrow whose tail is at the origin of the coordinate system. In this case, the condition "$u$ is parallel to $v$" means that the two arrows lie on the same straight line, and may differ only in length and direction along that line.
If we multiply any square matrix $A$ with $n$ rows and $n$ columns by such a vector $v$, the result will be another vector $w = Av$, also with $n$ rows and one column. That is,
$$v = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix} \quad\text{is mapped to}\quad w = \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{bmatrix},$$
where, for each index $i$,
$$w_i = \sum_{j=1}^{n} A_{ij} v_j = A_{i1} v_1 + A_{i2} v_2 + \cdots + A_{in} v_n.$$
In general, if the coordinates $v_1, v_2, \ldots, v_n$ are not all zeros, the vectors $v$ and $w$ will not be parallel. When they are parallel, that is, when $w = \lambda v$ for some scalar $\lambda$, we say that $v$ is an eigenvector of $A$. In that case, the scale factor $\lambda$ is said to be the eigenvalue corresponding to that eigenvector.
In particular, multiplication by a 3×3 matrix $A$ may change both the direction and the magnitude of an arrow $v$ in three-dimensional space. However, if $v$ is an eigenvector of $A$ with eigenvalue $\lambda$, the operation may only change its length, and either keep its direction or flip it. Specifically, the length of the arrow will increase if $|\lambda| > 1$, remain the same if $|\lambda| = 1$, and decrease if $|\lambda| < 1$. Moreover, the direction will be precisely the same if $\lambda > 0$, and flipped if $\lambda < 0$. If $\lambda = 0$, then the length of the arrow becomes zero.
An example
For the transformation matrix
$$A = \begin{bmatrix} 3 & 1 \\ 1 & 3 \end{bmatrix},$$
the vector
$$v = \begin{bmatrix} 4 \\ -4 \end{bmatrix}$$
is an eigenvector with eigenvalue 2. Indeed,
$$A v = \begin{bmatrix} 3 & 1 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} 4 \\ -4 \end{bmatrix} = \begin{bmatrix} 8 \\ -8 \end{bmatrix} = 2 \begin{bmatrix} 4 \\ -4 \end{bmatrix}.$$
On the other hand, the vector
$$v = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$$
is not an eigenvector, since
$$\begin{bmatrix} 3 & 1 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 3 \end{bmatrix},$$
and this vector is not a multiple of the original vector $v$.
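A quick numerical check of this example (a minimal sketch using NumPy, with the matrix and vectors from above):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
v = np.array([4.0, -4.0])

# A @ v should be a scalar multiple of v if v is an eigenvector.
w = A @ v
print(w)        # [ 8. -8.]  -> equals 2 * v, so v is an eigenvector
print(w / v)    # [2. 2.]    -> the constant ratio is the eigenvalue

u = np.array([0.0, 1.0])
print(A @ u)    # [1. 3.]    -> not a multiple of u, so u is not an eigenvector
```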
Another example
For the matrix
$$A = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix},$$
we have
$$A \begin{bmatrix} 1 \\ -1 \end{bmatrix} = \begin{bmatrix} 1 \\ -1 \end{bmatrix} \quad\text{and}\quad A \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 3 \\ 3 \end{bmatrix} = 3 \begin{bmatrix} 1 \\ 1 \end{bmatrix}.$$
Therefore, the vectors $\begin{bmatrix} 1 \\ -1 \end{bmatrix}$ and $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$ are eigenvectors of $A$ corresponding to the eigenvalues 1 and 3, respectively.
Trivial cases
The identity matrix $I$ maps every vector to itself. Therefore, every non-zero vector is an eigenvector of $I$, with eigenvalue 1.
More generally, if $A$ is a diagonal matrix and $v$ is a vector parallel to coordinate axis $i$, then $A v = \lambda v$ where $\lambda = A_{ii}$. That is, the eigenvalues of a diagonal matrix are the elements of its main diagonal. This is trivially the case for any 1 × 1 matrix.
General definition
The concept of eigenvectors and eigenvalues extends naturally to abstract linear transformations on abstract vector spaces. Namely, let $V$ be any vector space over some field $K$ of scalars, and let $T$ be a linear transformation mapping $V$ into $V$. We say that a non-zero vector $v$ of $V$ is an eigenvector of $T$ if there is a scalar $\lambda$ in $K$ such that
$$T(v) = \lambda v.$$
This equation is called the eigenvalue equation for $T$, and the scalar $\lambda$ is the eigenvalue of $T$ corresponding to the eigenvector $v$. Note that $T(v)$ means the result of applying the operator $T$ to the vector $v$, while $\lambda v$ means the product of the scalar $\lambda$ by $v$.
The matrix-specific definition is a special case of this abstract definition. Namely, the vector space $V$ is the set of all column vectors of a certain size $n \times 1$, and $T$ is the linear transformation that consists in multiplying a vector by the given $n \times n$ matrix $A$.
Some authors allow $v$ to be the zero vector in the definition of eigenvector. This is reasonable as long as we define eigenvalues and eigenvectors carefully: If we would like the zero vector to be an eigenvector, then we must first define an eigenvalue of $T$ as a scalar $\lambda$ in $K$ such that there is a nonzero vector $v$ in $V$ with $T(v) = \lambda v$. We then define an eigenvector to be a vector $v$ in $V$ such that there is an eigenvalue $\lambda$ in $K$ with $T(v) = \lambda v$. This way, we ensure that it is not the case that every scalar is an eigenvalue corresponding to the zero vector.
Eigenspace and spectrum
If $v$ is an eigenvector of $T$, with eigenvalue $\lambda$, then any scalar multiple $\alpha v$ of $v$ with nonzero $\alpha$ is also an eigenvector with eigenvalue $\lambda$, since $T(\alpha v) = \alpha T(v) = \alpha (\lambda v) = \lambda (\alpha v)$. Moreover, if $u$ and $v$ are eigenvectors with the same eigenvalue $\lambda$ and $u + v \neq 0$, then $u + v$ is also an eigenvector with the same eigenvalue $\lambda$. Therefore, the set of all eigenvectors with the same eigenvalue $\lambda$, together with the zero vector, is a linear subspace of $V$, called the eigenspace of $T$ associated to $\lambda$. If that subspace has dimension 1, it is sometimes called an eigenline.
The geometric multiplicity $\gamma_T(\lambda)$ of an eigenvalue $\lambda$ is the dimension of the eigenspace associated to $\lambda$, i.e., the maximum number of linearly independent eigenvectors with that eigenvalue.
The eigenspaces of T always form a direct
sum. Therefore the sum of the dimensions of
the eigenspaces cannot exceed the dimension
n of the space on which T operates, and in
particular there cannot be more than n distinct
eigenvalues.
Any subspace spanned by eigenvectors of $T$ is an invariant subspace of $T$, and the restriction of $T$ to such a subspace is diagonalizable.
The set of eigenvalues of $T$ is sometimes called the spectrum of $T$.
Eigenbasis
An eigenbasis for a linear operator $T$ that operates on a vector space $V$ is a basis for $V$ that consists entirely of eigenvectors of $T$. Such a basis exists precisely when the direct sum of the eigenspaces equals the whole space, in which case one can take the union of bases chosen in each of the eigenspaces as an eigenbasis. The matrix of $T$ in a given basis is diagonal precisely when that basis is an eigenbasis for $T$, and for this reason $T$ is called diagonalizable if it admits an eigenbasis.
Generalizations to infinite-dimensional spaces
The definition of eigenvalue of a linear transformation $T$ remains valid even if the underlying space is an infinite-dimensional Hilbert or Banach space. Namely, a scalar $\lambda$ is an eigenvalue if and only if there is some nonzero vector $v$ such that $T(v) = \lambda v$.
Eigenfunctions
A widely used class of linear operators acting on infinite-dimensional spaces are the differential operators on function spaces. Let $D$ be a linear differential operator on the space $C^\infty$ of infinitely differentiable real functions of a real argument $t$. The eigenvalue equation for $D$ is the differential equation
$$D f(t) = \lambda f(t).$$
The functions that satisfy this equation are commonly called eigenfunctions of $D$. For the derivative operator $\frac{d}{dt}$, an eigenfunction is a function that, when differentiated, yields a constant times the original function; that is, $\frac{df}{dt} = \lambda f$. The solution is an exponential function,
$$f(t) = f(0) e^{\lambda t},$$
including the case $\lambda = 0$, when it becomes a constant function. Eigenfunctions are an essential tool in the solution of differential equations and many other applied and theoretical fields. For instance, the exponential functions are eigenfunctions of the shift operators. This is the basis of Fourier transform methods for solving problems.
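The eigenfunction property can be verified symbolically (a minimal sketch using SymPy, mirroring the derivative-operator example above):

```python
import sympy as sp

t, lam = sp.symbols('t lambda')
f = sp.exp(lam * t)                  # candidate eigenfunction of d/dt

# Differentiating returns lambda times the original function, so
# exp(lambda*t) is an eigenfunction of d/dt with eigenvalue lambda.
assert sp.simplify(sp.diff(f, t) - lam * f) == 0
print(sp.diff(f, t))                 # lambda*exp(lambda*t)
```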
Spectral theory
If $\lambda$ is an eigenvalue of $T$, then the operator $T - \lambda I$ is not one-to-one, and therefore its inverse $(T - \lambda I)^{-1}$ does not exist. The converse is true for finite-dimensional vector spaces, but not for infinite-dimensional ones. In general, the operator $T - \lambda I$ may fail to have an inverse even if $\lambda$ is not an eigenvalue. For this reason, in functional analysis one defines the spectrum of a linear operator $T$ as the set of all scalars $\lambda$ for which the operator $T - \lambda I$ has no bounded inverse. Thus the spectrum of an operator always contains all its eigenvalues, but is not limited to them.
Associative algebras and representation theory
More algebraically, rather than generalizing
the vector space to an infinite dimensional
space, one can generalize the algebraic object
that is acting on the space, replacing a single
operator acting on a vector space with an
algebra representation – an associative
algebra acting on a module. The study of such
actions is the field of representation theory.
A closer analog of eigenvalues is given by
the representation-theoretical concept of
weight, with the analogs of eigenvectors and
eigenspaces being weight vectors and weight
spaces.
Eigenvalues and eigenvectors of matrices
Characteristic polynomial
The eigenvalue equation for a matrix $A$ is
$$A v - \lambda v = 0,$$
which is equivalent to
$$(A - \lambda I) v = 0,$$
where $I$ is the $n \times n$ identity matrix. It is a fundamental result of linear algebra that an equation $M v = 0$ has a non-zero solution $v$ if, and only if, the determinant $\det(M)$ of the matrix $M$ is zero. It follows that the eigenvalues of $A$ are precisely the real numbers $\lambda$ that satisfy the equation
$$\det(A - \lambda I) = 0.$$
The left-hand side of this equation can be seen to be a polynomial function of the variable $\lambda$. The degree of this polynomial is $n$, the order of the matrix. Its coefficients depend on the entries of $A$, except that its term of degree $n$ is always $(-1)^n \lambda^n$. This polynomial is called the characteristic polynomial of $A$, and the above equation is called the characteristic equation of $A$.
For example, let $A$ be the matrix
$$A = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 3 & 4 \\ 0 & 4 & 9 \end{bmatrix}.$$
The characteristic polynomial of $A$ is
$$\det(A - \lambda I) = \det \begin{bmatrix} 2-\lambda & 0 & 0 \\ 0 & 3-\lambda & 4 \\ 0 & 4 & 9-\lambda \end{bmatrix},$$
which is
$$(2-\lambda)\bigl[(3-\lambda)(9-\lambda) - 16\bigr] = 22 - 35\lambda + 14\lambda^2 - \lambda^3.$$
The roots of this polynomial are 2, 1, and 11. Indeed, these are the only three eigenvalues of $A$, corresponding to the eigenvectors $\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$, $\begin{bmatrix} 0 \\ 2 \\ -1 \end{bmatrix}$, and $\begin{bmatrix} 0 \\ 1 \\ 2 \end{bmatrix}$ (or any non-zero multiples thereof).
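These numbers can be confirmed numerically (a minimal sketch using NumPy, with the matrix from the example above):

```python
import numpy as np

A = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 4.0],
              [0.0, 4.0, 9.0]])

# Coefficients of the characteristic polynomial, highest degree first.
print(np.poly(A))               # [  1. -14.  35. -22.]
print(np.roots(np.poly(A)))     # roots of the characteristic polynomial
print(np.linalg.eigvals(A))     # the same numbers: 11., 1., 2. (order may differ)
```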
In the real domain
Since the eigenvalues are roots of the characteristic polynomial, an $n \times n$ matrix has at most $n$ eigenvalues. If the matrix has real entries, the coefficients of the characteristic polynomial are all real; but it may have fewer than $n$ real roots, or no real roots at all.
For example, consider the cyclic permutation matrix
$$A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix}.$$
This matrix shifts the coordinates of the vector up by one position, and moves the first coordinate to the bottom. Its characteristic polynomial is $1 - \lambda^3$, which has only one real root, $\lambda = 1$. Any vector with three equal non-zero coordinates is an eigenvector for this eigenvalue. For example,
$$A \begin{bmatrix} 5 \\ 5 \\ 5 \end{bmatrix} = \begin{bmatrix} 5 \\ 5 \\ 5 \end{bmatrix} = 1 \cdot \begin{bmatrix} 5 \\ 5 \\ 5 \end{bmatrix}.$$
In the complex domain
The fundamental theorem of algebra implies that the characteristic polynomial of an $n \times n$ matrix $A$, being a polynomial of degree $n$, has exactly $n$ complex roots. More precisely, it can be factored into the product of $n$ linear terms,
$$\det(A - \lambda I) = (\lambda_1 - \lambda)(\lambda_2 - \lambda) \cdots (\lambda_n - \lambda),$$
where each $\lambda_i$ is a complex number. The numbers $\lambda_1$, $\lambda_2$, ..., $\lambda_n$, which may not all be distinct, are roots of the polynomial, and are precisely the eigenvalues of $A$.
Even if the entries of $A$ are all real numbers, the eigenvalues may still have non-zero imaginary parts. Also, the eigenvalues may be irrational numbers even if all the entries of $A$ are rational numbers, or all are integers. However, if the entries of $A$ are algebraic numbers, the eigenvalues will be algebraic numbers too.
The non-real roots of a polynomial with real coefficients can be grouped into pairs of complex conjugate values, the two members of each pair having the same real part and imaginary parts that differ only in sign. If the degree is odd, then by the intermediate value theorem at least one of the roots is real. Therefore, any real matrix with odd order has at least one real eigenvalue, whereas a real matrix with even order may have no real eigenvalues.
In the example of the 3×3 cyclic permutation matrix $A$, above, the characteristic polynomial $1 - \lambda^3$ has two additional non-real roots, namely
$$\lambda_2 = -\tfrac{1}{2} + \tfrac{\sqrt{3}}{2} i \quad\text{and}\quad \lambda_3 = \lambda_2^* = -\tfrac{1}{2} - \tfrac{\sqrt{3}}{2} i,$$
where $i$ is the imaginary unit. Note that $\lambda_2 \lambda_3 = 1$, $\lambda_2^2 = \lambda_3$, and $\lambda_3^2 = \lambda_2$. Then
$$A \begin{bmatrix} 1 \\ \lambda_2 \\ \lambda_3 \end{bmatrix} = \begin{bmatrix} \lambda_2 \\ \lambda_3 \\ 1 \end{bmatrix} = \lambda_2 \begin{bmatrix} 1 \\ \lambda_2 \\ \lambda_3 \end{bmatrix}
\quad\text{and}\quad
A \begin{bmatrix} 1 \\ \lambda_3 \\ \lambda_2 \end{bmatrix} = \lambda_3 \begin{bmatrix} 1 \\ \lambda_3 \\ \lambda_2 \end{bmatrix}.$$
Therefore, the vectors $(1, \lambda_2, \lambda_3)^\mathsf{T}$ and $(1, \lambda_3, \lambda_2)^\mathsf{T}$ are eigenvectors of $A$, with eigenvalues $\lambda_2$ and $\lambda_3$, respectively.
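A numerical eigensolver returns these complex eigenvalues directly (a minimal NumPy check on the permutation matrix above):

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])

# A real matrix can have complex eigenvalues; they come in conjugate pairs.
print(np.linalg.eigvals(A))   # [-0.5+0.866j -0.5-0.866j  1.0+0.j] (order may differ)
```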
Algebraic multiplicities
Let $\lambda_i$ be an eigenvalue of an $n \times n$ matrix $A$. The algebraic multiplicity $\mu_A(\lambda_i)$ of $\lambda_i$ is its multiplicity as a root of the characteristic polynomial, that is, the largest integer $k$ such that $(\lambda - \lambda_i)^k$ divides that polynomial evenly.
Like the geometric multiplicity $\gamma_A(\lambda_i)$, the algebraic multiplicity is an integer between 1 and $n$; and the sum of $\mu_A(\lambda_i)$ over all distinct eigenvalues cannot exceed $n$. If complex eigenvalues are considered, the sum is exactly $n$.
It can be proved that the geometric multiplicity of an eigenvalue never exceeds its algebraic multiplicity; therefore, $\gamma_A(\lambda_i)$ is at most $\mu_A(\lambda_i)$, which in turn is at most $n$.
If $\gamma_A(\lambda_i) = \mu_A(\lambda_i)$, then $\lambda_i$ is said to be a semisimple eigenvalue.
Example
For the matrix
$$A = \begin{bmatrix} 2 & 0 & 0 & 0 \\ 1 & 2 & 0 & 0 \\ 0 & 1 & 3 & 0 \\ 0 & 0 & 1 & 3 \end{bmatrix},$$
the characteristic polynomial of $A$ is
$$\det(A - \lambda I) = (2-\lambda)^2 (3-\lambda)^2,$$
since $A - \lambda I$ is a lower triangular matrix, whose determinant is the product of its diagonal entries.
The roots of this polynomial, and hence the eigenvalues, are 2 and 3. The algebraic multiplicity of each eigenvalue is 2; in other words, they are both double roots. On the other hand, the geometric multiplicity of the eigenvalue 2 is only 1, because its eigenspace is spanned by the single vector $(0, 1, -1, 1)^\mathsf{T}$, and is therefore 1-dimensional. Similarly, the geometric multiplicity of the eigenvalue 3 is 1 because its eigenspace is spanned by $(0, 0, 0, 1)^\mathsf{T}$. Hence, the total algebraic multiplicity of $A$, denoted $\mu_A$, is 4, which is the most it could be for a 4 by 4 matrix. The total geometric multiplicity $\gamma_A$ is 2, which is the smallest it could be for a matrix which has two distinct eigenvalues.
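Both multiplicities can be checked symbolically (a minimal SymPy sketch, using the matrix from this example):

```python
import sympy as sp

A = sp.Matrix([[2, 0, 0, 0],
               [1, 2, 0, 0],
               [0, 1, 3, 0],
               [0, 0, 1, 3]])

# eigenvals() maps each eigenvalue to its algebraic multiplicity.
print(A.eigenvals())                      # {2: 2, 3: 2}

# The geometric multiplicity is the dimension of the null space of A - lambda*I.
for lam in (2, 3):
    eigenspace = (A - lam * sp.eye(4)).nullspace()
    print(lam, len(eigenspace))           # 2 1  and  3 1
```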
Diagonalization and eigendecomposition
If the sum of the geometric multiplicities of all eigenvalues is exactly $n$, then $A$ has a set of $n$ linearly independent eigenvectors. Let $Q$ be a square matrix whose columns are those eigenvectors, in any order. Then we will have $A Q = Q \Lambda$, where $\Lambda$ is the diagonal matrix such that $\Lambda_{ii}$ is the eigenvalue associated to column $i$ of $Q$. Since the columns of $Q$ are linearly independent, the matrix $Q$ is invertible. Premultiplying both sides by $Q^{-1}$ we get $Q^{-1} A Q = \Lambda$. By definition, therefore, the matrix $A$ is diagonalizable.
Conversely, if $A$ is diagonalizable, let $Q$ be a non-singular square matrix such that $Q^{-1} A Q$ is some diagonal matrix $D$. Multiplying both sides on the left by $Q$ we get $A Q = Q D$. Therefore, each column of $Q$ must be an eigenvector of $A$, whose eigenvalue is the corresponding element on the diagonal of $D$. Since the columns of $Q$ must be linearly independent, the sum of the geometric multiplicities must equal $n$. Thus the sum of the geometric multiplicities equals $n$ if and only if $A$ is diagonalizable.
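The decomposition is easy to form and verify numerically (a minimal NumPy sketch, reusing the 2×2 matrix from "Another example" above):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Columns of Q are eigenvectors; Lam holds the eigenvalues on its diagonal.
vals, Q = np.linalg.eig(A)
Lam = np.diag(vals)

# Q^{-1} A Q reproduces the diagonal matrix of eigenvalues...
print(np.allclose(np.linalg.inv(Q) @ A @ Q, Lam))   # True
# ...and equivalently A = Q Lam Q^{-1}.
print(np.allclose(A, Q @ Lam @ np.linalg.inv(Q)))   # True
```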
If $A$ is diagonalizable, the space of all $n$-coordinate vectors can be decomposed into the direct sum of the eigenspaces of $A$. This decomposition is called the eigendecomposition of $A$, and it is preserved under change of coordinates.
A matrix that is not diagonalizable is said to be defective. For defective matrices, the notion of eigenvector can be generalized to generalized eigenvectors, and that of diagonal matrix to a Jordan normal form matrix. Over an algebraically closed field, any matrix has a Jordan normal form and therefore admits a basis of generalized eigenvectors, and a decomposition into generalized eigenspaces.
Further properties
Let $A$ be an arbitrary $n \times n$ matrix of complex numbers with eigenvalues $\lambda_1$, $\lambda_2$, ..., $\lambda_n$, where each eigenvalue is listed as many times as its algebraic multiplicity. Then:
The trace of $A$, defined as the sum of its diagonal elements, is also the sum of all eigenvalues: $\operatorname{tr}(A) = \sum_i A_{ii} = \lambda_1 + \lambda_2 + \cdots + \lambda_n$.
The determinant of $A$ is the product of all eigenvalues: $\det(A) = \lambda_1 \lambda_2 \cdots \lambda_n$.
The eigenvalues of the $k$th power of $A$, i.e. the eigenvalues of $A^k$, for any positive integer $k$, are $\lambda_1^k, \lambda_2^k, \ldots, \lambda_n^k$.
The matrix $A$ is invertible if and only if all the eigenvalues are nonzero.
If $A$ is invertible, then the eigenvalues of $A^{-1}$ are $\frac{1}{\lambda_1}, \frac{1}{\lambda_2}, \ldots, \frac{1}{\lambda_n}$. Clearly, the geometric multiplicities coincide. Moreover, since the characteristic polynomial of the inverse is the reciprocal polynomial of that of the original, the eigenvalues share the same algebraic multiplicities.
If $A$ is equal to its conjugate transpose $A^*$ (in other words, if $A$ is Hermitian), then every eigenvalue is real. The same is true of any symmetric real matrix. If $A$ is moreover positive-definite, positive-semidefinite, negative-definite, or negative-semidefinite, then every eigenvalue is positive, non-negative, negative, or non-positive, respectively.
Every eigenvalue of a unitary matrix has absolute value $|\lambda| = 1$.
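Several of these properties are easy to confirm numerically (a minimal NumPy sketch on a random matrix; equalities hold up to floating-point error):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
vals = np.linalg.eigvals(A)

print(np.isclose(np.trace(A), vals.sum()))          # trace = sum of eigenvalues
print(np.isclose(np.linalg.det(A), vals.prod()))    # det = product of eigenvalues
print(np.allclose(np.sort_complex(np.linalg.eigvals(A @ A @ A)),
                  np.sort_complex(vals ** 3)))      # eigenvalues of A^3 are the cubes
print(np.allclose(np.sort_complex(np.linalg.eigvals(np.linalg.inv(A))),
                  np.sort_complex(1 / vals)))       # eigenvalues of A^{-1} are reciprocals
```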
Left and right eigenvectors
The use of matrices with a single column to represent vectors is traditional in many disciplines. For that reason, the word "eigenvector" almost always means a right eigenvector, namely a column vector $v$ that must be placed to the right of the matrix $A$ in the defining equation
$$A v = \lambda v.$$
There may also be single-row vectors that are mapped to multiples of themselves when they occur on the left side of a product with a square matrix $A$; that is, row vectors $u$ which satisfy the equation
$$u A = \kappa u.$$
Any such row vector $u$ is called a left eigenvector of $A$.
The left eigenvectors of $A$ are transposes of the right eigenvectors of the transposed matrix $A^\mathsf{T}$, since their defining equation is equivalent to
$$A^\mathsf{T} u^\mathsf{T} = \kappa u^\mathsf{T}.$$
It follows that, if $A$ is Hermitian, its left and right eigenvectors are complex conjugates. In particular, if $A$ is a real symmetric matrix, they are the same except for transposition.
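Library eigensolvers can return both kinds at once; for instance, SciPy's `scipy.linalg.eig` accepts `left=True` (a minimal sketch with an illustrative matrix):

```python
import numpy as np
from scipy.linalg import eig

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])

# vl[:, i] is a left eigenvector and vr[:, i] a right eigenvector for w[i].
w, vl, vr = eig(A, left=True, right=True)

for i in range(2):
    print(np.allclose(A @ vr[:, i], w[i] * vr[:, i]))                        # A v = lambda v
    print(np.allclose(vl[:, i].conj().T @ A, w[i] * vl[:, i].conj().T))      # u A = lambda u
```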
Variational characterization
In the Hermitian case, eigenvalues can be given a variational characterization. The largest eigenvalue of $H$ is the maximum value of the quadratic form $x^* H x / x^* x$ over all non-zero vectors $x$. A value of $x$ that realizes that maximum is an eigenvector. For more information, see Min-max theorem.
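A quick numerical illustration of this characterization (a minimal NumPy sketch; no random vector's Rayleigh quotient exceeds the largest eigenvalue of a Hermitian matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
H = (B + B.T) / 2                          # a real symmetric (hence Hermitian) matrix

lam_max = np.linalg.eigvalsh(H).max()

# Rayleigh quotients of random vectors are bounded above by the top eigenvalue.
x = rng.standard_normal((5, 1000))
rayleigh = np.einsum('ij,ij->j', x, H @ x) / np.einsum('ij,ij->j', x, x)
print(rayleigh.max() <= lam_max + 1e-12)   # True
```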
Calculation
Computing the eigenvalues
The eigenvalues of a matrix can be determined
by finding the roots of the characteristic
polynomial. Explicit algebraic formulas for
the roots of a polynomial exist only if the
degree is 4 or less. According to the Abel–Ruffini
theorem there is no general, explicit and
exact algebraic formula for the roots of a
polynomial with degree 5 or more.
It turns out that any polynomial with degree $n$ is the characteristic polynomial of some companion matrix of order $n$. Therefore, for matrices of order 5 or more, the eigenvalues and eigenvectors cannot, in general, be obtained by an explicit algebraic formula, and must be computed by approximate numerical methods.
In theory, the coefficients of the characteristic
polynomial can be computed exactly, since
they are sums of products of matrix elements;
and there are algorithms that can find all
the roots of a polynomial of arbitrary degree
to any required accuracy. However, this approach
is not viable in practice because the coefficients
would be contaminated by unavoidable round-off
errors, and the roots of a polynomial can
be an extremely sensitive function of the
coefficients.
Efficient, accurate methods to compute eigenvalues
and eigenvectors of arbitrary matrices were
not known until the advent of the QR algorithm
in 1961. Combining the Householder transformation
with the LU decomposition results in an algorithm
with better convergence than the QR algorithm.
For large Hermitian sparse matrices, the Lanczos
algorithm is one example of an efficient iterative
method to compute eigenvalues and eigenvectors,
among several other possibilities.
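In practice one calls a library routine; for instance, SciPy's `eigsh` wraps a Lanczos-type iterative solver for large Hermitian sparse matrices (a minimal sketch; the 1-D discrete Laplacian is an illustrative test matrix):

```python
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# A large sparse symmetric matrix: the 1-D discrete Laplacian.
n = 1000
L = sp.diags([-1, 2, -1], offsets=[-1, 0, 1], shape=(n, n), format='csr')

# Compute only the 3 smallest eigenvalues and their eigenvectors.
vals, vecs = eigsh(L, k=3, which='SM')
print(vals)   # close to the analytic values 4*sin^2(k*pi/(2*(n+1))), k = 1, 2, 3
```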
Computing the eigenvectors
Once the value of an eigenvalue is known, the corresponding eigenvectors can be found by finding non-zero solutions of the eigenvalue equation, which becomes a system of linear equations with known coefficients. For example, once it is known that 6 is an eigenvalue of the matrix
$$A = \begin{bmatrix} 4 & 1 \\ 6 & 3 \end{bmatrix},$$
we can find its eigenvectors by solving the equation $A v = 6 v$, that is,
$$\begin{bmatrix} 4 & 1 \\ 6 & 3 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = 6 \cdot \begin{bmatrix} x \\ y \end{bmatrix}.$$
This matrix equation is equivalent to two linear equations,
$$4x + y = 6x \quad\text{and}\quad 6x + 3y = 6y,$$
that is,
$$-2x + y = 0 \quad\text{and}\quad 6x - 3y = 0.$$
Both equations reduce to the single linear equation $y = 2x$. Therefore, any vector of the form $(a, 2a)^\mathsf{T}$, for any non-zero real number $a$, is an eigenvector of $A$ with eigenvalue $\lambda = 6$.
The matrix $A$ above has another eigenvalue $\lambda = 1$. A similar calculation shows that the corresponding eigenvectors are the non-zero solutions of $3x + y = 0$, that is, any vector of the form $(b, -3b)^\mathsf{T}$, for any non-zero real number $b$.
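The same eigenvectors drop out of a null-space computation (a minimal SymPy sketch, using the matrix above):

```python
import sympy as sp

A = sp.Matrix([[4, 1],
               [6, 3]])

# Eigenvectors for lambda = 6 span the null space of A - 6I; likewise for lambda = 1.
print((A - 6 * sp.eye(2)).nullspace())   # [Matrix([[1/2], [1]])], i.e. multiples of (1, 2)
print((A - 1 * sp.eye(2)).nullspace())   # [Matrix([[-1/3], [1]])], i.e. multiples of (1, -3)
```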
Some numeric methods that compute the eigenvalues
of a matrix also determine a set of corresponding
eigenvectors as a by-product of the computation.
History
Eigenvalues are often introduced in the context
of linear algebra or matrix theory. Historically,
however, they arose in the study of quadratic
forms and differential equations.
In the 18th century Euler studied the rotational
motion of a rigid body and discovered the
importance of the principal axes. Lagrange
realized that the principal axes are the eigenvectors
of the inertia matrix. In the early 19th century,
Cauchy saw how their work could be used to
classify the quadric surfaces, and generalized
it to arbitrary dimensions. Cauchy also coined
the term racine caractéristique for what
is now called eigenvalue; his term survives
in characteristic equation.
Fourier used the work of Laplace and Lagrange
to solve the heat equation by separation of
variables in his famous 1822 book Théorie
analytique de la chaleur. Sturm developed
Fourier's ideas further and brought them to
the attention of Cauchy, who combined them
with his own ideas and arrived at the fact
that real symmetric matrices have real eigenvalues.
This was extended by Hermite in 1855 to what
are now called Hermitian matrices. Around
the same time, Brioschi proved that the eigenvalues
of orthogonal matrices lie on the unit circle,
and Clebsch found the corresponding result
for skew-symmetric matrices. Finally, Weierstrass
clarified an important aspect in the stability
theory started by Laplace by realizing that
defective matrices can cause instability.
In the meantime, Liouville studied eigenvalue
problems similar to those of Sturm; the discipline
that grew out of their work is now called
Sturm–Liouville theory. Schwarz studied
the first eigenvalue of Laplace's equation
on general domains towards the end of the
19th century, while Poincaré studied Poisson's
equation a few years later.
At the start of the 20th century, Hilbert
studied the eigenvalues of integral operators
by viewing the operators as infinite matrices.
He was the first to use the German word eigen, which means "own", to denote eigenvalues and eigenvectors, in 1904, though he may have been following a related usage by Helmholtz. For some time, the standard term in English was "proper value", but the more distinctive term "eigenvalue" is standard today.
The first numerical algorithm for computing eigenvalues and eigenvectors appeared in 1929, when Richard von Mises published the power method.
One of the most popular methods today, the
QR algorithm, was proposed independently by
John G.F. Francis and Vera Kublanovskaya in
1961.
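The power method itself fits in a few lines (a minimal NumPy sketch of the iteration: repeatedly apply the matrix and normalize; for matrices with a unique dominant eigenvalue and a suitable starting vector, it converges to the dominant eigenpair):

```python
import numpy as np

def power_method(A, num_iters=1000):
    """Approximate the dominant eigenvalue and eigenvector of A by power iteration."""
    v = np.random.default_rng(0).standard_normal(A.shape[0])
    for _ in range(num_iters):
        w = A @ v
        v = w / np.linalg.norm(w)   # normalize to avoid overflow/underflow
    lam = v @ A @ v                 # Rayleigh quotient estimate of the eigenvalue
    return lam, v

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
print(power_method(A)[0])           # ~3.0, the dominant eigenvalue
```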
Applications
Eigenvalues of geometric transformations
The following are some example transformations in the plane along with their 2×2 matrices, eigenvalues, and eigenvectors. A uniform scaling by a factor $k$ has matrix $kI$; its only eigenvalue is $k$, and every non-zero vector is an eigenvector. A horizontal shear has matrix $\begin{bmatrix} 1 & k \\ 0 & 1 \end{bmatrix}$; its only eigenvalue is 1 (a double root), and its eigenvectors lie along the horizontal axis. A rotation by an angle $\theta$ has matrix $\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$.
Note that the characteristic equation for a rotation is a quadratic equation with discriminant $D = -4\sin^2\theta$, which is a negative number whenever $\theta$ is not an integer multiple of 180°. Therefore, except for these special cases, the two eigenvalues are the complex numbers $\cos\theta \pm i\sin\theta$, and all eigenvectors have non-real entries. Indeed, except for those special cases, a rotation changes the direction of every nonzero vector in the plane.
Schrödinger equation
An example of an eigenvalue equation where the transformation $T$ is represented in terms of a differential operator is the time-independent Schrödinger equation in quantum mechanics:
$$H \psi_E = E \psi_E,$$
where $H$, the Hamiltonian, is a second-order differential operator and $\psi_E$, the wavefunction, is one of its eigenfunctions corresponding to the eigenvalue $E$, interpreted as its energy.
However, in the case where one is interested only in the bound state solutions of the Schrödinger equation, one looks for $\psi_E$ within the space of square integrable functions. Since this space is a Hilbert space with a well-defined scalar product, one can introduce a basis set in which $\psi_E$ and $H$ can be represented as a one-dimensional array (a vector) and a matrix, respectively. This allows one to represent the Schrödinger equation in a matrix form.
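To illustrate the matrix form (a minimal NumPy sketch, not from the original text: a finite-difference discretization of the Hamiltonian for a particle in a box, in units where $\hbar^2/2m = 1$):

```python
import numpy as np

# Discretize H = -d^2/dx^2 on [0, 1] with zero boundary conditions.
n = 500
dx = 1.0 / (n + 1)
H = (np.diag(np.full(n, 2.0)) +
     np.diag(np.full(n - 1, -1.0), 1) +
     np.diag(np.full(n - 1, -1.0), -1)) / dx**2

# H is symmetric, so eigh applies; the eigenvalues approximate E_k = (k*pi)^2.
energies, wavefuncs = np.linalg.eigh(H)
print(energies[:3])   # ~ [9.87, 39.5, 88.8], i.e. pi^2, (2 pi)^2, (3 pi)^2
```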
The bra–ket notation is often used in this context. A vector, which represents a state of the system, in the Hilbert space of square integrable functions is represented by $|\Psi_E\rangle$. In this notation, the Schrödinger equation is:
$$H |\Psi_E\rangle = E |\Psi_E\rangle,$$
where $|\Psi_E\rangle$ is an eigenstate of $H$ and $E$ represents the eigenvalue. $H$ is a self-adjoint operator, the infinite-dimensional analog of a Hermitian matrix. As in the matrix case, $H |\Psi_E\rangle$ in the equation above is understood to be the vector obtained by application of the transformation $H$ to $|\Psi_E\rangle$.
Molecular orbitals
In quantum mechanics, and in particular in
atomic and molecular physics, within the Hartree–Fock
theory, the atomic and molecular orbitals
can be defined by the eigenvectors of the
Fock operator. The corresponding eigenvalues
are interpreted as ionization potentials via
Koopmans' theorem. In this case, the term
eigenvector is used in a somewhat more general
meaning, since the Fock operator is explicitly
dependent on the orbitals and their eigenvalues.
If one wants to underline this aspect, one speaks of a nonlinear eigenvalue problem. Such equations are usually solved by an iteration procedure, called in this case the self-consistent field method. In quantum chemistry, one often represents the Hartree–Fock equation in a non-orthogonal basis set. This particular representation is a generalized eigenvalue problem called the Roothaan equations.
Geology and glaciology
In geology, especially in the study of glacial
till, eigenvectors and eigenvalues are used
as a method by which a mass of information
of a clast fabric's constituents' orientation
and dip can be summarized in a 3-D space by
six numbers. In the field, a geologist may
collect such data for hundreds or thousands
of clasts in a soil sample, which can only
be compared graphically such as in a Tri-Plot
diagram, or as a Stereonet on a Wulff Net.
The output for the orientation tensor is in the three orthogonal axes of space. The three eigenvectors are ordered $v_1, v_2, v_3$ by their eigenvalues $E_1 \geq E_2 \geq E_3$; $v_1$ then is the primary orientation/dip of the clasts, $v_2$ is the secondary, and $v_3$ is the tertiary, in terms of strength. The clast orientation is defined as the direction of the eigenvector, on a compass rose of 360°. Dip is measured as the eigenvalue, the modulus of the tensor: this is valued from 0° to 90°. The relative values of $E_1$, $E_2$, and $E_3$ are dictated by the nature of the sediment's fabric. If $E_1 = E_2 = E_3$, the fabric is said to be isotropic. If $E_1 = E_2 > E_3$, the fabric is said to be planar. If $E_1 > E_2 > E_3$, the fabric is said to be linear.
Principal components analysis
The eigendecomposition of a symmetric positive
semidefinite matrix yields an orthogonal basis
of eigenvectors, each of which has a nonnegative
eigenvalue. The orthogonal decomposition of
a PSD matrix is used in multivariate analysis,
where the sample covariance matrices are PSD.
This orthogonal decomposition is called principal
components analysis in statistics. PCA studies
linear relations among variables. PCA is performed
on the covariance matrix or the correlation
matrix. For the covariance or correlation
matrix, the eigenvectors correspond to principal
components and the eigenvalues to the variance
explained by the principal components. Principal component analysis of the correlation matrix provides an orthonormal eigenbasis for the space of the observed data: in this basis, the largest eigenvalues correspond to the principal components that are associated with most of the covariability among a number of observed data.
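A minimal sketch of PCA via the eigendecomposition of the sample covariance matrix (NumPy; the synthetic data and variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 3)) @ np.diag([3.0, 1.0, 0.1])   # 200 samples, 3 variables

Xc = X - X.mean(axis=0)                  # center the data
C = np.cov(Xc, rowvar=False)             # sample covariance matrix (symmetric PSD)

# eigh returns eigenvalues in ascending order; reverse for descending variance.
evals, evecs = np.linalg.eigh(C)
order = np.argsort(evals)[::-1]
variance_explained = evals[order]        # variance along each principal component
components = evecs[:, order]             # columns are the principal directions
scores = Xc @ components                 # the data expressed in the eigenbasis
print(variance_explained)
```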
Principal component analysis is used to study
large data sets, such as those encountered
in data mining, chemical research, psychology,
and in marketing. PCA is popular especially
in psychology, in the field of psychometrics.
In Q methodology, the eigenvalues of the correlation
matrix determine the Q-methodologist's judgment
of practical significance. More generally,
principal component analysis can be used as
a method of factor analysis in structural
equation modeling.
Vibration analysis
Eigenvalue problems occur naturally in the vibration analysis of mechanical structures with many degrees of freedom. The eigenvalues are used to determine the natural frequencies of vibration, and the eigenvectors determine the shapes of these vibrational modes. In particular, undamped vibration is governed by
$$m \ddot{x} + k x = 0,$$
or
$$m \ddot{x} = -k x;$$
that is, acceleration is proportional to position. In $n$ dimensions, $m$ becomes a mass matrix $M$ and $k$ a stiffness matrix $K$. Admissible solutions are then a linear combination of solutions to the generalized eigenvalue problem
$$K x = \omega^2 M x,$$
where $\omega^2$ is the eigenvalue and $\omega$ is the angular frequency. Note that the principal vibration modes are different from the principal compliance modes, which are the eigenvectors of $K$ alone. Furthermore, damped vibration, governed by
$$M \ddot{x} + C \dot{x} + K x = 0,$$
leads to a so-called quadratic eigenvalue problem,
$$(\omega^2 M + \omega C + K) x = 0.$$
This can be reduced to a generalized eigenvalue problem by clever use of algebra, at the cost of solving a larger system.
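Such generalized eigenproblems can be handed directly to a library routine (a minimal SciPy sketch; the 2-degree-of-freedom mass and stiffness matrices are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import eigh

M = np.array([[2.0, 0.0],
              [0.0, 1.0]])        # mass matrix
K = np.array([[ 6.0, -2.0],
              [-2.0,  4.0]])      # stiffness matrix

# Solve K x = (omega^2) M x; eigh handles the symmetric generalized problem.
omega_sq, modes = eigh(K, M)
print(np.sqrt(omega_sq))          # natural angular frequencies
print(modes)                      # columns are the mode shapes
```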
The orthogonality properties of the eigenvectors allow decoupling of the differential equations, so that the system can be represented as a linear summation of the eigenvectors. The eigenvalue problem of complex structures is often solved using finite element analysis, which neatly generalizes the solution of scalar-valued vibration problems.
Eigenfaces
In image processing, processed images of faces
can be seen as vectors whose components are
the brightnesses of each pixel. The dimension
of this vector space is the number of pixels.
The eigenvectors of the covariance matrix
associated with a large set of normalized
pictures of faces are called eigenfaces; this
is an example of principal components analysis.
They are very useful for expressing any face
image as a linear combination of some of them.
In the facial recognition branch of biometrics,
eigenfaces provide a means of applying data
compression to faces for identification purposes.
Research on eigen vision systems for determining hand gestures has also been conducted.
Similar to this concept, eigenvoices represent
the general direction of variability in human
pronunciations of a particular utterance,
such as a word in a language. Based on a linear
combination of such eigenvoices, a new voice
pronunciation of the word can be constructed.
These concepts have been found useful in automatic
speech recognition systems, for speaker adaptation.
Tensor of moment of inertia
In mechanics, the eigenvectors of the moment
of inertia tensor define the principal axes
of a rigid body. The tensor of moment of inertia
is a key quantity required to determine the
rotation of a rigid body around its center
of mass.
Stress tensor
In solid mechanics, the stress tensor is symmetric and so can be decomposed into a diagonal tensor with the eigenvalues on the diagonal and the eigenvectors as a basis. Because it is diagonal in this orientation, the stress tensor has no shear components; the components it does have are the principal components.
Eigenvalues of a graph
In spectral graph theory, an eigenvalue of a graph is defined as an eigenvalue of the graph's adjacency matrix $A$, or of the graph's Laplacian matrix, which is either $T - A$ or $I - T^{-1/2} A T^{-1/2}$, where $T$ is a diagonal matrix with $T_{ii}$ equal to the degree of vertex $v_i$, and in $T^{-1/2}$, the $i$th diagonal entry is $1/\sqrt{\deg(v_i)}$. The $k$th principal eigenvector of a graph is defined as either the eigenvector corresponding to the $k$th largest eigenvalue of $A$, or the eigenvector corresponding to the $k$th smallest eigenvalue of the Laplacian. The first principal eigenvector of the graph is also referred to merely as the principal eigenvector.
The principal eigenvector is used to measure
the centrality of its vertices. An example
is Google's PageRank algorithm. The principal
eigenvector of a modified adjacency matrix
of the World Wide Web graph gives the page
ranks as its components. This vector corresponds
to the stationary distribution of the Markov
chain represented by the row-normalized adjacency
matrix; however, the adjacency matrix must
first be modified to ensure a stationary distribution
exists. The eigenvector associated with the second smallest eigenvalue of the Laplacian can be used to partition the graph into clusters, via spectral clustering. Other methods are also available for clustering.
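To make the PageRank description concrete (a minimal NumPy sketch, an illustrative toy rather than Google's actual implementation; the damping factor 0.85 and the 3-page graph are assumptions):

```python
import numpy as np

# adjacency[i, j] = 1 if page i links to page j (a tiny illustrative web graph).
adjacency = np.array([[0, 1, 1],
                      [1, 0, 0],
                      [0, 1, 0]], dtype=float)

P = adjacency / adjacency.sum(axis=1, keepdims=True)   # row-normalized link matrix
d = 0.85                                               # damping factor
n = P.shape[0]
G = d * P + (1 - d) / n                                # modification ensures a stationary distribution

# The PageRank vector is the stationary distribution: the left eigenvector of G
# for eigenvalue 1, found here by power iteration.
r = np.full(n, 1.0 / n)
for _ in range(100):
    r = r @ G
print(r / r.sum())
```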
Basic reproduction number
See Basic reproduction number
The basic reproduction number ($R_0$) is a fundamental number in the study of how infectious diseases spread. If one infectious person is put into a population of completely susceptible people, then $R_0$ is the average number of people that one typical infectious person will infect. The generation time of an infection is the time, $t_G$, from one person becoming infected to the next person becoming infected. In a heterogeneous population, the next generation matrix defines how many people in the population will become infected after time $t_G$ has passed. $R_0$ is then the largest eigenvalue of the next generation matrix.
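Computing $R_0$ from a given next generation matrix is then a spectral-radius calculation (a minimal NumPy sketch; the two-group matrix entries are illustrative assumptions):

```python
import numpy as np

# Illustrative next generation matrix for two interacting groups:
# entry [i, j] = expected infections in group i caused by one case in group j.
G = np.array([[1.2, 0.4],
              [0.3, 0.8]])

R0 = max(abs(np.linalg.eigvals(G)))   # R0 is the largest eigenvalue (spectral radius)
print(R0)                             # 1.4 for this matrix
```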
See also
Antieigenvalue theory
Eigenplane
Eigenvalue algorithm
Introduction to eigenstates
Jordan normal form
List of numerical analysis software
Min-max theorem
Nonlinear eigenproblem
Quadratic eigenvalue problem
Singular value
External links
What are Eigen Values? – non-technical introduction
from PhysLink.com's "Ask the Experts"
Eigen Values and Eigen Vectors Numerical Examples
– Tutorial and Interactive Program from
Revoledu.
Introduction to Eigen Vectors and Eigen Values
– lecture from Khan Academy
Hill, Roger. "λ – Eigenvalues". Sixty Symbols.
Brady Haran for the University of Nottingham. 
Theory
Hazewinkel, Michiel, ed., "Eigen value", Encyclopedia
of Mathematics, Springer, ISBN 978-1-55608-010-4 
Hazewinkel, Michiel, ed., "Eigen vector",
Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4 
Eigenvalue at PlanetMath.org.
Eigenvector – Wolfram MathWorld
Eigen Vector Examination working applet
Same Eigen Vector Examination as above in
a Flash demo with sound
Computation of Eigenvalues
Numerical solution of eigenvalue problems
Edited by Zhaojun Bai, James Demmel, Jack
Dongarra, Axel Ruhe, and Henk van der Vorst
Eigenvalues and Eigenvectors on the Ask Dr.
Math forums: [1], [2]
Online calculators
arndt-bruenner.de
bluebit.gr
wims.unice.fr
Demonstration applets
Java applet about eigenvectors in the real
plane
