In probability theory and mathematical physics,
a random matrix is a matrix-valued random
variable—that is, a matrix in which some
or all elements are random variables. Many
important properties of physical systems can
be represented mathematically as matrix problems.
For example, the thermal conductivity of a
lattice can be computed from the dynamical
matrix of the particle-particle interactions
within the lattice.
== Applications ==
=== Physics ===
In nuclear physics, random matrices were introduced
by Eugene Wigner to model the nuclei of heavy
atoms. He postulated that the spacings between
the lines in the spectrum of a heavy atom
nucleus should resemble the spacings between
the eigenvalues of a random matrix, and should
depend only on the symmetry class of the underlying
evolution. In solid-state physics, random
matrices model the behaviour of large disordered
Hamiltonians in the mean field approximation.
In quantum chaos, the Bohigas–Giannoni–Schmit
(BGS) conjecture asserts that the spectral
statistics of quantum systems whose classical
counterparts exhibit chaotic behaviour are
described by random matrix theory.
In quantum optics, transformations described by random
unitary matrices are crucial for demonstrating
the advantage of quantum over classical computation
(see, e.g., the boson sampling model). Moreover,
such random unitary transformations can be
directly implemented in an optical circuit,
by mapping their parameters to optical circuit
components (that is, beam splitters and phase
shifters).
Random matrix theory has also found
applications to the chiral Dirac operator
in quantum chromodynamics, quantum gravity
in two dimensions, mesoscopic physics, spin-transfer
torque, the fractional quantum Hall effect,
Anderson localization, quantum dots, and superconductors.
=== Mathematical statistics and numerical
analysis ===
In multivariate statistics, random matrices
were introduced by John Wishart for statistical
analysis of large samples; see estimation
of covariance matrices.
Significant results extend the classical scalar
Chernoff, Bernstein, and Hoeffding inequalities
to the largest eigenvalue of finite sums of
random Hermitian matrices, with corollaries
for the maximum singular values of rectangular
matrices.
In numerical analysis, random matrices have
been used since the work of John von Neumann
and Herman Goldstine to describe computation
errors in operations such as matrix multiplication.
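Such error growth can be observed directly. The following sketch (assuming NumPy; the size and seed are illustrative) computes the same product in single and double precision and uses the double-precision result as a reference:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# Same product in two precisions; the difference estimates the
# rounding error accumulated by the float32 computation.
C64 = A @ B
C32 = (A.astype(np.float32) @ B.astype(np.float32)).astype(np.float64)

rel_err = np.abs(C32 - C64).max() / np.abs(C64).max()
print(rel_err)  # small but nonzero
```

The error is small but strictly positive, in the spirit of the stochastic models of rounding error that von Neumann and Goldstine introduced.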
=== Number theory ===
In number theory, the distribution of zeros
of the Riemann zeta function (and other L-functions)
is modelled by the distribution of eigenvalues
of certain random matrices. The connection
was first discovered by Hugh Montgomery and
Freeman J. Dyson. It is connected to the Hilbert–Pólya
conjecture.
=== Theoretical neuroscience ===
In the field of theoretical neuroscience,
random matrices are increasingly used to model
the network of synaptic connections between
neurons in the brain. Dynamical models of
neuronal networks with random connectivity
matrix were shown to exhibit a phase transition
to chaos when the variance of the synaptic
weights crosses a critical value, in the limit
of infinite system size. Relating the statistical
properties of the spectrum of biologically
inspired random matrix models to the dynamical
behavior of randomly connected neural networks
is an intensive research topic.
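A classical instance of this transition (due to Sompolinsky, Crisanti, and Sommers) takes i.i.d. Gaussian weights of variance g²/n: by the circular law the spectral radius of the connectivity matrix concentrates near g, and the network dynamics become chaotic, in the infinite-size limit, once g crosses 1. A minimal sketch, assuming NumPy (the gains 0.5 and 1.5 are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

def spectral_radius(g):
    # Connectivity matrix with i.i.d. Gaussian weights of variance g^2 / n.
    J = rng.normal(scale=g / np.sqrt(n), size=(n, n))
    # By the circular law, the eigenvalues fill a disk of radius ~ g.
    return np.abs(np.linalg.eigvals(J)).max()

print(spectral_radius(0.5))  # below 1: the quiescent fixed point is stable
print(spectral_radius(1.5))  # above 1: linear instability, chaos in the large-n limit
```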
=== Optimal control ===
In optimal control theory, the evolution of
n state variables through time depends at
any time on their own values and on the values
of k control variables. With linear evolution,
matrices of coefficients appear in the state
equation (equation of evolution). In some
problems the values of the parameters in these
matrices are not known with certainty, in
which case there are random matrices in the
state equation and the problem is known as
one of stochastic control. A key result in
the case of linear-quadratic control with
stochastic matrices is that the certainty
equivalence principle does not apply: while
in the absence of multiplier uncertainty (that
is, with only additive uncertainty) the optimal
policy with a quadratic loss function coincides
with what would be decided if the uncertainty
were ignored, this no longer holds in the
presence of random coefficients in the state
equation.
== Gaussian ensembles ==
The most studied random matrix ensembles are
the Gaussian ensembles.
The Gaussian unitary ensemble GUE(n) is described
by the Gaussian measure with density
{\displaystyle {\frac {1}{Z_{{\text{GUE}}(n)}}}e^{-{\frac {n}{2}}\mathrm {tr} H^{2}}}
on the space of n × n Hermitian matrices
{\displaystyle H=(H_{ij})_{i,j=1}^{n}}. Here
{\displaystyle Z_{{\text{GUE}}(n)}=2^{n/2}\pi ^{n^{2}/2}}
is a normalization constant, chosen so that
the integral of the density is equal to one.
The term unitary refers to the fact that the
distribution is invariant under unitary conjugation.
The Gaussian unitary ensemble models Hamiltonians
lacking time-reversal symmetry.
The Gaussian orthogonal ensemble GOE(n) is
described by the Gaussian measure with density
{\displaystyle {\frac {1}{Z_{{\text{GOE}}(n)}}}e^{-{\frac {n}{4}}\mathrm {tr} H^{2}}}
on the space of n × n real symmetric matrices
{\displaystyle H=(H_{ij})_{i,j=1}^{n}}. Its distribution is invariant
under orthogonal conjugation, and it models
Hamiltonians with time-reversal symmetry.
The Gaussian symplectic ensemble GSE(n) is
described by the Gaussian measure with density
{\displaystyle {\frac {1}{Z_{{\text{GSE}}(n)}}}e^{-n\mathrm {tr} H^{2}}\,}
on the space of n × n Hermitian quaternionic
matrices, i.e., self-adjoint square matrices composed
of quaternions, {\displaystyle H=(H_{ij})_{i,j=1}^{n}}. Its distribution
is invariant under conjugation by the symplectic
group, and it models Hamiltonians with time-reversal
symmetry but no rotational symmetry.
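Expanding tr H² in these densities gives the entry variances: for GOE, 2/n on the diagonal and 1/n off it; for GUE, 1/n on the diagonal and 1/(2n) for each real and imaginary part of an off-diagonal entry. A minimal NumPy sketch of sampling both ensembles (the symmetrization below is one standard construction, not the only one):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# GOE(n): density ~ exp(-(n/4) tr H^2) on real symmetric matrices.
# Implied variances: 2/n on the diagonal, 1/n off the diagonal.
G = rng.standard_normal((n, n))
H_goe = (G + G.T) / np.sqrt(2 * n)

# GUE(n): density ~ exp(-(n/2) tr H^2) on Hermitian matrices.
# Implied variances: 1/n on the diagonal, 1/(2n) for each real and
# imaginary part of an off-diagonal entry.
Gc = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H_gue = (Gc + Gc.conj().T) / (2 * np.sqrt(n))

# Hermitian matrices have real eigenvalues; with this scaling the
# spectrum concentrates on [-2, 2] as n grows.
evals = np.linalg.eigvalsh(H_gue)
print(evals.min(), evals.max())
```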
The joint probability density for the eigenvalues
λ1,λ2,...,λn of GUE/GOE/GSE is given by
{\displaystyle {\frac {1}{Z_{\beta ,n}}}\prod _{k=1}^{n}e^{-{\frac {\beta n}{4}}\lambda _{k}^{2}}\prod _{i<j}\left|\lambda _{j}-\lambda _{i}\right|^{\beta }~,\quad (1)}
where the Dyson index, β = 1 for GOE, β
= 2 for GUE, and β = 4 for GSE, counts the
number of real components per matrix element;
Zβ,n is a normalization constant which can
be explicitly computed, see Selberg integral.
In the case of GUE (β = 2), the formula (1)
describes a determinantal point process. Eigenvalues
repel as the joint probability density has
a zero (of {\displaystyle \beta }th order) for
coinciding eigenvalues {\displaystyle \lambda _{j}=\lambda _{i}}.
The distribution of the largest eigenvalue
of GOE, GUE and Wishart matrices of finite
dimensions is also known explicitly.
=== Distribution of level spacings ===
From the ordered sequence of eigenvalues
{\displaystyle \lambda _{1}<\ldots <\lambda _{n}<\lambda _{n+1}<\ldots },
one defines the normalized spacings
{\displaystyle s=(\lambda _{n+1}-\lambda _{n})/\langle s\rangle },
where
{\displaystyle \langle s\rangle =\langle \lambda _{n+1}-\lambda _{n}\rangle }
is the mean spacing. The probability distribution
of spacings is approximately given by
{\displaystyle p_{1}(s)={\frac {\pi }{2}}s\,\mathrm {e} ^{-{\frac {\pi }{4}}s^{2}}}
for the orthogonal ensemble GOE ({\displaystyle \beta =1}),
{\displaystyle p_{2}(s)={\frac {32}{\pi ^{2}}}s^{2}\mathrm {e} ^{-{\frac {4}{\pi }}s^{2}}}
for the unitary ensemble GUE ({\displaystyle \beta =2}), and
{\displaystyle p_{4}(s)={\frac {2^{18}}{3^{6}\pi ^{3}}}s^{4}\mathrm {e} ^{-{\frac {64}{9\pi }}s^{2}}}
for the symplectic ensemble GSE ({\displaystyle \beta =4}).
The numerical constants are such that
{\displaystyle p_{\beta }(s)} is normalized:
{\displaystyle \int _{0}^{\infty }ds\,p_{\beta }(s)=1}
and the mean spacing is
{\displaystyle \int _{0}^{\infty }ds\,s\,p_{\beta }(s)=1,}
for {\displaystyle \beta =1,2,4}.
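Both constraints can be checked numerically. A small sketch, assuming NumPy, that integrates each surmise on a fine grid:

```python
import numpy as np

# Wigner surmises for the three Gaussian ensembles (beta = 1, 2, 4).
def p1(s):  # GOE
    return (np.pi / 2) * s * np.exp(-np.pi * s**2 / 4)

def p2(s):  # GUE
    return (32 / np.pi**2) * s**2 * np.exp(-4 * s**2 / np.pi)

def p4(s):  # GSE
    return (2**18 / (3**6 * np.pi**3)) * s**4 * np.exp(-64 * s**2 / (9 * np.pi))

def integrate(y, s):
    # Trapezoidal rule on a uniform grid.
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(s)))

s = np.linspace(0.0, 20.0, 100001)
for p in (p1, p2, p4):
    norm = integrate(p(s), s)      # total probability, should be 1
    mean = integrate(s * p(s), s)  # mean spacing, should be 1
    print(p.__name__, round(norm, 6), round(mean, 6))
```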
== Generalizations ==
Wigner matrices are random Hermitian matrices
{\displaystyle \textstyle H_{n}=(H_{n}(i,j))_{i,j=1}^{n}}
such that the entries
{\displaystyle \left\{H_{n}(i,j)~,\,1\leq i\leq j\leq n\right\}}
above the main diagonal are independent random
variables with zero mean, and
{\displaystyle \left\{H_{n}(i,j)~,\,1\leq i<j\leq n\right\}}
have identical second moments.
Invariant matrix ensembles are random Hermitian
matrices with a density on the space of real
symmetric / Hermitian / quaternionic Hermitian
matrices of the form
{\displaystyle \textstyle {\frac {1}{Z_{n}}}e^{-n\mathrm {tr} V(H)}~,}
where the function V is called the potential.
The Gaussian ensembles are the only common
special cases of these two classes of random
matrices.
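The Wigner class is strictly larger than the Gaussian ensembles; for example, random-sign entries already satisfy the definition (independent entries above the diagonal, zero mean, identical second moments). A minimal NumPy sketch (size and seed illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300

# Rademacher (+-1) entries: zero mean, identical second moments.
signs = rng.choice([-1.0, 1.0], size=(n, n))

# Symmetrize by mirroring the upper triangle, and scale by 1/sqrt(n)
# so the spectrum stays bounded as n grows.
upper = np.triu(signs)
H = (upper + upper.T - np.diag(np.diag(signs))) / np.sqrt(n)

evals = np.linalg.eigvalsh(H)
# Wigner's theorem: the empirical spectral measure approaches the
# semicircle law on [-2, 2], exactly as for Gaussian entries.
print(evals.min(), evals.max())
```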
== Spectral theory of random matrices ==
The spectral theory of random matrices studies
the distribution of the eigenvalues as the
size of the matrix goes to infinity.
=== Global regime ===
In the global regime, one is interested in
the distribution of linear statistics of the
form {\displaystyle N_{f,H}=n^{-1}\,\mathrm {tr} f(H)}.
==== Empirical spectral measure ====
The empirical spectral measure μH of H is
defined by
{\displaystyle \mu _{H}(A)={\frac {1}{n}}\,\#\left\{{\text{eigenvalues of }}H{\text{ in }}A\right\}=N_{1_{A},H},\quad A\subset \mathbb {R} .}
Usually, the limit of {\displaystyle \mu _{H}}
is a deterministic measure; this is a particular
case of self-averaging. The cumulative distribution
function of the limiting measure is called
the integrated density of states and is denoted
N(λ). If the integrated density of states
is differentiable, its derivative is called
the density of states and is denoted ρ(λ).
The limit of the empirical spectral measure
for Wigner matrices was described by Eugene
Wigner; see Wigner semicircle distribution
and Wigner surmise. As far as sample covariance
matrices are concerned, a theory was developed
by Marčenko and Pastur.
The limit of the empirical
spectral measure of invariant matrix ensembles
is described by a certain integral equation
which arises from potential theory.
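A quick numerical illustration of the Wigner semicircle limit, assuming NumPy (matrix size, seed, and the test interval are illustrative): the empirical spectral measure of an interval approaches the semicircle measure of that interval.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000

# GOE-type Wigner matrix scaled so the limiting measure is the
# semicircle law with density sqrt(4 - x^2) / (2 pi) on [-2, 2].
G = rng.standard_normal((n, n))
H = (G + G.T) / np.sqrt(2 * n)
evals = np.linalg.eigvalsh(H)

# Empirical spectral measure of A = [-1, 1] ...
empirical = float(((evals >= -1) & (evals <= 1)).mean())

# ... versus the semicircle measure of [-1, 1]:
# integral of sqrt(4 - x^2)/(2 pi) over [-1, 1] = 1/3 + sqrt(3)/(2 pi).
semicircle = 1 / 3 + np.sqrt(3) / (2 * np.pi)
print(empirical, semicircle)
```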
==== Fluctuations ====
For the linear statistics {\displaystyle N_{f,H}=n^{-1}\sum f(\lambda _{j})},
one is also interested in the fluctuations
about ∫ f(λ) dN(λ). For many classes of
random matrices, a central limit theorem of
the form
{\displaystyle {\frac {N_{f,H}-\int f(\lambda )\,dN(\lambda )}{\sigma _{f,n}}}{\overset {D}{\longrightarrow }}N(0,1)}
is known.
=== Local regime ===
In the local regime, one is interested in
the spacings between eigenvalues, and, more
generally, in the joint distribution of eigenvalues
in an interval of length of order 1/n. One
distinguishes between bulk statistics, pertaining
to intervals inside the support of the limiting
spectral measure, and edge statistics, pertaining
to intervals near the boundary of the support.
==== Bulk statistics ====
Formally, fix {\displaystyle \lambda _{0}}
in the interior of the support of
{\displaystyle N(\lambda )}. Then consider the point process
{\displaystyle \Xi (\lambda _{0})=\sum _{j}\delta {\Big (}{\cdot }-n\rho (\lambda _{0})(\lambda _{j}-\lambda _{0}){\Big )}~,}
where
{\displaystyle \lambda _{j}}
are the eigenvalues of the random matrix.
The point process {\displaystyle \Xi (\lambda _{0})}
captures the statistical properties of eigenvalues
in the vicinity of {\displaystyle \lambda _{0}}.
For the Gaussian ensembles, the limit of
{\displaystyle \Xi (\lambda _{0})}
is known; thus, for GUE it is a determinantal
point process with the kernel
{\displaystyle K(x,y)={\frac {\sin \pi (x-y)}{\pi (x-y)}}}
(the sine kernel).
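Because the limiting process is determinantal, its two-point correlation function is R2(x, y) = K(x, x)K(y, y) − K(x, y)², which vanishes as y → x and makes the level repulsion explicit. A minimal NumPy sketch (note that numpy.sinc(t) computes sin(πt)/(πt)):

```python
import numpy as np

# Sine kernel K(x, y) = sin(pi (x - y)) / (pi (x - y)).
def K(x, y):
    return np.sinc(x - y)  # np.sinc is the normalized sinc

# Two-point correlation of a determinantal point process with kernel K:
# R2(x, y) = K(x, x) K(y, y) - K(x, y)^2.
def R2(x, y):
    return K(x, x) * K(y, y) - K(x, y) ** 2

print(R2(0.0, 1e-3))  # ~0: nearby eigenvalues repel
print(R2(0.0, 10.5))  # ~1: distant eigenvalues decorrelate
```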
The universality principle postulates that
the limit of {\displaystyle \Xi (\lambda _{0})}
as {\displaystyle n\to \infty }
should depend only on the symmetry class of
the random matrix (and neither on the specific
model of random matrices nor on
{\displaystyle \lambda _{0}}). This has been
rigorously proved for several models of random
matrices, including invariant matrix ensembles
and Wigner matrices.
==== Edge statistics ====
See Tracy–Widom distribution.
== Other classes of random matrices ==
=== Wishart matrices ===
Wishart matrices are n × n random matrices
of the form H = X X*, where X is an n × m
random matrix (m ≥ n) with independent entries,
and X* is its conjugate transpose. In the
important special case considered by Wishart,
the entries of X are identically distributed
Gaussian random variables (either real or
complex).
The limit of the empirical spectral measure
of Wishart matrices was found by Vladimir
Marchenko and Leonid Pastur, see Marchenko–Pastur
distribution.
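A minimal NumPy sketch (sizes and seed illustrative; the 1/m normalization is added here so the limiting support is fixed): for X with i.i.d. standard Gaussian entries, the eigenvalues of XX*/m fill the Marchenko–Pastur support [(1 − √c)², (1 + √c)²], where c = n/m ≤ 1.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 100, 400           # aspect ratio c = n/m = 1/4
c = n / m

X = rng.standard_normal((n, m))
H = X @ X.T / m           # normalized real Wishart matrix

evals = np.linalg.eigvalsh(H)
lower, upper = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2
print(evals.min(), evals.max())  # near the Marchenko-Pastur edges
print(lower, upper)              # 0.25 and 2.25 for c = 1/4
```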
=== Random unitary matrices ===
See circular ensembles.
=== Non-Hermitian random matrices ===
See circular law.