In mathematics – specifically, in the theory
of stochastic processes – Doob's martingale
convergence theorems are a collection of results
on the long-time limits of supermartingales,
named after the American mathematician Joseph
L. Doob.
== Statement of the theorems ==
In the following, (Ω, F, F∗, P), F∗ = (Ft)t
≥ 0, will be a filtered probability space
and N : [0, +∞) × Ω → R will be a right-continuous
supermartingale with respect to the filtration
F∗; in other words, for all 0 ≤ s ≤ t
< +∞,
{\displaystyle N_{s}\geq \operatorname {E} {\big [}N_{t}\mid F_{s}{\big ]}.}
=== Doob's first martingale convergence theorem ===
Doob's first martingale convergence theorem
provides a sufficient condition for the random
variables Nt to have a limit as t → +∞
in a pointwise sense, i.e. for each ω in
the sample space Ω individually.
For t ≥ 0, let Nt− = max(−Nt, 0) and
suppose that
{\displaystyle \sup _{t>0}\operatorname {E} {\big [}N_{t}^{-}{\big ]}<+\infty .}
Then the pointwise limit
{\displaystyle N(\omega )=\lim _{t\to +\infty }N_{t}(\omega )}
exists and is finite for P-almost all ω ∈ Ω.
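As a concrete illustration, consider the discrete-time exponential martingale Mn = exp(Sn − n/2), where Sn is a sum of n independent standard normal variables. It is nonnegative, so sup E[Nt−] = 0 and the theorem applies; in fact each path converges to 0 almost surely. The following is a minimal simulation sketch (assuming NumPy; the path count, horizon, and seed are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed, purely for reproducibility

n_paths, n_steps = 5, 2000
# S_n = sum of i.i.d. N(0,1) increments; M_n = exp(S_n - n/2) is a
# nonnegative martingale, so sup_n E[M_n^-] = 0 < infinity.
S = np.cumsum(rng.standard_normal((n_paths, n_steps)), axis=1)
M = np.exp(S - np.arange(1, n_steps + 1) / 2)

# Pathwise limits exist and are finite (here each path tends to 0),
# as Doob's first convergence theorem guarantees.
print(M[:, -1])
```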
=== Doob's second martingale convergence theorem ===
It is important to note that the convergence
in Doob's first martingale convergence theorem
is pointwise, not uniform, and is unrelated
to convergence in mean square, or indeed in
any Lp space. In order to obtain convergence
in L1 (i.e., convergence in mean), one requires
uniform integrability of the random variables
Nt. By Markov's inequality, convergence in L1 implies convergence in probability and hence convergence in distribution.
The following are equivalent:
(Nt)t > 0 is uniformly integrable, i.e.
{\displaystyle \lim _{C\to \infty }\sup _{t>0}\int _{\{\omega \in \Omega \,\mid \,|N_{t}(\omega )|>C\}}\left|N_{t}(\omega )\right|\,\mathrm {d} \mathbf {P} (\omega )=0;}
there exists an integrable random variable
N ∈ L1(Ω, P; R) such that Nt → N as t
→ +∞ both P-almost surely and in L1(Ω,
P; R), i.e.
{\displaystyle \operatorname {E} \left[\left|N_{t}-N\right|\right]=\int _{\Omega }\left|N_{t}(\omega )-N(\omega )\right|\,\mathrm {d} \mathbf {P} (\omega )\to 0{\text{ as }}t\to +\infty .}
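The classical example showing why uniform integrability cannot be dropped is the product martingale Mn = X1 ⋯ Xn with independent Xi equal to 0 or 2 with probability 1/2 each: Mn → 0 almost surely, yet E[Mn] = 1 for every n, so Mn does not converge in L1. A small Monte Carlo sketch of this (NumPy assumed; sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

n_paths, n_steps = 500_000, 12
# X_i = 0 or 2 with probability 1/2 each, so E[X_i] = 1 and the running
# product M_n is a nonnegative martingale with E[M_n] = 1 for all n.
X = 2.0 * rng.integers(0, 2, size=(n_paths, n_steps))
M = np.cumprod(X, axis=1)

for n in (1, 4, 8, 12):
    col = M[:, n - 1]
    print(n, col.mean(), (col == 0).mean())
# The sample mean hovers near E[M_n] = 1, while the fraction of paths
# frozen at 0 climbs toward 1: almost-sure convergence to 0 without
# L^1 convergence, because (M_n) is not uniformly integrable.
```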
=== Corollary: convergence theorem for continuous martingales ===
Let M : [0, +∞) × Ω → R be a continuous
martingale such that
{\displaystyle \sup _{t>0}\operatorname {E} {\big [}{\big |}M_{t}{\big |}^{p}{\big ]}<+\infty }
for some p > 1. Then there exists a random
variable M ∈ Lp(Ω, P; R) such that Mt → M
as t → +∞ both P-almost surely and in
Lp(Ω, P; R).
== Discrete-time results ==
Similar results can be obtained for discrete-time
supermartingales and submartingales, the obvious
difference being that no continuity assumptions
are required. For example, the result above
becomes
Let M : N × Ω → R be a discrete-time martingale
such that
{\displaystyle \sup _{k\in \mathbf {N} }\operatorname {E} {\big [}{\big |}M_{k}{\big |}^{p}{\big ]}<+\infty }
for some p > 1. Then there exists a random variable M ∈ Lp(Ω, P; R) such that Mk → M as k → +∞ both P-almost surely and in Lp(Ω, P; R).
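A standard bounded example is the Pólya urn: starting from one red and one black ball, repeatedly draw a ball uniformly at random and return it together with one extra ball of the same colour. The fraction of red balls is a martingale with values in [0, 1], hence Lp-bounded for every p, and the discrete-time theorem gives an almost-sure and Lp limit. A simulation sketch (NumPy assumed; parameters arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

n_paths, n_steps = 8, 5000
red = np.ones(n_paths)          # one red ball initially ...
total = 2.0 * np.ones(n_paths)  # ... plus one black ball
for _ in range(n_steps):
    drew_red = rng.random(n_paths) < red / total
    red += drew_red             # replace the drawn ball and add a copy of it
    total += 1.0

# The red fraction is a [0,1]-valued martingale, so it is L^p-bounded
# for every p > 1; each path has settled near its (random) limit.
print(red / total)
```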
== Convergence of conditional expectations: Lévy's zero–one law ==
Doob's martingale convergence theorems imply
that conditional expectations also have a
convergence property.
Let (Ω, F, P) be a probability space and
let X be a random variable in L1. Let F∗
= (Fk)k∈N be any filtration of F, and define
F∞ to be the minimal σ-algebra generated
by (Fk)k∈N. Then
{\displaystyle \operatorname {E} {\big [}X\mid F_{k}{\big ]}\to \operatorname {E} {\big [}X\mid F_{\infty }{\big ]}{\text{ as }}k\to \infty }
both P-almost surely and in L1.
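For a finite-horizon illustration, take X to be the average of N fair coin flips and Fk the σ-algebra generated by the first k flips; then E[X | Fk] = (Sk + (N − k)/2)/N, which closes in on X as k grows. A quick sketch (NumPy assumed; N and the seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

N = 10_000
flips = rng.integers(0, 2, size=N)  # fair coin flips in {0, 1}
X = flips.mean()                    # X is F_infinity-measurable

# Given the first k flips, the remaining N - k flips average to 1/2:
S = np.cumsum(flips)
k = np.arange(1, N + 1)
cond_exp = (S + (N - k) / 2) / N    # E[X | F_k]

# Levy's upward theorem: E[X | F_k] -> X; the last value equals X exactly.
print(X, cond_exp[[9, 99, 999, N - 1]])
```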
This result is usually called Lévy's zero–one law or Lévy's upward theorem. The reason
for the name is that if A is an event in F∞,
then the theorem says that
{\displaystyle \mathbf {P} [A\mid F_{k}]\to \mathbf {1} _{A}}
almost surely, i.e., the limit of the probabilities
is 0 or 1. In plain language, if we are learning
gradually all the information that determines
the outcome of an event, then we will become
gradually certain what the outcome will be.
This sounds almost like a tautology, but the
result is still non-trivial. For instance,
it easily implies Kolmogorov's zero–one
law, since it says that for any tail event
A, we must have
{\displaystyle \mathbf {P} [A]=\mathbf {1} _{A}}
almost surely, hence
{\displaystyle \mathbf {P} [A]\in \{0,1\}}.
Similarly, we have Lévy's downward theorem:
Let (Ω, F, P) be a probability space and
let X be a random variable in L1. Let (Fk)k∈N
be any decreasing sequence of sub-sigma algebras
of F, and define F∞ to be the intersection of the Fk. Then
{\displaystyle \operatorname {E} {\big [}X\mid F_{k}{\big ]}\to \operatorname {E} {\big [}X\mid F_{\infty }{\big ]}{\text{ as }}k\to \infty }
both P-almost surely and in L1.
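The downward theorem is the key step in one standard proof of the strong law of large numbers: for i.i.d. integrable X1, X2, … with partial sums Sk, taking Fk = σ(Sk, Sk+1, …) gives E[X1 | Fk] = Sk/k, so the theorem yields Sk/k → E[X1] almost surely and in L1. A sketch of this convergence (NumPy assumed; the exponential distribution, sample size, and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)

n = 100_000
x = rng.exponential(scale=2.0, size=n)             # i.i.d. with E[X_1] = 2
running_mean = np.cumsum(x) / np.arange(1, n + 1)  # S_k / k = E[X_1 | F_k]

# The running means approach E[X_1] = 2, as the downward theorem predicts.
print(running_mean[[9, 999, 99_999]])
```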
== Doob's upcrossing inequality ==
The following result, called Doob's upcrossing inequality or, sometimes, Doob's upcrossing lemma, is used in proving Doob's martingale convergence theorems.
Hypothesis. Let N be a natural number. Let Xn, for n = 1, …, N, be a martingale with respect to a filtration Fn, for n = 1, …, N. Let a, b be two real numbers with a < b. Define the random variables Un, for n = 1, …, N, as follows: Un = m if and only if m is the largest integer such that there exist integers j1, k1, j2, k2, …, jm, km satisfying 1 ≤ j1 < k1 < j2 < k2 < ⋯ < jm < km ≤ n and such that, for each pair ji, ki with i = 1, …, m, the inequalities Xji < a and Xki > b are satisfied. Each Un is called the number of upcrossings with respect to the interval [a, b] for the martingale Xi, i = 1, …, n.
Conclusion.
{\displaystyle (b-a)\operatorname {E} [U_{n}]\leq \operatorname {E} [(X_{n}-a)^{-}]}.
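The inequality can be checked by Monte Carlo simulation. The sketch below (NumPy assumed; the walk, horizon, interval, and seed are arbitrary choices) counts upcrossings of [a, b] by a simple symmetric random walk, which is a martingale, and compares both sides:

```python
import numpy as np

rng = np.random.default_rng(5)

def upcrossings(path, a, b):
    """Count completed upcrossings of [a, b]: drop below a, then rise above b."""
    count, below = 0, False
    for x in path:
        if not below and x < a:
            below = True
        elif below and x > b:
            below = False
            count += 1
    return count

n_paths, n_steps, a, b = 20_000, 200, -1.0, 1.0
steps = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))
walks = np.cumsum(steps, axis=1)  # simple symmetric random walk, a martingale

U = np.array([upcrossings(w, a, b) for w in walks])
lhs = (b - a) * U.mean()                        # (b - a) E[U_n]
rhs = np.maximum(a - walks[:, -1], 0.0).mean()  # E[(X_n - a)^-]
print(lhs, "<=", rhs)  # holds up to Monte Carlo error
```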
== See also ==
Backwards martingale convergence theorem
