
ST3180
UNIVERSITY OF WARWICK
Third Year EXAMINATIONS: SUMMER 2020
Probability Theory
Time allowed: 2 hours
Approved calculators may be used.
Full marks may be obtained by correctly answering 3 complete questions. Candidates may attempt all questions. Marks will be awarded for the best 3 answers only.
1: (a) State and prove the first and second Borel-Cantelli lemmas. [6]
For the rest of this question you may use, without proof, Kolmogorov’s zero one law.
(b) Let (Xn)n≥1 be a sequence of independent random variables on probability space (Ω,F ,P) taking
values in {1, 2, 3, . . .}. Suppose that P(Xn ≥ i) = 1/i for each n and each i ∈ {1, 2, 3, . . .}.
(i) Calculate
P(Xn ≥ n^α i.o.)
for each fixed α > 0.
(ii) Show that the random variable
lim sup_{n→∞} log(Xn)/log(n)
is almost surely constant, and find the value of this constant.
[8]
(c) Suppose (Xn)n≥1 is a sequence of independent identically distributed random variables such that
P(Xn = 1) = P(Xn = −1) = 1/2 for each n ∈ N.
Let Sn = ∑_{k=1}^n Xk, and define
B− = {lim inf_{n→∞} Sn = −∞} and B+ = {lim sup_{n→∞} Sn = ∞}.
(i) Show that both B− and B+ belong to the tail σ-algebra of (Xn)n≥1, and that P(B+) = P(B−) ∈ {0, 1}.
(ii) Using the Borel-Cantelli lemmas, show that for each k ≥ 1
lim sup_{n→∞} (S_{n+k} − Sn) = k a.s.
[Hint: Consider An = {S_{n+k} − Sn = k}.]
(iii) Deduce that P(B+^c ∩ B−^c) = 0 and hence P(B+) = P(B−) = 1.
[6]
[TOTAL: 20]
Continued...
2: (a) Suppose that (Xn)n≥1 is a sequence of random variables with |Xn| ≤ Y for each n, where Y ∈ L1
(i.e. E[|Y |] <∞). Show that {Xn}n≥1 is uniformly integrable. [4]
(b) Suppose that (Xn)n≥1 is a sequence of random variables converging to zero in probability. Further, suppose that there exists a constant K ∈ (0,∞) such that E[|Xn|³] ≤ K for each n. Show
that Xn → 0 in L².
(You may use results about uniform integrability so long as you state them clearly.) [4]
(c) Suppose that (Xn)n≥1 is a sequence of independent normally distributed random variables, with
common mean µ and common variance 1. Let Zn = exp(X1 + . . .+Xn).
(i) Show that Zn → 0 in L2 if and only if µ < −1.
(Hint: If W is a standard normal random variable then E(e^{θW}) = e^{θ²/2}.)
(ii) Show that Zn → 0 in probability if µ < 0.
[5]
(d) Let (Xn)n≥2 be a sequence of independent random variables such that for each n ∈ {2, 3, . . .}
P(Xn = n) = P(Xn = −n) = 1/(2n log n), and P(Xn = 0) = 1 − 1/(n log n).
Let Sn = X2 + . . . + Xn. Show that Sn/n → 0 in probability but not almost surely.
(Hint: You may use without proof that ∑_n 1/(n log n) = ∞.) [7]
[TOTAL: 20]
Continued...
3: (a) Suppose (Xn)n≥1 is a sequence of random variables and X is a random variable all on probability
space (Ω,F ,P). Let µn, Fn be the law and distribution function of Xn respectively and µ, F be
the law and distribution function of X respectively.
(i) Suppose that µn → µ weakly; show that this implies Xn → X in distribution.
(ii) Suppose that Xn → X in distribution, and let (an)n≥1 be a sequence of real numbers such
that an → 0 as n→∞. Show that anXn converges in distribution to zero.
[8]
(b) Let µX be the law of a random variable X. Show that the distribution of X is symmetric, i.e.
µX(−∞, x] = µX [−x,∞) for all x ∈ R, if and only if the characteristic function of X is real. [4]
(c) In the following, identify if the sequence of random variables (Yn)n≥1 converges weakly and if so
identify the limit. (You should explain your reasoning, and state clearly any results from lectures
that you use).
(i) Yn = max{U1, . . . , Un} where U1, U2, . . . are independent Uniform(−1, 1) random variables.
(ii) Yn = n(1 − max{U1, . . . , Un}) where U1, U2, . . . are independent Uniform(−1, 1) random
variables.
(iii) Yn = √(3/n) (U1 + . . . + Un) where U1, U2, . . . are independent Uniform(−1, 1) random variables.
(iv) Yn = nmin{U1, . . . , Un} where U1, U2, . . . are independent Uniform[0, 1] random variables.
(v) Yn is an exponential random variable with mean λn > 0 for each n, i.e. P(Yn > y) = e^{−y/λn},
where λn → 0 as n → ∞.
[8]
[TOTAL: 20]
Continued...
4: (a) Let X, Y and Z be integrable random variables on a probability space (Ω,F ,P), and let
G ⊆ F be a sub-σ-algebra. Show that if Y and Z are both versions of E[X | G] then Y = Z
almost surely. [3]
(b) Suppose that (Xn)n≥1 is a random process adapted to (Fn)n≥1. Define the hitting time τA of
(Xn)n≥1 on a Borel set A. Show that τA is a stopping time for this filtration. [3]
(c) Suppose (Xn)n≥1 is a symmetric simple random walk on Z started from 0. Use the Optional
Stopping Theorem for Bounded Stopping Times to evaluate the probability that (Xn)n≥1 hits b > 0
before −a < 0. [4]
(d) Let (Xn)n≥1 be a sequence of independent random variables on probability space (Ω,F ,P), with
E[Xn] = 1 for each n. Let F0 = {∅,Ω}, Fn = σ(X1, . . . , Xn) for n ∈ N, and
M0 = 1, and Mn = ∏_{k=1}^n Xk for n = 1, 2, . . . .
(i) Show that (Mn)n≥0 is a martingale with respect to (Fn)n≥0.
(ii) Now suppose ϕ(t) = E[e^{tX1}] < ∞ for all t ∈ R. Let S0 = 0, Sn = ∑_{k=1}^n Xk, and
Yn = e^{tSn} / ϕ(t)^n, for n = 0, 1, 2, . . . .
Show, using part (i), that (Yn)n≥0 is a martingale with respect to (Fn)n≥0.
[4]
(e) Let (εn)n≥1 be independent random variables with
P(εn = +1) = p, P(εn = −1) = q, where 1/2 < p = 1− q < 1 .
Let Fn = σ(ε1, . . . , εn) and define Xn inductively by X0 = 1 and for n ≥ 1
Xn = Xn−1 + Vnεn .
Assume Vn is an Fn−1-measurable random variable for each n, and that Vn is strictly between 0
and Xn.
(i) Prove that if Xn > 0 then E[log(Xn+1/Xn) | Fn] = f(Vn+1/Xn) where
f(x) = p log(1 + x) + q log(1− x) .
(ii) Deduce that (log(Xn)− nα)n≥0 is a supermartingale, where
α = p log p+ q log q + log 2 .
[6]
[TOTAL: 20]
End.
Solutions
1: (a) State and prove the first and second Borel-Cantelli lemmas. [6]
(Borel-Cantelli Lemmas) Let (An)n≥1 be a sequence of events on probability space (Ω,F ,P). Then:
(BC1) If ∑_n P(An) < ∞ then P(lim sup An) = P(An i.o.) = 0;
(BC2) If the (An)n≥1 are also independent, then ∑_n P(An) = ∞ implies P(lim sup An) = P(An i.o.) = 1
(i.e. the converse of (BC1) holds under independence).
(1 mark)
For (BC1) we assume that ∑_n P(An) < ∞. Let Gn = ⋃_{m≥n} Am, which is a decreasing sequence of sets
such that Gn ↘ G where G = lim sup_{n→∞} An. Fix k ∈ N and observe
P(lim sup_{n→∞} An) = P(⋂_{n≥1} Gn) ≤ P(Gk) ≤ ∑_{n≥k} P(An),
where the final inequality holds by σ-subadditivity. By assumption the right hand side converges to zero as
k → ∞, which concludes the proof of (BC1).
(2 marks)
For (BC2) we assume that (An)n≥1 are independent and ∑_n P(An) = ∞. The main idea of the proof is to
take complements and use the standard bound 1 − x ≤ e^{−x} for x ∈ R. Fix m, r ∈ N; then by independence
P(⋂_{n≥m} An^c) ≤ P(⋂_{m≤n≤r} An^c) = ∏_{m≤n≤r} P(An^c) = ∏_{m≤n≤r} (1 − P(An)).
Applying the bound 1 − x ≤ e^{−x}, we find
P(⋂_{n≥m} An^c) ≤ e^{−∑_{m≤n≤r} P(An)} → 0 as r → ∞,
where the convergence follows by the assumption on ∑_n P(An). Since {An^c ev.} is a countable
union of null sets, it follows that P(An i.o.) = 1 − P(An^c ev.) = 1, as required.
(3 marks)
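As a quick numeric illustration of the dichotomy (a sketch, not part of the model answer; the choice of events here is my own): with independent Un ~ Uniform(0,1), the events {Un < 1/n} have divergent probability sum ∑ 1/n, so by (BC2) they occur infinitely often, while the events {Un < 1/n²} have summable probabilities, so by (BC1) only finitely many occur almost surely.

```python
import random

# Illustration of (BC1) vs (BC2), not part of the model answer:
# sum P(U_n < 1/n) = sum 1/n diverges, so those events keep occurring,
# while sum P(U_n < 1/n^2) < infinity, so only finitely many occur.
random.seed(0)
N = 100_000
divergent_hits = 0   # occurrences of {U_n < 1/n}
convergent_hits = 0  # occurrences of {U_n < 1/n^2}
for n in range(1, N + 1):
    u = random.random()
    if u < 1.0 / n:
        divergent_hits += 1
    if u < 1.0 / n ** 2:
        convergent_hits += 1
print(divergent_hits, convergent_hits)
```

With 100,000 trials one typically sees on the order of log N ≈ 12 hits of the divergent-sum events but only one or two of the summable ones.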
For the rest of this question you may use, without proof, Kolmogorov’s zero one law.
(b) Let (Xn)n≥1 be a sequence of independent random variables on probability space (Ω,F ,P) taking
values in {1, 2, 3, . . .}. Suppose that P(Xn ≥ i) = 1/i for each n and each i ∈ {1, 2, 3, . . .}.
(i) Calculate
P(Xn ≥ n^α i.o.)
for each fixed α > 0.
(ii) Show that the random variable
lim sup_{n→∞} log(Xn)/log(n)
is almost surely constant, and find the value of this constant.
[8]
Answer: (Unseen (seen similar example):)
(i) Apply (BC1) and (BC2) above with An = {Xn ≥ n^α}. Observe
∑_n P(An) = ∑_n 1/⌈n^α⌉, which is = ∞ if α ≤ 1 and < ∞ if α > 1.
So (since the Xn are independent) the Borel-Cantelli lemmas imply
P(Xn ≥ n^α i.o.) = 1 if α ≤ 1, and = 0 if α > 1.
(3 marks)
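The distribution in part (b) can be sampled as X = ⌊1/U⌋ for U uniform on (0,1), since P(⌊1/U⌋ ≥ i) = P(U ≤ 1/i) = 1/i. A quick empirical check of this tail identity (a sketch, not part of the model answer; the sampler is my own construction):

```python
import random

# P(floor(1/U) >= i) = P(1/U >= i) = P(U <= 1/i) = 1/i for i = 1, 2, ...
random.seed(1)
samples = [int(1.0 // random.random()) for _ in range(200_000)]

def empirical_tail(i):
    # fraction of samples with X >= i; should be close to 1/i
    return sum(1 for x in samples if x >= i) / len(samples)

for i in (1, 2, 5, 10):
    print(i, empirical_tail(i), 1.0 / i)
```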
(ii) Let Fn = σ(Xn); then (Fn)n≥1 is an independent sequence of σ-algebras. Let Ln = sup_{m≥n} log(Xm)/log(m)
and L = lim_{n→∞} Ln. Since Ln is σ(⋃_{m≥n} Fm)-measurable, L is measurable with respect to the tail σ-algebra T; in particular
L is almost surely constant by Kolmogorov's zero-one law. Observe {L ≥ 1} ⊇ {Ln ≥ 1 i.o.} ⊇ {Xn ≥ n i.o.}.
It follows from part (i) with α = 1 that P(L ≥ 1) = 1. Now fix k ∈ N:
{L > 1 + 2/k} ⊆ {L ≥ 1 + 2/k} ⊆ ⋂_m {Ln > 1 + 2/k − 1/m i.o.} ⊆ {Ln > 1 + 1/k i.o.}.
By definition {Ln > 1 + 1/k i.o.} ⊆ {Xn ≥ n^{1+1/k} i.o.}, and by part (i) P(Xn ≥ n^{1+1/k} i.o.) = 0. The
conclusion, L = 1 almost surely, follows, since
{L > 1} = ⋃_k {L > 1 + 1/k},
and a countable union of null sets is null. (5 marks)
(c) Suppose (Xn)n≥1 is a sequence of independent identically distributed random variables such that
P(Xn = 1) = P(Xn = −1) = 1/2 for each n ∈ N.
Let Sn = ∑_{k=1}^n Xk, and define
B− = {lim inf_{n→∞} Sn = −∞} and B+ = {lim sup_{n→∞} Sn = ∞}.
(i) Show that both B− and B+ belong to the tail σ-algebra of (Xn)n≥1, and that P(B+) = P(B−) ∈ {0, 1}.
(ii) Using the Borel-Cantelli lemmas, show that for each k ≥ 1
lim sup_{n→∞} (S_{n+k} − Sn) = k a.s.
[Hint: Consider An = {S_{n+k} − Sn = k}.]
(iii) Deduce that P(B+^c ∩ B−^c) = 0 and hence P(B+) = P(B−) = 1.
[6]
Answer: (Unseen (seen similar):)
(i) Notice that, for each N ∈ N fixed,
lim inf_{n→∞} Sn = −∞ if and only if lim inf_{n→∞} (XN + . . . + Xn) = −∞,
and hence B− is measurable with respect to σ(XN, XN+1, . . .) for each N ∈ N, and therefore B− is
in the tail σ-algebra. The same argument applies to B+. It follows from Kolmogorov's 0-1 law that
P(B−), P(B+) ∈ {0, 1}. Finally, by symmetry (Xn)n≥1 and (−Xn)n≥1 have the same distribution and
hence so do (Sn) and (−Sn). It follows that
P(B+) = P(lim sup(−Sn) = ∞) = P(−lim inf Sn = ∞) = P(B−).
(2 marks)
(ii) Fix k ≥ 1 and consider An as defined in the hint. Then
P(An) = P(Xn+1 = 1, . . . , Xn+k = 1) = 1/2^k for each n ∈ N.
It follows that ∑_{n≥1} P(A_{nk}) = ∞, and since the events {A_{nk}}n≥1 are independent, (BC2) implies
P(A_{nk} i.o.) = 1. Together with the fact that S_{n+k} − Sn ≤ k for each n, we conclude that
lim sup(S_{n+k} − Sn) = k almost surely. (2 marks)
(iii) Since
(B+^c ∩ B−^c) ⊆ {(Sn) bounded above and below} ⊆ ⋃_{k=1}^∞ {lim sup(S_{n+k} − Sn) = k}^c,
and a countable union of null sets is null, we have P(B+^c ∩ B−^c) = 0. Combining with part (i) we get
P(B+) = P(B−) = 1 as required. (2 marks)
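A simulation consistent with P(B+) = P(B−) = 1 (a sketch, not part of the model answer): a long ±1 walk should exit any fixed band around the origin, since its lim sup is +∞ and its lim inf is −∞ almost surely.

```python
import random

# A symmetric +/-1 random walk leaves any fixed band [-a, b]:
# consistent with lim sup S_n = +inf and lim inf S_n = -inf a.s.
random.seed(2)
s, max_s, min_s = 0, 0, 0
for _ in range(1_000_000):
    s += random.choice((-1, 1))
    max_s = max(max_s, s)
    min_s = min(min_s, s)
print(max_s, min_s)
```

Over a million steps the running maximum and minimum are typically of order √n ≈ 1000 in magnitude.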
[TOTAL: 20]
Continued...
2: (a) Suppose that (Xn)n≥1 is a sequence of random variables with |Xn| ≤ Y for each n, where Y ∈ L1
(i.e. E[|Y |] <∞). Show that {Xn}n≥1 is uniformly integrable. [4]
Answer: (Applied Bookwork) Observe that, by assumption, E(|Xn| 1_{|Xn|≥K}) ≤ E(|Y| 1_{|Xn|≥K}) ≤ E(|Y| 1_{|Y|≥K}),
and the right hand side tends to zero (uniformly in n), by dominated convergence, since Y ∈ L¹. For full
marks it must be made clear that the convergence is uniform in n. (4 marks)
(b) Suppose that (Xn)n≥1 is a sequence of random variables converging to zero in probability. Further, suppose that there exists a constant K ∈ (0,∞) such that E[|Xn|³] ≤ K for each n. Show
that Xn → 0 in L².
(You may use results about uniform integrability so long as you state them clearly.) [4]
Answer: (Applied Bookwork) We know from lectures that if (Yn)n≥1 is a sequence of random variables
bounded in L^p for some p > 1 then {Yn}n≥1 is uniformly integrable. Now let Yn = Xn² for each n; this forms a
sequence which is bounded in L^{3/2} and hence is U.I. Also from lectures, a U.I. sequence which converges
in probability converges in L¹. This implies (Xn)n≥1 converges to 0 in L². (Alternative proofs, for example
using the law of total expectation, are also acceptable.) (4 marks)
(c) Suppose that (Xn)n≥1 is a sequence of independent normally distributed random variables, with
common mean µ and common variance 1. Let Zn = exp(X1 + . . .+Xn).
(i) Show that Zn → 0 in L2 if and only if µ < −1.
(Hint: If W is a standard normal random variable then E(e^{θW}) = e^{θ²/2}.)
(ii) Show that Zn → 0 in probability if µ < 0.
[5]
Answer: (Unseen example) (i) Consider
E(Zn²) = E[exp(2(X1 + . . . + Xn))] = (E[exp(2(X1 − µ))])^n e^{2nµ},
and apply the hint: this equals e^{2n(1+µ)}, which tends to 0 if and only if µ < −1.
(ii) Either by considering E[(n⁻¹(X1 + . . . + Xn) − µ)²] = 1/n → 0 as n → ∞ (convergence in L² implies convergence in
probability), or by the Weak Law of Large Numbers, n⁻¹(X1 + . . . + Xn) → µ in probability as n → ∞. Hence
P[X1 + . . . + Xn ≥ n(µ+ε)] → 0 for each ε > 0. Fix ε > 0 such that µ + ε < 0 (possible since µ < 0). Now
P[Zn ≥ e^{n(µ+ε)}] → 0 and e^{n(µ+ε)} → 0 as n → ∞, which implies Zn → 0 in probability. (5 marks)
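The hint E(e^{θW}) = e^{θ²/2}, on which part (i) rests, can be sanity-checked numerically (a sketch, not part of the model answer; the quadrature scheme and truncation range are my own choices):

```python
import math

# Midpoint quadrature of E[exp(theta*W)] for W ~ N(0,1), truncated to
# [-12, 12] where the tail contribution is negligible. Should match
# exp(theta^2 / 2).
def normal_mgf(theta, lo=-12.0, hi=12.0, steps=200_000):
    h = (hi - lo) / steps
    total = 0.0
    for k in range(steps):
        w = lo + (k + 0.5) * h
        total += math.exp(theta * w) * math.exp(-w * w / 2) / math.sqrt(2 * math.pi)
    return total * h

for theta in (0.5, 1.0, 2.0):
    print(theta, normal_mgf(theta), math.exp(theta ** 2 / 2))
```

With θ = 2 this is the quantity appearing in E(Zn²) = e^{2nµ}(E e^{2W})^n = e^{2n(1+µ)}.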
(d) Let (Xn)n≥2 be a sequence of independent random variables such that for each n ∈ {2, 3, . . .}
P(Xn = n) = P(Xn = −n) = 1/(2n log n), and P(Xn = 0) = 1 − 1/(n log n).
Let Sn = X2 + . . . + Xn. Show that Sn/n → 0 in probability but not almost surely.
(Hint: You may use without proof that ∑_n 1/(n log n) = ∞.) [7]
Answer: (Unseen example) Fix ε > 0 and consider P(|Sn/n| > ε); since Sn has mean zero we can apply
Chebyshev's inequality. Hence consider the variance of Sn:
Var(Sn) = ∑_{k=2}^n Var(Xk) = ∑_{k=2}^n E(Xk²) = ∑_{k=2}^n k/log k ≤ (n−2)n/log n + 2/log 2,
where in the final inequality we used that x ↦ x/log x is increasing for x ≥ 3 > e. So by Chebyshev's inequality
P(|Sn/n| > ε) ≤ E(Sn²)/(ε²n²) ≤ 1/(ε² log n) + 2/(ε²n² log 2) → 0
as n → ∞, as required. For (lack of) almost sure convergence, consider An = {Xn = n} and Bn = {Xn = −n}; then
∑_{n≥2} P(An) = ∑_{n≥2} P(Bn) = ∑_{n≥2} 1/(2n log n) = ∞,
so by (BC2) P(An i.o.) = P(Bn i.o.) = 1, so in particular Sn/n cannot converge to 0 almost surely. (7 marks)
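The Chebyshev bound above can be checked deterministically (a sketch, not part of the model answer): the quantity Var(Sn)/n² = (∑_{k=2}^n k/log k)/n² should shrink as n grows, which is exactly what drives convergence in probability.

```python
import math

# Var(S_n) / n^2 = (sum_{k=2}^n k / log k) / n^2, which should decrease
# toward 0 (roughly like 1 / (2 log n)).
def chebyshev_bound(n):
    return sum(k / math.log(k) for k in range(2, n + 1)) / n ** 2

bounds = [chebyshev_bound(n) for n in (100, 1_000, 10_000)]
print(bounds)
```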
[TOTAL: 20]
Continued...
3: (a) Suppose (Xn)n≥1 is a sequence of random variables and X is a random variable all on probability
space (Ω,F ,P). Let µn, Fn be the law and distribution function of Xn respectively and µ, F be
the law and distribution function of X respectively.
(i) Suppose that µn → µ weakly; show that this implies Xn → X in distribution.
(ii) Suppose that Xn → X in distribution, and let (an)n≥1 be a sequence of real numbers such
that an → 0 as n→∞. Show that anXn converges in distribution to zero.
[8]
(i) (Applied Bookwork) Fix x ∈ R a continuity point of F, and δ > 0. Define hx ∈ Cb(R) by
hx(y) = 1 if y ≤ x; hx(y) = 1 − (y − x)/δ if y ∈ (x, x + δ); hx(y) = 0 if y ≥ x + δ.
Then by assumption µn(hx) → µ(hx) as n → ∞, and by construction of hx we have
Fn(x) ≤ µn(hx) and µ(hx) ≤ F(x + δ),
which implies that
lim sup_{n→∞} Fn(x) ≤ lim sup_{n→∞} µn(hx) = µ(hx) ≤ F(x + δ).
Now take the limit δ → 0 and use continuity of F at x to get lim sup_{n→∞} Fn(x) ≤ F(x). We use the
same trick to get the desired lower bound. That is, define gx ∈ Cb(R) by
gx(y) = 1 if y ≤ x − δ; gx(y) = 1 − (y − (x − δ))/δ if y ∈ (x − δ, x); gx(y) = 0 if y ≥ x.
Then by the same argument,
lim inf_{n→∞} Fn(x) ≥ lim inf_{n→∞} µn(gx) = µ(gx) ≥ F(x − δ).
Now take the limit δ → 0 and use continuity of F at x to get lim inf_{n→∞} Fn(x) ≥ F(x). It follows
that Fn(x) → F(x) at every continuity point of F, i.e. Xn → X in distribution. (4 marks)
(ii) (Unseen) Let Yn = anXn. Fix ε > 0 and u a continuity point of F such that F(u) > 1 − ε. If x > 0,
then for all n sufficiently large x/an > u and |Fn(u) − F(u)| < ε. It follows that
F_{Yn}(x) = P(anXn ≤ x) = P(Xn ≤ x/an) ≥ Fn(u) > 1 − 2ε.
Thus lim_{n→∞} F_{Yn}(x) = 1. By the same argument, for x < 0, lim_{n→∞} F_{Yn}(x) = 0, so anXn converges in distribution to zero.
(4 marks)
(b) Let µX be the law of a random variable X. Show that the distribution of X is symmetric, i.e.
µX(−∞, x] = µX [−x,∞) for all x ∈ R, if and only if the characteristic function of X is real. [4]
Answer: (Similar to exercise) The complex conjugate of the characteristic function satisfies
ϕX(t)* = E[e^{−itX}] = E[e^{it(−X)}] = ϕ_{−X}(t). So ϕX is real if and only if ϕX = ϕ_{−X}, and by Lévy's inversion formula (i.e. the one-to-one correspondence between probability measures on (R,B) and characteristic
functions) this holds if and only if X has the same law as −X, i.e. the distribution of X is symmetric. (4 marks)
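A numeric illustration of this dichotomy (a sketch, not part of the model answer; the quadrature scheme is my own): the characteristic function of the symmetric Uniform(−1,1) law is real (it equals sin(t)/t), while that of the asymmetric Uniform(0,1) law has a nonzero imaginary part.

```python
import cmath
import math

# Characteristic function of Uniform(lo, hi) by midpoint quadrature.
def char_fn(t, lo, hi, steps=100_000):
    h = (hi - lo) / steps
    total = 0.0 + 0.0j
    for k in range(steps):
        x = lo + (k + 0.5) * h
        total += cmath.exp(1j * t * x) / (hi - lo)
    return total * h

phi_sym = char_fn(2.0, -1.0, 1.0)   # symmetric: should be real, = sin(2)/2
phi_asym = char_fn(2.0, 0.0, 1.0)   # asymmetric: nonzero imaginary part
print(phi_sym, phi_asym)
```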
(c) In the following, identify if the sequence of random variables (Yn)n≥1 converges weakly and if so
identify the limit. (You should explain your reasoning, and state clearly any results from lectures
that you use).
(i) Yn = max{U1, . . . , Un} where U1, U2, . . . are independent Uniform(−1, 1) random variables.
(ii) Yn = n(1 − max{U1, . . . , Un}) where U1, U2, . . . are independent Uniform(−1, 1) random variables.
(iii) Yn = √(3/n) (U1 + . . . + Un) where U1, U2, . . . are independent Uniform(−1, 1) random variables.
(iv) Yn = n min{U1, . . . , Un} where U1, U2, . . . are independent Uniform[0, 1] random variables.
(v) Yn is an exponential random variable with mean λn > 0 for each n, i.e. P(Yn > y) = e^{−y/λn},
where λn → 0 as n → ∞.
[8]
(i) Weak convergence to the constant 1. Check (equivalently) convergence in distribution: for u ≥ 1, P(Yn ≤ u) = 1;
if u ≤ −1 then P(Yn ≤ u) = 0; and for u ∈ (−1, 1), P(Yn ≤ u) = ((1 + u)/2)^n → 0.
(ii) Weak convergence to an Exp(1/2) (rate 1/2, mean 2) random variable. Check (equivalently) convergence in distribution:
for u > 0, P(Yn ≤ u) = 1 − P(max{U1, . . . , Un} < 1 − u/n) = 1 − P(U1 < 1 − u/n)^n = 1 − ((2 − u/n)/2)^n → 1 − e^{−u/2}.
(iii) Weak convergence to a standard normal. This follows from the Central Limit Theorem, which states that
if X1, X2, . . . is a sequence of independent identically distributed random variables with mean µ and
variance σ², then
(1/√(nσ²)) (X1 + . . . + Xn − nµ) → N(0, 1) weakly.
Here µ = 0 and σ² = E[U1²] = 1/3.
(iv) Observe
P(n min{U1, . . . , Un} ≤ x) = 1 − P(U1 > x/n, . . . , Un > x/n) = 1 − (1 − x/n)^n → 1 − e^{−x}
as n → ∞. So Yn converges in distribution to an Exp(1) random variable.
(v) Fix f ∈ Cb(R); then
E f(Yn) = ∫₀^∞ f(x) (1/λn) e^{−x/λn} dx = ∫₀^∞ f(λn y) e^{−y} dy → f(0)
as n → ∞, where the final limit follows from the dominated convergence theorem, since the integrand
is dominated by |f(λn y) e^{−y}| ≤ (sup |f|) e^{−y} and converges pointwise to f(0) e^{−y}. Hence Yn converges weakly to
0.
(8 marks)
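Part (iv) lends itself to a quick Monte Carlo sanity check (a sketch, not part of the model answer; sample sizes are arbitrary): for large n, the empirical CDF of n·min{U1, . . . , Un} should be close to 1 − e^{−x}.

```python
import math
import random

# Empirical CDF of n * min(U_1, ..., U_n) vs the Exp(1) CDF 1 - exp(-x).
random.seed(3)
n, trials = 500, 5_000
samples = [n * min(random.random() for _ in range(n)) for _ in range(trials)]

def empirical_cdf(x):
    return sum(1 for s in samples if s <= x) / trials

for x in (0.5, 1.0, 2.0):
    print(x, empirical_cdf(x), 1 - math.exp(-x))
```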
[TOTAL: 20]
Continued...
4: (a) Let X, Y and Z be integrable random variables on a probability space (Ω,F ,P), and let
G ⊆ F be a sub-σ-algebra. Show that if Y and Z are both versions of E[X | G] then Y = Z
almost surely. [3]
Answer: (Applied bookwork:) Suppose Z1 and Z2 are both versions of E[X | G], and let Aε = {Z2 − Z1 > ε} ∈
G (because Z1 and Z2 are both G-measurable, so Z2 − Z1 is). Therefore, by the defining property of conditional
expectation, E[X ; Aε] = E[Z1 ; Aε] = E[Z2 ; Aε]. So by linearity of expectation, and construction of Aε, we have
0 = E[Z2 − Z1 ; Aε] ≥ εP(Aε), hence
P(Z2 − Z1 > ε) = 0. By the same argument (symmetry) P(Z1 − Z2 > ε) = 0, and since a union of null
events is null, P(|Z1 − Z2| > ε) = 0. Finally, by taking complements and applying monotone convergence
of measures,
P(Z1 = Z2) = P(⋂_{n≥1} {|Z1 − Z2| ≤ 1/n}) = lim_{n→∞} P(|Z1 − Z2| ≤ 1/n) = 1.
(3 marks)
(b) Suppose that (Xn)n≥1 is a random process adapted to (Fn)n≥1. Define the hitting time τA of
(Xn)n≥1 on a Borel set A. Show that τA is a stopping time for this filtration. [3]
Answer: (Applied bookwork:) (Fn)n≥1 is a filtration on (Ω,F ,P) if for all n ∈ N, Fn ⊆ Fn+1 ⊆ F. The
hitting time is defined by τA = inf{n ∈ N : Xn ∈ A}. τA is a stopping time if and only if {τA ≤ n} ∈ Fn
for each n ∈ N. This holds because
{τA ≤ n} = ⋃_{k≤n} {Xk ∈ A},
and since (Xn)n≥1 is adapted to (Fn)n≥1 we have {Xk ∈ A} ∈ Fk ⊆ Fn. Hence {τA ≤ n} ∈ Fn.
(3 marks)
(c) Suppose (Xn)n≥1 is a symmetric simple random walk on Z started from 0. Use the Optional
Stopping Theorem for Bounded Stopping Times to evaluate the probability that (Xn)n≥1 hits b > 0
before −a < 0. [4]
Answer: (Seen in Exercises:)
Let T = τ_{−a,b}; then T is not bounded, but it is almost surely finite by a Borel-Cantelli argument, and
Tk = T ∧ k is bounded for each k ∈ N. Hence, by dominated convergence, E[XT] = lim_{k→∞} E[X_{Tk}], and
E[X_{Tk}] = 0 by OST. Let p = P(XT = b) = 1 − P(XT = −a), so that 0 = E[XT] = pb − a(1 − p); solving
gives p = a/(a + b).
(4 marks)
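A Monte Carlo check of the hitting probability a/(a + b) (a sketch, not part of the model answer; the values a = 2, b = 3 are arbitrary, giving target probability 0.4):

```python
import random

# Estimate P(hit b before -a) for a symmetric simple random walk from 0;
# the optional stopping argument gives a / (a + b) = 2 / 5 = 0.4 here.
random.seed(4)
a, b, trials = 2, 3, 20_000
hits_b = 0
for _ in range(trials):
    s = 0
    while -a < s < b:
        s += random.choice((-1, 1))
    if s == b:
        hits_b += 1
estimate = hits_b / trials
print(estimate)
```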
(d) Let (Xn)n≥1 be a sequence of independent random variables on probability space (Ω,F ,P), with
E[Xn] = 1 for each n. Let F0 = {∅,Ω}, Fn = σ(X1, . . . , Xn) for n ∈ N, and
M0 = 1, and Mn = ∏_{k=1}^n Xk for n = 1, 2, . . . .
(i) Show that (Mn)n≥0 is a martingale with respect to (Fn)n≥0.
(ii) Now suppose ϕ(t) = E[e^{tX1}] < ∞ for all t ∈ R. Let S0 = 0, Sn = ∑_{k=1}^n Xk, and
Yn = e^{tSn} / ϕ(t)^n, for n = 0, 1, 2, . . . .
Show, using part (i), that (Yn)n≥0 is a martingale with respect to (Fn)n≥0.
[4]
Answer: (Unseen example (seen similar):)
(i) Fix n ∈ N. Firstly, by independence and integrability of the (Xn)n≥1,
E[|Mn|] = ∏_{k=1}^n E[|Xk|] < ∞,
and hence Mn is integrable. Since Mn depends only on X1, . . . , Xn it is Fn-measurable. Since Mn+1 =
Xn+1 Mn and Mn is Fn-measurable, we have
E[Mn+1 | Fn] = E[Xn+1 Mn | Fn] = Mn E[Xn+1 | Fn] = Mn E[Xn+1] = Mn a.s.,
where we used 'taking out what is known' in the second equality and independence in the third.
(ii) Using part (i): since e^{tS_{n+1}} = e^{tXn+1} e^{tSn}, we have
Y0 = 1, and Yn = ∏_{k=1}^n e^{tXk} / ϕ(t) for n = 1, 2, . . . .
Also E[e^{tXk}/ϕ(t)] = 1 and (e^{tXn}/ϕ(t))n≥1 is an independent sequence. Hence the result follows
from part (i) with e^{tXn}/ϕ(t) in place of Xn.
(4 marks)
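The product-martingale property E[Mn] = 1 can be checked by simulation (a sketch, not part of the model answer; the choice Xk ~ Uniform(0,2), which has mean 1, is my own):

```python
import random

# With independent X_k ~ Uniform(0, 2) (so E[X_k] = 1), the product
# M_n = X_1 * ... * X_n satisfies E[M_n] = 1 for every n, as the
# martingale property predicts.
random.seed(5)
n, trials = 5, 200_000
total = 0.0
for _ in range(trials):
    m = 1.0
    for _ in range(n):
        m *= random.uniform(0.0, 2.0)
    total += m
mean_Mn = total / trials
print(mean_Mn)
```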
(e) Let (εn)n≥1 be independent random variables with
P(εn = +1) = p, P(εn = −1) = q, where 1/2 < p = 1− q < 1 .
Let Fn = σ(ε1, . . . , εn) and define Xn inductively by X0 = 1 and for n ≥ 1
Xn = Xn−1 + Vnεn .
Assume Vn is an Fn−1-measurable random variable for each n, and that Vn is strictly between 0
and Xn.
(i) Prove that if Xn > 0 then E[log(Xn+1/Xn) | Fn] = f(Vn+1/Xn) where
f(x) = p log(1 + x) + q log(1− x) .
(ii) Deduce that (log(Xn)− nα)n≥0 is a supermartingale, where
α = p log p+ q log q + log 2 .
[6]
(i) Observe
log(Xn+1/Xn) = log((Xn + Vn+1)/Xn) 1_{εn+1=+1} + log((Xn − Vn+1)/Xn) 1_{εn+1=−1}.
Also, by independence, E[1_{εn+1=+1} | Fn] = E[1_{εn+1=+1}] = p and E[1_{εn+1=−1} | Fn] = q. Taking the expectation
conditional on Fn, and 'taking out what is known', we get
E[log(Xn+1/Xn) | Fn] = p log(1 + Vn+1/Xn) + q log(1 − Vn+1/Xn) = f(Vn+1/Xn).
(3 marks)
(ii) By part (i),
E[log(Xn+1) − (n + 1)α | Fn] = log(Xn) − nα + f(Vn+1/Xn) − α,
so we have to show f(Vn+1/Xn) − α ≤ 0. Notice f is differentiable, so to find the maximum of f on [0, 1)
consider the derivative: for each x ∈ (0, 1),
f′(x) = p/(1 + x) − q/(1 − x),
so f is increasing on (0, p − q] and decreasing on [p − q, 1), hence attains its maximum value
f(p − q) = p log(2p) + q log(2q) = α.
It follows that (log(Xn) − nα)n≥0 is a supermartingale.
(3 marks)
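The identity f(p − q) = α is pure arithmetic and can be verified deterministically (a sketch, not part of the model answer; the value p = 0.6 is an arbitrary choice in (1/2, 1)):

```python
import math

# f(x) = p*log(1+x) + q*log(1-x) attains its maximum over [0, 1) at
# x = p - q, with value alpha = p*log(p) + q*log(q) + log(2).
p, q = 0.6, 0.4

def f(x):
    return p * math.log(1 + x) + q * math.log(1 - x)

alpha = p * math.log(p) + q * math.log(q) + math.log(2)
grid_max = max(f(k / 1000) for k in range(0, 1000))  # grid search on [0, 0.999]
print(f(p - q), alpha, grid_max)
```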
[TOTAL: 20]
End.