© 2020 Imperial College London Page 1

MATH97083

BSc, MSci and MSc EXAMINATIONS (MATHEMATICS)

May-June 2020

This paper is also taken for the relevant examination for the

Associateship of the Royal College of Science

Applied Probability

SUBMIT YOUR ANSWERS AS SEPARATE PDFs TO THE RELEVANT DROPBOXES ON BLACKBOARD (ONE FOR EACH QUESTION), WITH COMPLETED COVERSHEETS GIVING YOUR CID NUMBER, THE QUESTION NUMBERS ANSWERED AND THE PAGE NUMBERS PER QUESTION.

Date: 13th May 2020

Time: 13.00 - 15.30 (BST)

Time Allowed: 2 Hours 30 Minutes

Upload Time Allowed: 30 Minutes

This paper has 5 Questions.

Candidates should start their solutions to each question on a new sheet of paper.

Each sheet of paper should have your CID, Question Number and Page Number at the top.

Use only one side of the paper.

Allow margins for marking.

Any required additional material(s) will be provided.

Credit will be given for all questions attempted.

Each question carries equal weight.

We denote the natural numbers including 0 by N0 = N ∪ {0}.

1. (a) Consider a discrete-time homogeneous Markov chain (Xn)n∈N0 with state space E = {1, 2, 3, 4, 5, 6, 7, 8} and transition matrix given by

P =
( 1/4  0    3/4  0    0    0  0  0 )
( 0    1    0    0    0    0  0  0 )
( 1/3  1/3  1/3  0    0    0  0  0 )
( 0    0    0    1/4  3/4  0  0  0 )
( 0    0    0    1/3  2/3  0  0  0 )
( 0    0    0    0    0    0  1  0 )
( 0    0    0    0    0    1  0  0 )
( 0    0    0    0    0    0  1  0 ).

(i) Draw the transition diagram. (2 marks)

(ii) Specify the communicating classes and determine whether they are transient, null

recurrent or positive recurrent. Please note that you need to justify your answers.

(3 marks)

(iii) Find all stationary distributions. (5 marks)

(b) Suppose the influenza virus exists in K different strains, where K ≥ 2. Each year, the virus

either stays the same with probability 1 − a, for a ∈ (0, 1), or mutates to any of the other

strains with equal probability. Suppose you can model the virus mutation by a discrete-time

homogeneous Markov chain.

(i) We denote the state space by E = {1, . . . , K}. State the corresponding 1-step transition

probabilities of the Markov chain. (3 marks)

(ii) You decide to group the states: you consider the modified state space Ẽ = {I, O}, where

I stands for the initial state and O for the collection of the other K − 1 states.

(1.) State the corresponding 1-step transition probabilities of the Markov chain on Ẽ.

(2 marks)

(2.) Show that, for n ∈ N,

p_II(n+1) = p_II(n) { 1 − a − a/(K−1) } + a/(K−1),

and state all results from the lectures which you apply in your proof. (5 marks)

(Total: 20 marks)

MATH96052/MATH97083 Applied Probability (2020) Page 2

2. (a) Let T be a nonnegative discrete random variable on a probability space (Ω,F ,P) and let

A ∈ F be an event with P(A) > 0. Show that

E(T|A) = Σ_{n=1}^∞ P(T ≥ n|A).

(2 marks)

(b) Consider a discrete-time homogeneous Markov chain on a countable state space E. Suppose

that the Markov chain is irreducible, has a stationary distribution denoted by π and all states

are recurrent.

(i) Show that π_i = µ_i^{−1} for all i ∈ E, where µ_i denotes the mean recurrence time for state i. (5 marks)

(ii) Show that all states are positive recurrent. (3 marks)

(c) Consider a homogeneous Markov chain (Xn)n∈{0,1,2,...,1000} with state space E = {1, 2, 3, 4}

and transition matrix given by

P =
( 0.5  0    0.5  0   )
( 0    0.5  0    0.5 )
( 0    0.5  0    0.5 )
( 0.5  0    0.5  0   ).

Answer the following questions about this Markov chain, justifying your answers.

(i) Is this Markov chain irreducible? (2 marks)

(ii) How many stationary distributions does this Markov chain have? Find all stationary

distributions. (3 marks)

(iii) Is this Markov chain time-reversible? (5 marks)

(Total: 20 marks)


3. (a) Consider two independent and Poisson distributed random variables X ∼ Poi(λ) and Y ∼ Poi(µ) with λ, µ > 0. Show that

X | X + Y = n ∼ Bin(n, λ/(λ+µ)), for n ∈ N.

You may state and use without proof the distribution of X + Y. (4 marks)

(b) Let (Nt)t≥0 denote a Poisson process with rate λ > 0. For t1 < t2, show that

N_{t1} | N_{t2} = n ∼ Bin(n, t1/t2), for n ∈ N.

(5 marks)

(c) Let (Nt)t≥0 denote a Poisson process with rate λ > 0. Let (Zi)i∈N denote independent and

identically distributed random variables with Bernoulli distribution with parameter p > 0.

Suppose that (Zi)i∈N and (Nt)t≥0 are independent. For t ≥ 0, define

X_t = Σ_{i=1}^{N_t} Z_i,  Y_t = N_t − X_t.

(i) Show that (Xt)t≥0 and (Yt)t≥0 are Poisson processes with rates λp and λ(1 − p),

respectively. (4 marks)

(ii) Also show that for any t ≥ 0, Xt and Yt are independent. (2 marks)

(d) Consider the Cramér-Lundberg model in insurance mathematics.

(i) State the model for the total claim amount. (3 marks)

(ii) Derive the cumulative distribution function of the total claim amount at a fixed point in

time. (2 marks)

(Total: 20 marks)


4. (a) Let N = (Nt)t≥0 and M = (Mt)t≥0 denote independent Poisson processes with rates λ > 0

and µ > 0, respectively. Let T = inf{t ≥ 0 : Mt = 1} denote the random time when the

first jump in M occurs. Determine P(N_{T/2} = k) for k ∈ N0 and name the distribution.

(7 marks)

(b) Consider a population of N individuals consisting at time 0 of one ‘infective’ and N − 1

‘susceptibles’. The process changes only by susceptibles becoming infective. If, at some time

t, there are i infectives, then, for each susceptible, there is a probability of i²λδ + o(δ) of

becoming infective in (t, t+ δ] for λ, δ > 0.

(i) If we consider the event of becoming an infective as a birth, what is the birth rate λi of

the process, when there are i infectives? (2 marks)

(ii) Let T denote the time to complete the epidemic, i.e. the first time when all N individuals

are infective.

(1.) Derive E(T ) (without using any type of generating functions). (2 marks)

(2.) Show that the Laplace transform of T is given by

E[e^{−sT}] = Π_{i=1}^{N−1} λ_i/(λ_i + s), for s ≥ 0.

(2 marks)

(3.) Derive E(T) by using the Laplace transform given in (2.). (2 marks)

You may leave your solution in (1.) and (3.) as a sum.

(c) Give an example of two non-identical continuous-time Markov chains which have the same

jump chain. (5 marks)

(Total: 20 marks)


Mastery question based on additional reading material.

5. Let X = (Xt)t≥0 be a continuous-time Markov chain on a countable state space E with generator

G. We assume that the Markov chain is minimal.

(a) (i) Give a definition for the state i ∈ E to be recurrent. (2 marks)

(ii) Give a definition for the state i ∈ E to be transient. (1 mark)

(b) Suppose E = {1, 2, 3, 4} and

G =
( −1   1/2   1/2   0   )
( 1/4  −1/2  0     1/4 )
( 1/6  0     −1/3  1/6 )
( 0    0     0     0   ).

For each state in the state space, decide whether it is null/positive recurrent or transient and

justify your answer.

(5 marks)

(c) Now let E = {1, 2} and

G =
( −1  1  )
( 2   −2 ).

(i) Find the stationary distribution of X and justify your answer. (4 marks)

(ii) Find the stationary distribution of the jump chain associated with X. (2 marks)

(iii) Formulate a general result which describes the relationship between the stationary

distributions of X and of its jump chain and show how the result applies in the context

of the Markov chain considered in this question (part (c)). (4 marks)

(iv) Find the transition matrix Pt = (pij(t))i,j∈E for all t ≥ 0. (2 marks)

Hint: You may use without a proof that G = ODO^{−1}, where

O =
( 1  −1/2 )
( 1  1    ),

D =
( 0  0  )
( 0  −3 ),

O^{−1} =
( 2/3   1/3 )
( −2/3  2/3 ).

(Total: 20 marks)


BSc and MSci EXAMINATIONS (MATHEMATICS)

May-June 2020

This paper is also taken for the relevant examination for the Associateship.

MATH96052/MATH97083

Applied Probability (Solutions)

Setter’s signature Checker’s signature Editor’s signature


© 2020 Imperial College London MATH96052/MATH97083 Page 1 of 10

[meth seen] 1. (a) (i) The transition diagram is given by: [Diagram: self-loops 1→1 (1/4), 2→2 (1), 3→3 (1/3), 4→4 (1/4), 5→5 (2/3); arrows 1→3 (3/4), 3→1 (1/3), 3→2 (1/3), 4→5 (3/4), 5→4 (1/3), 6→7 (1), 7→6 (1), 8→7 (1).] [2,A]

(ii) We have a finite state space which can be divided into five communicating classes: the classes T1 = {1, 3}, T2 = {8} are not closed and hence transient. [1,A] The classes C1 = {2}, C2 = {4, 5}, C3 = {6, 7} are finite and closed and hence positive recurrent. [2,A]

(iii) This Markov chain does not have a unique stationary distribution π since we have three closed (essential) communicating classes. For the transient states we know from the lectures that π_i = 0 for i = 1, 3, 8. [1,A] For the positive recurrent states, we solve π_2 · 1 = π_2,

(π_4, π_5) = (π_4, π_5) ( 1/4  3/4 )
                        ( 1/3  2/3 )

and

(π_6, π_7) = (π_6, π_7) ( 0  1 )
                        ( 1  0 ),

which leads to π_2 = π_2, π_5 = (9/4)π_4 and π_6 = π_7. [2,A]

There are various ways of representing all possible stationary distributions (only one is needed!), e.g.:

∗ π = (0, π_2, 0, π_4, (9/4)π_4, π_6, π_6, 0) for all π_2, π_4, π_6 ≥ 0 with π_2 + (13/4)π_4 + 2π_6 = 1,

∗ π = (0, π_2, 0, (4/9)π_5, π_5, π_6, π_6, 0) for all π_2, π_5, π_6 ≥ 0 with π_2 + (13/9)π_5 + 2π_6 = 1,

∗ π = a(0, 1, 0, 0, 0, 0, 0, 0) + b(0, 0, 0, 4/13, 9/13, 0, 0, 0) + c(0, 0, 0, 0, 0, 1/2, 1/2, 0) for all a, b, c ≥ 0 with a + b + c = 1. [2,A]

[unseen] (b) (i) We have p_ii = 1 − a for all i ∈ E and p_ij = b for all i ≠ j for some b ∈ (0, 1); hence 1 = Σ_{j=1}^K p_ij = 1 − a + (K − 1)b ⇔ b = a/(K−1), which implies that p_ij = a/(K−1) for all i, j ∈ E, i ≠ j. [3,B]

(ii) (1.) Here the transition matrix corresponding to the Markov chain on Ẽ = {I, O} is given by

P = ( 1 − a     a            )
    ( a/(K−1)   1 − a/(K−1)  ).

[2,B]

(2.) Let n ∈ N. From the Chapman-Kolmogorov equations, we have that P^{n+1} = P^n P, hence

p_II(n+1) = p_II(n) · p_II + p_IO(n) · p_OI
          = p_II(n) · (1 − a) + p_IO(n) · a/(K−1).

[2,C] Since P^n is a stochastic matrix, we have that p_II(n) + p_IO(n) = 1 ⇔ p_IO(n) = 1 − p_II(n). [2,B]


Hence

p_II(n+1) = p_II(n) · p_II + p_IO(n) · p_OI
          = p_II(n) · p_II + (1 − p_II(n)) · p_OI
          = p_II(n)(p_II − p_OI) + p_OI
          = p_II(n) ( 1 − a − a/(K−1) ) + a/(K−1).

[1,B] [2 marks for applying and mentioning the Chapman-Kolmogorov equations, 2 marks for using and mentioning that P^n is a stochastic matrix, 1 mark for deriving the final formula. If students made a mistake in the first part (1.) of the question and plugged in the wrong 1-step transition probabilities here, then they should not be penalised again, but can achieve full marks in this part of the question.]
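The recursion can be checked against direct matrix powering; a minimal sketch (not part of the model solution; K and a are arbitrary illustrative values), exact in rational arithmetic:

```python
from fractions import Fraction as F

K, a = 5, F(1, 3)           # illustrative values; any K >= 2, a in (0, 1) work
step = [[1 - a, a],
        [a / (K - 1), 1 - a / (K - 1)]]   # 1-step matrix on {I, O} from (1.)

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

pn = step                   # P^1
rec = step[0][0]            # p_II(1)
for n in range(1, 10):
    # recursion p_II(n+1) = p_II(n){1 - a - a/(K-1)} + a/(K-1)
    rec = rec * (1 - a - a / (K - 1)) + a / (K - 1)
    pn = mul(pn, step)      # P^{n+1}
    assert pn[0][0] == rec  # matches the (I, I) entry of the matrix power
```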

MATH96052/MATH97083 Applied Probability (Solutions) (2020) Page 3 of 10

[sim. seen] 2. (a) We use the definition of the conditional expectation and the fact that m = Σ_{n=0}^{m−1} 1 to deduce that

E(T|A) = Σ_{m=0}^∞ m P(T = m|A) = Σ_{m=0}^∞ Σ_{n=0}^{m−1} P(T = m|A) = Σ_{n=0}^∞ Σ_{m=n+1}^∞ P(T = m|A)
       = Σ_{n=0}^∞ P(T ≥ n+1|A) = Σ_{n=1}^∞ P(T ≥ n|A).

[2,A]
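The tail-sum identity is easy to sanity check on a toy distribution (an illustrative example, not part of the solution; the pmf values are arbitrary):

```python
from fractions import Fraction as F

# a toy conditional law P(T = m | A), m = 0..3
pmf = {0: F(1, 8), 1: F(1, 2), 2: F(1, 4), 3: F(1, 8)}

ET = sum(m * q for m, q in pmf.items())                      # E(T | A)
tail = sum(sum(q for m, q in pmf.items() if m >= n)          # sum of P(T >= n | A)
           for n in range(1, 4))
assert ET == tail == F(11, 8)
```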

[seen] (b) (i) Suppose that X0 ∼ π (i.e. P(X0 = i) = π_i for each i). Let T_i = inf{n ≥ 1 : Xn = i} denote the first hitting time for state i ∈ E. Using part (a), we get

π_j µ_j = P(X0 = j) E(T_j | X0 = j) = Σ_{n=1}^∞ P(T_j ≥ n | X0 = j) P(X0 = j) = Σ_{n=1}^∞ P(T_j ≥ n, X0 = j). [2,D]

Define a_n := P(Xm ≠ j, 0 ≤ m ≤ n), for n ∈ N0. Then P(T_j ≥ 1, X0 = j) = P(X0 = j) (since T_j ≥ 1 by definition) and for n ≥ 2

P(T_j ≥ n, X0 = j) = P(X0 = j, Xm ≠ j, 1 ≤ m ≤ n−1)
                   = P(Xm ≠ j, 1 ≤ m ≤ n−1) − P(Xm ≠ j, 0 ≤ m ≤ n−1)
                   = P(Xm ≠ j, 0 ≤ m ≤ n−2) − P(Xm ≠ j, 0 ≤ m ≤ n−1)
                   = a_{n−2} − a_{n−1},

where we have used the homogeneity of the chain and the law of total probability. [2,D]

Then, summing over n (noting that we are dealing with a telescoping sum) leads to

π_j µ_j = P(X0 = j) + Σ_{n=2}^∞ (a_{n−2} − a_{n−1})
        = P(X0 = j) + P(X0 ≠ j) − lim_{n→∞} a_n
        = 1 − lim_{n→∞} a_n.

However, lim_{n→∞} a_n = P(Xm ≠ j, ∀m) = 0 by the recurrence of j. That is, π_j^{−1} = µ_j if π_j > 0. [1,D]

(ii) To see that π_j > 0 for all j, suppose the converse; then

0 = π_j = Σ_{i∈E} π_i p_ij(n) ≥ π_i p_ij(n)

for all i, n, yielding that π_i = 0 whenever i → j. However, the chain is irreducible, so that π_i = 0 for each i, a contradiction to the fact that π is a stationary vector. Thus µ_i < ∞ and all states are positive recurrent. [3,C]

[meth seen] (c) (i) Yes, this Markov chain is irreducible since all states communicate with each other. [2,A]


(ii) Since the Markov chain is irreducible and the state space is finite, there is only one finite and closed communicating class, hence all states are positive recurrent. From a theorem in lectures we can then deduce that there is a unique stationary distribution. We observe that the transition matrix is doubly stochastic and hence we know from lectures/problem class that the uniform distribution π = (0.25, 0.25, 0.25, 0.25) is the unique stationary distribution of the Markov chain. [Solving π = πP for π_i ≥ 0, Σ_{i=1}^4 π_i = 1 to derive the stationary distribution is of course also a valid approach.] [1 mark for the correct answer and 2 marks for the justification.] [3,B]

(iii) From the lectures we know that the Markov chain is time-reversible if and only if the detailed-balance equations hold, i.e. π_i p_ij = π_j p_ji for all i, j ∈ {1, 2, 3, 4}. Here we have that π_i = π_j for all i, j, hence we need a symmetric transition matrix for the detailed-balance equations to hold. However, here we have e.g. p_14 = 0 ≠ p_41 = 0.5, hence the detailed-balance equations do not hold and hence the Markov chain is not time-reversible. [2 marks for "not time-reversible", 3 marks for justification] [5,B]
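Both claims in (ii) and (iii) can be confirmed numerically; a minimal sketch (not part of the model solution):

```python
# transition matrix from part (c)
P = [[0.5, 0,   0.5, 0],
     [0,   0.5, 0,   0.5],
     [0,   0.5, 0,   0.5],
     [0.5, 0,   0.5, 0]]
pi = [0.25] * 4

# the uniform distribution is stationary because P is doubly stochastic
piP = [sum(pi[i] * P[i][j] for i in range(4)) for j in range(4)]
assert piP == pi

# detailed balance fails, e.g. for the pair (1, 4): pi_1 p_14 != pi_4 p_41
assert pi[0] * P[0][3] != pi[3] * P[3][0]
```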


[unseen] 3. (a) We recall from lectures that X + Y ∼ Poi(λ + µ). [1,A]

Let n ∈ N and k ∈ {0, 1, . . . , n}; then, using Bayes' rule, we have

P(X = k | X + Y = n) = P(X + Y = n | X = k) P(X = k) / P(X + Y = n)
                     = P(Y = n − k | X = k) P(X = k) / P(X + Y = n)   (by independence of X and Y)
                     = P(Y = n − k) P(X = k) / P(X + Y = n)
                     = [µ^{n−k}/(n−k)! · e^{−µ}] · [λ^k/k! · e^{−λ}] · [(λ + µ)^n/n! · e^{−(µ+λ)}]^{−1}
                     = (n choose k) (λ/(λ+µ))^k (µ/(λ+µ))^{n−k},

which is indeed the probability mass function of a Bin(n, λ/(λ+µ)) random variable. [3,A]
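The identity can be double-checked numerically; a sketch (not part of the model solution; the parameter values λ = 2, µ = 3, n = 7 are arbitrary):

```python
from math import comb, exp, factorial

lam, mu, n = 2.0, 3.0, 7    # illustrative parameter values

def pois(k, rate):
    return rate**k / factorial(k) * exp(-rate)

for k in range(n + 1):
    # P(X = k | X + Y = n) computed directly from the Poisson pmfs ...
    cond = pois(k, lam) * pois(n - k, mu) / pois(n, lam + mu)
    # ... agrees with the Bin(n, lam/(lam+mu)) pmf
    binom = comb(n, k) * (lam / (lam + mu))**k * (mu / (lam + mu))**(n - k)
    assert abs(cond - binom) < 1e-12
```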

[unseen] (b) Since (0, t1], (t1, t2] are disjoint, the increments N_{t1} − N_0 = N_{t1} and N_{t2} − N_{t1} are independent. Also, we have that N_{t1} ∼ Poi(λt1), N_{t2} ∼ Poi(λt2), N_{t2} − N_{t1} ∼ Poi(λ(t2 − t1)). Hence we can apply part (a) with X = N_{t1} and Y = N_{t2} − N_{t1} to conclude that

N_{t1} | N_{t2} = n ∼ Bin(n, λt1/(λt1 + λ(t2 − t1)))  ⇔  N_{t1} | N_{t2} = n ∼ Bin(n, t1/t2).

[5,A]

[sim. seen] (c) We answer (i) and (ii) jointly: Both X = (Xt)t≥0 and Y = (Yt)t≥0 are stochastic processes. We need to check the defining properties of the Poisson process:

∗ X0 = 0 and Y0 = 0 almost surely, since N0 = 0 almost surely. [1,A]

∗ Independent increments: (Nt)t≥0 has independent increments. By construction (the Zi are all independent of each other and of (Nt)), (Xt)t≥0 and (Yt)t≥0 inherit the independent increments property from (Nt)t≥0.

∗ Stationary increments: (Nt)t≥0 has stationary increments. By construction (the Zi are all independent of each other and of (Nt) and identically distributed), (Xt)t≥0 and (Yt)t≥0 inherit the stationary increments property from (Nt)t≥0. [1,A]

∗ Let n, m ∈ N0, t ≥ 0; then

P(Xt = m, Yt = n) = P(Xt = m, Nt = m + n)
                  = P(Xt = m | Nt = m + n) P(Nt = m + n)
                  = (m+n choose m) p^m (1 − p)^n · (λt)^{m+n}/(m+n)! · e^{−λt}
                  = (λtp)^m/m! · e^{−λtp} · (λt(1−p))^n/n! · e^{−λt(1−p)} = P(Xt = m) P(Yt = n).

So we found that Xt and Yt are independent with Xt ∼ Poi(λtp) and Yt ∼ Poi(λt(1 − p)), and X and Y are Poisson processes with rates λp and λ(1 − p), respectively. [2,A; 2,B] [The final 4 marks are split as follows: 2 marks for the marginal distributions (possibly derived separately), 2 marks to show the independence.]
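The pmf factorisation in the last step can be verified exactly over a grid of (m, n); a sketch (not part of the model solution; λ, p, t are arbitrary illustrative values):

```python
from math import comb, exp, factorial

lam, p, t = 1.5, 0.3, 2.0   # illustrative parameter values

def pois(k, rate):
    return rate**k / factorial(k) * exp(-rate)

for m in range(6):
    for n in range(6):
        # P(X_t = m, Y_t = n), conditioning on N_t = m + n as in the solution
        joint = comb(m + n, m) * p**m * (1 - p)**n * pois(m + n, lam * t)
        # factorises into independent Poi(lam*p*t) and Poi(lam*(1-p)*t) terms
        assert abs(joint - pois(m, lam * p * t) * pois(n, lam * (1 - p) * t)) < 1e-12
```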


[seen] (d) (i) The total claim amount in the Cramér-Lundberg model is defined as a compound Poisson process (St)t≥0 satisfying

St = Σ_{i=1}^{Nt} Yi if Nt > 0, and St = 0 if Nt = 0,

where N = (Nt)t≥0 is a Poisson process of rate λ > 0 and the claim size process is denoted by Y = (Yi)i∈N, where the Yi denote positive i.i.d. random variables with finite mean and finite variance. Also, N and Y are assumed to be independent of each other. [Any alternative correct definition of the (compound) Poisson process should be awarded full marks.] [3,A]

(ii) Let t ≥ 0. Then F_{St}(x) = P(St ≤ x) = 0 for x < 0. Let x ≥ 0; then

F_{St}(x) = P(St ≤ x) = Σ_{n=0}^∞ P(St ≤ x, Nt = n) = Σ_{n=0}^∞ P(Σ_{i=1}^n Yi ≤ x, Nt = n)
          = Σ_{n=0}^∞ P(Σ_{i=1}^n Yi ≤ x) P(Nt = n) = Σ_{n=0}^∞ e^{−λt} (λt)^n/n! · P(Σ_{i=1}^n Yi ≤ x).

[2,A]


[unseen] 4. (a) Since N and M are independent, we deduce that N and T are independent. From lectures we know that T ∼ Exp(µ). [2,D]

Then, using the continuous version of the law of total probability, we have for k ∈ N0:

P(N_{T/2} = k) = ∫_{−∞}^∞ P(N_{T/2} = k | T = t) f_T(t) dt = ∫_0^∞ P(N_{t/2} = k | T = t) µe^{−µt} dt
               = ∫_0^∞ P(N_{t/2} = k) µe^{−µt} dt   (N, T independent)
               = ∫_0^∞ (λt/2)^k/k! · e^{−λt/2} µe^{−µt} dt.

[2,D]

We change the variables z = (λ/2 + µ)t and use the fact that ∫_0^∞ z^k e^{−z} dz = Γ(k + 1) = k! to deduce that

P(N_{T/2} = k) = (µ/k!) (λ/2)^k (λ/2 + µ)^{−(k+1)} ∫_0^∞ z^k e^{−z} dz
               = µ (λ/2)^k (λ/2 + µ)^{−(k+1)}
               = (λ/(λ + 2µ))^k (2µ/(λ + 2µ)).

[2,D]

Hence N_{T/2} has a geometric distribution with parameter 2µ/(λ + 2µ). [1,D]
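The closed form can be checked against a crude numerical evaluation of the defining integral; a sketch (not part of the model solution; λ = 2, µ = 1.5 are arbitrary, and the midpoint rule with the stated step/cutoff is only accurate to roughly 1e-6 here):

```python
from math import exp, factorial

lam, mu = 2.0, 1.5          # illustrative rates

def pmf(k):
    # claimed geometric pmf: (lam/(lam+2mu))^k * 2mu/(lam+2mu)
    return (lam / (lam + 2 * mu))**k * (2 * mu / (lam + 2 * mu))

def integral(k, h=1e-3, T=50.0):
    # midpoint Riemann sum for the integral of (lam t/2)^k/k! e^{-lam t/2} mu e^{-mu t}
    total, t = 0.0, h / 2
    while t < T:
        total += (lam * t / 2)**k / factorial(k) * exp(-lam * t / 2) * mu * exp(-mu * t) * h
        t += h
    return total

for k in range(5):
    assert abs(pmf(k) - integral(k)) < 1e-4
```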

[sim. seen] (b) (i) If there are i infectives, then there are N − i susceptibles and hence the birth rate is given by λ_i = (N − i)i²λ if i = 1, . . . , N − 1 and 0 otherwise. [2,B]

(ii) (1.) Let X_i be the time spent in state i (where i denotes the number of infectives); then the time to complete the epidemic is

T = X_1 + · · · + X_{N−1},

where the X_i are independent of each other with X_i ∼ Exp(λ_i). By the linearity of the expectation,

E[T] = Σ_{i=1}^{N−1} E(X_i) = Σ_{i=1}^{N−1} 1/λ_i = Σ_{i=1}^{N−1} 1/((N − i)i²λ).

[2,C]

(2.) Let s ≥ 0. Using the notation from (1.), the Laplace transform of X_i is given by

E(e^{−sX_i}) = ∫_0^∞ e^{−sx} λ_i e^{−λ_i x} dx = λ_i ∫_0^∞ e^{−(s+λ_i)x} dx = λ_i/(λ_i + s).

Hence

E[e^{−sT}] = E[e^{−s Σ_{i=1}^{N−1} X_i}] = Π_{i=1}^{N−1} E[e^{−sX_i}] = Π_{i=1}^{N−1} λ_i/(λ_i + s),

where the second equality uses the independence of the X_i. [2,C]


(3.) To compute the expectation, we calculate the logarithm of the Laplace transform and use the fact that

E[T] = − d/ds [ log{ E[e^{−sT}] } ] |_{s=0}.

Here log{ E[e^{−sT}] } = Σ_{i=1}^{N−1} log( λ_i/(λ_i + s) ) [1,C] and

d/ds log{ E[e^{−sT}] } = Σ_{i=1}^{N−1} (λ_i + s)/λ_i · [ (λ_i + s) · 0 − λ_i · 1 ] / (λ_i + s)² = − Σ_{i=1}^{N−1} 1/(λ_i + s).

Hence,

E[T] = Σ_{i=1}^{N−1} 1/λ_i = Σ_{i=1}^{N−1} 1/((N − i)i²λ).

[1,C]
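That (1.) and (3.) agree can be confirmed by differentiating the log-Laplace transform numerically; a sketch (not part of the model solution; N = 6 and λ = 0.8 are arbitrary, and the rates assume the i²λ reading of the question):

```python
from math import log

N, lam = 6, 0.8
rates = [(N - i) * i**2 * lam for i in range(1, N)]   # lambda_i = (N - i) i^2 lam

def log_laplace(s):
    # log E[e^{-sT}] = sum_i log(lambda_i / (lambda_i + s))
    return sum(log(r / (r + s)) for r in rates)

ET = sum(1.0 / r for r in rates)    # E(T) from part (1.)

# E(T) = -d/ds log E[e^{-sT}] at s = 0, via a central difference
h = 1e-6
assert abs(-(log_laplace(h) - log_laplace(-h)) / (2 * h) - ET) < 1e-5
```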

[meth seen] (c) Many examples are possible: For instance, take a non-explosive birth process starting at 0 with rates λ_i > 0 for i ∈ N0 and a Poisson process of rate λ > 0. Both processes are continuous-time Markov chains on N0. [3,D]

Then the jump chain for both processes, denoted by (Zn)n∈N0, is given by Zn = n for n ∈ N0 and its transition probabilities are given by p^Z_{ij} = 1 when j = i + 1 and 0 otherwise (for i, j ∈ N0), since both continuous-time Markov chains are non-decreasing processes which can only jump up by one step at a time. [1,D; 1,C]

Alternatively, one could consider Poisson processes of different rates etc.


Mastery question based on additional reading material. [unseen]

5. (a) (i) We say that state i ∈ E is recurrent if P({t ≥ 0 : Xt = i} is unbounded | X0 = i) = 1. [2]

(ii) We say that state i ∈ E is transient if P({t ≥ 0 : Xt = i} is unbounded | X0 = i) = 0. [1]

(b) We derive the transition matrix of the corresponding jump chain:

P =
( 0    1/2  1/2  0   )
( 1/2  0    0    1/2 )
( 1/2  0    0    1/2 )
( 0    0    0    1   ).

We observe that the jump chain has two communicating classes: T = {1, 2, 3}, C = {4}. T is not closed, hence all states in T are transient. C is finite and closed, hence state 4 is (positive) recurrent. [3]

We know that if a state is recurrent (transient) for the jump chain, then it is recurrent (transient) for the continuous-time Markov chain. So we conclude that states 1, 2, 3 are transient and state 4 is recurrent for the continuous-time Markov chain. Moreover, since g44 = 0, we have that state 4 is positive recurrent. [2]

(c) (i) We denote the stationary distribution of X by λ = (λ1, λ2). We solve λG = 0 for λ1, λ2 ≥ 0, λ1 + λ2 = 1. Here we have −λ1 + 2λ2 = 0 ⇔ λ1 = 2λ2. Then 1 = λ1 + λ2 ⇔ 1 = 3λ2 ⇔ λ1 = 2/3, λ2 = 1/3. [2]

Since G is irreducible and recurrent, we get that λG = 0 ⇔ λPt = λ for all t ≥ 0, where (Pt)t≥0 denotes the matrix of transition probabilities associated with X. [2]

(ii) The transition matrix of the associated jump chain is given by

P = ( 0  1 )
    ( 1  0 ).

We denote the stationary distribution of the jump chain by π = (π1, π2). We solve πP = π for π1, π2 ≥ 0, π1 + π2 = 1. Here we have π1 = π2. Then 1 = π1 + π2 ⇔ π1 = 1/2, π2 = 1/2. [2]

(iii) Let G be a generator and let P denote the transition matrix of the associated jump chain. Let λ be a measure. Then λ satisfies λG = 0 if and only if µP = µ, where µ_i = λ_i · (−g_ii) for all i ∈ E. [3]

In our case, we can obtain µ as follows: µ1 = λ1 · (−g11) = (2/3) · 1 = 2/3, µ2 = λ2 · (−g22) = (1/3) · 2 = 2/3, i.e. µ1 = µ2. When we normalise, we obtain that π = (0.5, 0.5) is the stationary distribution of the jump chain. [1]
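The relation µ_i = λ_i · (−g_ii) is a one-liner to check for this chain; a sketch (not part of the model solution):

```python
lam = [2/3, 1/3]            # stationary distribution of X from part (i)
g_diag = [-1.0, -2.0]       # diagonal of G

mu = [l * (-g) for l, g in zip(lam, g_diag)]   # mu_i = lambda_i * (-g_ii)
pi = [m / sum(mu) for m in mu]                 # normalise to a distribution
assert pi == [0.5, 0.5]     # the jump-chain stationary distribution from (ii)
```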

(iv) From lectures, we know that, for t ≥ 0, we have Pt = e^{tG} = Σ_{n=0}^∞ (t^n/n!) O D^n O^{−1} = O ( Σ_{n=0}^∞ (t^n/n!) D^n ) O^{−1}. Hence

Pt = O ( e^{t·0}  0       ) O^{−1} = ( 2/3 + (1/3)e^{−3t}   1/3 − (1/3)e^{−3t} )
       ( 0        e^{−3t} )           ( 2/3 − (2/3)e^{−3t}   1/3 + (2/3)e^{−3t} ).

[2]
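The matrix exponential can be sanity checked: P_0 should be the identity, every row of P_t should sum to 1, and the derivative at 0 should recover G. A sketch (not part of the model solution; the derivative is checked by a simple finite difference):

```python
from math import exp

G = [[-1.0, 1.0], [2.0, -2.0]]

def Pt(t):
    e = exp(-3 * t)
    return [[2/3 + e/3,     1/3 - e/3],
            [2/3 - 2*e/3,   1/3 + 2*e/3]]

# P_0 is the identity and every row of P_t sums to 1
for i in range(2):
    for j in range(2):
        assert abs(Pt(0.0)[i][j] - (1.0 if i == j else 0.0)) < 1e-12
for t in (0.1, 1.0, 5.0):
    for row in Pt(t):
        assert abs(sum(row) - 1.0) < 1e-12

# forward equation at 0: (P_h - I)/h -> G as h -> 0
h = 1e-7
for i in range(2):
    for j in range(2):
        I_ij = 1.0 if i == j else 0.0
        assert abs((Pt(h)[i][j] - I_ij) / h - G[i][j]) < 1e-4
```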


ExamModuleCode / Question / Comments for Students

MATH97083 1

Overall students did well on this question. The two common mistakes were: first, failure to represent all possible stationary distributions correctly in (iii) by omitting to mention the non-negativity constraints. Second, not justifying each step of the derivation in (b)(ii)(2.) properly.

MATH97083 2

A lot of students struggled with the proof of (b)(ii), and a handful of students struggled with (b)(i). This was surprising since (b)(i) has a much more involved proof than (b)(ii).

MATH97083 3

Parts (a) and (b) were done well; some students unnecessarily repeated calculations in part (b) which were identical to those in part (a). Many students addressed part (c) using generating functions and other expectations, which is not the correct approach. Part (d) was bookwork and so most students answered these questions easily, although some did not state enough technical detail about the model.

MATH97083 4

There were some mixed efforts in trying to solve Q4, which was possibly the hardest question in

this exam. Very few students gave a detailed derivation with sufficient justifications of all the

required steps in 4a). Q4b) was generally done better, but many students lost marks for using

sloppy notation and/or insufficient justifications. The results in Q4c) were mixed, but it was nice

to see that many students got the general idea right.

MATH97083 5

The Mastery question was generally done well. Some students did not provide sufficient

justifications for the results in 5b) and 5ci). Some struggled with some more detailed

computations in 5ciii) and 5civ).

