seen ⇓
1. (a) 1. Generate U_i ∼ U(0, 1);
       2. Set X_i := F_X^{−1}(U_i).
       3. Then X ∼ F_X.
3 A
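The steps above can be sketched in Python (a minimal illustration; the function name and the example target F_X(x) = x² on (0, 1) are my own choices, not from the question):

```python
import math
import random

def inverse_cdf_sample(inv_cdf, n, seed=0):
    """Steps 1-2 above: draw U_i ~ U(0, 1) and set X_i = F_X^{-1}(U_i)."""
    rng = random.Random(seed)
    return [inv_cdf(rng.random()) for _ in range(n)]

# Illustrative target: F_X(x) = x^2 on (0, 1), so F_X^{-1}(u) = sqrt(u).
xs = inverse_cdf_sample(math.sqrt, 100_000)
print(sum(xs) / len(xs))  # E[X] = 2/3 for this F_X
```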
(b) Want X ∼ Exp(λ). So,
        F_X(x) = 1 − e^{−λx}.
    Setting U = 1 − e^{−λX} ⇒ X = −λ^{−1} log(1 − U) ⇒ F_X^{−1}(U) = −λ^{−1} log(1 − U).
    Algorithm (Exponential)
    1. Generate U = u ∼ U(0, 1) ⇒ 1 − U ∼ U(0, 1).
    2. Set X = −λ^{−1} log(u).
    3. Then X ∼ Exp(λ).
    Replace **** with -log(u)/a.
4 A
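A sketch of this algorithm in Python (illustrative; it uses 1 − U rather than U inside the log to keep the argument strictly positive, which is valid since 1 − U ∼ U(0, 1)):

```python
import math
import random

def rexp(n, lam, seed=0):
    """Exponential sampler by inversion: X = -log(u)/lam with u ~ U(0, 1)."""
    rng = random.Random(seed)
    # 1 - rng.random() lies in (0, 1], avoiding log(0); 1 - U ~ U(0, 1).
    return [-math.log(1.0 - rng.random()) / lam for _ in range(n)]

xs = rexp(100_000, lam=2.0)
print(sum(xs) / len(xs))  # sample mean should be near 1/lam = 0.5
```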
unseen ⇓
(c) (i) Let
        Y = F_X^{−1}(F_a + (F_b − F_a)U),
    where F_a = F_X(a) and F_b = F_X(b). Then
        F_Y(y) = P(Y ≤ y) = P(F_X^{−1}(F_a + (F_b − F_a)U) ≤ y)
               = P(F_a + (F_b − F_a)U ≤ F_X(y))
               = P(U ≤ (F_X(y) − F_a)/(F_b − F_a))
               = F_U((F_X(y) − F_a)/(F_b − F_a)).
    Let V = (F_X(y) − F_a)/(F_b − F_a); V < 0 ⇒ y < a and V > 1 ⇒ y > b, so
        F_Y(y) = 0 for y ≤ a;  (F_X(y) − F_a)/(F_b − F_a) for a < y < b;  1 for y ≥ b.
    So,
        f_Y(y) = f_X(y)/(F_b − F_a) for a < y < b, and 0 otherwise.
    Given that F_b − F_a = ∫_a^b f_X(x) dx, we have that Y follows the same distribution as
    X but truncated to (a, b).
6 D
(ii) 3 C
(iii) Lots of options here; for example, simulate from the exponential distribution and reject
draws outside the range. Any sensible answer awarded marks. 2 B
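Both routes can be sketched in Python for the exponential case (illustrative throughout: the function names and the choice Exp(1) truncated to (1, 2) are my own, not from the question):

```python
import math
import random

def trunc_exp_inversion(n, lam, a, b, seed=0):
    """Part (i): Y = F_X^{-1}(F_a + (F_b - F_a) U) with F_X the Exp(lam) cdf."""
    F = lambda x: 1.0 - math.exp(-lam * x)        # exponential cdf
    Finv = lambda u: -math.log(1.0 - u) / lam     # its inverse
    rng = random.Random(seed)
    Fa, Fb = F(a), F(b)
    return [Finv(Fa + (Fb - Fa) * rng.random()) for _ in range(n)]

def trunc_exp_reject(n, lam, a, b, seed=1):
    """Part (iii): simulate Exp(lam) and reject draws outside (a, b)."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x = -math.log(1.0 - rng.random()) / lam
        if a < x < b:
            out.append(x)
    return out

ys = trunc_exp_inversion(50_000, lam=1.0, a=1.0, b=2.0)
zs = trunc_exp_reject(50_000, lam=1.0, a=1.0, b=2.0)
print(sum(ys) / len(ys), sum(zs) / len(zs))  # both near the truncated mean
```

The inversion route costs exactly one uniform per draw; the rejection route wastes every proposal outside (a, b) (for these parameters roughly three quarters of them), but needs no inverse cdf.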
seen ⇓
(d) F_X^{−1}(x) may not have a closed form, or it may be computationally expensive to invert.
2 A
MATH96054/MATH97085 Stochastic Simulation (Solutions) (2020)
sim. seen ⇓
seen ⇓
2. (a) Generating X with a known pdf f_X(·) using rejection employs a rejection envelope
   g_Y(·) (which we can generate from). The pdf g_Y(·) must have the following properties:
   ∗ the support of g_Y encompasses that of f_X, i.e. f_X(x) > 0 ⇒ g_Y(x) > 0;
   ∗ there exists M > 0 such that for all x with f_X(x) > 0,
        f_X(x)/g_Y(x) ≤ M < ∞.
   Rejection Sampling Algorithm
   1. Generate Y = y ∼ g_Y(·).
   2. Generate U = u ∼ U(0, 1).
   3. If u ≤ f_X(y)/(M g_Y(y)), set X = y.
   4. Otherwise GOTO 1.
4 A
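A generic sketch of the algorithm in Python (illustrative names; the toy target f(x) = 2x on (0, 1) with a U(0, 1) envelope and M = 2 is my own example, not from the question):

```python
import random

def rejection_sample(f, g_sample, g_pdf, M, n, seed=0):
    """Steps 1-4 above: propose y ~ g, accept if u <= f(y) / (M g(y))."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        y = g_sample(rng)                 # step 1: Y = y ~ g_Y
        u = rng.random()                  # step 2: U = u ~ U(0, 1)
        if u <= f(y) / (M * g_pdf(y)):    # step 3: accept, set X = y
            out.append(y)
        # step 4: otherwise loop back and propose again
    return out

# Toy example: target f(x) = 2x on (0, 1), envelope g = U(0, 1), M = 2.
xs = rejection_sample(lambda x: 2.0 * x,
                      lambda rng: rng.random(),
                      lambda x: 1.0,
                      M=2.0, n=50_000)
print(sum(xs) / len(xs))  # E[X] = 2/3 for this target
```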
unseen ⇓
(b) (i) Using pdfs up to proportionality,
        f*_X(x)/g*_Y(x) = x^{α−1} e^{−βx} / (x^{n/2−1} e^{−x/2})
                        = x^{α−n/2} exp(−βx + x/2) = x^{α−n/2} exp(x(1/2 − β)).
    Differentiating,
        d/dx [x^{α−n/2} exp(x(1/2 − β))] = x^{α−n/2} exp(x(1/2 − β)) (1/2 − β)
                                           + (α − n/2) x^{α−n/2−1} exp(x(1/2 − β)).
    For a maximum,
        x(1/2 − β) + (α − n/2) = 0 ⇒ x = (n/2 − α)/(1/2 − β).
    Check it is a maximum for full marks (show the second derivative is negative, or
    otherwise). We need x > 0; since β > 1/2 the denominator 1/2 − β is negative, so we
    need n/2 − α < 0, i.e.
        n < 2α. 4 D
(ii) Let x* = (n/2 − α)/(1/2 − β) = (16/2 − 10)/(1/2 − 3/2) = 2. Then,
        M = max_x f*(x)/g*(x) = x*^{α−n/2} exp(−βx* + x*/2)
          = x*^{10−16/2} exp(−3x*/2 + x*/2) = x*^2 exp(−x*) = 2^2 exp(−2) = 4e^{−2},
     and
        f*_X(y)/(M g*_Y(y)) = y^2 exp(−y)/(4 exp(−2)) = (y^2/4) exp(2 − y).
     Rejection Sampling Algorithm
     1. Generate Y = y ∼ χ²_16.
     2. Generate U = u ∼ U(0, 1).
     3. If u ≤ (y^2/4) exp(2 − y), set X = y.
     4. Otherwise GOTO 1.
5 B
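A sketch of this sampler in Python (illustrative; the χ²_16 proposals are built from sums of squared standard normals, as mentioned in (iii) below, and the function names are my own):

```python
import math
import random

def chi2_16(rng):
    """A chi-squared_16 draw as the sum of 16 squared standard normals."""
    return sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(16))

def rgamma_10_15(n, seed=0):
    """Rejection sampler for Gamma(alpha = 10, beta = 3/2), acceptance
    ratio (y^2/4) exp(2 - y) as derived above."""
    rng = random.Random(seed)
    out, tries = [], 0
    while len(out) < n:
        tries += 1
        y = chi2_16(rng)                               # step 1
        u = rng.random()                               # step 2
        if u <= (y ** 2 / 4.0) * math.exp(2.0 - y):    # step 3
            out.append(y)
    return out, tries

xs, tries = rgamma_10_15(1_000)
print(sum(xs) / len(xs))   # E[X] = alpha/beta = 20/3 ≈ 6.67
print(len(xs) / tries)     # observed acceptance rate (low, cf. part (iii))
```

The very low acceptance rate this prints is exactly the limitation flagged in part (iii).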
(iii) Limitations: computationally expensive, low acceptance probability. Benefits:
straightforward to generate from the χ² distribution using a sum of squared standard
normals; analytically tractable.
3 B
(iv) MC integration, using a sample X_1, . . . , X_n from f_X; then
        E(φ(X)/f_X(X)) = ∫_0^∞ (φ(x)/f_X(x)) f_X(x) dx = θ.
     Estimate θ using
        θ̂ = (1/n) Σ_{i=1}^n φ(X_i)/f_X(X_i).
     The code generates X_1, . . . , X_n ∼ Gamma(10, 3/2), where n = 10000, then takes
        (1/n) Σ_{i=1}^n Γ(10) X_i².
     We have, with α = 10 and β = 3/2,
        φ(x)/f_X(x) = Γ(α) x² ⇒ φ(x) = β^α x^{α+1} e^{−βx} = (3/2)^{10} x^{11} e^{−3x/2}.
     The code estimates
        ∫_0^∞ (3/2)^{10} x^{11} e^{−3x/2} dx. 4 D
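The computation can be reproduced in Python (a sketch; note `random.gammavariate` takes shape and *scale*, so rate β = 3/2 becomes scale 2/3):

```python
import math
import random

rng = random.Random(0)
n = 10_000
# X_1, ..., X_n ~ Gamma(alpha = 10, rate beta = 3/2), i.e. scale = 1/beta = 2/3.
xs = [rng.gammavariate(10.0, 2.0 / 3.0) for _ in range(n)]
theta_hat = sum(math.gamma(10) * x ** 2 for x in xs) / n

# Closed form of the integral: (3/2)^10 * Gamma(12) / (3/2)^12 = Gamma(12) / (3/2)^2.
theta = math.gamma(12) / 1.5 ** 2
print(theta_hat, theta)
```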
seen ⇓
3. (a) The Metropolis algorithm proceeds as follows:
   Given states {1, 2, . . . , K}, choose any symmetric transition matrix Q, with elements q_ij.
   Suppose currently at state i.
   Select state j with probability q_ij.
   Move to state j with probability
        min{1, π_j/π_i}.
   Otherwise, stay at state i.
4 A
(b) The Markov chain induced by the algorithm will have unique stationary distribution π if
the chain is aperiodic and irreducible (finite state space).
2 A
sim. seen ⇓
(c) (i) There are 2^K possible values for g(z). 1 B
(ii) Metropolis algorithm
     Suppose currently at state i:
        z^{(i)} = (z_1, . . . , z_n, . . . , z_K).
     Implicitly define the transition matrix Q by randomly selecting an element of the
     sequence to change, i.e. choose n (say) uniformly from {1, 2, . . . , K}, to give us a
     proposal state j:
        z^{(j)} = (z_1, . . . , 1 − z_n, . . . , z_K).
     Then
        π_j/π_i = P(Z_1 = z_1, . . . , Z_n = 1 − z_n, . . . , Z_K = z_K)
                  / P(Z_1 = z_1, . . . , Z_n = z_n, . . . , Z_K = z_K)
                = h_Z(z^{(j)})/h_Z(z^{(i)})
                = exp(−λ g_n(1 − z_n))/exp(−λ g_n(z_n))
                = exp(−λ(g_n(1 − z_n) − g_n(z_n))).
     Move to state j with probability
        min{1, exp(−λ(g_n(1 − z_n) − g_n(z_n)))};
     otherwise stay at state i.
     Metropolis Algorithm
     1. Generate n uniformly from {1, 2, . . . , K}.
     2. Generate u ∼ U(0, 1);
        if u < min{1, exp(−λ(g_n(1 − z_n) − g_n(z_n)))}, set z_n → 1 − z_n.
     3. GOTO 1.
     Note that in this case we could just simulate independent Bernoulli random variables,
     so we do not need the Metropolis algorithm.
6 C
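The scheme can be sketched in Python (illustrative; `g` is passed in as a function `g(n, bit)` standing for g_n(z_n), and the example g is my own choice):

```python
import math
import random

def metropolis_binary(g, lam, K, n_iters, seed=0):
    """One-coordinate-flip Metropolis for P(Z = z) ∝ exp(-lam * Σ_n g_n(z_n))."""
    rng = random.Random(seed)
    z = [0] * K
    for _ in range(n_iters):
        n = rng.randrange(K)                    # step 1: choose a coordinate
        delta = g(n, 1 - z[n]) - g(n, z[n])     # g_n(1 - z_n) - g_n(z_n)
        u = rng.random()                        # step 2
        if u < min(1.0, math.exp(-lam * delta)):
            z[n] = 1 - z[n]                     # accept the flip
    return z

# With g(n, bit) = bit, each Z_n is independently Bernoulli(e^-lam / (1 + e^-lam)),
# so for lam = 5 the chain should settle on mostly zeros.
z = metropolis_binary(lambda n, bit: bit, lam=5.0, K=10, n_iters=20_000)
print(z)
```

As the solution notes, for this separable h_Z one could simulate the Z_n directly as independent Bernoullis; the sampler is shown only to illustrate the scheme.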
seen ⇓
(iii)
     π_i p_ij = π_i min{1, π_j/π_i} q_ij
              = min{π_i, π_j} q_ij
              = min{π_i, π_j} q_ji     (since Q symmetric)
              = π_j min{1, π_i/π_j} q_ji
              = π_j p_ji.
     Hence detailed balance is satisfied by design.
4 A
(iv) The sequence z which minimises g(z) has the highest probability and will be generated
     most often by the Metropolis scheme. Start the algorithm with a low value of λ so that
     the chain explores the space; as the algorithm progresses, increase λ. As λ increases,
     only z's which nearly minimise g(z) retain any appreciable probability, and λ → ∞ gives
     a spike at the z which minimises g(z).
3 B
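This annealing recipe can be sketched by adding a schedule for λ to the part (c)(ii) sampler (illustrative throughout: the linear schedule, the range λ ∈ [0.1, 20], and the toy g_n are my own choices):

```python
import math
import random

def anneal_binary(g, K, n_iters, lam0=0.1, lam1=20.0, seed=0):
    """Metropolis with lambda increasing over the run: late iterations
    concentrate on the z minimising Σ_n g_n(z_n)."""
    rng = random.Random(seed)
    z = [rng.randrange(2) for _ in range(K)]
    for t in range(n_iters):
        lam = lam0 + (lam1 - lam0) * t / (n_iters - 1)  # linear schedule
        n = rng.randrange(K)
        delta = g(n, 1 - z[n]) - g(n, z[n])
        if rng.random() < min(1.0, math.exp(-lam * delta)):
            z[n] = 1 - z[n]
    return z

# Toy objective: g_n(b) = (b - target_n)^2, minimised by z = targets.
targets = [1, 0, 1, 1, 0, 0, 1, 0]
z = anneal_binary(lambda n, b: (b - targets[n]) ** 2, K=8, n_iters=5_000)
print(z)  # should (with high probability) equal targets
```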
Commentary
Q1 A straightforward start; students may find part (c) challenging.
Q2 Mainly a straightforward application of rejection, though it requires care with the algebra.
The last part may be challenging as it works in reverse and needs a good understanding.
Q3 Application of standard results, but in an unseen context that will need to be carefully described
for marks.
