M3S9/M4S9 Stochastic Simulation (2017)

1. Consider a region in $\mathbb{R}^2$ defined by
$$C_h = \left\{ (u, v) \in \mathbb{R}^2 : 0 \le u \le \sqrt{h\!\left(\frac{v}{u}\right)} \right\},$$
where $h(x) \ge 0\ \forall x \in \mathbb{R}$ and $\int_{\mathbb{R}} h(x)\,dx < \infty$.
(a) (i) Show that $C_h$ has finite area.
(ii) Show that, if the bivariate vector $(U, V)$ is distributed uniformly on $C_h$, then the random variable $X = V/U$ has probability density function
$$f_X(x) = \frac{h(x)}{\int_{\mathbb{R}} h(y)\,dy}.$$
Now define
$$h(x) = \begin{cases} \exp\{-\lambda x^2\} & x \ge 0 \\ 0 & \text{otherwise.} \end{cases}$$
(b) Calculate values for $a$, $b$ and $c$, such that
$$R_h = \left\{ (u, v) \in \mathbb{R}^2 : 0 \le u \le a,\ b \le v \le c \right\}$$
is the smallest rectangle such that $C_h \subseteq R_h$.
The following pseudo-code outlines a Ratio-of-Uniforms procedure for sampling from $f_X(x)$, which uses squeezing in Steps 4 and 5:
1. Choose values for the tuning parameters $\alpha$ and $\beta$.
2. Generate $U_1, U_2 \sim U(0, 1)$.
3. Set $U = aU_1$, $V = b + (c - b)U_2$, where $a$, $b$ and $c$ are given in part (b).
4. If $V \le T_1(U; \alpha)$, set $X = V/U$.
5. If $V \ge T_2(U; \beta)$, return to Step 2.
6. If $V \le T_3(U)$, set $X = V/U$; otherwise return to Step 2.
(c) (i) Derive the expression $T_3(U)$, which appears in Step 6.
(ii) By using a Taylor series expansion for $\exp(y)$, derive expressions for the squeezing functions $T_1(U; \alpha)$ and $T_2(U; \beta)$.
(d) Explain how the above Ratio-of-Uniforms procedure could be used to generate a homogeneous
Poisson process of rate λ.
2. Consider the problem of estimating the quantity $\eta = \mathbb{E}_{f_X}[X]$, where $X$ is a random variable with some known probability density function $f_X(\cdot)$.
Suppose that $\hat\eta_1$ and $\hat\eta_2$ are two distinct, unbiased and negatively correlated estimators for $\eta$, both of which have variance $\sigma^2$.
(a) Show that the arithmetic mean of $\hat\eta_1$ and $\hat\eta_2$ is an unbiased estimator with variance less than $\sigma^2/2$.
(b) Prove that if $U \sim U(0, 1)$, and $\phi : [0, 1] \to \mathbb{R}$ is a monotonic increasing function, then $\phi(U)$ and $\phi(1 - U)$ will be negatively correlated.
Hint: you may wish to consider defining a point $t \in [0, 1]$ such that
$$1 - t = \inf\left\{ u : \phi(u) > \mathbb{E}[\phi(U)] \right\}.$$
Suppose now that $X$ has a truncated Cauchy distribution, i.e.
$$f_X(x) = \begin{cases} \dfrac{k}{\pi(1 + x^2)} & 0 \le x \le 1 \\ 0 & \text{otherwise.} \end{cases}$$
(c) Calculate the constant of proportionality, $k$.
(d) (i) State an expression for an unbiased estimator for $\eta$, in terms of an i.i.d. sample $X_1, \ldots, X_n \sim f_X$; denote this estimator $\hat\eta_3$.
(ii) Now express your estimator $\hat\eta_3$ in terms of an i.i.d. sample $U_1, \ldots, U_n \sim U(0, 1)$.
(iii) Derive expressions for two further, different unbiased estimators for $\eta$. These estimators should be written as a function of an i.i.d. sample $U_1, \ldots, U_n \sim U(0, 1)$.
(iv) State how the variance of each of your estimators in (d)(iii) compares with the variance of $\hat\eta_3$.
Suppose we use Importance Sampling to estimate $\eta$, choosing an auxiliary density $g(x)$.
(e) Derive the Importance Sampling estimator $\hat\eta_{IS}$ when the auxiliary density corresponds to a $U(0, 1)$ distribution.
3. The central Laplace distribution has probability density function
$$f_X(x) = \frac{1}{2\sigma} \exp\left\{ -\frac{|x|}{\sigma} \right\}, \qquad x \in \mathbb{R},$$
and it is known that this is also the distribution of the random variable
$$X = \begin{cases} -\sigma \log(1 - 2|W|) & W \ge 0 \\ \sigma \log(1 - 2|W|) & W < 0 \end{cases}$$
where $W \sim U\!\left(-\tfrac{1}{2}, \tfrac{1}{2}\right)$.
(a) Suppose we have access to a stream of $n + 1$ i.i.d. Exponentially-distributed random variables $Y_1, \ldots, Y_{n+1} \sim \mathrm{Exp}(1)$.
(i) By defining
$$S_k = \sum_{i=1}^{k} Y_i, \qquad k = 1, \ldots, n + 1,$$
show that the joint density of $S = (S_1, \ldots, S_{n+1})$ can be written
$$f_S(s_1, \ldots, s_{n+1}) = \exp\{-s_{n+1}\}.$$
(ii) Using part (i), and defining
$$V_k = S_k / S_{n+1}, \quad k = 1, \ldots, n, \qquad \text{and} \qquad V_{n+1} = S_{n+1},$$
show that the joint density of $V = (V_1, \ldots, V_{n+1})$ can be written
$$f_V(v_1, \ldots, v_{n+1}) = \exp\{-v_{n+1}\}\, v_{n+1}^n.$$
(iii) Using part (ii), show that $V_1, \ldots, V_n$ can be treated as a set of $n$ ordered $U(0, 1)$ random variates.
(iv) Hence, briefly explain how we can use $Y_1, \ldots, Y_{n+1}$ to obtain an ordered sample $X_{(1)}, \ldots, X_{(n)} \sim f_X$.
The President of the United States of America tweets the following:
“I’ve just observed an ordered data set $x_{(1)}, \ldots, x_{(n)}$ from the central Laplace distribution. The ratio $x_{(n)}/x_{(1)}$ is $-0.5$. So central! Amazing!”
In order to fact-check this tweet, you decide to carry out a two-tailed Monte Carlo test with Type I error $\alpha$, using the observed test statistic $t = x_{(n)}/x_{(1)}$. Your null hypothesis, $H_0$, is that the observed data are distributed according to $f_X(x)$. The distribution of $T = X_{(n)}/X_{(1)}$ under $H_0$ is unknown but continuous.
You generate $m$ distinct ordered datasets from $f_X$, and use these to obtain simulated values $t^*_1, \ldots, t^*_m$ of the test statistic. These simulated values are then sorted, to provide an ordered set of simulated test statistics, which we denote $t^*_{(1)}, \ldots, t^*_{(m)}$.
(b) Give expressions for $k_1$ and $k_2$ such that we would reject $H_0$ if either
$$t < t^*_{(k_1)} \qquad \text{or} \qquad t > t^*_{(k_2)}.$$
(c) Show that the Monte Carlo test rejects $H_0$ with probability
$$p = \sum_{r=0}^{k_1 - 1} \binom{m}{r} \left\{ u^r (1 - u)^{m - r} + u^{m - r} (1 - u)^r \right\},$$
where $u = \mathbb{P}(T \le t)$.
Having implemented the MC test, suppose we do not reject $H_0$. Define
$$T_i = t^*_i, \quad i = 1, \ldots, m, \qquad \text{and} \qquad T_{m+1} = t,$$
such that $T = (T_1, \ldots, T_{m+1})$ is a vector of $m + 1$ i.i.d., unordered replicates of the test statistic $T$, and denote the corresponding sample mean by $\bar{T}$.
(d) Explain how Jackknife resampling can be used to estimate the variance of $\bar{T}$. Provide an expression for the Jackknife estimator for the variance of $\bar{T}$.
4. Suppose we wish to sample from a distribution with pdf $\pi(x)$, defined on a continuous univariate state space $\mathcal{X} \subseteq \mathbb{R}$.
(a) We can sample from $\pi(x)$ using the following static (i.e. non-adaptive) Metropolis-Hastings procedure to construct a Markov chain $\{X_n\}_{n \ge 0}$ with $\pi(x)$ as its stationary distribution:
Step 1: Initialise the chain by sampling from some initial distribution: $X_0 \sim \pi^{(0)}$. Set $n = 1$.
Step 2: Given $X_{n-1} = x$, generate a candidate value $Y = y$ from the proposal density
$$q_h(y \mid x) = \frac{1}{h}\, k\!\left( \frac{y - x}{h} \right), \qquad x, y \in \mathcal{X},\ h > 0.$$
Step 3: Set $X_n = y$ with probability $\alpha(x, y)$; otherwise set $X_n = x$.
Step 4: Replace $n$ by $n + 1$ and return to Step 2.
(i) Give an expression for $\alpha(x, y)$ and state the property of the resulting Markov chain that guarantees the existence of a stationary distribution.
(ii) What condition must $k(\cdot)$ satisfy in order for the above Metropolis-Hastings procedure to reduce to a Metropolis procedure?
(iii) The given proposal density, $q_h$, allows the user to control the performance of the Metropolis procedure through the fixed parameter $h$. Assuming the condition in part (ii) holds, describe and explain the effect on the acceptance probability $\alpha(x, y)$ that comes from decreasing $h$.
Suppose we are now interested in using adaptive MCMC methods to sample from the multivariate distribution with pdf $\pi_d(x)$, which is defined on the continuous state space $\mathcal{X}^d \subseteq \mathbb{R}^d$, $d > 1$.
(b) The Adaptive Metropolis (AM) algorithm is a version of the Metropolis procedure that uses the following proposal density at iteration $n$:
$$q_n(y \mid x) = \begin{cases} f_N\!\left(y \mid x, (0.1)^2 I_d / d\right) & n \le 2d \\ (1 - \beta)\, f_N\!\left(y \mid x, (2.38)^2 S_n / d\right) + \beta\, f_N\!\left(y \mid x, (0.1)^2 I_d / d\right) & n > 2d, \end{cases}$$
where $f_N(\cdot \mid \mu, \Sigma)$ is a multivariate Gaussian density with mean $\mu$ and covariance matrix $\Sigma$, and where $I_d$ is the $d$-dimensional identity matrix and $\beta \in (0, 1)$ is a user-specified constant.
(i) Describe the quantity denoted by $S_n$, and briefly explain why the value of $2.38^2/d$ is chosen to scale $S_n$ in the above density.
(ii) Briefly explain why the non-adaptive density $f_N\!\left(y \mid x, (0.1)^2 I_d / d\right)$ is included in the proposal mechanism.
(c) As an alternative to the AM procedure in part (b), one can employ component-wise adaptive
scaling methods, such as the Single Component Adaptive Metropolis (SCAM) algorithm and
the Adaptive Metropolis-within-Gibbs (AMWG) algorithm.
Briefly explain the primary difference between the SCAM and AMWG procedures.
(d) Suppose that, at the start of iteration $n$, the existing output of an adaptive Metropolis procedure is denoted $\mathbf{x}_n = (x'_0, \ldots, x'_{n-1}) \in \mathcal{X}^{d \times n}$, where $x_j = (x^1_j, \ldots, x^d_j) \in \mathcal{X}^d$ for $j = 0, \ldots, n - 1$, and where $x'_j$ denotes the transpose of $x_j$. Denote the row vector containing only component $i$ of this output by $\mathbf{x}^i_n = (x^i_0, \ldots, x^i_{n-1}) \in \mathcal{X}^n$, and define the sample mean $\bar{x}^i_n$ and sample variance $g^i_n$ for this univariate component of the output in the usual way:
$$\bar{x}^i_n = \frac{1}{n} \sum_{j=0}^{n-1} x^i_j, \qquad g^i_n = \frac{1}{n-1} \sum_{j=0}^{n-1} \left( x^i_j - \bar{x}^i_n \right)^2.$$
Show that $g^i_n$ satisfies the following relationship:
$$g^i_n = \frac{n-2}{n-1}\, g^i_{n-1} + \left( \bar{x}^i_{n-1} \right)^2 + \frac{1}{n-1} \left( x^i_{n-1} \right)^2 - \frac{n-1}{n} \left( \bar{x}^i_{n-1} + \frac{x^i_{n-1}}{n-1} \right)^2.$$
(e) Roberts & Rosenthal (“Examples of Adaptive MCMC”, 2009) use integrated autocorrelation
time and average square jump distance as indicators of performance for adaptive MCMC
procedures.
(i) Which aspect of performance is being measured by these quantities?
(ii) For each of these indicators, state whether good performance of the procedure is indicated
by high or low values?
EXAMINATION SOLUTIONS 2016-17, M3/4S9

Question 1

Part (a)(i) [seen]
$$\mathrm{Area}(C_h) = \iint_{C_h} du\,dv.$$
Use a change of variables $(U, V) \to (U, X)$, where $X = V/U$: [1]
$$v = xu \implies \frac{\partial v}{\partial x} = u \implies dv = u\,dx,$$
so
$$\mathrm{Area}(C_h) = \int_{\mathbb{R}} \left[ \int_0^{\sqrt{h(x)}} u\,du \right] dx = \frac{1}{2} \int_{\mathbb{R}} h(x)\,dx < \infty. \quad [1]$$
Part (a)(ii) [seen]

The Jacobian for the above change of variables is
$$\left| \frac{\partial(u, x)}{\partial(u, v)} \right| = \frac{1}{u} \implies f_{U,X}(u, x) = u\, f_{U,V}(u, v) = \frac{u}{\mathrm{Area}(C_h)} \quad \text{if } (U, V) \text{ is uniform on } C_h. \quad [1]$$
Now, we marginalise to obtain the density for $X$: [1]
$$f_X(x) = \int_0^{\sqrt{h(x)}} f_{U,X}(u, x)\,du = \frac{1}{\mathrm{Area}(C_h)} \int_0^{\sqrt{h(x)}} u\,du = \frac{\tfrac{1}{2} h(x)}{\tfrac{1}{2} \int_{\mathbb{R}} h(y)\,dy} = \frac{h(x)}{\int_{\mathbb{R}} h(y)\,dy}.$$
Part (b) [seen]

Find the bounding rectangle $R_h = \{(u, v) \in \mathbb{R}^2 : 0 \le u \le a,\ b \le v \le c\}$:
$$a = \sup_{x \ge 0} \sqrt{h(x)} = 1, \quad [2]$$
$$b = \inf_{x \ge 0} x \sqrt{h(x)} = 0, \quad [2]$$
$$c = \sup_{x \ge 0} x \sqrt{h(x)} = \sup_{x \ge 0} \left\{ x e^{-\lambda x^2/2} \right\}.$$
To find $c$, we must find the maximum of $g(x) = x e^{-\lambda x^2/2}$:
$$g'(x) = e^{-\lambda x^2/2} \left( 1 - \lambda x^2 \right) = 0 \quad \text{when } x = \pm \lambda^{-1/2}.$$
We can only take $x = \lambda^{-1/2}$, as we are looking for $x \ge 0$. We verify that $x = \lambda^{-1/2}$ is a maximum by checking that $g''(x) < 0$ when $x = \lambda^{-1/2}$. [1]
So
$$c = \sup_{x \ge 0} g(x) = g\!\left( \lambda^{-1/2} \right) = (\lambda e)^{-1/2}. \quad [1]$$
Part (c)(i) [seen sim]

Step 6 of the given procedure is the post-squeezing membership test for the Ratio-of-Uniforms procedure. In the RoU procedure, we set $X = V/U$ if $(U, V) \in C_h$, i.e. if
$$U \le \sqrt{h(V/U)} = \exp\left\{ -\frac{\lambda}{2} \cdot \frac{V^2}{U^2} \right\}$$
$$\implies \log U \le -\frac{\lambda}{2} \cdot \frac{V^2}{U^2} \implies -\frac{2U^2}{\lambda} \log U \ge V^2 \implies V \le \left( -\frac{2U^2}{\lambda} \log U \right)^{1/2}$$
$$\implies T_3(U) = \left( -\frac{2U^2}{\lambda} \log U \right)^{1/2}. \quad [2]$$
Part (c)(ii) [seen sim]

From the Taylor series expansion for $\exp(y)$, we know that
$$e^y \ge 1 + y \quad \forall y \in \mathbb{R} \quad [1]$$
$$\implies y \ge \log\{1 + y\} \quad \forall y > -1. \quad (\star)$$
Writing $y = \alpha U - 1$ in $(\star)$, we have
$$\alpha U - 1 \ge \log\{\alpha U\} = \log \alpha + \log U \quad \forall U > 0$$
$$\implies (1 + \log \alpha) - \alpha U \le -\log U \quad \forall U > 0$$
$$\implies \left( \frac{2U^2}{\lambda} \left( (1 + \log \alpha) - \alpha U \right) \right)^{1/2} \le T_3(U) \quad \forall U > 0.$$
So we can take
$$T_1(U; \alpha) = \left( \frac{2U^2}{\lambda} \left( (1 + \log \alpha) - \alpha U \right) \right)^{1/2}.$$
[1 mark for evidence of understanding that we need $T_1(U; \alpha) \le T_3(U)\ \forall U > 0$; 1 mark for a correct derivation of a sensible $T_1(U; \alpha)$.] [2]

If we instead write $y = \frac{\beta}{U} - 1$ in $(\star)$, we have
$$\frac{\beta}{U} - 1 \ge \log \beta - \log U \quad \forall U > 0$$
$$\implies -\log U \le \frac{\beta}{U} - (1 + \log \beta) \quad \forall U > 0$$
$$\implies \left( \frac{2\beta U}{\lambda} - \frac{2U^2}{\lambda} (1 + \log \beta) \right)^{1/2} \ge T_3(U) \quad \forall U > 0.$$
So we can take
$$T_2(U; \beta) = \left( \frac{2\beta U}{\lambda} - \frac{2U^2}{\lambda} (1 + \log \beta) \right)^{1/2}.$$
[1 mark for evidence of understanding that we need $T_2(U; \beta) \ge T_3(U)\ \forall U > 0$; 1 mark for a correct derivation of a sensible $T_2(U; \beta)$.] [2]
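As an illustration, here is a minimal Python sketch (assuming numpy; the tuning choice $\alpha = \beta = e$ is an arbitrary illustrative choice) of the full procedure for $h(x) = \exp\{-\lambda x^2\}$. The point of the squeezes is that $T_1$ and $T_2$ involve only the pre-computable constants $\log\alpha$ and $\log\beta$, so $\log U$ in the exact test $T_3$ is evaluated only when both squeezes fail.

    import numpy as np

    def rou_sample(lam, n, alpha=np.e, beta=np.e, rng=None):
        # Ratio-of-Uniforms sampling from f_X(x) proportional to exp(-lam*x^2), x >= 0,
        # with quick-accept squeeze T1 (Step 4) and quick-reject squeeze T2 (Step 5).
        rng = np.random.default_rng() if rng is None else rng
        a, c = 1.0, 1.0 / np.sqrt(lam * np.e)              # rectangle from part (b); b = 0
        la, lb = 1.0 + np.log(alpha), 1.0 + np.log(beta)   # the only logs the squeezes need
        out = []
        while len(out) < n:
            u = a * (1.0 - rng.random())                   # U in (0, a]
            v = c * rng.random()                           # V in [0, c), since b = 0 here
            v2 = v * v
            if v2 <= (2.0 * u * u / lam) * (la - alpha * u):    # Step 4: V <= T1(U; alpha)
                out.append(v / u)
            elif v2 < (2.0 / lam) * (beta * u - u * u * lb):    # Step 5: not (V >= T2(U; beta))
                if v2 <= -(2.0 * u * u / lam) * np.log(u):      # Step 6: V <= T3(U)
                    out.append(v / u)
        return np.array(out)

The tests compare $V^2$ against $T_1^2$, $T_2^2$, $T_3^2$ to avoid square roots; this is equivalent since all quantities are non-negative on the rectangle.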
Part (d) [unseen]

To generate a homogeneous Poisson process of rate $\lambda$, we could simulate exponential waiting times
$$T_i \sim f_T \propto \exp\{-\lambda t\},$$
which could be achieved using RoU as above, with $h(x) = \exp\{-\lambda x\}$ instead of $h(x) = \exp\{-\lambda x^2\}$; the cumulative sums of these waiting times then give the event times of the process. [2]
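A short sketch of this construction (assuming numpy; the waiting times are drawn directly from $\mathrm{Exp}(\lambda)$ for brevity, but each could equally be produced by the RoU sampler above with $h(x) = \exp\{-\lambda x\}$):

    import numpy as np

    def poisson_process(lam, t_max, rng=None):
        # Event times of a homogeneous Poisson process of rate lam on [0, t_max]:
        # cumulative sums of i.i.d. Exp(lam) waiting times.
        rng = np.random.default_rng() if rng is None else rng
        times, t = [], rng.exponential(1.0 / lam)   # numpy parameterises by scale = 1/rate
        while t <= t_max:
            times.append(t)
            t += rng.exponential(1.0 / lam)
        return np.array(times)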
Question 2

Part (a) [seen]
Unbiasedness:
$$\mathbb{E}\left[ \tfrac{1}{2} (\hat\eta_1 + \hat\eta_2) \right] = \tfrac{1}{2} \left( \mathbb{E}[\hat\eta_1] + \mathbb{E}[\hat\eta_2] \right) = \tfrac{1}{2} (\eta + \eta) = \eta. \quad [1]$$
Lower variance:
$$\mathrm{Var}\left[ \tfrac{1}{2} (\hat\eta_1 + \hat\eta_2) \right] = \tfrac{1}{4} \mathrm{Var}[\hat\eta_1] + \tfrac{1}{4} \mathrm{Var}[\hat\eta_2] + \tfrac{1}{2} \mathrm{Cov}[\hat\eta_1, \hat\eta_2] = \tfrac{1}{2} \left( \sigma^2 + \mathrm{Cov}[\hat\eta_1, \hat\eta_2] \right) = \frac{\sigma^2}{2} \left( 1 + \mathrm{Corr}[\hat\eta_1, \hat\eta_2] \right).$$
So, $\mathrm{Corr}[\hat\eta_1, \hat\eta_2] < 0 \implies \mathrm{Var}\left[ \tfrac{1}{2} (\hat\eta_1 + \hat\eta_2) \right] < \sigma^2/2$. [1]
Part (b) [seen]

Let $\theta = \mathbb{E}[\phi(U)] = \mathbb{E}[\phi(1 - U)]$, as $U \sim U(0, 1)$. Then
$$\mathrm{Cov}\left[ \phi(U), \phi(1 - U) \right] = \mathbb{E}\left[ \left( \phi(U) - \theta \right) \left( \phi(1 - U) - \theta \right) \right] = \mathbb{E}\left[ \phi(U) \left( \phi(1 - U) - \theta \right) \right] - \theta\, \mathbb{E}\left[ \phi(1 - U) - \theta \right]$$
$$= \mathbb{E}\left[ \phi(U) \left( \phi(1 - U) - \theta \right) \right] = \int_0^1 \phi(u) \left( \phi(1 - u) - \theta \right) du. \quad [1]$$
Now, $1 - t = \inf\left\{ u : \phi(u) > \theta \right\}$, so
$$u \in [0, t] \implies 1 - u \in [1 - t, 1] \implies \phi(1 - u) > \theta, \quad (\star)$$
$$u \in (t, 1] \implies 1 - u \in [0, 1 - t) \implies \phi(1 - u) \le \theta, \quad (\star\star)$$
so we can split our integral at $t$: [1]
$$\mathrm{Cov}\left[ \phi(U), \phi(1 - U) \right] = \int_0^t \phi(u) \left( \phi(1 - u) - \theta \right) du + \int_t^1 \phi(u) \left( \phi(1 - u) - \theta \right) du$$
and use $(\star)$ and $(\star\star)$ to provide upper bounds for each component.
$$(\star) \implies \phi(1 - u) - \theta > 0 \implies \int_0^t \phi(u) \left( \phi(1 - u) - \theta \right) du < \phi(t) \int_0^t \left( \phi(1 - u) - \theta \right) du. \quad [1]$$
$$(\star\star) \implies \theta - \phi(1 - u) \ge 0 \implies \int_t^1 \phi(u) \left( \theta - \phi(1 - u) \right) du \ge \phi(t) \int_t^1 \left( \theta - \phi(1 - u) \right) du$$
$$\implies \int_t^1 \phi(u) \left( \phi(1 - u) - \theta \right) du \le \phi(t) \int_t^1 \left( \phi(1 - u) - \theta \right) du. \quad [1]$$
Combining these results, we get
$$\mathrm{Cov}\left[ \phi(U), \phi(1 - U) \right] < \phi(t) \underbrace{\int_0^1 \left( \phi(1 - u) - \theta \right) du}_{=0} = 0.$$
So $\mathrm{Cov}\left[ \phi(U), \phi(1 - U) \right] < 0 \implies \mathrm{Corr}\left[ \phi(U), \phi(1 - U) \right] < 0$. [1]
Part (c) [unseen]

$$f_X(x) = \frac{k}{\pi} \left( 1 + x^2 \right)^{-1}, \qquad 0 \le x \le 1.$$
$$\int_0^1 f_X(x)\,dx = 1 \implies \frac{1}{k} = \frac{1}{\pi} \int_0^1 \left( 1 + x^2 \right)^{-1} dx \quad [1]$$
$$= \frac{1}{\pi} \left[ \tan^{-1} x \right]_0^1 = \frac{1}{\pi} \cdot \frac{\pi}{4} \implies k = 4. \quad [1]$$
Part (d)(i) [seen]

A simple unbiased estimator for $\eta$ is the sample mean:
$$\hat\eta_3 = \frac{1}{n} \sum_{i=1}^n X_i, \qquad X_i \sim f_X, \quad i = 1, \ldots, n.$$
[A mark should be given for any estimator in terms of $X_1, \ldots, X_n$ that is unbiased for $\eta$.] [1]
Part (d)(ii) [unseen]

From part (c), we know the pdf
$$f_X(x) = \frac{4}{\pi} \cdot \frac{1}{1 + x^2}, \qquad 0 \le x \le 1,$$
and straightforward integration yields the corresponding cdf:
$$F_X(x) = \frac{4}{\pi} \tan^{-1} x, \qquad 0 \le x \le 1.$$
From the probability integral transform, we know that, for continuous $X$, $F_X(X) \sim U(0, 1)$, so if $U \sim U(0, 1)$, then $X = \tan\left( \frac{\pi U}{4} \right)$ will have the required distribution, and we can write
$$\hat\eta_3 = \frac{1}{n} \sum_{i=1}^n \tan\left( \frac{\pi U_i}{4} \right).$$
[2 marks should be given for correct calculation of the inverse CDF, and the correct subsequent application to the estimator given in (d)(i).] [2]
[Inversion sampling has been seen; its use in developing a second unbiased estimator is unseen.]
Part (d)(iii) [seen]

In (d)(ii), we used the inverse cdf to write $X = F^{-1}(U)$. We know that, for $U \sim U(0, 1)$ and for any (integrable) function $\phi : [0, 1] \to \mathbb{R}$,
$$\mathbb{E}\left[ \phi(U) \right] = \mathbb{E}\left[ \phi(1 - U) \right] \implies \mathbb{E}\left[ F^{-1}(U) \right] = \mathbb{E}\left[ F^{-1}(1 - U) \right] = \eta.$$
So,
$$\hat\eta_4 = \frac{1}{n} \sum_{i=1}^n F^{-1}(1 - U_i) = \frac{1}{n} \sum_{i=1}^n \tan\left( \frac{\pi}{4} - \frac{\pi U_i}{4} \right)$$
is also an unbiased estimator for $\eta$.
From part (a), we know that, since $\hat\eta_3$ and $\hat\eta_4$ are both unbiased for $\eta$,
$$\hat\eta_5 = \frac{1}{2} \left( \hat\eta_3 + \hat\eta_4 \right)$$
is also unbiased for $\eta$.
[For each estimator: 1 mark for stating an unbiased estimator distinct from $\hat\eta_3$, and 1 mark for a correct justification or derivation.] [2 + 2]
Part (d)(iv)

For the estimators given above:
• $\mathrm{Var}[\hat\eta_4] = \mathrm{Var}[\hat\eta_3]$;
• $\mathrm{Var}[\hat\eta_5] < \mathrm{Var}[\hat\eta_3]/2$.

Note that this follows from parts (a) and (b), as $F^{-1}(u)$ is a monotonic increasing function of $u$.
[For each estimator: 1 mark for stating correctly how the variance of the estimators given in part (d)(iii) compares with that of $\hat\eta_3$.] [2]
NB: Whilst the tools for answering (d) are ‘seen’, a complete answer requires a deeper understanding of the material than would be tested by simple regurgitation of ‘seen’ content.
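A minimal numerical illustration of (d)(ii)-(iv), assuming numpy; the exact value $\eta = \int_0^1 x f_X(x)\,dx = (2/\pi)\log 2 \approx 0.4413$ serves as a check:

    import numpy as np

    def eta_estimates(n, rng=None):
        # eta3: plain Monte Carlo via inversion, X = tan(pi*U/4);
        # eta4: antithetic partner, using 1 - U in place of U;
        # eta5: their average (variance < Var[eta3]/2, by parts (a)-(b)).
        rng = np.random.default_rng() if rng is None else rng
        u = rng.random(n)
        eta3 = np.tan(np.pi * u / 4.0).mean()
        eta4 = np.tan(np.pi * (1.0 - u) / 4.0).mean()
        return eta3, eta4, 0.5 * (eta3 + eta4)

    print(eta_estimates(100_000))   # all close to (2/pi)*log(2) = 0.4413...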
Part (e) [seen]

The Importance Sampling estimator $\hat\eta_{IS}$ is given by
$$\hat\eta_{IS} = \frac{1}{n} \sum_{i=1}^n \frac{f_X(U_i)}{g(U_i)}\, U_i, \qquad U_i \sim U(0, 1),$$
with $g(x) = 1\ \forall x \in [0, 1]$ and $f_X(x)$ as calculated in part (c). Thus,
$$\hat\eta_{IS} = \frac{1}{n} \sum_{i=1}^n \frac{4 U_i}{\pi (1 + U_i^2)}, \qquad U_i \sim U(0, 1). \quad [2]$$
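Since $g \equiv 1$ on $[0, 1]$, the estimator is a one-liner in practice (a sketch assuming numpy):

    import numpy as np

    rng = np.random.default_rng(1)
    u = rng.random(100_000)
    eta_is = np.mean(4.0 * u / (np.pi * (1.0 + u**2)))   # (1/n) sum of U_i f_X(U_i) / g(U_i)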
Question 3

Part (a)(i) [seen]
First, note that
$$Y_1, \ldots, Y_{n+1} \overset{\text{iid}}{\sim} \mathrm{Exp}(1) \implies f_Y(y_1, \ldots, y_{n+1}) = \exp\left\{ -\sum_{i=1}^{n+1} y_i \right\}. \quad [1]$$
Now, using
$$S_k = \sum_{i=1}^k Y_i, \quad k = 1, \ldots, n + 1 \implies \begin{cases} Y_1 = S_1 \\ Y_k = S_k - S_{k-1}, & k = 2, \ldots, n + 1, \end{cases}$$
we can find the joint density:
$$f_S(s_1, s_2, \ldots, s_{n+1}) = f_Y(s_1, s_2 - s_1, \ldots, s_{n+1} - s_n)\, |J| \quad [1]$$
where $|J|$ is the determinant of the matrix of partial derivatives $\partial y_i / \partial s_j$, which is lower bidiagonal with unit diagonal:
$$|J| = \begin{vmatrix} 1 & 0 & 0 & \cdots & 0 \\ -1 & 1 & 0 & \cdots & 0 \\ 0 & -1 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{vmatrix} = 1. \quad [1]$$
So,
$$f_S(s_1, s_2, \ldots, s_{n+1}) = \exp\left\{ -s_1 - \sum_{k=2}^{n+1} (s_k - s_{k-1}) \right\} = \exp\{-s_{n+1}\}.$$
Part (a)(ii) [seen]

Now for a second transformation of variables: using
$$\begin{cases} V_k = S_k / S_{n+1}, & k = 1, \ldots, n, \\ V_{n+1} = S_{n+1}, \end{cases} \implies \begin{cases} S_k = V_{n+1} V_k, & k = 1, \ldots, n, \\ S_{n+1} = V_{n+1}, \end{cases}$$
we obtain the joint density
$$f_V(v_1, v_2, \ldots, v_{n+1}) = f_S(v_1 v_{n+1}, v_2 v_{n+1}, \ldots, v_{n+1})\, |J| \quad [1]$$
where, from the matrix of partial derivatives $\partial s_i / \partial v_j$,
$$|J| = \begin{vmatrix} v_{n+1} & 0 & \cdots & v_1 \\ 0 & v_{n+1} & \cdots & v_2 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{vmatrix} = v_{n+1}^n. \quad [1]$$
So, using part (i),
$$f_V(v_1, \ldots, v_n, v_{n+1}) = \exp\{-v_{n+1}\}\, v_{n+1}^n.$$
Part (a)(iii) [seen]
We find the joint density for $(V_1, \ldots, V_n)$ and show that it is equal to the density for $n$ ordered standard uniforms.
The joint pdf of $n$ ordered uniforms is $f_{U_{(1)}, \ldots, U_{(n)}}(u_{(1)}, \ldots, u_{(n)}) = n!$ [1]
The joint density for $(V_1, \ldots, V_n)$ can be found through marginalising: [1]
$$f_V(v_1, \ldots, v_n) = \int_0^\infty f_V(v_1, \ldots, v_n, v_{n+1})\, dv_{n+1} = \int_0^\infty \exp\{-v_{n+1}\}\, v_{n+1}^n\, dv_{n+1}.$$
Integration by parts gives [1]
$$= \underbrace{\left[ -\exp\{-v_{n+1}\}\, v_{n+1}^n \right]_0^\infty}_{=0} + n \int_0^\infty \exp\{-v_{n+1}\}\, v_{n+1}^{n-1}\, dv_{n+1}$$
and repeating another $n - 1$ times yields
$$f_V(v_1, \ldots, v_n) = n! \int_0^\infty \exp\{-v_{n+1}\}\, dv_{n+1} = n! \quad [1]$$
as required.
Part (a)(iv) [unseen]

Given $Y_1, \ldots, Y_{n+1} \sim \mathrm{Exp}(1)$, we can use the transformations in parts (i)-(iii) to generate $n$ ordered random variables $U_{(1)}, \ldots, U_{(n)} \sim U(0, 1)$. By defining $W_{(i)} = U_{(i)} - 1/2$, $i = 1, \ldots, n$, we can then obtain $X_{(1)}, \ldots, X_{(n)} \sim f_X$ using the transformation given in the question, which is monotonic increasing in $W$ and so preserves the ordering. [1]
[Alternatively, assuming the cdf for the given distribution to be available, we could use the ordered uniforms in an inversion sampling procedure.]
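A sketch of the whole of part (a) in Python (assuming numpy):

    import numpy as np

    def ordered_laplace(n, sigma, rng=None):
        # Ordered sample X_(1) <= ... <= X_(n) from the central Laplace density,
        # built from n+1 Exp(1) variates as in parts (a)(i)-(iv).
        rng = np.random.default_rng() if rng is None else rng
        s = np.cumsum(rng.exponential(1.0, size=n + 1))   # S_1, ..., S_{n+1}
        w = s[:-1] / s[-1] - 0.5                          # ordered W_(i) in (-1/2, 1/2)
        return -np.sign(w) * sigma * np.log(1.0 - 2.0 * np.abs(w))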
Part (b) [seen sim]

For a two-tailed Monte Carlo test with Type I error $\alpha$, $H_0$ is rejected if either
$$t < t^*_{(k_1)} \qquad \text{or} \qquad t > t^*_{(k_2)},$$
where
$$\frac{k_1}{m + 1} = 1 - \frac{k_2}{m + 1} = \frac{\alpha}{2} \implies k_1 = (m + 1)\frac{\alpha}{2}, \qquad k_2 = (m + 1)\left( 1 - \frac{\alpha}{2} \right). \quad [2]$$
Part (c) [seen sim]
For our two-tailed Monte Carlo test, the probability of rejecting $H_0$ is
$$p = \mathbb{P}\left( r \text{ simulated values of } T \le t,\ 0 \le r \le k_1 - 1 \right) + \mathbb{P}\left( r \text{ simulated values of } T \ge t,\ 0 \le r \le m - k_2 \right) \quad (\star)$$
or, equivalently,
$$p = \mathbb{P}\left( r \text{ simulated values of } T \le t,\ 0 \le r \le k_1 - 1 \right) + \mathbb{P}\left( r \text{ simulated values of } T \le t,\ k_2 \le r \le m \right) \quad (\star\star)$$
[2]
We deal with $(\star)$ and $(\star\star)$ separately; both are valid approaches. [unseen]

$(\star)$:
$$p = \sum_{r=0}^{k_1 - 1} \mathbb{P}\left( r \text{ sim. values of } T \le t \right) + \sum_{r=0}^{m - k_2} \mathbb{P}\left( r \text{ sim. values of } T \ge t \right)$$
$$= \sum_{r=0}^{k_1 - 1} \left\{ \mathbb{P}\left( r \text{ sim. values of } T \le t \right) + \mathbb{P}\left( r \text{ sim. values of } T \ge t \right) \right\}$$
as $m - k_2 = k_1 - 1$. The two probabilities inside the braces are binomial probabilities, which can be rewritten:
$$p = \sum_{r=0}^{k_1 - 1} \left\{ \binom{m}{r} u^r (1 - u)^{m - r} + \binom{m}{r} (1 - u)^r u^{m - r} \right\}$$
and this can be rearranged to give the required expression.

$(\star\star)$:
$$p = \sum_{r=0}^{k_1 - 1} \mathbb{P}\left( r \text{ sim. values of } T \le t \right) + \sum_{r=k_2}^{m} \mathbb{P}\left( r \text{ sim. values of } T \le t \right)$$
$$= \sum_{r=0}^{k_1 - 1} \binom{m}{r} u^r (1 - u)^{m - r} + \sum_{r=k_2}^{m} \binom{m}{r} u^r (1 - u)^{m - r}$$
$$= \sum_{r=0}^{k_1 - 1} \binom{m}{r} u^r (1 - u)^{m - r} + \sum_{r=m+1-k_1}^{m} \binom{m}{r} u^r (1 - u)^{m - r}.$$
Now, substituting $s = m - r$ in the second summation (and switching the limits in the notation):
$$= \sum_{r=0}^{k_1 - 1} \binom{m}{r} u^r (1 - u)^{m - r} + \sum_{s=0}^{k_1 - 1} \binom{m}{m - s} u^{m - s} (1 - u)^s$$
and since $\binom{m}{r} = \binom{m}{m - r}$, this can be rewritten
$$p = \sum_{r=0}^{k_1 - 1} \left\{ \binom{m}{r} u^r (1 - u)^{m - r} + \binom{m}{r} (1 - u)^r u^{m - r} \right\}$$
which can be rearranged to give the required expression. [2]
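The formula is easy to evaluate directly (a sketch using only the Python standard library); one sanity check is that averaging $p(u)$ over $u \sim U(0, 1)$ recovers the Type I error $2k_1/(m + 1) = \alpha$:

    from math import comb

    def reject_prob(u, m, k1):
        # p from part (c): probability the two-tailed MC test rejects H0,
        # where u = P(T <= t) and k1 = (m + 1) * alpha / 2.
        return sum(comb(m, r) * (u**r * (1 - u)**(m - r) + u**(m - r) * (1 - u)**r)
                   for r in range(k1))

    # e.g. m = 39 and alpha = 0.1 give k1 = 2; integrating reject_prob over
    # u in (0, 1) (numerically or analytically) returns alpha = 0.1.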
Part (d) [seen]
To perform Jackknife resampling, repeat the following for $i = 1, \ldots, m + 1$:
• construct an $m$-length vector $T_{-i}$ by dropping the $i$th element of $T$; [1]
• use $T_{-i}$ to estimate the sample mean: [1]
$$\bar{T}_{-i} = \frac{1}{m} \sum_{\substack{j=1 \\ j \ne i}}^{m+1} T_j.$$
This gives $m + 1$ estimates of the sample mean, $\bar{T}_{-i}$, $i = 1, \ldots, m + 1$. The variance of $\bar{T}$ can then be estimated using a corrected sample variance of the $m + 1$ sample means:
$$\widehat{\mathrm{Var}}\left[ \bar{T} \right] = \frac{m}{m + 1} \sum_{i=1}^{m+1} \left( \bar{T}_{-i} - \bar{T}_{(\cdot)} \right)^2, \quad [2]$$
where
$$\bar{T}_{(\cdot)} = \frac{1}{m + 1} \sum_{i=1}^{m+1} \bar{T}_{-i}.$$
[Can alternatively just use $\bar{T}$ instead of $\bar{T}_{(\cdot)}$ in the expression for $\widehat{\mathrm{Var}}[\bar{T}]$.]
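A direct transcription into Python (assuming numpy); for the sample mean the jackknife reproduces the familiar $s^2/n$ estimator, which gives an easy check:

    import numpy as np

    def jackknife_var_of_mean(t):
        # Jackknife estimate of Var[T-bar] for t = (T_1, ..., T_{m+1}), a numpy array.
        n = len(t)                                   # n = m + 1
        loo_means = (t.sum() - t) / (n - 1)          # the n leave-one-out means
        return (n - 1) / n * np.sum((loo_means - loo_means.mean())**2)

    # Check: this equals np.var(t, ddof=1) / len(t), the usual estimator for Var[T-bar].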
Question 4

Part (a)(i) [seen]
$$\alpha(x, y) = \min\left\{ \frac{\pi(y)\, q_h(x \mid y)}{\pi(x)\, q_h(y \mid x)},\ 1 \right\} \quad [2]$$
The resulting Markov chain will be reversible, guaranteeing the existence of its stationary distribution. [1] Reversibility of the chain is achieved by design, as the form of $\alpha(x, y)$ ensures that the transition densities satisfy the detailed balance equations.
[Must mention either reversibility or detailed balance for the full mark.]
Part (a)(ii) [seen]

The function $k(\cdot)$ must be symmetric, $k(z) = k(-z)$, so that $q_h(y \mid x) = q_h(x \mid y)$; the given MH procedure then reduces to a Metropolis procedure. [1]
Part (a)(iii) [seen]

Since the proposal $q_h$ is assumed to be symmetric, the acceptance probability is simply $\min\{\pi(y)/\pi(x), 1\}$, a ratio of target densities. If $h$ is small, then $q_h$ will often propose candidate values in regions where the ratio of target densities $\pi(y)/\pi(x)$ is high; this is because the proposal mechanism will be more likely to propose candidate values close to the current value. [1] Thus, decreasing $h$ will increase the acceptance probability. [1]
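This is easy to see empirically. A minimal sketch (assuming numpy; the standard normal target and Gaussian kernel $k$ are arbitrary illustrative choices):

    import numpy as np

    def rwm_acceptance(h, n=50_000, rng=None):
        # Random-walk Metropolis with proposal y = x + h*Z, Z ~ N(0,1),
        # targeting pi = N(0,1); returns the empirical acceptance rate.
        rng = np.random.default_rng() if rng is None else rng
        log_pi = lambda z: -0.5 * z * z
        x, accepted = 0.0, 0
        for _ in range(n):
            y = x + h * rng.standard_normal()
            if np.log(rng.random()) < log_pi(y) - log_pi(x):   # alpha = min(1, pi(y)/pi(x))
                x, accepted = y, accepted + 1
        return accepted / n

    for h in (0.1, 1.0, 10.0):
        print(h, rwm_acceptance(h))    # acceptance rate rises as h falls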
Part (b)(i) [seen/unseen]

$S_n$ is the empirical estimate, at iteration $n$, of the covariance matrix of the target distribution, based on the simulated chain so far. [1, seen]
The covariance matrix in the multivariate Gaussian density performs the same role as $h$ in part (a): it scales the proposal density. The multivariate Gaussian proposal $f_N(y \mid x, h\Sigma_\pi)$ specifies a $d$-dimensional random-walk Metropolis procedure. As the dimension $d \to \infty$, the Markov chain produced by this procedure converges to a diffusion process with step size proportional to $1/d$. [1, unseen] The diffusion speed of this process is maximised by $h = 2.38^2/d$, which corresponds to a Metropolis acceptance rate of $\approx 0.234$. [1, unseen]
The use of $2.38^2/d$ to scale the proposal distribution is an effort to approximate this optimality for finite $d$. [1, seen] [Max 2 from 3 for the explanation.]
Part (b)(ii) [seen]
The mixture with a non-adaptive proposal density is chosen to ensure that the procedure does not get stuck at problematic values of $S_n$. [1] An example of a problematic $S_n$ is one that is singular. [1]
Part (c) [seen]

The SCAM procedure adaptively scales its univariate proposal for component $i \in \{1, \ldots, d\}$ based on the empirical variance of the corresponding component of the simulated chain so far. [1] In contrast, the AMWG procedure adaptively scales its univariate proposal for component $i$ based on the empirical acceptance rate so far, for component $i$. [1]
Part (d) [unseen]

$g^i_n$ is the sample variance of $(x^i_0, \ldots, x^i_{n-1})$:
$$g^i_n = \frac{1}{n-1} \sum_{j=0}^{n-1} \left( x^i_j - \bar{x}^i_n \right)^2 = \frac{1}{n-1} \sum_{j=0}^{n-1} \left( x^i_j \right)^2 - \frac{2}{n-1}\, \bar{x}^i_n \sum_{j=0}^{n-1} x^i_j + \frac{n}{n-1} \left( \bar{x}^i_n \right)^2$$
$$= \frac{1}{n-1} \sum_{j=0}^{n-1} \left( x^i_j \right)^2 - \frac{n}{n-1} \left( \bar{x}^i_n \right)^2. \quad (\star)\ [1]$$
Similarly,
$$g^i_{n-1} = \frac{1}{n-2} \sum_{j=0}^{n-2} \left( x^i_j \right)^2 - \frac{n-1}{n-2} \left( \bar{x}^i_{n-1} \right)^2$$
$$\implies \frac{n-2}{n-1}\, g^i_{n-1} + \frac{1}{n-1} \left( x^i_{n-1} \right)^2 = \frac{1}{n-1} \sum_{j=0}^{n-1} \left( x^i_j \right)^2 - \left( \bar{x}^i_{n-1} \right)^2$$
$$\implies \frac{1}{n-1} \sum_{j=0}^{n-1} \left( x^i_j \right)^2 = \frac{n-2}{n-1}\, g^i_{n-1} + \frac{1}{n-1} \left( x^i_{n-1} \right)^2 + \left( \bar{x}^i_{n-1} \right)^2. \quad [1]$$
Substituting this into $(\star)$,
$$g^i_n = \frac{n-2}{n-1}\, g^i_{n-1} + \frac{1}{n-1} \left( x^i_{n-1} \right)^2 + \left( \bar{x}^i_{n-1} \right)^2 - \frac{n}{n-1} \left( \bar{x}^i_n \right)^2. \quad [1]$$
Also,
$$\bar{x}^i_n = \frac{1}{n} \sum_{j=0}^{n-1} x^i_j = \frac{n-1}{n} \left( \frac{1}{n-1} \sum_{j=0}^{n-2} x^i_j \right) + \frac{x^i_{n-1}}{n} = \frac{n-1}{n}\, \bar{x}^i_{n-1} + \frac{x^i_{n-1}}{n}, \quad [1]$$
and substituting this into the above gives the required expression.
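A quick numerical verification of the recursion (a sketch assuming numpy):

    import numpy as np

    # Check the part (d) identity for one component i along a random trajectory.
    rng = np.random.default_rng(2)
    x = rng.standard_normal(200)                     # x^i_0, ..., x^i_{n-1}
    for n in range(3, len(x) + 1):
        g_n, g_nm1 = np.var(x[:n], ddof=1), np.var(x[:n - 1], ddof=1)
        xbar, last = x[:n - 1].mean(), x[n - 1]      # xbar^i_{n-1} and x^i_{n-1}
        rhs = ((n - 2) / (n - 1) * g_nm1 + xbar**2 + last**2 / (n - 1)
               - (n - 1) / n * (xbar + last / (n - 1))**2)
        assert abs(g_n - rhs) < 1e-10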
Part (e)(i) [seen]
Both the integrated autocorrelation time (IACT) and the average square jump distance provide a measure of Markov chain mixing. They measure how well the chain explores the state space, and provide an indication of the level of dependence present in the simulated chain. The IACT can also be used as a measure of convergence speed for the algorithm. [1]
Part (e)(ii) [seen]

For the integrated autocorrelation time, low values indicate good performance. [1] For the average square jump distance, high values indicate good performance. [1]
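For completeness, minimal estimators of both indicators (a sketch assuming numpy; the IACT estimator uses a crude fixed cutoff, whereas practical implementations use an adaptive window):

    import numpy as np

    def asjd(chain):
        # Average squared jump distance: high values indicate good mixing.
        return np.mean(np.diff(chain)**2)

    def iact(chain, max_lag=200):
        # Integrated autocorrelation time, 1 + 2 * sum of autocorrelations
        # (truncated at max_lag): low values indicate good mixing.
        x = chain - chain.mean()
        acov = np.correlate(x, x, mode="full")[len(x) - 1:] / np.arange(len(x), 0, -1)
        return 1.0 + 2.0 * np.sum(acov[1:max_lag] / acov[0])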