© 2020 Imperial College London
MATH97084 MATH97185
BSc, MSci and MSc EXAMINATIONS (MATHEMATICS)
May-June 2020
This paper is also taken for the relevant examination for the
Associateship of the Royal College of Science
Time Series
SUBMIT YOUR ANSWERS AS SEPARATE PDFs TO THE RELEVANT DROPBOXES ON
BLACKBOARD (ONE FOR EACH QUESTION) WITH COMPLETED COVERSHEETS WITH
YOUR CID NUMBER, QUESTION NUMBERS ANSWERED AND PAGE NUMBERS PER
QUESTION.
Date: 6th May 2020
Time: 09.00am - 11.30am (BST)
Time Allowed: 2 Hours 30 Minutes
Upload Time Allowed: 30 Minutes
This paper has 5 Questions.
Candidates should start their solutions to each question on a new sheet of paper.
Each sheet of paper should have your CID, Question Number and Page Number on the
top.
Only use 1 side of the paper.
Allow margins for marking.
Any required additional material(s) will be provided.
Credit will be given for all questions attempted.
Each question carries equal weight.
Note: Throughout this paper {ε_t} is a sequence of uncorrelated random variables (white noise) having zero mean and variance σ², unless stated otherwise. The term "stationary" will always be taken to mean second-order stationary. All processes are real-valued unless stated otherwise. The sample interval is unity unless stated otherwise. B denotes the backward shift operator.
1. (a) Consider the ARMA(1,1) model
            X_t = (1/2)X_{t−1} + ε_t + (1/8)ε_{t−1}.    (†)
    (i) Show {X_t} is both stationary and invertible. (2 marks)
    (ii) Express {X_t} in general linear process form. (3 marks)
    (iii) Show Var{X_t} = (73/48)σ². (4 marks)
(b) Consider a stationary process {X_t} that can be written as a general linear process,
            X_t = ∑_{k=0}^∞ ψ_k ε_{t−k} = Ψ(B)ε_t.
    We wish to construct the l-step ahead forecast of the form
            X_t(l) = ∑_{k=0}^∞ δ_k ε_{t−k}.
    (i) Show that the l-step prediction variance σ²(l) = E{(X_{t+l} − X_t(l))²} is minimized by setting δ_k = ψ_{k+l}, k ≥ 0. (4 marks)
    (ii) Show the l-step ahead forecast can be written in the form
            X_t(l) = Ψ^(l)(B)Ψ^{−1}(B)X_t,
        where Ψ^(l)(z) = ∑_{k=0}^∞ ψ_{k+l} z^k. (2 marks)
    (iii) For the ARMA(1,1) model given in (†) in part (a), express the 2-step ahead forecast X_t(2) in the form
            X_t(2) = ∑_{k=0}^∞ π_k X_{t−k}.
        (5 marks)
(Total: 20 marks)
2. (a) Let L{·} be a linear time invariant (LTI) filter with frequency response function G(f) defined as L{e^{i2πft}} = G(f)e^{i2πft}. You may take it as a fact that any LTI filter can be expressed in the form
            L{X_t} = ∑_{u=−∞}^∞ g_u X_{t−u} = Y_t.
    Furthermore, if the spectral representations of {X_t} and {Y_t} are
            X_t = ∫_{−1/2}^{1/2} e^{i2πft} dZ_X(f)   and   Y_t = ∫_{−1/2}^{1/2} e^{i2πft} dZ_Y(f),
    respectively, then dZ_Y(f) = G(f)dZ_X(f).
    (i) Show that G(f) and {g_u} form a Fourier transform pair. (2 marks)
    (ii) Show S_Y(f) = |G(f)|²S_X(f), if the spectral density functions S_X(f) and S_Y(f) exist. (2 marks)
    (iii) Consider the LTI filter Y_t = L{X_t} = X_{t−1} + X_t + X_{t+1}. Derive the spectral density function of the output {Y_t} when the input is a white noise process with variance σ². (3 marks)
    (iv) Show that the spectral density function S_X(f) for an AR(p) process
            X_t − φ_{1,p}X_{t−1} − ... − φ_{p,p}X_{t−p} = ε_t,
        is given by
            S_X(f) = σ² / |1 − φ_{1,p}e^{−i2πf} − ... − φ_{p,p}e^{−i2πfp}|².
        (4 marks)
(b) (i) Let Φ(B)X_t = ε_t be an AR(2) process where Φ(z) has roots z = 1/a and z = 1/b. Show
            S_X(f) = σ² / (|1 − ae^{−i2πf}|²|1 − be^{−i2πf}|²).
        (3 marks)
    (ii) Let Φ(B)X_t = ε_t be an AR(2) process where Φ(z) has complex conjugate roots and {X_t} has spectral density function
            S(f) = σ² / ([1 − cos(2π(0.125 − f)) + 0.25][1 − cos(2π(0.125 + f)) + 0.25]).
        Express {X_t} in the form X_t = φ_{1,2}X_{t−1} + φ_{2,2}X_{t−2} + ε_t, clearly stating the parameters φ_{1,2} and φ_{2,2}.
        HINT: Express the conjugate roots as (1/r)e^{i2πf_0} and (1/r)e^{−i2πf_0} and derive the spectral density function. (6 marks)
(Total: 20 marks)
3. (a) Let X_1, ..., X_N be a realisation from a stationary process {X_t}. The following is an estimator for the autocovariance sequence:
            ŝ_τ^{(p)} = (1/N) ∑_{t=1}^{N−|τ|} (X_t − X̄)(X_{t+|τ|} − X̄)   for all τ with |τ| ≤ N − 1,
    where X̄ = (1/N)∑_{t=1}^N X_t. When the mean is known, X̄ is replaced by µ.
    (i) When the mean of {X_t} is known, show ŝ_τ^{(p)} is a biased estimator of the autocovariance sequence for {X_t} when τ ≠ 0. Comment on the bias of the estimator as N → ∞. (3 marks)
    (ii) Let {X_t} be the MA(1) process X_t = ε_t − θε_{t−1}. For some fixed constant C > 0, show that to obtain |bias{ŝ_τ^{(p)}}| < C for all |τ| < N − 1, we require N > σ²|θ|/C. You may assume the mean of {X_t} is known to be zero. (5 marks)
(b) Let X_1, ..., X_N be a realisation from a stationary process {X_t} with a known mean of zero. The direct spectral estimator is defined as
            Ŝ^{(d)}(f) = |∑_{t=1}^N h_t X_t e^{−i2πft}|²   for all f with |f| ≤ 1/2,
    where {h_t} is a data taper of length N normalised such that ∑_{t=1}^N h_t² = 1.
    (i) Show
            Ŝ^{(d)}(f) = ∑_{τ=−(N−1)}^{N−1} ŝ_τ^{(d)} e^{−i2πfτ},
        where
            ŝ_τ^{(d)} = ∑_{t=1}^{N−|τ|} h_t X_t h_{t+|τ|} X_{t+|τ|}   for all τ with |τ| ≤ N − 1.
        (4 marks)
    (ii) Using (b)(i), show
            ∫_{−1/2}^{1/2} E{Ŝ^{(d)}(f)} df = s_0.
        (6 marks)
    (iii) Let Ŝ^{(p)}(f) denote the periodogram. Is ∫_{−1/2}^{1/2} E{Ŝ^{(p)}(f)} df less than, greater than, or equal to s_0? Justify your answer. (2 marks)
(Total: 20 marks)
4. (a) Let {X_t} be a stationary process whose autocovariance sequence is non-negative for all τ ∈ Z. Show that the spectral density function of {X_t} attains its maximum value at f = 0. (4 marks)
(b) Define what it means for a pair of processes {X_t} and {Y_t} to be jointly stationary.
(3 marks)
(c) Let {X_t} be a zero mean stationary process with autocovariance sequence {s_{X,τ}} and spectral density function S_X(f). Let {Y_t} be defined as Y_t = W_t X_t, where {W_t} is a sequence of independent and identically distributed Bernoulli(p) random variables and is independent of {X_t}.
    (i) Show {X_t} and {Y_t} are jointly stationary, deriving both the autocovariance sequence {s_{Y,τ}} and cross-covariance sequence {s_{XY,τ}} in terms of s_{X,τ} and p. (6 marks)
    (ii) Show γ²_{XY}(f), the coherence between processes {X_t} and {Y_t}, is given as
            γ²_{XY}(f) = 1 / (1 + (1 − p)s_{X,0} / (p S_X(f))).
        (4 marks)
    (iii) Let {X_t} be the MA(1) process
            X_t = ε_t + (1/2)ε_{t−1}.
        Show γ²_{XY}(f) attains its maximum value at f = 0. (3 marks)
(Total: 20 marks)
5. PRELIMINARY INFORMATION
    – If G_1, ..., G_N is a realisation from a stationary Gaussian zero mean process {G_t}, we define
            J(f) = ∑_{t=1}^N h_t G_t e^{−i2πft},
        where {h_t} is a data taper of length N normalised such that ∑_{t=1}^N h_t² = 1.
    – In this question, you may assume the following results:
            ∑_{t=1}^N cos²(2πf_j t) = ∑_{t=1}^N sin²(2πf_j t) = N/2,
            ∑_{t=1}^N cos(2πf_j t) sin(2πf_j t) = ∑_{t=1}^N cos(2πf_j t) sin(2πf_k t) = 0,
            ∑_{t=1}^N cos(2πf_j t) cos(2πf_k t) = ∑_{t=1}^N sin(2πf_j t) sin(2πf_k t) = 0,
        where f_j = j/N and f_k = k/N with j and k both integers such that j ≠ k and 1 ≤ j, k < N/2.
    – You may use the following version of Isserlis' Theorem. If Z_1, Z_2, Z_3 and Z_4 are four complex-valued random variables with zero means, then
            Cov{Z_1 Z_2, Z_3 Z_4} = Cov{Z_1, Z_3}Cov{Z_2, Z_4} + Cov{Z_1, Z_4}Cov{Z_2, Z_3}.
        Recall: for a pair of zero mean complex random variables S and T, Cov{S, T} = E{S*T}, where * denotes complex conjugation.
    – Fejér's kernel is defined as
            F(f) = |(1/√N) ∑_{t=1}^N e^{−i2πft}|² = sin²(Nπf) / (N sin²(πf)).
(a) Consider the case of {G_t} being Gaussian white noise with variance σ². Let h_t = 1/√N for all t = 1, ..., N, and consider the decomposition of J(f) into its real and imaginary parts, J(f) = A(f) + iB(f).
    (i) Show Var{A(f_k)} = Var{B(f_k)} = σ²/2 for f_k ≠ 0 or 1/2. (2 marks)
    (ii) Show
            Cov{A(f_j), A(f_k)} = 0 for all f_j ≠ f_k,
            Cov{B(f_j), B(f_k)} = 0 for all f_j ≠ f_k,
            Cov{A(f_j), B(f_k)} = 0 for all f_j and f_k.
        (3 marks)
    (iii) Recall that, if Y_1, Y_2, ..., Y_ν are independent zero mean, unit variance Gaussian random variables, then χ²_ν ≡ Y_1² + Y_2² + ... + Y_ν² has a chi-square distribution with ν degrees of freedom. For f_k ≠ 0 or 1/2, show the periodogram is distributed
            Ŝ_G^{(p)}(f_k) =_d (σ²/2)χ²_2,
        where =_d means equal in distribution. (4 marks)
(b) Consider now a general data taper {h_t}. We can write
            J(f) = ∫_{−1/2}^{1/2} H(f − u) dZ(u),
    where H(f) is the Fourier transform of {h_t} and {Z(·)} is the orthogonal increment process associated with a Gaussian zero mean stationary process {G_t}, with spectral density function S_G(·).
    (i) Show
            Cov{Ŝ_G^{(d)}(f), Ŝ_G^{(d)}(f + η)} = |∫_{−1/2}^{1/2} H*(f − u)H(f + η − u)S_G(u) du|² + |∫_{−1/2}^{1/2} H(f + u)H(f + η − u)S_G(u) du|².
        (7 marks)
    (ii) For η > 0, it can be shown that the correlation between Ŝ_G^{(d)}(f) and Ŝ_G^{(d)}(f + η) is given approximately by
            R(η) ≡ R(η, f)/R(0, f),   where R(η, f) = S_G²(f) |∑_{t=1}^N h_t² e^{−i2πηt}|².
        In the case of the rectangular taper h_t = 1/√N, express R(η) in terms of Fejér's kernel and hence determine the values of η for which R(η) = 0. (4 marks)
(Total: 20 marks)
BSc, MSci and MSc EXAMINATIONS (MATHEMATICS)
May – June 2020
MATH96053/MATH97084/MATH97185
Time Series Analysis [SOLUTIONS]
© 2020 Imperial College London
1. (a) (i) [sim. seen] Writing the process as Φ(B)X_t = Θ(B)ε_t, we have (1 − (1/2)B)X_t = (1 + (1/8)B)ε_t, i.e. Φ(z) = 1 − (1/2)z and Θ(z) = 1 + (1/8)z.
        To show {X_t} is stationary, we are required to show the roots of Φ(z) lie outside the unit circle. The only root of Φ(z) is z = 2, which lies outside the unit circle, hence the process is stationary. To show {X_t} is invertible, we are required to show the roots of Θ(z) lie outside the unit circle. The only root of Θ(z) is z = −8, which lies outside the unit circle, hence the process is invertible. [2 marks (A)]
    (ii) The general linear process form is X_t = G(B)ε_t where G(z) = Θ(z)/Φ(z) = (1 + (1/8)z)/(1 − (1/2)z). Expanding gives
            G(z) = (1 + (1/8)z)(1 + (1/2)z + (1/4)z² + (1/8)z³ + ...)
                 = 1 + (1/2 + 1/8)z + (1/4 + 1/16)z² + (1/8 + 1/32)z³ + ...
                 = 1 + (5/4) ∑_{k=1}^∞ (1/2^k) z^k.
        Therefore, the general linear process form is X_t = ε_t + (5/4) ∑_{k=1}^∞ (1/2^k) ε_{t−k}. [3 marks (A)]
    (iii) For a process {X_t} in general linear process form X_t = ∑_{k=0}^∞ g_k ε_{t−k}, we have Var{X_t} = σ² ∑_{k=0}^∞ g_k². Therefore
            Var{X_t} = σ² (1 + ∑_{k=1}^∞ (5/4)² (1/2^k)²)
                     = σ² (1 + (25/16) ∑_{k=1}^∞ (1/4)^k)
                     = σ² (1 + (25/16)(1/4) ∑_{k=0}^∞ (1/4)^k)
                     = σ² (1 + (25/64) · 1/(1 − 1/4))
                     = σ² (1 + (25/64)(4/3))
                     = σ² (1 + 25/48)
                     = (73/48)σ².
        [4 marks (A)]
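        Added numerical illustration (not part of the original solutions): a minimal sketch, assuming NumPy is available and σ² = 1, which truncates the ψ-weights of (†) and sums their squares to check the value 73/48.

            # Hypothetical check: Var{X_t} for (†) via truncated general-linear-process weights.
            import numpy as np

            sigma2 = 1.0
            K = 200                                    # truncation point, assumed large enough
            psi = np.empty(K)
            psi[0] = 1.0
            psi[1:] = (5/4) * 0.5 ** np.arange(1, K)   # psi_k = (5/4)(1/2)^k for k >= 1
            print(sigma2 * np.sum(psi**2), 73/48)      # both approximately 1.5208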
(b) (i) [seen] Using the GLP representation, we have X_{t+l} = ∑_{k=0}^∞ ψ_k ε_{t+l−k}. We want to minimize
            E{(X_{t+l} − X_t(l))²} = E{(∑_{k=0}^∞ ψ_k ε_{t+l−k} − ∑_{k=0}^∞ δ_k ε_{t−k})²}
                                   = E{(∑_{k=0}^{l−1} ψ_k ε_{t+l−k} + ∑_{k=0}^∞ [ψ_{k+l} − δ_k] ε_{t−k})²}
                                   = σ² {∑_{k=0}^{l−1} ψ_k² + ∑_{k=0}^∞ (ψ_{k+l} − δ_k)²}.
        The first term is independent of the choice of {δ_k} and the second term is clearly minimized by choosing δ_k = ψ_{k+l}, k = 0, 1, 2, .... [4 marks (A)]
    (ii) Part (i) means the l-step ahead forecast can be written X_t(l) = Ψ^(l)(B)ε_t, where Ψ^(l)(z) is as stated in the question. Given X_t = Ψ(B)ε_t, we have ε_t = Ψ^{−1}(B)X_t, giving X_t(l) = Ψ^(l)(B)Ψ^{−1}(B)X_t. [2 marks (A)]
    (iii) [sim. seen] Using (b)(ii), we have X_t(2) = Ψ^(2)(B)Ψ^{−1}(B)X_t, where Ψ(z) = (1 + (1/8)z)/(1 − (1/2)z) and
            Ψ^(2)(z) = (5/4)(1/4 + (1/8)z + (1/16)z² + ...) = (5/4)(1/4) ∑_{k=0}^∞ (1/2^k) z^k = (5/16) · 1/(1 − (1/2)z).
        Therefore,
            X_t(2) = Ψ^(2)(B)Ψ^{−1}(B)X_t
                   = (5/16) · 1/(1 − (1/2)B) · (1 − (1/2)B)/(1 + (1/8)B) X_t
                   = (5/16) · 1/(1 + (1/8)B) X_t
                   = ∑_{k=0}^∞ (5/16)(−1/8)^k X_{t−k}.
        [5 marks (C)]
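        Added cross-check (not in the original solutions): a sketch, assuming NumPy, that recovers the forecast weights π_k by truncated power-series division of Ψ^(2)(z) by Ψ(z); the result should match the closed form (5/16)(−1/8)^k.

            # Hypothetical check: pi_k from Psi^(2)(z)/Psi(z) vs (5/16)(-1/8)^k.
            import numpy as np

            K = 30
            psi = np.empty(K)
            psi[0] = 1.0
            psi[1:] = (5/4) * 0.5 ** np.arange(1, K)           # GLP weights of (†)
            psi2 = psi[2:]                                     # coefficients of Psi^(2)(z)
            pi = np.zeros(K - 2)
            for k in range(K - 2):                             # series division, valid since psi_0 = 1
                pi[k] = psi2[k] - np.sum(pi[:k] * psi[1:k+1][::-1])
            print(np.allclose(pi[:10], (5/16) * (-1/8) ** np.arange(10)))   # expected: True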
2. (a) (i) [seen] Using the given result, it follows that
            L{e^{i2πft}} = ∑_{u=−∞}^∞ g_u e^{i2πf(t−u)} = e^{i2πft} ∑_{u=−∞}^∞ g_u e^{−i2πfu}.
        Therefore, given L{e^{i2πft}} = e^{i2πft}G(f), it follows that G(f) = ∑_{u=−∞}^∞ g_u e^{−i2πfu}, and hence G(f) and {g_u} are a Fourier transform pair. [2 marks (A)]
    (ii) Taking the given identity dZ_Y(f) = G(f)dZ_X(f), we have |dZ_Y(f)|² = |G(f)dZ_X(f)|² = |G(f)|²|dZ_X(f)|². Taking expectations gives
            E{|dZ_Y(f)|²} = |G(f)|²E{|dZ_X(f)|²}  ⟺  dS_Y^{(I)}(f) = |G(f)|² dS_X^{(I)}(f)  ⟺  S_Y(f)df = |G(f)|²S_X(f)df  ⟺  S_Y(f) = |G(f)|²S_X(f),
        should the spectral density functions exist. [2 marks (A)]
    (iii) [sim. seen] L{e^{i2πft}} = e^{i2πf(t−1)} + e^{i2πft} + e^{i2πf(t+1)} = e^{i2πft}(e^{−i2πf} + 1 + e^{i2πf}) = e^{i2πft}(1 + 2cos(2πf)). Therefore G(f) = 1 + 2cos(2πf), and S_Y(f) = σ²(1 + 2cos(2πf))². The Fourier transform method for computing G(f) can also be used here. [3 marks (A)]
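        Added numerical illustration of (iii), not part of the original solutions: assuming NumPy, the Fourier transform of the filter coefficients g = (1, 1, 1) on u = −1, 0, 1 should reproduce G(f) = 1 + 2cos(2πf).

            # Hypothetical check of the frequency response of Y_t = X_{t-1} + X_t + X_{t+1}.
            import numpy as np

            f = np.linspace(-0.5, 0.5, 101)
            u = np.array([-1, 0, 1])
            g = np.array([1.0, 1.0, 1.0])
            G = np.sum(g[:, None] * np.exp(-2j * np.pi * np.outer(u, f)), axis=0)
            print(np.allclose(G, 1 + 2 * np.cos(2 * np.pi * f)))   # expected: True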
    (iv) [seen] The autoregressive process can be written as ε_t = L{X_t} where L{X_t} = X_t − φ_{1,p}X_{t−1} − ... − φ_{p,p}X_{t−p}. To compute the frequency response function, consider
            L{e^{i2πft}} = e^{i2πft} − φ_{1,p}e^{i2πf(t−1)} − ... − φ_{p,p}e^{i2πf(t−p)} = e^{i2πft}(1 − φ_{1,p}e^{−i2πf} − ... − φ_{p,p}e^{−i2πfp}).
        Therefore G(f) = 1 − φ_{1,p}e^{−i2πf} − ... − φ_{p,p}e^{−i2πfp}. Using (a)(ii), it follows that S_ε(f) = |G(f)|²S_X(f), and hence
            S_X(f) = S_ε(f)/|G(f)|² = σ² / |1 − φ_{1,p}e^{−i2πf} − ... − φ_{p,p}e^{−i2πfp}|².
        [4 marks (A)]
(b) (i) The spectrum can be written in terms of the complex roots by substituting z = e^{−i2πf} into the characteristic polynomial:
            S_X(f) = σ² / |1 − φ_{1,2}e^{−i2πf} − φ_{2,2}e^{−i4πf}|²
                   = σ² / |1 − φ_{1,2}z − φ_{2,2}z²|²             (z = e^{−i2πf})
                   = σ² / |(1 − az)(1 − bz)|²
                   = σ² / (|1 − az|²|1 − bz|²)
                   = σ² / (|1 − ae^{−i2πf}|²|1 − be^{−i2πf}|²).
        [3 marks (B)]
    (ii) [sim. seen] Using (i) and the hint, we have a = re^{−i2πf_0} and b = re^{i2πf_0}, so the spectral density function is
            S(f) = σ² / ([1 − 2r cos(2π(f_0 − f)) + r²][1 − 2r cos(2π(f_0 + f)) + r²]).
        Comparing with the given spectral density function, we identify r = 0.5 and f_0 = 0.125.
        To obtain the parameters φ_{1,2} and φ_{2,2} we recognise
            Φ(z) = (1 − re^{i2πf_0}z)(1 − re^{−i2πf_0}z) = 1 − r(e^{i2πf_0} + e^{−i2πf_0})z + r²z² = 1 − 2r cos(2πf_0)z + r²z².
        Therefore, φ_{1,2} = 2r cos(2πf_0) and φ_{2,2} = −r². Substituting the identified values of r and f_0, we get φ_{1,2} = cos(π/4) = 1/√2 and φ_{2,2} = −1/4. Therefore X_t = (1/√2)X_{t−1} − (1/4)X_{t−2} + ε_t. [6 marks (C)]
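        Added cross-check of the identified parameters (an illustration, not in the original solutions), assuming NumPy and σ² = 1: the AR(2) spectral density with φ_{1,2} = 1/√2 and φ_{2,2} = −1/4 should coincide with the stated spectral density.

            # Hypothetical check: AR(2) sdf with the identified parameters vs the given sdf.
            import numpy as np

            f = np.linspace(-0.5, 0.5, 201)
            phi1, phi2 = 1/np.sqrt(2), -0.25
            S_ar = 1.0 / np.abs(1 - phi1*np.exp(-2j*np.pi*f) - phi2*np.exp(-4j*np.pi*f))**2
            S_given = 1.0 / ((1 - np.cos(2*np.pi*(0.125 - f)) + 0.25) *
                             (1 - np.cos(2*np.pi*(0.125 + f)) + 0.25))
            print(np.allclose(S_ar, S_given))   # expected: True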
3. (a) (i) [seen] When the mean is known, the estimator becomes
            ŝ_τ^{(p)} = (1/N) ∑_{t=1}^{N−|τ|} (X_t − µ)(X_{t+|τ|} − µ),   |τ| ≤ N − 1.
        Taking expectations of both sides gives
            E{ŝ_τ^{(p)}} = (1/N) ∑_{t=1}^{N−|τ|} E{(X_t − µ)(X_{t+|τ|} − µ)} = (1/N) ∑_{t=1}^{N−|τ|} s_τ = ((N − |τ|)/N) s_τ = (1 − |τ|/N) s_τ,
        and therefore the estimator is biased (τ ≠ 0). The bias tends to zero as N → ∞. [3 marks (A)]
    (ii) [unseen] With bias{ŝ_τ^{(p)}} = E{ŝ_τ^{(p)}} − s_τ, we have bias{ŝ_τ^{(p)}} = −(|τ|/N)s_τ. Now, the MA(1) process has acvs
            s_τ = σ²(1 + θ²) for τ = 0,   s_τ = −θσ² for |τ| = 1,   s_τ = 0 for |τ| ≥ 2.
        Therefore bias{ŝ_0^{(p)}} = 0, bias{ŝ_1^{(p)}} = bias{ŝ_{−1}^{(p)}} = (1/N)θσ², and bias{ŝ_τ^{(p)}} = 0 for all |τ| > 1.
        Therefore, for |bias{ŝ_τ^{(p)}}| < C, we require |(1/N)θσ²| < C, which implies N > |θ|σ²/C. [5 marks (B)]
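        Added Monte Carlo illustration of the lag-1 bias (not part of the original solutions), assuming NumPy and the hypothetical values θ = 0.6, σ² = 1, N = 50: the empirical bias of ŝ_1^{(p)} should be close to θσ²/N = 0.012.

            # Hypothetical check of bias{ŝ_1^{(p)}} = θσ²/N for the MA(1) X_t = ε_t − θε_{t−1}.
            import numpy as np

            rng = np.random.default_rng(0)
            N, theta, sigma2, reps = 50, 0.6, 1.0, 100_000
            e = rng.normal(scale=np.sqrt(sigma2), size=(reps, N + 1))
            x = e[:, 1:] - theta * e[:, :-1]                   # MA(1) realisations, known zero mean
            s1_hat = np.sum(x[:, :-1] * x[:, 1:], axis=1) / N  # biased lag-1 estimator
            print(s1_hat.mean() - (-theta * sigma2), theta * sigma2 / N)   # both about 0.012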
(b) (i) [sim. seen] Substituting in the given form, we get
            Ŝ^{(d)}(f) = ∑_{τ=−(N−1)}^{N−1} ŝ_τ^{(d)} e^{−i2πfτ} = ∑_{τ=−(N−1)}^{N−1} ∑_{t=1}^{N−|τ|} h_t X_t h_{t+|τ|} X_{t+|τ|} e^{−i2πfτ}
                       = ∑_{j=1}^N ∑_{k=1}^N h_j X_j h_k X_k e^{−i2πf(k−j)}
                       = |∑_{t=1}^N h_t X_t e^{−i2πft}|²,
        where the summation interchange has occurred by swapping diagonal sums for row sums. [4 marks (B)]
    (ii) [unseen] Using the form
            Ŝ^{(d)}(f) = ∑_{τ=−(N−1)}^{N−1} ∑_{t=1}^{N−|τ|} h_t X_t h_{t+|τ|} X_{t+|τ|} e^{−i2πfτ}
        and taking expectations gives
            E{Ŝ^{(d)}(f)} = ∑_{τ=−(N−1)}^{N−1} ∑_{t=1}^{N−|τ|} h_t h_{t+|τ|} E{X_t X_{t+|τ|}} e^{−i2πfτ} = ∑_{τ=−(N−1)}^{N−1} ∑_{t=1}^{N−|τ|} h_t h_{t+|τ|} s_τ e^{−i2πfτ}.
        Therefore
            ∫_{−1/2}^{1/2} E{Ŝ^{(d)}(f)} df = ∑_{τ=−(N−1)}^{N−1} ∑_{t=1}^{N−|τ|} h_t h_{t+|τ|} s_τ ∫_{−1/2}^{1/2} e^{−i2πfτ} df,
        and with
            ∫_{−1/2}^{1/2} e^{−i2πfτ} df = 1 if τ = 0, and 0 if τ ≠ 0,
        we obtain
            ∫_{−1/2}^{1/2} E{Ŝ^{(d)}(f)} df = s_0 ∑_{t=1}^N h_t² = s_0.
        [6 marks (D)]
    (iii) [seen] The periodogram can be considered a direct spectral estimator with h_t = 1/√N for all t. Therefore, the result also holds for the periodogram, i.e., the integral is equal to s_0. [2 marks (B)]
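        Added illustration (not in the original solutions): the identity behind (b)(ii) in fact holds realisation by realisation, since ∫ Ŝ^{(d)}(f) df = ŝ_0^{(d)} = ∑_t h_t²X_t², whose expectation is s_0 when ∑_t h_t² = 1. The sketch below, assuming NumPy, checks this numerically for the rectangular taper, i.e. the periodogram.

            # Hypothetical check: integrating Ŝ^{(d)}(f) over |f| ≤ 1/2 returns ∑_t (h_t X_t)².
            import numpy as np

            rng = np.random.default_rng(1)
            N = 64
            x = rng.normal(size=N)                       # any realisation will do for the identity
            h = np.full(N, 1/np.sqrt(N))                 # rectangular taper, ∑ h_t² = 1
            t = np.arange(1, N + 1)
            f = np.arange(-0.5, 0.5, 1/4096)             # uniform grid over one full period
            S_d = np.abs(np.exp(-2j*np.pi*np.outer(f, t)) @ (h*x))**2
            print(S_d.mean(), np.sum((h*x)**2))          # the two values should agree closely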
4. (a) [seen] First note
            S(0) = ∑_{τ=−∞}^∞ s_τ e^{−i2π·0·τ} = ∑_{τ=−∞}^∞ s_τ.
        Now, S(f) is always non-negative, therefore
            S(f) = |S(f)| = |∑_{τ=−∞}^∞ s_τ e^{−i2πfτ}| ≤ ∑_{τ=−∞}^∞ |s_τ e^{−i2πfτ}| = ∑_{τ=−∞}^∞ |s_τ|.
        If s_τ is non-negative for all τ, this gives
            S(f) ≤ ∑_{τ=−∞}^∞ s_τ = S(0).
        [4 marks (B)]
    (b) Processes {X_t} and {Y_t} are said to be jointly stationary if they are both individually stationary and the cross-covariance cov{X_t, Y_{t+τ}} depends only on τ. [3 marks (A)]
    (c) (i) [sim. seen] To show joint stationarity, we must show {X_t} and {Y_t} are individually stationary, and that cov{X_t, Y_{t+τ}} depends only on τ. The question states {X_t} is stationary. To show stationarity of {Y_t}, we first need to show it has a constant mean. By the independence of {X_t} and {W_t}, it follows that
            E{Y_t} = E{W_t X_t} = E{W_t}E{X_t} = p · 0 = 0.
        Next we need to show cov{Y_t, Y_{t+τ}} depends only on τ:
            cov{Y_t, Y_{t+τ}} = E{Y_t Y_{t+τ}} = E{W_t X_t W_{t+τ} X_{t+τ}} = E{W_t W_{t+τ}}E{X_t X_{t+τ}} = E{W_t W_{t+τ}} s_{X,τ},
        again by the independence of {X_t} and {W_t}.
        When τ = 0, we have E{W_t W_{t+τ}} = E{W_t²} = p. When τ ≠ 0, we have E{W_t W_{t+τ}} = E{W_t}E{W_{t+τ}} = p². Therefore
            s_{Y,τ} = p s_{X,0} for τ = 0,   and   s_{Y,τ} = p² s_{X,τ} for τ ≠ 0,
        which depends only on τ, hence {Y_t} is stationary.
        Finally, consider the cross-covariance
            cov{X_t, Y_{t+τ}} = E{X_t Y_{t+τ}} = E{X_t W_{t+τ} X_{t+τ}} = E{W_{t+τ}}E{X_t X_{t+τ}} = p s_{X,τ},
        which depends only on τ, and we have s_{XY,τ} = p s_{X,τ}. [6 marks (D)]
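        Added simulation check of this second-order structure (an illustration, not part of the original solutions; assuming NumPy, with p = 0.3 and the MA(1) from part (iii) as hypothetical choices): the sample moments should be close to p·s_{X,0}, p²·s_{X,1} and p·s_{X,1}.

            # Hypothetical check of s_{Y,0} = p·s_{X,0}, s_{Y,1} = p²·s_{X,1}, s_{XY,1} = p·s_{X,1}
            # using X_t = ε_t + 0.5ε_{t−1}, for which s_{X,0} = 1.25 and s_{X,1} = 0.5.
            import numpy as np

            rng = np.random.default_rng(2)
            n, p = 2_000_000, 0.3
            e = rng.normal(size=n + 1)
            x = e[1:] + 0.5 * e[:-1]
            y = rng.binomial(1, p, size=n) * x

            def acv(a, b, lag):                       # sample (cross-)covariance, known zero means
                return np.mean(a[:n - lag] * b[lag:])

            print(acv(y, y, 0), p * 1.25)             # approximately 0.375
            print(acv(y, y, 1), p**2 * 0.5)           # approximately 0.045
            print(acv(x, y, 1), p * 0.5)              # approximately 0.15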
    (ii) The cross-spectrum S_{XY}(f) is given as
            S_{XY}(f) = ∑_{τ=−∞}^∞ s_{XY,τ} e^{−i2πfτ} = p ∑_{τ=−∞}^∞ s_{X,τ} e^{−i2πfτ} = p S_X(f).
        The spectral density function of {Y_t} is given as
            S_Y(f) = ∑_{τ=−∞}^∞ s_{Y,τ} e^{−i2πfτ} = p² ∑_{τ=−∞}^∞ s_{X,τ} e^{−i2πfτ} − p² s_{X,0} + p s_{X,0} = p² S_X(f) + p(1 − p) s_{X,0}.
        Therefore
            γ²_{XY}(f) = |S_{XY}(f)|² / (S_X(f) S_Y(f)) = p² S_X²(f) / (S_X(f)(p² S_X(f) + p(1 − p)s_{X,0})) = 1 / (1 + (1 − p)s_{X,0}/(p S_X(f))).
        [4 marks (D)]
    (iii) The MA(1) process has a non-negative autocovariance sequence (s_{X,0} = 5/4, s_{X,1} = s_{X,−1} = 1/2, s_{X,τ} = 0 for all |τ| > 1). Therefore, from (a), S_X(f) attains its maximum at f = 0, which means the denominator in the above expression for γ²_{XY}(f) attains its minimum at f = 0, and hence γ²_{XY}(f) attains its maximum at f = 0. [3 marks (B)]
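        Added numerical illustration (not in the original solutions; assumes NumPy, σ² = 1 and a hypothetical p = 0.3): evaluating the coherence formula on a frequency grid for this MA(1) confirms the maximiser is f = 0.

            # Hypothetical check: γ²_{XY}(f) for X_t = ε_t + 0.5ε_{t−1} is maximised at f = 0.
            import numpy as np

            f = np.linspace(-0.5, 0.5, 2001)
            p, s0 = 0.3, 1.25                               # p is an arbitrary illustrative value
            SX = np.abs(1 + 0.5 * np.exp(-2j*np.pi*f))**2   # MA(1) sdf with σ² = 1
            gamma2 = 1.0 / (1.0 + (1 - p) * s0 / (p * SX))
            print(f[np.argmax(gamma2)])                     # expected: 0.0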
5. (a) (i) [seen] Representing J(f) = A(f) + iB(f), we have A(f) = ∑_{t=1}^N h_t G_t cos(2πft) and B(f) = ∑_{t=1}^N h_t G_t sin(2πft). It is immediate that E{A(f)} = E{B(f)} = 0, and therefore
            Var{A(f_k)} = E{A²(f_k)} = ∑_{t=1}^N ∑_{t'=1}^N h_t h_{t'} E{G_t G_{t'}} cos(2πf_k t) cos(2πf_k t').
        With {G_t} a white noise process and h_t = 1/√N the rectangular taper, it follows that
            Var{A(f_k)} = (σ²/N) ∑_{t=1}^N cos²(2πf_k t) = (σ²/N)(N/2) = σ²/2,
        using the given identities. The result for Var{B(f_k)} follows in an identical way. [2 marks]
    (ii) As above,
            cov{A(f_j), A(f_k)} = E{A(f_j)A(f_k)} = ∑_{t=1}^N ∑_{t'=1}^N h_t h_{t'} E{G_t G_{t'}} cos(2πf_j t) cos(2πf_k t')
                                = (σ²/N) ∑_{t=1}^N cos(2πf_j t) cos(2πf_k t) = 0   for all f_j ≠ f_k,
            cov{B(f_j), B(f_k)} = E{B(f_j)B(f_k)} = ∑_{t=1}^N ∑_{t'=1}^N h_t h_{t'} E{G_t G_{t'}} sin(2πf_j t) sin(2πf_k t')
                                = (σ²/N) ∑_{t=1}^N sin(2πf_j t) sin(2πf_k t) = 0   for all f_j ≠ f_k,
            cov{A(f_j), B(f_k)} = E{A(f_j)B(f_k)} = ∑_{t=1}^N ∑_{t'=1}^N h_t h_{t'} E{G_t G_{t'}} cos(2πf_j t) sin(2πf_k t')
                                = (σ²/N) ∑_{t=1}^N cos(2πf_j t) sin(2πf_k t) = 0   for all f_j and f_k.
        [3 marks]
    (iii) We can write Ŝ^{(p)}(f_k) = |J(f_k)|² = A²(f_k) + B²(f_k). Both √(2/σ²)A(f_k) and √(2/σ²)B(f_k) are unit variance, zero mean Gaussian random variables, and furthermore are independent by part (ii) (uncorrelated Gaussian random variables imply independence). Therefore (2/σ²)(A²(f_k) + B²(f_k)) = (2/σ²)Ŝ^{(p)}(f_k) =_d χ²_2, which gives Ŝ^{(p)}(f_k) =_d (σ²/2)χ²_2. [4 marks]
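        Added simulation illustration of this distributional result (not part of the original solutions; assumes NumPy, with N = 128, σ = 2 and k = 10 as arbitrary choices): (σ²/2)χ²_2 has mean σ² and variance σ⁴, so the simulated periodogram ordinates should have mean about 4 and variance about 16.

            # Hypothetical check: periodogram of Gaussian white noise at a Fourier frequency.
            import numpy as np

            rng = np.random.default_rng(3)
            N, sigma, reps, k = 128, 2.0, 50_000, 10
            t = np.arange(1, N + 1)
            G = rng.normal(scale=sigma, size=(reps, N))
            J = (G / np.sqrt(N)) @ np.exp(-2j*np.pi*(k/N)*t)   # J(f_k), rectangular taper
            Sp = np.abs(J)**2
            print(Sp.mean(), Sp.var())                         # about σ² = 4 and σ⁴ = 16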
(b) (i) Let f′ = f + η. From Isserlis' theorem, we have
            cov{Ŝ_G^{(d)}(f), Ŝ_G^{(d)}(f′)} = cov{J(f)J*(f), J(f′)J*(f′)}
                = cov{J(f), J(f′)}cov{J*(f), J*(f′)} + cov{J(f), J*(f′)}cov{J*(f), J(f′)}
                = E{J*(f)J(f′)}E{J(f)J*(f′)} + E{J*(f)J*(f′)}E{J(f)J(f′)}
                = |E{J*(f)J(f′)}|² + |E{J(f)J(f′)}|².
        Using the stated identity, it follows that
            E{J*(f)J(f′)} = ∫_{−1/2}^{1/2} ∫_{−1/2}^{1/2} H*(f − u)H(f′ − u′)E{dZ*(u)dZ(u′)} = ∫_{−1/2}^{1/2} H*(f − u)H(f′ − u)S_G(u) du.
        Since dZ(−u) = dZ*(u), it is also true that
            J(f) = −∫_{−1/2}^{1/2} H(f + u) dZ*(u),
        and hence
            E{J(f)J(f′)} = −∫_{−1/2}^{1/2} H(f + u)H(f′ − u)S_G(u) du.
        It follows that
            cov{Ŝ_G^{(d)}(f), Ŝ_G^{(d)}(f + η)} = |∫_{−1/2}^{1/2} H*(f − u)H(f + η − u)S_G(u) du|² + |∫_{−1/2}^{1/2} H(f + u)H(f + η − u)S_G(u) du|².
        [7 marks]
    (ii) [unseen] Using the given result, for η > 0,
            R(η) = R(η, f)/R(0, f) = S_G²(f)|∑_{t=1}^N h_t² e^{−i2πηt}|² / (S_G²(f)|∑_{t=1}^N h_t²|²) = |∑_{t=1}^N (1/N)e^{−i2πηt}|² = (1/N)F(η).
        Therefore R(η) = 0 when F(η) = 0, which occurs at η = k/N for k = ±1, ±2, ±3, .... [4 marks]
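        Added illustration (not in the original solutions; assumes NumPy, with N = 32 as an arbitrary choice): evaluating R(η) for the rectangular taper shows the expected zeros at the Fourier spacings η = k/N and non-zero values in between.

            # Hypothetical check: R(η) = F(η)/N vanishes at η = k/N for the rectangular taper.
            import numpy as np

            N = 32
            t = np.arange(1, N + 1)
            def R(eta):
                return np.abs(np.sum((1/N) * np.exp(-2j*np.pi*eta*t)))**2

            print(R(0.0))                                    # 1.0
            print([round(R(k/N), 12) for k in (1, 2, 3)])    # [0.0, 0.0, 0.0]
            print(round(R(1.5/N), 3))                        # non-zero between the zeros (about 0.045)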
EXAMINER FEEDBACK: COMMENTS FOR STUDENTS (MATH97084 / MATH97185)

Question 1: Too many minor errors; more care was needed in answering this question.
Question 2: This question was answered quite well on the whole. (a)(i) did not cause many problems at all, and neither did (ii) or (iii), although a mark was lost on part (ii) by those who were careless and wrote E{|dZ(f)|²} = S(f), when of course it should be S(f)df. (iv) was bookwork, so in the open-book format it was almost universally correct. (b) was the more difficult part here and caused some problems. A common mistake was either to take a and b as the roots instead of 1/a and 1/b, or to expand (z − 1/a)(z − 1/b) and then not notice that this expansion is not in characteristic polynomial form.
Question 3: 3(a), on the whole, was answered well. Part (a)(ii) caused some difficulties: a common mistake was to try to bound the expected value rather than the bias. The typo in (b) was unfortunate, albeit minor, and I am sorry about this; however, by the looks of things the vast majority of people either spotted it immediately or did not notice it at all and attained a correct solution. Where it was apparent a student had suffered because of it, marks were still awarded.
Question 4: Question 4 appears to have been the most challenging question on the paper. There were at least three different ways to approach part (a), and it is taken directly from one of the problem sheets, so one of the valid solutions was there for those of you who knew where to look. To get full marks you needed a really rigorous solution, and far too many answers lacked this rigour. A very common error was to write things like "e^{i2πf} ≤ 1" or "e^{i2πf} attains its maximum at f = 0"; both statements are nonsensical, since you cannot bound a complex number by a real number, only its absolute value. (c) was tricky. A common mistake was not to treat the τ = 0 case separately from the τ ≠ 0 case for the acvs of Y_t, which you must do. It was also common to use Var{W_t} instead of E{W_t²}, which are different because W_t is not zero mean; not getting this right obviously caused problems with (c)(ii). Part (iii) produced some very elaborate answers, with many not noticing that 4(a) could simply be applied to this process.
Question 5: In the end, Q5 did not prove that challenging, as it contained a lot of bookwork, making it rather trivial in the open-book format. This meant that I was quite strict with the marking, so answers had to be very clear and rigorous. (b)(ii), the unseen part, naturally caused the most issues, but these were mainly minor.