STAT5221: Probability and Stochastic Processes
Tutorial 1
1. (a). Consider two urns A and B containing a total of N balls. An experiment is performed in which a ball is selected at random (all selections equally likely) at time t (t = 1, 2, · · ·) from among the totality of N balls. Then an urn is selected at random (A is chosen with probability p and B is chosen with probability q) and the ball previously drawn is placed in this urn. The state of the system at each trial is represented by the number of balls in A. Determine the transition matrix for this Markov chain. Determine the equivalence classes.
(b). Now assume that at time t + 1 a ball and an urn are chosen with probability depending on the contents of the urn (i.e., if there are k balls in A, a ball is chosen from A with probability k/N or from B with probability (N − k)/N. Urn A is chosen with probability k/N or urn B is chosen with probability (N − k)/N.) Determine the transition matrix of the Markov chain with states represented by the contents of A. Determine the equivalence classes.
Solution (a). Let Xi, i = 0, 1, · · · denote the number of balls in A after the i-th experiment. Then for i = 0, 1, · · · , N,

  Pi,i+1 = P(X1 = i + 1 | X0 = i) = P(a ball in B is drawn, A is selected | X0 = i) = ((N − i)/N) p,

  Pi,i−1 = P(X1 = i − 1 | X0 = i) = P(a ball in A is drawn, B is selected | X0 = i) = (i/N) q,

  Pi,i = 1 − P(X1 = i + 1 | X0 = i) − P(X1 = i − 1 | X0 = i) = 1 − ((N − i)/N) p − (i/N) q.

The other transition probabilities are zero. There is one equivalence class {0, 1, · · · , N} since all states communicate.
(b).

  Pi,i+1 = P(X1 = i + 1 | X0 = i) = P(a ball in B is drawn, A is selected | X0 = i) = (N − i)i/N²,

  Pi,i−1 = P(X1 = i − 1 | X0 = i) = P(a ball in A is drawn, B is selected | X0 = i) = i(N − i)/N²,

  Pi,i = 1 − P(X1 = i + 1 | X0 = i) − P(X1 = i − 1 | X0 = i) = i²/N² + (N − i)²/N².

The other transition probabilities are zero. The equivalence classes are {0}, {1, 2, · · · , N − 1}, {N}: states 0 and N are now absorbing, while the states in between all communicate.
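As a sanity check on both answers, here is a minimal Python sketch (using NumPy; the values N = 5 and p = 0.3 are arbitrary test choices, not part of the problem) that builds each transition matrix from the formulas above and confirms that every row sums to 1 and that 0 and N are absorbing in model (b).

```python
import numpy as np

def urn_chain_a(N, p):
    """Part (a): ball chosen uniformly; urn A chosen w.p. p, urn B w.p. q = 1 - p."""
    q = 1.0 - p
    P = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        if i < N:
            P[i, i + 1] = (N - i) / N * p   # ball drawn from B, placed in A
        if i > 0:
            P[i, i - 1] = i / N * q         # ball drawn from A, placed in B
        P[i, i] = 1.0 - P[i].sum()          # remaining probability: state unchanged
    return P

def urn_chain_b(N):
    """Part (b): ball and urn both chosen according to the urn contents."""
    P = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        if i < N:
            P[i, i + 1] = (N - i) * i / N**2
        if i > 0:
            P[i, i - 1] = i * (N - i) / N**2
        P[i, i] = (i**2 + (N - i)**2) / N**2
    return P

Pa, Pb = urn_chain_a(5, 0.3), urn_chain_b(5)
assert np.allclose(Pa.sum(axis=1), 1) and np.allclose(Pb.sum(axis=1), 1)
assert Pb[0, 0] == 1 and Pb[5, 5] == 1      # {0} and {N} are absorbing in (b)
```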
2. Every stochastic n × n matrix corresponds to a Markov chain for which it is the one-step transition matrix. (By "stochastic matrix" we mean P = (Pij) with 0 ≤ Pij ≤ 1 and Σj Pij = 1 for each i.) However, not every stochastic n × n matrix is the two-step transition matrix of a Markov chain. In particular, show that a 2 × 2 stochastic matrix is the two-step transition matrix of a Markov chain if and only if the sum of its principal diagonal terms is greater than or equal to 1.
Proof. Suppose Q = (Qij) = P². (We label the two states "1" and "2".) Then

  Q11 + Q22 = P11² + 2(1 − P11)(1 − P22) + P22² = 1 + [(P11 + P22) − 1]² ≥ 1.

Conversely, given Q with Q11 + Q22 ≥ 1, set a = (Q11 + Q22 − 1)^{1/2}. From Q = P² we obtain P11 + P22 = 1 ± a and P11² − P22² = Q11 − Q22. Solving for P11 and P22 gives

  P11 = (Q11 + a)/(1 + a),  P22 = (Q22 + a)/(1 + a)

or

  P11 = (Q11 − a)/(1 − a),  P22 = (Q22 − a)/(1 − a).

Note that a is real precisely when Q11 + Q22 ≥ 1, and the first solution always yields valid probabilities: since 0 ≤ Qii ≤ 1, we have 0 ≤ Qii + a ≤ 1 + a, so 0 ≤ P11, P22 ≤ 1.
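The converse construction can be exercised numerically. The sketch below assumes only the formulas above: it squares random 2 × 2 stochastic matrices to produce admissible matrices Q, recovers a root from the "+" branch, and checks that the root is stochastic and squares back to Q.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_step_root(Q):
    """Given 2x2 stochastic Q with Q[0,0] + Q[1,1] >= 1, return stochastic P with P @ P = Q."""
    a = np.sqrt(Q[0, 0] + Q[1, 1] - 1.0)
    p11 = (Q[0, 0] + a) / (1.0 + a)      # the '+' branch always lies in [0, 1]
    p22 = (Q[1, 1] + a) / (1.0 + a)
    return np.array([[p11, 1 - p11], [1 - p22, p22]])

for _ in range(100):
    # Any stochastic P squared gives an admissible Q; recover a root and check it.
    x, y = rng.uniform(0, 1, 2)
    P = np.array([[x, 1 - x], [1 - y, y]])
    Q = P @ P
    assert Q[0, 0] + Q[1, 1] >= 1.0 - 1e-12          # the trace condition
    R = two_step_root(Q)
    assert np.allclose(R @ R, Q)                     # R is a genuine square root
    assert (R >= -1e-12).all() and np.allclose(R.sum(axis=1), 1)
```

Note that R need not equal the P that generated Q (the two branches correspond to the two possible roots), but R² = Q always holds for the "+" branch.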
3. Consider a sequence of Bernoulli trials X1, X2, · · ·, where Xn = 1 or 0. Assume that for some α > 0,

  P(Xn = 1 | X1, X2, · · · , Xn−1) ≥ α,  n = 1, 2, · · · .

Prove that
(a) P(Xn = 1 for some n) = 1,
(b) P(Xn = 1 infinitely often) = 1.
Proof. (a). Let pn = P(X1 = · · · = Xn = 0). Then by induction

  pn = pn−1 P(Xn = 0 | X1 = · · · = Xn−1 = 0) ≤ (1 − α)pn−1 ≤ (1 − α)^n → 0 as n → ∞.

(For n = 1 there is nothing to condition on, and the hypothesis gives P(X1 = 0) ≤ 1 − α directly.) Hence P(X1 = X2 = · · · = 0) = 0, which is (a).

(b). Let Cn = {Xn = 1, Xn+k = 0, k = 1, 2, · · ·}. Applying (a) to the shifted sequence Xn+1, Xn+2, · · · gives P(Cn) = 0. The event {Xn = 1 finitely often} is contained in the union of {Xn = 0 for all n} and the events Cn, n ≥ 1, all of which have probability zero. Therefore

  P(Xn = 1 infinitely often) = 1 − P(Xn = 1 finitely often) ≥ 1 − P(X1 = X2 = · · · = 0) − Σ_{n=1}^∞ P(Cn) = 1.
4. Let

  P = [ 1 − a    a   ]
      [   b    1 − b ],   0 < a, b < 1.

Prove that

  P^n = 1/(a + b) [ b  a ; b  a ] + (1 − a − b)^n/(a + b) [ a  −a ; −b  b ]

(rows separated by semicolons).

Proof. First verify

  [ 1 − a  a ; b  1 − b ] × [ a  −a ; −b  b ] = (1 − a − b) [ a  −a ; −b  b ].

With this and the given form for P^n, it is easy to verify that P⁰ is the identity, and induction then proves P^{n+1} = P × P^n.
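The closed form in Problem 4 can be compared directly against repeated matrix multiplication; a = 0.3 and b = 0.55 below are arbitrary test values.

```python
import numpy as np

def pn_closed_form(a, b, n):
    """Closed form for P^n from Problem 4, with P = [[1-a, a], [b, 1-b]]."""
    limit = np.array([[b, a], [b, a]]) / (a + b)
    transient = np.array([[a, -a], [-b, b]]) * (1 - a - b) ** n / (a + b)
    return limit + transient

a, b = 0.3, 0.55
P = np.array([[1 - a, a], [b, 1 - b]])
for n in range(10):
    assert np.allclose(pn_closed_form(a, b, n), np.linalg.matrix_power(P, n))
```

Since |1 − a − b| < 1, the second term vanishes as n → ∞, exhibiting the limiting matrix with identical rows (b, a)/(a + b).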
Tutorial 2
1. (a) A psychological subject can make one of two responses A1 and A2. Associated with these responses are a set of N stimuli {S1, S2, . . . , SN}. Each stimulus is conditioned to one of the responses. A single stimulus is sampled at random (all possibilities equally likely) and the subject responds according to the stimulus sampled. Reinforcement occurs at each trial with probability c (0 < c < 1), independent of the previous history of the process. When reinforcement occurs, the stimulus sampled does not alter its conditioning state. In the contrary event the stimulus becomes conditioned to the other response. Consider the Markov chain whose state variable is the number of stimuli conditioned to response A1. Determine the transition probability matrix of this M.C.

(b) A subject S can make one of three responses A0, A1, and A2. The A0 response corresponds to a guessing state. If S makes response A1, the experimenter reinforces the subject with probability π1 and at the next trial S will make the same response. If no reinforcement occurs (probability 1 − π1), then at the next trial S passes to the guessing state. Similarly π2 is the probability of reinforcement for response A2. Again the subject remains in this state if reinforced and otherwise passes to the guessing state. When S is in the guessing state, he stays there for the next trial with probability 1 − c and with probabilities c/2 and c/2 makes responses A1 and A2 respectively. Consider the Markov chain of the state of the subject and determine its transition probability matrix.
Solution: (a) Pi,i+1 = ((N − i)/N)(1 − c) for i ≤ N − 1, noting that the stimulus sampled was conditioned to A2 and there is no reinforcement.
Pi,i−1 = (i/N)(1 − c) for i ≥ 1, noting that the stimulus sampled was conditioned to A1 and there is no reinforcement.
Pii = 1 − Pi,i+1 − Pi,i−1 = c (in particular P00 = 1 − P01 and PNN = 1 − PN,N−1).
Pij = 0 otherwise (i, j = 0, 1, 2, . . . , N).
(b) P00 = 1 − c, P01 = P02 = c/2; P10 = 1 − π1, P11 = π1; P20 = 1 − π2, P22 = π2; all other entries are 0.
2. Determine the classes and the periodicity of the various states for a Markov chain with transition probability matrix

(a)
  [  0    0    1    0 ]
  [  1    0    0    0 ]
  [ 1/2  1/2   0    0 ]
  [ 1/3  1/3  1/3   0 ]

(b)
  [  0    1    0    0 ]
  [  0    0    0    1 ]
  [  0    1    0    0 ]
  [ 1/3   0   2/3   0 ]
Solution: For matrix (a),

  P² = [ 1/2  1/2   0   0 ]
       [  0    0    1   0 ]
       [ 1/2   0   1/2  0 ]
       [ 1/2  1/6  1/3  0 ]

  P³ = [ 1/2   0   1/2  0 ]
       [ 1/2  1/2   0   0 ]
       [ 1/4  1/4  1/2  0 ]
       [ 1/3  1/6  1/2  0 ]

Two classes: {0, 1, 2} and {3}. d(0) = 1 since P²_{00} = 1/2 > 0 and P³_{00} = 1/2 > 0, so the period divides gcd(2, 3) = 1. State 3 is never re-entered (P^n_{33} = 0 for all n ≥ 1), so by convention d(3) = 0.
For matrix (b),

  P² = [  0    0    0   1 ]
       [ 1/3   0   2/3  0 ]
       [  0    0    0   1 ]
       [  0    1    0   0 ]

  P³ = [ 1/3   0   2/3  0 ]
       [  0    1    0   0 ]
       [ 1/3   0   2/3  0 ]
       [  0    0    0   1 ]

  P⁴ = [  0    1    0    0 ]
       [  0    0    0    1 ]
       [  0    1    0    0 ]
       [ 1/3   0   2/3   0 ]

so P⁴ = P. There is only one class {0, 1, 2, 3}. Since P³_{00} = 1/3 > 0 and the relation P^{n+3} = P^n (n ≥ 1) shows P^n_{00} > 0 only when 3 divides n, d(0) = 3.
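The class and period claims for both matrices can be confirmed mechanically with matrix powers:

```python
import numpy as np

Pa = np.array([[0, 0, 1, 0],
               [1, 0, 0, 0],
               [1/2, 1/2, 0, 0],
               [1/3, 1/3, 1/3, 0]])
Pb = np.array([[0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 1, 0, 0],
               [1/3, 0, 2/3, 0]])

# (a): state 0 can return in both 2 and 3 steps, so d(0) = gcd(2, 3) = 1.
assert np.linalg.matrix_power(Pa, 2)[0, 0] > 0
assert np.linalg.matrix_power(Pa, 3)[0, 0] > 0

# (b): P^4 = P, and returns to state 0 occur exactly at multiples of 3.
assert np.allclose(np.linalg.matrix_power(Pb, 4), Pb)
returns = [n for n in range(1, 13) if np.linalg.matrix_power(Pb, n)[0, 0] > 0]
assert returns == [3, 6, 9, 12]
```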
3. Given a finite aperiodic irreducible Markov chain, prove that for some n all terms of P^n are positive.

Solution: Since the chain is aperiodic, for every state i there exists N(i) satisfying P^n_{ii} > 0 whenever n ≥ N(i) (Th 2.4.1). Since the chain is irreducible, for every pair of states i, j we can find N(i, j) satisfying P^{N(i,j)}_{ij} > 0 (take N(i, i) = 0). There are only a finite number of states, so N = max_i N(i) + max_{i,j} N(i, j) < ∞. Suppose n ≥ N. Then n − N(i, j) ≥ max_i N(i) ≥ N(j), so for any i, j,

  P^n_{ij} ≥ P^{N(i,j)}_{ij} · P^{n−N(i,j)}_{jj} > 0.
4. Let a Markov chain contain r states. Prove the following:
(a) If a state k can be reached from j, then it can be reached in r − 1 steps or less.

Solution: (a) State k can be reached from k in zero steps, hence assume j ≠ k. If j = i0 → i1 → · · · → in = k is a path leading from j to k and n > r − 1, then some state must appear twice in {i0, . . . , in}, say il = im with l < m. Then j = i0 → · · · → il−1 → il = im → im+1 → · · · → in = k is a strictly shorter path from j to k. Repeating this reduction, a minimal path has length n ≤ r − 1.
5. Consider a random walk on the integers such that Pi,i+1 = p, Pi,i−1 = q for every integer i (0 < p < 1, p + q = 1). Determine P^n_{00}.

Solution: A return to zero occurs only when the number of upward moves in the random walk equals the number of downward moves. Thus P^{2m+1}_{00} = 0, and by the binomial distribution for the number of upward jumps,

  P^{2m}_{00} = (2m choose m) p^m q^m.
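A quick check of the formula by exact dynamic programming over the walk's position distribution (p = 0.6 is an arbitrary test value):

```python
from math import comb

def p00(n, p):
    """P^n_{00} computed by propagating the position distribution n steps."""
    q = 1.0 - p
    probs = {0: 1.0}
    for _ in range(n):
        nxt = {}
        for pos, pr in probs.items():
            nxt[pos + 1] = nxt.get(pos + 1, 0.0) + pr * p   # step up
            nxt[pos - 1] = nxt.get(pos - 1, 0.0) + pr * q   # step down
        probs = nxt
    return probs.get(0, 0.0)

p = 0.6
for m in range(1, 8):
    assert abs(p00(2 * m, p) - comb(2 * m, m) * p**m * (1 - p)**m) < 1e-12
    assert p00(2 * m + 1, p) == 0.0     # odd-step returns are impossible
```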
6. Suppose gcd{n : P^n_{ii} > 0} = 1. Then gcd{n : f^n_{ii} > 0} = 1. (This result provides the justification that we can use Theorem 3.1.1 in the proof of Theorem 3.1.2 in the lecture notes.)

Proof. Let d = gcd{n : f^n_{ii} > 0}. If d = 1, we are done. Now assume d > 1. Then

  f^{nd+m}_{ii} = 0,  1 ≤ m < d,  n ≥ 0.   (0.1)

We will use induction to prove

  P^{ld+m}_{ii} = 0,  1 ≤ m < d,  l ≥ 0.   (0.2)

Let l = 0. Then by (0.1), for 1 ≤ m < d,

  P^m_{ii} = Σ_{j=1}^{m} f^j_{ii} P^{m−j}_{ii} = 0.

Now assume that (0.2) is true for l ≤ n − 1 and consider the case l = n:

  P^{nd+m}_{ii} = Σ_{j=1}^{nd+m} f^j_{ii} P^{nd+m−j}_{ii} = Σ_{k=1}^{n} f^{kd}_{ii} P^{(n−k)d+m}_{ii} = 0,

where in the 2nd equality we used (0.1) (only indices j that are multiples of d can contribute), and in the 3rd equality we used the assumption that (0.2) holds for l ≤ n − 1. This completes the proof of (0.2). But (0.2) implies that

  gcd{n : P^n_{ii} > 0} ≥ d > 1.

This is a contradiction. So d = 1.
Tutorial 3
1. Consider a Markov chain with transition probability matrix

  P = [ P0  P1  P2  · · ·  Pm   ]
      [ Pm  P0  P1  · · ·  Pm−1 ]
      [ · · ·                   ]
      [ P1  P2  P3  · · ·  P0   ]

where 0 < Pi < 1 and P0 + P1 + · · · + Pm = 1. Determine lim_{n→∞} P^n_{ij}, the stationary distribution.

Solution: Each row of P is a cyclic shift of (P0, · · · , Pm), so every column also sums to 1 and the uniform distribution satisfies the stationarity equations. Solving (π0, π1, · · · , πm) = (π0, π1, · · · , πm)P together with Σ_i πi = 1 gives lim_{n→∞} P^n_{ij} = πj = 1/(m + 1) by Theorem 3.1.3, which also supplies the uniqueness of the solution.
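A numerical illustration with one hypothetical row (0.10, 0.20, 0.30, 0.15, 0.25): the circulant structure makes the matrix doubly stochastic, and its powers converge to the constant matrix with entries 1/(m + 1).

```python
import numpy as np

# One row of the circulant matrix: P0, ..., Pm with 0 < Pi < 1 and sum 1.
row = np.array([0.10, 0.20, 0.30, 0.15, 0.25])
m = len(row) - 1
P = np.array([np.roll(row, k) for k in range(m + 1)])   # each row shifts the last one right

assert np.allclose(P.sum(axis=0), 1)       # doubly stochastic: columns sum to 1 as well
limit = np.linalg.matrix_power(P, 200)
assert np.allclose(limit, 1.0 / (m + 1))   # every entry tends to 1/(m+1)
```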
2. An airline reservation system has two computers only one of which is
in operation at any given time. A computer may break down on any
given day with probability p. There is a single repair facility which
takes 2 days to restore a computer to normal. The facilities are such
that only one computer at a time can be dealt with. Form a Markov
chain by taking as states the pairs (x; y) where x is the number of
machines in operating condition at the end of a day and y is 1 if a
day's labor has been expended on a machine not yet repaired and 0
otherwise. The transition matrix is
             (2,0)  (1,0)  (1,1)  (0,1)
  (2,0)  [     q      p      0      0  ]
  (1,0)  [     0      0      q      p  ]
  (1,1)  [     q      p      0      0  ]
  (0,1)  [     0      1      0      0  ]

where p + q = 1. Find the stationary distribution in terms of p and q.

Solution: Solving the equations (π0, π1, π2, π3) = (π0, π1, π2, π3)P and Σ_i πi = 1 gives

  π0 = q²/(2p + q²),  π1 = p/(2p + q²),  π2 = pq/(2p + q²),  π3 = p²/(2p + q²).
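Checking the claimed stationary vector against πP = π for a hypothetical test value p = 0.3:

```python
import numpy as np

p = 0.3
q = 1.0 - p
# States ordered (2,0), (1,0), (1,1), (0,1).
P = np.array([[q, p, 0, 0],
              [0, 0, q, p],
              [q, p, 0, 0],
              [0, 1, 0, 0]])

d = 2 * p + q**2
pi = np.array([q**2, p, p * q, p**2]) / d     # the claimed stationary distribution
assert np.allclose(pi @ P, pi)                # stationarity
assert np.isclose(pi.sum(), 1.0)              # normalization
```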
3. Let P be a 3 × 3 Markov matrix and define δ(P) = max_{i1,i2,j} [P_{i1,j} − P_{i2,j}]. Show that δ(P) = 1 if and only if P has the form

  [ 1  0  0 ]
  [ 0  p  q ]     (p, q ≥ 0, p + q = 1; r, s, t ≥ 0, r + s + t = 1)
  [ r  s  t ]

or any matrix obtained from this one by interchanging rows and/or columns.

Proof. The sufficiency part is clear. For the necessity part, we can assume that max_{i1,i2} [P_{i1,0} − P_{i2,0}] = 1 without loss of generality. This forces one entry equal to 1 and another equal to 0 in the first column, so we can suppose P00 = 1, P10 = 0. Since each row sums to 1, the first row must be (1, 0, 0) and the second row (0, p, q) with p, q ≥ 0 and p + q = 1, which concludes the proof.
4. Sociologists often assume that the social classes of successive generations in a family can be regarded as a Markov chain. Thus, the occupation of a son is assumed to depend only on his father's occupation and not on his grandfather's. Suppose that such a model is appropriate and that the transition probability matrix is given by

                     Son's Class
                 Lower  Middle  Upper
  Father's Class
         Lower    0.40   0.50   0.10
         Middle   0.05   0.70   0.25
         Upper    0.05   0.50   0.45

For such a population, what fraction of people are middle class in the long run?

Solution: Solving the equations

  π0 = 0.40π0 + 0.05π1 + 0.05π2,  π1 = 0.50π0 + 0.70π1 + 0.50π2,  π0 + π1 + π2 = 1

gives π0 = 1/13, π1 = 5/8, π2 = 31/104. In the long run the fraction of middle-class people is 5/8.
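Power iteration confirms the stationary distribution (1/13, 5/8, 31/104):

```python
import numpy as np

P = np.array([[0.40, 0.50, 0.10],
              [0.05, 0.70, 0.25],
              [0.05, 0.50, 0.45]])

limit = np.linalg.matrix_power(P, 100)
assert np.allclose(limit, limit[0])                  # all rows agree in the limit
assert np.allclose(limit[0], [1 / 13, 5 / 8, 31 / 104])
```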
Tutorial 4
1. Consider a discrete time Markov chain with states 0, 1, . . . , N whose matrix has elements

  Pij = µi,            j = i − 1,
        λi,            j = i + 1,
        1 − λi − µi,   j = i,
        0,             |j − i| > 1,      i, j = 0, 1, . . . , N.

Suppose that µ0 = λ0 = µN = λN = 0 and all other µi's and λi's are positive, and that the initial state of the process is k. Determine the absorption probabilities at 0 and N.

Solution: Set vi = P{absorption at N | X0 = i} and derive

  vi = λi vi+1 + µi vi−1 + (1 − λi − µi)vi,  0 < i < N,

by Eq (2.8) in the notes. Of course v0 = 0 and vN = 1. Simplify to wi+1/wi = µi/λi where wi = vi − vi−1, and iterate by multiplication to show wi+1 = w1 ρi, where ρ0 = 1 and ρi = (µ1 · · · µi)/(λ1 · · · λi) for i ≥ 1. Hence vi = wi + · · · + w1 = v1 Σ_{k=0}^{i−1} ρk. Obtain v1 = (Σ_{k=0}^{N−1} ρk)^{−1} from vN = 1, and thus

  vi = (Σ_{k=0}^{i−1} ρk) / (Σ_{k=0}^{N−1} ρk).

Starting from k, the absorption probability at N is vk and at 0 is 1 − vk.
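The formula can be checked against a direct linear-system solution of the first-step equations; the rates below are random test values (scaled so that 1 − λi − µi ≥ 0):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 8
lam = rng.uniform(0.2, 1.0, N + 1); lam[0] = lam[N] = 0.0
mu = rng.uniform(0.2, 1.0, N + 1); mu[0] = mu[N] = 0.0
lam, mu = lam / 3, mu / 3                     # keep the holding probabilities nonnegative

def v_formula(i):
    """Absorption probability at N from state i via the ratio of rho-sums."""
    rho = np.ones(N)                          # rho[0] = 1, rho[k] = prod_{j<=k} mu_j/lam_j
    for k in range(1, N):
        rho[k] = rho[k - 1] * mu[k] / lam[k]
    return rho[:i].sum() / rho.sum()

# Independent check: solve lam_i v_{i+1} - (lam_i + mu_i) v_i + mu_i v_{i-1} = 0.
A = np.zeros((N + 1, N + 1)); b = np.zeros(N + 1)
A[0, 0] = A[N, N] = 1.0; b[N] = 1.0           # boundary conditions v_0 = 0, v_N = 1
for i in range(1, N):
    A[i, i - 1], A[i, i], A[i, i + 1] = mu[i], -(lam[i] + mu[i]), lam[i]
v = np.linalg.solve(A, b)
assert all(abs(v_formula(i) - v[i]) < 1e-10 for i in range(N + 1))
```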
2. A Markov chain on states {0, 1, 2, 3, 4, 5} has transition probability matrix

(a)
  [ 1/3  2/3   0    0    0    0  ]
  [ 2/3  1/3   0    0    0    0  ]
  [  0    0   1/4  3/4   0    0  ]
  [  0    0   1/5  4/5   0    0  ]
  [ 1/4   0   1/4   0   1/4  1/4 ]
  [ 1/6  1/6  1/6  1/6  1/6  1/6 ]

(b)
  [  1    0    0    0    0    0  ]
  [  0   3/4  1/4   0    0    0  ]
  [  0   1/8  7/8   0    0    0  ]
  [ 1/4  1/4   0   1/8  3/8   0  ]
  [ 1/3   0   1/6  1/6  1/3   0  ]
  [  0    0    0    0    0    1  ]

Find all classes. Compute the limiting probabilities lim_{n→∞} P^n_{5i} for i = 0, 1, 2, 3, 4, 5.
Solution (a) Three classes: C = {0, 1}, C′ = {2, 3}, T = {4, 5}. We first show that {4, 5} is a transient class. We have

  P^n_{44} = P(Xn = 4 | X0 = 4) = Σ_{s1,···,sn−1} P(Xn = 4, Xn−1 = sn−1, · · · , X1 = s1 | X0 = 4),

where the summation is over all sj ∈ {4, 5}, j = 1, · · · , n − 1 (a path that leaves {4, 5} never returns), and every factor satisfies P(Xj = sj | Xj−1 = sj−1) ≤ 1/4. Hence

  Σ_{n=1}^∞ P^n_{44} ≤ Σ_{n=1}^∞ 2^{n−1} 4^{−n} < ∞.

It follows from Theorem 2.5.1 that state 4 is transient. Since 4 ↔ 5, state 5 is transient too.

We will use Theorem 3.2.1 to find lim_{n→∞} P^n_{5i}. The next step is to calculate the stationary probabilities by Theorem 3.1.3. π0 and π1 satisfy the equations

  π0 = (1/3)π0 + (2/3)π1,  π1 = (2/3)π0 + (1/3)π1,  π0 + π1 = 1,

so π0 = π1 = 1/2. Similarly,

  π2 = 4/19,  π3 = 15/19.

The above calculations also show that the four states are recurrent, since lim_{n→∞} P^n_{ii} = πi > 0 implies Σ_{n=0}^∞ P^n_{ii} = ∞, i = 0, 1, 2, 3.
It remains to find π5(C) and π5(C′). From formula (2.8) of Section 3.2 in Chapter 3, we have

  π4(C) = 1/4 + (1/4)π4(C) + (1/4)π5(C),  π5(C) = 1/3 + (1/6)π4(C) + (1/6)π5(C).

This gives π4(C) = π5(C) = 1/2. Similarly,

  π4(C′) = 1/4 + (1/4)π4(C′) + (1/4)π5(C′),  π5(C′) = 1/3 + (1/6)π4(C′) + (1/6)π5(C′),

so π4(C′) = π5(C′) = 1/2.
Finally,

  lim_{n→∞} P^n_{50} = π5(C)π0 = 1/4,   lim_{n→∞} P^n_{51} = π5(C)π1 = 1/4,
  lim_{n→∞} P^n_{52} = π5(C′)π2 = 2/19,  lim_{n→∞} P^n_{53} = π5(C′)π3 = 15/38,
  lim_{n→∞} P^n_{54} = 0,  lim_{n→∞} P^n_{55} = 0.

(b). Four classes: {0}, {1, 2}, {3, 4}, {5}. Clearly, lim_{n→∞} P^n_{5i} = 0 for i = 0, 1, 2, 3, 4, and lim_{n→∞} P^n_{55} = 1.
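Matrix powers confirm the limiting row computed above for matrix (a):

```python
import numpy as np

P = np.array([[1/3, 2/3, 0,   0,   0,   0  ],
              [2/3, 1/3, 0,   0,   0,   0  ],
              [0,   0,   1/4, 3/4, 0,   0  ],
              [0,   0,   1/5, 4/5, 0,   0  ],
              [1/4, 0,   1/4, 0,   1/4, 1/4],
              [1/6, 1/6, 1/6, 1/6, 1/6, 1/6]])

limit_row5 = np.linalg.matrix_power(P, 500)[5]
expected = [1/4, 1/4, 2/19, 15/38, 0, 0]
assert np.allclose(limit_row5, expected)
assert np.isclose(sum(expected), 1.0)     # the limits form a probability distribution
```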
Tutorial 5
1. Suppose the state space of a Markov chain is {1, 2, 3, 4}. Its transition probability matrix is

  P = [  1    0   0   0  ]
      [  0    1   0   0  ]
      [ 1/3  2/3  0   0  ]
      [ 1/4  1/4  0  1/2 ]

Find lim_{n→∞} P^n_{i1}.

Solution: For n ≥ 1,

  P^n_{11} = 1,  P^n_{21} = 0,  P^n_{31} = 1/3.

Let C = {1}. Then π4(C) = 1/4 + (1/2)π4(C) gives lim_{n→∞} P^n_{41} = π4(C) = 1/2.
Method 2:

  P^n_{41} = Σ_{i=1}^n f^i_{41} P^{n−i}_{11} = Σ_{i=1}^n f^i_{41}
           = 1/4 + (1/2)(1/4) + (1/2)²(1/4) + · · · + (1/2)^{n−1}(1/4)
           = 1/2 − 1/2^{n+1}.

So lim_{n→∞} P^n_{41} = 1/2.
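Both methods can be confirmed by matrix powers (states 1–4 are indexed 0–3 below):

```python
import numpy as np

P = np.array([[1,   0,   0, 0  ],
              [0,   1,   0, 0  ],
              [1/3, 2/3, 0, 0  ],
              [1/4, 1/4, 0, 1/2]])

for n in range(1, 30):
    Pn = np.linalg.matrix_power(P, n)
    assert np.isclose(Pn[3, 0], 1/2 - 1/2**(n + 1))   # P^n_{41} from Method 2
    assert np.isclose(Pn[2, 0], 1/3)                   # P^n_{31} is constant for n >= 1
```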
2. A pure birth process starting from X(0) = 0 has birth parameters
λ0 = 1, λ1 = 3, λ2 = 2, λ3 = 5. Let S3 be the random time that it
takes the process to reach state 3.
(a). Write S3 as a sum of waiting times and deduce that the
mean time is ES3 = 11/6.
(b). Calculate E(S1 + S2 + S3) and V arS3.
Solution: Let Tk be the time between the k-th and (k + 1)-th birth.
Tk, k ≥ 0 is exponentially distributed with parameter λk and T0, T1, . . .
are mutually independent. Noting that
S1 = T0
S2 = T0 + T1
S3 = T0 + T1 + T2,
we have
(a).
ES3 = E(T0 + T1 + T2) = 1/λ0 + 1/λ1 + 1/λ2 = 11/6;
(b).
E(S1 + S2 + S3) = E(3T0 + 2T1 + T2)
= 3/λ0 + 2/λ1 + 1/λ2 = 25/6,
and
Var(S3) = Var(T0 + T1 + T2) = Var(T0) + Var(T1) + Var(T2) = 1/λ0² + 1/λ1² + 1/λ2² = 49/36.
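Exact arithmetic with fractions.Fraction reproduces all three answers:

```python
from fractions import Fraction as F

lam = {0: F(1), 1: F(3), 2: F(2)}              # birth rates lambda_0, lambda_1, lambda_2
ET = {k: 1 / lam[k] for k in lam}              # E T_k = 1/lambda_k for exponential waits
VarT = {k: 1 / lam[k] ** 2 for k in lam}       # Var T_k = 1/lambda_k^2

assert ET[0] + ET[1] + ET[2] == F(11, 6)                 # E S_3
assert 3 * ET[0] + 2 * ET[1] + ET[2] == F(25, 6)         # E(S_1 + S_2 + S_3)
assert VarT[0] + VarT[1] + VarT[2] == F(49, 36)          # Var S_3, by independence
```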
3. Prove that

  Pn(t) = λn−1 exp(−λn t) ∫_0^t exp(λn x) Pn−1(x) dx,  n = 1, 2, · · ·

satisfies equation (1.1) in the notes of Chapter 4.

Proof: Direct calculation (the product rule together with the fundamental theorem of calculus) shows P′n(t) = −λn Pn(t) + λn−1 Pn−1(t).
4. Let X(t) be a Yule process that is observed at a random time U, where U is uniformly distributed over [0, 1). Suppose X(0) = 1. Show that P(X(U) = k) = p^k/(βk) for k = 1, 2, . . . , with p = 1 − e^{−β}.

Solution:

  P(X(U) = k) = E_U[P(X(U) = k | U)] = ∫_0^1 P(X(u) = k) du
              = ∫_0^1 e^{−βu}(1 − e^{−βu})^{k−1} du
              = (1/β) ∫_{e^{−β}}^1 (1 − x)^{k−1} dx   (substituting x = e^{−βu})
              = (1 − e^{−β})^k/(βk).
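A quadrature check of the integral just computed, and of the fact that the probabilities p^k/(βk) sum to 1 (since Σ_k p^k/k = −log(1 − p) = β); the value β = 1.7 is an arbitrary test choice:

```python
import math

beta = 1.7
p = 1.0 - math.exp(-beta)

def p_k_integral(k, steps=20000):
    """Midpoint rule for int_0^1 e^{-beta u} (1 - e^{-beta u})^{k-1} du."""
    h = 1.0 / steps
    total = 0.0
    for j in range(steps):
        u = (j + 0.5) * h
        total += math.exp(-beta * u) * (1.0 - math.exp(-beta * u)) ** (k - 1)
    return total * h

for k in range(1, 6):
    assert abs(p_k_integral(k) - p**k / (beta * k)) < 1e-6

# The distribution is proper: sum_k p^k / (beta k) = -log(1 - p)/beta = 1.
assert abs(sum(p**k / (beta * k) for k in range(1, 2000)) - 1.0) < 1e-9
```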
Tutorial 6
1. Consider a Poisson process with parameter λ. Let T be the time required to observe the first event, and let N(T/α) be the number of events in the next T/α units of time. Find the first two moments of N(T/α)·T.

Solution: Given T, the count N(T/α) is Poisson with mean λT/α, and T is exponential with rate λ, so E(T²) = 2/λ², E(T³) = 6/λ³, E(T⁴) = 24/λ⁴. The law of total expectation gives

  E[N(T/α)·T] = E[ E(N(T/α)·T | T) ] = E[(λT/α)·T] = (λ/α) E(T²) = 2/(αλ).

Similarly,

  E[(N(T/α)·T)² | T] = T² E[N(T/α)² | T] = T²(λT/α + λ²T²/α²) = λT³/α + λ²T⁴/α²,

so

  E[(N(T/α)·T)²] = λ E(T³)/α + λ² E(T⁴)/α² = 6/(αλ²) + 24/(α²λ²).
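A Monte Carlo check of both moments; λ = 1 and α = 2 are arbitrary test values, and the tolerances are several standard errors wide:

```python
import numpy as np

rng = np.random.default_rng(3)
lam, alpha = 1.0, 2.0
n_samples = 1_000_000

T = rng.exponential(1.0 / lam, n_samples)     # time of the first event
N = rng.poisson(lam * T / alpha)              # events in the next T/alpha time units
prod = N * T

assert abs(prod.mean() - 2 / (alpha * lam)) < 0.02
assert abs((prod**2).mean() - (6 / (alpha * lam**2) + 24 / (alpha**2 * lam**2))) < 0.6
```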
2. Consider n independent objects (such as light bulbs) whose failure time (i.e., lifetime) is a random variable exponentially distributed with density function

  f(x; θ) = θ^{−1} exp(−x/θ) for x > 0, and f(x; θ) = 0 for x < 0

(θ is a positive parameter). The observations of lifetime become available in order of failure. Let

  X1,n ≤ X2,n ≤ · · · ≤ Xr,n

denote the lifetimes of the first r objects that fail. Determine the joint density function of Xi,n, i = 1, 2, . . . , r.

Solution: Fix 0 ≤ x1 < y1 < x2 < y2 < · · · < xr < yr. The multinomial probability distribution gives

  P(xi < Xi,n ≤ yi for i = 1, . . . , r and Xi,n > yr for i > r)
    = n!/(n − r)! · ∏_{i=1}^r (F(yi) − F(xi)) · (1 − F(yr))^{n−r}.

Divide by ∏ (yi − xi), i = 1, . . . , r, and let each yi decrease to xi to deduce the density

  f(x1, . . . , xr) = n!/(n − r)! · ∏_{i=1}^r f(xi) · (1 − F(xr))^{n−r}
    = r! (n choose r) θ^{−r} exp( −[x1 + x2 + · · · + xr−1 + (n − r + 1)xr]/θ )

for 0 ≤ x1 ≤ x2 ≤ · · · ≤ xr.
3. Consider a Poisson process of parameter λ. Given that n events happen in time t, find the density function of the time of occurrence of the rth event (r < n).

Solution: Let Sr be the time of occurrence of the rth event in the Poisson process N(u). Then Sr ≤ u if and only if N(u) ≥ r, whence

  P(Sr ≤ u and N(t) = n) = P(N(u) ≥ r and N(t) = n)
    = Σ_{k=r}^n P(N(u) = k and N(t) − N(u) = n − k)
    = Σ_{k=r}^n [(λu)^k/k!] · [(λ(t − u))^{n−k}/(n − k)!] · e^{−λt}.

Divide by P(N(t) = n) = [(λt)^n/n!] e^{−λt} to get

  P(Sr ≤ u | N(t) = n) = Σ_{k=r}^n (n choose k) (u/t)^k (1 − u/t)^{n−k}.
Differentiate the above with respect to u to obtain the density for Sr:

  d P(Sr ≤ u | N(t) = n)/du
    = Σ_{k=r}^n (n choose k) [ k (u^{k−1}/t^k)(1 − u/t)^{n−k} − (n − k)(u^k/t^k)(1 − u/t)^{n−k−1} t^{−1} ]
    = Σ_{k=r}^n n (n−1 choose k−1) t^{−1} (u/t)^{k−1} (1 − u/t)^{n−k}
      − Σ_{k=r}^{n−1} n (n−1 choose k) t^{−1} (u/t)^k (1 − u/t)^{n−k−1}
    = Σ_{l=r−1}^{n−1} n t^{−1} (n−1 choose l) (u/t)^l (1 − u/t)^{n−1−l}
      − Σ_{k=r}^{n−1} n t^{−1} (n−1 choose k) (u/t)^k (1 − u/t)^{n−k−1}
    = n t^{−1} (n−1 choose r−1) (u/t)^{r−1} (1 − u/t)^{n−r}
    = n!/((r − 1)!(n − r)!) · (u^{r−1}/t^r) (1 − u/t)^{n−r},  0 < u < t,

where the second equality uses k (n choose k) = n (n−1 choose k−1) and (n − k)(n choose k) = n (n−1 choose k), the third reindexes the first sum with l = k − 1, and the fourth cancels all terms of the two sums except the l = r − 1 term.
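The derived density should match the numerical derivative of the binomial sum it came from; n = 7, r = 3, t = 2 are arbitrary test values:

```python
from math import comb, factorial

def density(u, r, n, t):
    """Conditional density of S_r given N(t) = n derived above."""
    return (factorial(n) / (factorial(r - 1) * factorial(n - r))
            * u ** (r - 1) / t ** r * (1 - u / t) ** (n - r))

def tail_cdf(u, r, n, t):
    """P(S_r <= u | N(t) = n): the binomial sum before differentiation."""
    x = u / t
    return sum(comb(n, k) * x ** k * (1 - x) ** (n - k) for k in range(r, n + 1))

n, r, t = 7, 3, 2.0
h = 1e-6
for u in (0.3, 0.9, 1.5):
    numeric = (tail_cdf(u + h, r, n, t) - tail_cdf(u - h, r, n, t)) / (2 * h)
    assert abs(numeric - density(u, r, n, t)) < 1e-5
```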
Tutorial 7
Assume X(t) is standard Brownian motion in the following.
1. Let T0 be the largest zero time of X(s) not exceeding t. Establish the formula

  P(T0 < t0) = (2/π) arcsin √(t0/t),  0 ≤ t0 ≤ t.

Solution: The event {T0 < t0} is exactly the event that X has no zeros in (t0, t), so by Theorem 5.3.1 and arccos x = π/2 − arcsin x,

  P(T0 < t0) = 1 − (2/π) arccos √(t0/t) = (2/π) arcsin √(t0/t).
2. Determine the covariance functions for

  U(t) = e^{−t} X(e^{2t}),  t ≥ 0,

and

  V(t) = X(t) − tX(1),  0 ≤ t ≤ 1.

Solution: The covariance function of a process U(t) is defined by

  Cov(U(t), U(s)) = E(U(t)U(s)) − E(U(t)) E(U(s)).

Suppose t ≤ s. Then

  Cov(U(t), U(s)) = E(U(t)U(s))
    = e^{−t−s} E[X(e^{2t}) X(e^{2s})]
    = e^{−t−s} ( E[X(e^{2t})(X(e^{2s}) − X(e^{2t}))] + E[X²(e^{2t})] )
    = e^{−t−s} e^{2t} = exp(−(s − t)).

When t > s, Cov(U(t), U(s)) = exp(−(t − s)); in both cases Cov(U(t), U(s)) = e^{−|t−s|}.

For V, with t ≤ s,

  Cov(V(t), V(s)) = E(V(t)V(s)) = E[(X(t) − tX(1))(X(s) − sX(1))] = t − ts − ts + ts = t(1 − s).

When t > s, Cov(V(t), V(s)) = s(1 − t).
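Both covariance functions follow from bilinearity and Cov(X(a), X(b)) = min(a, b), which can be checked mechanically:

```python
import math

def cov_X(a, b):
    """Standard Brownian motion: Cov(X(a), X(b)) = min(a, b)."""
    return min(a, b)

def cov_U(t, s):
    """U(t) = e^{-t} X(e^{2t}): pull the deterministic factors out of the covariance."""
    return math.exp(-t - s) * cov_X(math.exp(2 * t), math.exp(2 * s))

def cov_V(t, s):
    """V(t) = X(t) - t X(1): expand by bilinearity of covariance."""
    return cov_X(t, s) - s * cov_X(t, 1) - t * cov_X(1, s) + t * s * cov_X(1, 1)

for t, s in ((0.3, 1.2), (2.0, 0.5), (0.7, 0.7)):
    assert abs(cov_U(t, s) - math.exp(-abs(t - s))) < 1e-12

for t in (0.1, 0.4, 0.8):
    for s in (0.2, 0.5, 0.9):
        expected = t * (1 - s) if t <= s else s * (1 - t)
        assert abs(cov_V(t, s) - expected) < 1e-12
```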
3. Let M(t) = max_{0≤u≤t} X(u) and Y(t) = M(t) − X(t). Prove that Y(t) = M(t) − X(t) is a continuous-time Markov process.

Hint: Note that for t0 < t,

  Y(t) = max{ max_{t0≤u≤t} (X(u) − X(t0)), Y(t0) } − (X(t) − X(t0)).

Solution: For t0 < t,

  M(t) − X(t0) = M(t0) − X(t0) = Y(t0)         if M(t) = M(t0),
  M(t) − X(t0) = max_{t0≤u≤t} X(u) − X(t0)      if M(t) > M(t0),

i.e. M(t) − X(t0) = max{ Y(t0), max_{t0≤u≤t} (X(u) − X(t0)) }, whence

  Y(t) = M(t) − X(t) = max{ Y(t0), max_{t0≤u≤t} (X(u) − X(t0)) } − (X(t) − X(t0)).

Both max_{t0≤u≤t}(X(u) − X(t0)) and X(t) − X(t0) are functions of the increments of X after t0, which are independent of the path up to t0. So for t1 < t2 < · · · < tk < t0,

  P(a < Y(t) ≤ b | Y(t1) = y1, · · · , Y(tk) = yk, Y(t0) = y0) = P(a < Y(t) ≤ b | Y(t0) = y0),

and the process is Markov.
4. Find the conditional probability that X(t) is not zero in the interval (t0, t2), given that it is not zero in the interval (t0, t1), 0 < t0 ≤ t1 ≤ t2.

Solution: Since the event of no zeros in (t0, t2) is contained in the event of no zeros in (t0, t1), elementary conditional probability in conjunction with Theorem 5.3.1 shows the desired probability to be

  [1 − (2/π) arccos √(t0/t2)] / [1 − (2/π) arccos √(t0/t1)] = arcsin √(t0/t2) / arcsin √(t0/t1).
5. Show that the probability that X(t) is not zero in (0, t2), given that it is not zero in the interval (0, t1), 0 < t1 < t2, is √(t1/t2).

Hint: Compute P(X(t) ≠ 0 for t0 ≤ t ≤ t2 | X(t) ≠ 0 for t0 ≤ t ≤ t1) and then let t0 → 0.

Solution: Let t0 → 0 in the solution to Problem 4 and use lim_{x→0} (sin x)/x = 1 to get the required probability:

  arcsin √(t0/t2) / arcsin √(t0/t1) → √(t0/t2) / √(t0/t1) = √(t1/t2).
Tutorial 8
Assume X(t) is standard Brownian motion in the following.
1. Establish the identity

  E[ exp( λ ∫_0^t f(s)X(s) ds ) ] = exp( λ² ∫_0^t f(v) ( ∫_0^v u f(u) du ) dv ),  −∞ < λ < ∞,

for any continuous function f(s), 0 ≤ s < ∞.
Solution: By examining the approximating sums,

  ∫_0^t f(s)X(s) ds = lim_{max_{1≤i≤n}(si − si−1)→0} Σ_{i=1}^n f(si)X(si)(si − si−1),

where 0 = s0 ≤ s1 ≤ s2 ≤ · · · ≤ sn−1 ≤ sn = t, and noting that Σ_{i=1}^n f(si)X(si)(si − si−1) is normally distributed with mean zero and variance

  E[ ( Σ_{i=1}^n f(si)X(si)(si − si−1) )² ]
    = Σ_{i=1}^n Σ_{j=1}^n f(si)f(sj)(si − si−1)(sj − sj−1) E[X(si)X(sj)]
    = Σ_{i=1}^n Σ_{j=1}^n f(si)f(sj)(si − si−1)(sj − sj−1) min(si, sj),

which tends to

  ∫_0^t ∫_0^t f(u)f(v) min(u, v) du dv = 2 ∫_0^t f(v) ( ∫_0^v u f(u) du ) dv

as max_{1≤i≤n}(si − si−1) → 0, we see that ∫_0^t f(s)X(s) ds is normally distributed with mean zero and variance σ² = 2 ∫_0^t f(v)(∫_0^v u f(u) du) dv. Now use the known formula for the moment generating function of a normally distributed random variable,

  E exp( N(µ, σ²) ) = exp( µ + 0.5σ² ),

applied to λ ∫_0^t f(s)X(s) ds = N(0, λ²σ²), to complete the proof.
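The variance identity ∬ f(u)f(v) min(u, v) du dv = 2 ∫_0^t f(v)(∫_0^v u f(u) du) dv can be checked by quadrature for a hypothetical test function f:

```python
import math

def f(s):
    """A hypothetical continuous test function; any continuous f should work."""
    return math.cos(3.0 * s) + 0.5 * s

def symmetric_form(t, n=400):
    """Midpoint approximation of the double integral of f(u) f(v) min(u, v)."""
    h = t / n
    pts = [(i + 0.5) * h for i in range(n)]
    return sum(f(u) * f(v) * min(u, v) for u in pts for v in pts) * h * h

def nested_form(t, n=400):
    """Midpoint approximation of 2 int_0^t f(v) (int_0^v u f(u) du) dv."""
    h = t / n
    inner = 0.0      # running approximation of int_0^{ih} u f(u) du
    total = 0.0
    for i in range(n):
        v = (i + 0.5) * h
        mid = inner + 0.5 * v * f(v) * h   # extend the inner integral up to v
        total += f(v) * mid * h
        inner += v * f(v) * h
    return 2.0 * total

t = 1.5
assert abs(symmetric_form(t) - nested_form(t)) < 2e-3
```

The identity itself follows from the symmetry of f(u)f(v) min(u, v) in u and v, since min(u, v) = u on the region u < v.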
2. Assume X(t) is a Brownian motion with variance parameter σ = 1.
(a) Find Cov(X(t), ∫_0^1 X(s) ds).
(b) Find E[X(u)X(u + v)X(u + v + w)X(u + v + w + x)], where x > 0, 0 < u < u + v < u + v + w.

Solution: (a) EX(t) = 0 and E ∫_0^1 X(s) ds = 0, so

  Cov(X(t), ∫_0^1 X(s) ds) = E ∫_0^1 X(t)X(s) ds = ∫_0^1 E[X(t)X(s)] ds = ∫_0^1 min(t, s) ds
    = t − t²/2 if t ≤ 1, and = 1/2 if t > 1.
(b) For simplicity, write the increments as

  Bu = X(u),
  Bv = X(u + v) − X(u),
  Bw = X(u + v + w) − X(u + v),
  Bx = X(u + v + w + x) − X(u + v + w).

Since the increments are independent and normally distributed with mean 0, we have

  E[X(u)X(u + v)X(u + v + w)X(u + v + w + x)]
    = E[Bu(Bu + Bv)(Bu + Bv + Bw)(Bu + Bv + Bw + Bx)]
    = E[Bu⁴] + 3 E[Bu²] E[Bv²] + E[Bu²] E[Bw²]
    = (3u² + 3uv + uw)σ⁴ = 3u² + 3uv + uw.
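An independent check of (b): Wick's (Isserlis') theorem for zero-mean Gaussian vectors expresses the fourth moment as a sum over the three pair partitions of products of covariances min(·, ·). This is not the tutorial's increment method, but the two must agree:

```python
def bm_fourth_moment(t1, t2, t3, t4):
    """E[X(t1)X(t2)X(t3)X(t4)] for standard BM by Wick's theorem:
    sum over the three pairings of products of Cov(X(a), X(b)) = min(a, b)."""
    ts = (t1, t2, t3, t4)
    pairings = [((0, 1), (2, 3)), ((0, 2), (1, 3)), ((0, 3), (1, 2))]
    return sum(min(ts[a], ts[b]) * min(ts[c], ts[d]) for (a, b), (c, d) in pairings)

u, v, w, x = 0.7, 0.4, 1.1, 0.9    # arbitrary test values with the required ordering
value = bm_fourth_moment(u, u + v, u + v + w, u + v + w + x)
assert abs(value - (3 * u**2 + 3 * u * v + u * w)) < 1e-12
```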
