STAT5221: Probability and Stochastic Processes — Tutorial 1

1. (a) Consider two urns A and B containing a total of $N$ balls. An experiment is performed in which a ball is selected at random (all selections equally likely) at time $t$ ($t = 1, 2, \ldots$) from among the totality of $N$ balls. Then an urn is selected at random (A is chosen with probability $p$ and B is chosen with probability $q = 1 - p$) and the ball previously drawn is placed in this urn. The state of the system at each trial is represented by the number of balls in A. Determine the transition matrix for this Markov chain and its equivalence classes.

(b) Now assume that at time $t + 1$ a ball and an urn are chosen with probabilities depending on the contents of the urns: if there are $k$ balls in A, a ball is chosen from A with probability $k/N$ or from B with probability $(N-k)/N$, and urn A is chosen with probability $k/N$ or urn B with probability $(N-k)/N$. Determine the transition matrix of the Markov chain with states represented by the contents of A, and its equivalence classes.

Solution. (a) Let $X_i$, $i = 0, 1, \ldots$, denote the number of balls in A after the $i$th experiment. Then for $i = 0, 1, \ldots, N$,
$$P_{i,i+1} = P(X_1 = i+1 \mid X_0 = i) = P(\text{a ball in B is drawn, A is selected} \mid X_0 = i) = \frac{N-i}{N}\,p,$$
$$P_{i,i-1} = P(X_1 = i-1 \mid X_0 = i) = P(\text{a ball in A is drawn, B is selected} \mid X_0 = i) = \frac{i}{N}\,q,$$
$$P_{i,i} = 1 - P_{i,i+1} - P_{i,i-1} = 1 - \frac{N-i}{N}\,p - \frac{i}{N}\,q.$$
All other transition probabilities are zero. There is a single equivalence class $\{0, 1, \ldots, N\}$, since all states communicate.

(b) By the same reasoning,
$$P_{i,i+1} = P(\text{a ball in B is drawn, A is selected} \mid X_0 = i) = \frac{(N-i)\,i}{N^2},$$
$$P_{i,i-1} = P(\text{a ball in A is drawn, B is selected} \mid X_0 = i) = \frac{i\,(N-i)}{N^2},$$
$$P_{i,i} = 1 - P_{i,i+1} - P_{i,i-1} = \frac{i^2}{N^2} + \frac{(N-i)^2}{N^2}.$$
All other transition probabilities are zero. The equivalence classes are $\{0\}$, $\{1, 2, \ldots, N-1\}$, $\{N\}$: here the boundary states $0$ and $N$ are absorbing.
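Both transition matrices are easy to assemble numerically. A minimal sketch (the helper names `urn_matrix_a`/`urn_matrix_b` are mine, not from the text) that also checks that each matrix is row-stochastic and that the boundary states of part (b) are absorbing:

```python
import numpy as np

def urn_matrix_a(N, p):
    """Part (a): ball chosen uniformly among N, urn A chosen w.p. p."""
    q = 1.0 - p
    P = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        if i < N:
            P[i, i + 1] = (N - i) / N * p   # ball drawn from B, placed in A
        if i > 0:
            P[i, i - 1] = i / N * q         # ball drawn from A, placed in B
        P[i, i] = 1.0 - P[i].sum()          # remaining mass stays at i
    return P

def urn_matrix_b(N):
    """Part (b): ball and urn both chosen proportionally to urn contents."""
    P = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        if i < N:
            P[i, i + 1] = (N - i) * i / N**2
        if i > 0:
            P[i, i - 1] = i * (N - i) / N**2
        P[i, i] = (i**2 + (N - i)**2) / N**2
    return P

N = 5
Pa, Pb = urn_matrix_a(N, p=0.3), urn_matrix_b(N)
assert np.allclose(Pa.sum(axis=1), 1) and np.allclose(Pb.sum(axis=1), 1)
# In (b) the boundary states are absorbing, matching classes {0}, {1..N-1}, {N}:
assert Pb[0, 0] == 1.0 and Pb[N, N] == 1.0
```

The parameters $N = 5$, $p = 0.3$ are arbitrary illustrations; any $N \ge 1$ and $0 < p < 1$ behave the same way.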
2. Every stochastic $n \times n$ matrix corresponds to a Markov chain for which it is the one-step transition matrix. (By "stochastic matrix" we mean $P = (P_{ij})$ with $0 \le P_{ij} \le 1$ and $\sum_j P_{ij} = 1$.) However, not every stochastic $n \times n$ matrix is the two-step transition matrix of a Markov chain. In particular, show that a $2 \times 2$ stochastic matrix is the two-step transition matrix of a Markov chain if and only if the sum of its principal diagonal terms is greater than or equal to $1$.

Proof. Suppose $Q = (Q_{ij}) = P^2$, where the two states are labelled $1$ and $2$. Then
$$Q_{11} + Q_{22} = P_{11}^2 + 2(1 - P_{11})(1 - P_{22}) + P_{22}^2 = 1 + [(P_{11} + P_{22}) - 1]^2 \ge 1.$$
Conversely, given $Q$ with $Q_{11} + Q_{22} \ge 1$, the relation $Q = P^2$ forces $P_{11} + P_{22} = 1 \pm a$ and $P_{11}^2 - P_{22}^2 = Q_{11} - Q_{22}$, where $a = (Q_{11} + Q_{22} - 1)^{1/2}$. Solving for $P_{11}$ and $P_{22}$ gives
$$P_{11} = \frac{Q_{11} + a}{1 + a}, \quad P_{22} = \frac{Q_{22} + a}{1 + a} \qquad \text{or} \qquad P_{11} = \frac{Q_{11} - a}{1 - a}, \quad P_{22} = \frac{Q_{22} - a}{1 - a}.$$
Note that $a$ is real precisely because $Q_{11} + Q_{22} \ge 1$.

3. Consider a sequence of Bernoulli trials $X_1, X_2, \ldots$, where $X_n = 1$ or $0$. Assume that for some $\alpha > 0$,
$$P(X_n = 1 \mid X_1, X_2, \ldots, X_{n-1}) \ge \alpha, \qquad n = 1, 2, \ldots$$
Prove that (a) $P(X_n = 1 \text{ for some } n) = 1$; (b) $P(X_n = 1 \text{ infinitely often}) = 1$.

Proof. (a) Let $p_n = P(X_1 = \cdots = X_n = 0)$. Then by induction,
$$p_n = p_{n-1}\,P(X_n = 0 \mid X_1 = \cdots = X_{n-1} = 0) \le (1 - \alpha)\,p_{n-1} \le (1 - \alpha)^n \to 0 \quad \text{as } n \to \infty.$$
(For $n = 1$ the conditioning is vacuous, so $p_1 = P(X_1 = 0) \le 1 - \alpha$ directly.) Hence $P(0 = X_1 = X_2 = \cdots) = 0$, i.e. $P(X_n = 1 \text{ for some } n) = 1$.

(b) Let $C_n = \{X_n = 1,\ X_{n+k} = 0 \text{ for all } k = 1, 2, \ldots\}$. Applying (a) to the trials after time $n$ gives $P(C_n) = 0$. The event $\{X_n = 1 \text{ only finitely often}\}$ is the union of $\{X_n = 0 \text{ for all } n\}$ and the events $C_n$, each of probability zero, so
$$P(X_n = 1 \text{ infinitely often}) = 1 - P(X_n = 1 \text{ finitely often}) \ge 1 - P(X_n = 0 \text{ for all } n) - \sum_{n=1}^{\infty} P(C_n) = 1.$$

4. Let
$$P = \begin{pmatrix} 1 - a & a \\ b & 1 - b \end{pmatrix}, \qquad 0 < a, b < 1.$$
Prove that
$$P^n = \frac{1}{a + b} \begin{pmatrix} b & a \\ b & a \end{pmatrix} + \frac{(1 - a - b)^n}{a + b} \begin{pmatrix} a & -a \\ -b & b \end{pmatrix}.$$

Proof. First verify that
$$\begin{pmatrix} 1 - a & a \\ b & 1 - b \end{pmatrix} \begin{pmatrix} a & -a \\ -b & b \end{pmatrix} = (1 - a - b) \begin{pmatrix} a & -a \\ -b & b \end{pmatrix}.$$
With this, the given form for $P^n$ reduces to the identity matrix when $n = 0$, and induction on $n$ verifies $P^{n+1} = P \cdot P^n$.
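The closed form for $P^n$ in Problem 4 and the diagonal condition of Problem 2 can both be spot-checked numerically; the sketch below uses arbitrary illustrative parameters ($a = 0.3$, $b = 0.5$, and the helper name `P_n` is mine):

```python
import numpy as np
from numpy.linalg import matrix_power

a, b = 0.3, 0.5   # arbitrary parameters with 0 < a, b < 1
P = np.array([[1 - a, a], [b, 1 - b]])

def P_n(n):
    """Closed form from Problem 4: stationary part plus decaying fluctuation."""
    stat = np.array([[b, a], [b, a]]) / (a + b)
    fluct = np.array([[a, -a], [-b, b]]) / (a + b)
    return stat + (1 - a - b)**n * fluct

# The closed form matches repeated matrix multiplication for several n.
for n in range(6):
    assert np.allclose(matrix_power(P, n), P_n(n))

# Problem 2: the diagonal of any squared 2x2 stochastic matrix sums to >= 1.
Q = P @ P
assert Q[0, 0] + Q[1, 1] >= 1
```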
STAT5221: Probability and Stochastic Processes — Tutorial 2

1. (a) A psychological subject can make one of two responses $A_1$ and $A_2$. Associated with these responses is a set of $N$ stimuli $\{S_1, S_2, \ldots, S_N\}$. Each stimulus is conditioned to one of the responses. A single stimulus is sampled at random (all possibilities equally likely) and the subject responds according to the stimulus sampled. Reinforcement occurs at each trial with probability $\pi$ ($0 < \pi < 1$), independent of the previous history of the process. When reinforcement occurs, the stimulus sampled does not alter its conditioning state; in the contrary event the stimulus becomes conditioned to the other response. Consider the Markov chain whose state variable is the number of stimuli conditioned to response $A_1$. Determine the transition probability matrix of this Markov chain.

(b) A subject S can make one of three responses $A_0$, $A_1$, and $A_2$. The $A_0$ response corresponds to a guessing state. If S makes response $A_1$, the experiment reinforces the subject with probability $\pi_1$, and at the next trial S will make the same response. If no reinforcement occurs (probability $1 - \pi_1$), then at the next trial S passes to the guessing state. Similarly, $\pi_2$ is the probability of reinforcement for response $A_2$; again the subject remains in this state if reinforced and otherwise passes to the guessing state. When S is in the guessing state, he stays there for the next trial with probability $1 - c$, and with probabilities $c/2$ and $c/2$ makes responses $A_1$ and $A_2$ respectively. Consider the Markov chain of the state of the subject and determine its transition probability matrix.

Solution. (a) For $i \le N - 1$, $P_{i,i+1} = \frac{N-i}{N}(1 - \pi)$, since the stimulus sampled was conditioned to $A_2$ and there is no reinforcement. For $i \ge 1$, $P_{i,i-1} = \frac{i}{N}(1 - \pi)$, since the stimulus sampled was conditioned to $A_1$ and there is no reinforcement. The remaining mass stays put: $P_{ii} = 1 - P_{i,i+1} - P_{i,i-1} = \pi$ (with $P_{00} = 1 - P_{01}$ and $P_{NN} = 1 - P_{N,N-1}$), and $P_{ij} = 0$ otherwise ($i, j = 0, 1, 2, \ldots, N$).

(b) $P_{00} = 1 - c$, $P_{01} = P_{02} = c/2$; $P_{10} = 1 - \pi_1$, $P_{11} = \pi_1$; $P_{20} = 1 - \pi_2$, $P_{22} = \pi_2$.
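Both matrices can be sketched in code; the snippet below (helper names `stimulus_chain`/`guessing_chain` and all numeric parameters are my own illustrations, with the reinforcement probabilities written `pi`, `pi1`, `pi2`) also confirms that in part (a) every self-transition probability equals the reinforcement probability:

```python
import numpy as np

def stimulus_chain(N, pi):
    """Part (a): state = number of stimuli conditioned to A1;
    pi = reinforcement probability (0 < pi < 1)."""
    P = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        if i < N:
            P[i, i + 1] = (N - i) / N * (1 - pi)  # A2-stimulus sampled, no reinforcement
        if i > 0:
            P[i, i - 1] = i / N * (1 - pi)        # A1-stimulus sampled, no reinforcement
        P[i, i] = 1.0 - P[i].sum()                # reinforcement: conditioning unchanged
    return P

def guessing_chain(c, pi1, pi2):
    """Part (b): states ordered (A0 = guessing, A1, A2)."""
    return np.array([
        [1 - c,   c / 2, c / 2],   # guessing state
        [1 - pi1, pi1,   0.0  ],   # response A1
        [1 - pi2, 0.0,   pi2  ],   # response A2
    ])

P = stimulus_chain(6, 0.4)
assert np.allclose(P.sum(axis=1), 1)
assert np.allclose(np.diag(P), 0.4)   # P_ii = pi for every state
G = guessing_chain(0.2, 0.7, 0.9)
assert np.allclose(G.sum(axis=1), 1)
```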
2. Determine the classes and the periodicity of the various states for a Markov chain with transition probability matrix
$$\text{(a)} \quad \begin{pmatrix} 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \\ 1/2 & 1/2 & 0 & 0 \\ 1/3 & 1/3 & 1/3 & 0 \end{pmatrix}; \qquad \text{(b)} \quad \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \\ 1/3 & 0 & 2/3 & 0 \end{pmatrix}.$$

Solution. (a) Computing powers of the matrix,
$$P^2 = \begin{pmatrix} 1/2 & 1/2 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 1/2 & 0 & 1/2 & 0 \\ 1/2 & 1/6 & 1/3 & 0 \end{pmatrix}, \qquad P^3 = \begin{pmatrix} 1/2 & 0 & 1/2 & 0 \\ 1/2 & 1/2 & 0 & 0 \\ 1/4 & 1/4 & 1/2 & 0 \\ 1/3 & 1/6 & 1/2 & 0 \end{pmatrix}.$$
There are two classes: $\{0, 1, 2\}$ and $\{3\}$. Since $P^2_{00} = 1/2$ and $P^3_{00} = 1/2$, we get $d(0) = \gcd(2, 3) = 1$. State $3$ is never revisited, so $d(3) = 0$.

(b) Here
$$P^2 = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 1/3 & 0 & 2/3 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \end{pmatrix}, \qquad P^3 = \begin{pmatrix} 1/3 & 0 & 2/3 & 0 \\ 0 & 1 & 0 & 0 \\ 1/3 & 0 & 2/3 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \qquad P^4 = P.$$
There is only one class, and $d(0) = 3$.

3. Given a finite aperiodic irreducible Markov chain, prove that for some $n$ all entries of $P^n$ are positive.

Solution. Since the chain is aperiodic, for every state $i$ there exists $N(i)$ such that $P^n_{ii} > 0$ whenever $n \ge N(i)$ (Theorem 2.4.1). Since the chain is irreducible, for every pair of distinct states $i, j$ we can find $N(i, j)$ with $P^{N(i,j)}_{ij} > 0$. There are only finitely many states, so $N = \max_i N(i) + \max_{i,j} N(i, j) < \infty$. Suppose $n \ge N$. Then for any $i, j$,
$$P^n_{ij} \ge P^{N(i,j)}_{ij}\, P^{\,n - N(i,j)}_{jj} > 0,$$
since $n - N(i, j) \ge \max_i N(i) \ge N(j)$.

4. Let a Markov chain contain $r$ states. Prove the following: (a) If a state $k$ can be reached from $j$, then it can be reached in $r - 1$ steps or less.

Solution. (a) State $k$ can be reached from $k$ in zero steps, so assume $j \ne k$. If $j = i_0 \to i_1 \to \cdots \to i_n = k$ is a path leading from $j$ to $k$ and $n > r - 1$, then some state must appear twice in $\{i_0, \ldots, i_n\}$, say $i_l = i_m$ with $l < m$. Then $j = i_0 \to \cdots \to i_{l-1} \to i_l = i_m \to i_{m+1} \to \cdots \to i_n = k$ is a shorter path. Repeating this reduction, a minimal path has length $n \le r - 1$.
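The periods found in Problem 2 can be confirmed by brute force: take the gcd of the return times $n$ with $P^n_{ii} > 0$ over a finite horizon. A sketch (the `period` helper and the horizon `nmax=30` are my own choices, not a standard API):

```python
import numpy as np
from numpy.linalg import matrix_power
from math import gcd
from functools import reduce

Pa = np.array([[0,   0,   1,   0],
               [1,   0,   0,   0],
               [1/2, 1/2, 0,   0],
               [1/3, 1/3, 1/3, 0]])
Pb = np.array([[0,   1, 0,   0],
               [0,   0, 0,   1],
               [0,   1, 0,   0],
               [1/3, 0, 2/3, 0]])

def period(P, i, nmax=30):
    """gcd of return times n <= nmax with P^n_ii > 0 (0 if no return is seen)."""
    returns = [n for n in range(1, nmax + 1)
               if matrix_power(P, n)[i, i] > 1e-12]
    return reduce(gcd, returns, 0)

assert period(Pa, 0) == 1    # P^2_00 and P^3_00 are both positive, gcd(2,3) = 1
assert period(Pa, 3) == 0    # state 3 is never revisited
assert period(Pb, 0) == 3    # returns to 0 only at multiples of 3
assert np.allclose(matrix_power(Pb, 4), Pb)   # P^4 = P, so the pattern repeats
```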
5. Consider a random walk on the integers such that $P_{i,i+1} = p$, $P_{i,i-1} = q$ for every integer $i$ ($0 < p < 1$, $p + q = 1$). Determine $P^n_{00}$.

Solution: A return to zero occurs only when the number of upward moves equals the number of downward moves. Thus $P^{2m+1}_{00} = 0$, and by the binomial distribution for the number of upward jumps among $2m$ steps,
$$P^{2m}_{00} = \binom{2m}{m} p^m q^m.$$
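As a sanity check on this formula, one can propagate the exact distribution of the walk on a truncated integer line and compare the probability mass at the origin with $\binom{2m}{m}p^m q^m$; in the sketch below the truncation half-width `K`, the horizon of 10 steps, and $p = 0.4$ are arbitrary choices of mine:

```python
import numpy as np
from math import comb

p = 0.4
q = 1 - p

def return_prob(n, p):
    """P^n_00 for the simple random walk: 0 for odd n, C(2m,m) p^m q^m for n = 2m."""
    if n % 2:
        return 0.0
    m = n // 2
    return comb(n, m) * p**m * (1 - p)**(n - m)

# Cross-check against direct distribution propagation on positions -K..K.
K = 30                                    # wide enough that no mass escapes in 10 steps
dist = np.zeros(2 * K + 1)
dist[K] = 1.0                             # index K corresponds to position 0
for n in range(1, 11):
    new = np.zeros_like(dist)
    new[1:] += p * dist[:-1]              # step up: position +1
    new[:-1] += q * dist[1:]              # step down: position -1
    dist = new
    assert abs(dist[K] - return_prob(n, p)) < 1e-12
```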