CSCI 5512: Artificial Intelligence II (Fall ’19)
Homework 2
(Due Thu, Oct 17, 11:59 pm central)
1. (40 points) [Programming Assignment] Consider the Hidden Markov Model in Figure 1.
Figure 1: Hidden Markov Model
Assume that each of the hidden variables Xi, i = 0, 1, 2, 3, . . . and the evidence variables Ei, i = 1, 2, 3, . . . is Boolean and can take the two values T and F. Let P(X0 = T) = P(X0 = F) = 0.5. Let the transition matrix P(Xt+1|Xt) and the sensor matrix P(Et|Xt) be given by
T =
[ 0.7  0.3 ]
[ 0.4  0.6 ]

E =
[ 0.9  0.1 ]
[ 0.3  0.7 ],
where, in the T matrix,
T11 = P(Xt+1 = T | Xt = T),  T12 = P(Xt+1 = F | Xt = T),
T21 = P(Xt+1 = T | Xt = F),  T22 = P(Xt+1 = F | Xt = F),
and in the E matrix,
E11 = P(Et = T | Xt = T),  E12 = P(Et = F | Xt = T),
E21 = P(Et = T | Xt = F),  E22 = P(Et = F | Xt = F).
Consider two sequences of evidence e1:10 over 10 time steps:
Evidence sequence 1: e1:10 = 〈F, F, F, T, T, T, T, F, F, F 〉
Evidence sequence 2: e1:10 = 〈F, T, F, T, F, T, F, T, F, T 〉
For each of the above two sequences of evidence:
(a) (20 points) Write a program to compute the smoothed estimates of Xt, t = 1, . . . , 10, given evidence e1:10 (a minimal reference sketch appears at the end of this problem).
(b) (20 points) Write a program to find the most likely sequence of states X1:10 given evidence e1:10 (a reference sketch also appears at the end of this problem).
In addition to the smoothed estimates and the sequence of states, you have to submit code for SmoothHMM, implementing the computation of the smoothed estimates, and MaxSeq, implementing the computation of the most likely sequence. The code for both algorithms should take two input arguments:
(i) n, the length of the evidence (n = 10 for the two examples above)
(ii) The evidence sequence, containing a ‘1’ for T and a ‘0’ for F. Thus, for the first example e1:10 = 〈F, F, F, T, T, T, T, F, F, F〉, the input should be 0 0 0 1 1 1 1 0 0 0.
The output for SmoothHMM should be a list of length n with the smoothed estimates of P(Xt = T), t = 1, . . . , n. The output should be clearly displayed on screen after running the program.
The output for MaxSeq should be a binary list of length n containing the most likely sequence
of states with 1 corresponding to T and 0 corresponding to F. The output should be clearly
displayed on screen after running the program.
Sample input for Python 3.6 for (a) and (b) when n = 10 and e1:10 = 〈F, F, F, T, T, T, T, F, F, F 〉
$python SmoothHMM.py 10 0 0 0 1 1 1 1 0 0 0
$python MaxSeq.py 10 0 0 0 1 1 1 1 0 0 0
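For reference, a minimal sketch of the smoothed-estimate computation is given below. It encodes T and E with index 0 standing for T and index 1 for F, and runs the standard forward-backward recursion; the internal names (smooth_hmm, fwd, bwd) are illustrative only, and the command-line handling follows the sample invocation above. This is a sketch of one possible implementation, not the required one.

import sys
import numpy as np

# Transition matrix: rows index Xt, columns index Xt+1; index 0 = T, 1 = F.
T = np.array([[0.7, 0.3],
              [0.4, 0.6]])
# Sensor matrix: rows index Xt, columns index Et; index 0 = T, 1 = F.
E = np.array([[0.9, 0.1],
              [0.3, 0.7]])
PRIOR = np.array([0.5, 0.5])                  # P(X0)

def smooth_hmm(evidence):
    """Return the smoothed estimates P(Xt = T | e_1:n) for t = 1, ..., n."""
    n = len(evidence)
    # Forward pass: fwd[t] is proportional to P(Xt, e_1:t).
    fwd = np.zeros((n + 1, 2))
    fwd[0] = PRIOR
    for t in range(1, n + 1):
        e = 0 if evidence[t - 1] == 1 else 1  # input 1 -> T (index 0), 0 -> F (index 1)
        fwd[t] = E[:, e] * (T.T @ fwd[t - 1])
        fwd[t] /= fwd[t].sum()                # normalize for numerical stability
    # Backward pass: bwd[t] is proportional to P(e_t+1:n | Xt).
    bwd = np.ones((n + 1, 2))
    for t in range(n - 1, 0, -1):
        e = 0 if evidence[t] == 1 else 1      # evidence at step t + 1
        bwd[t] = T @ (E[:, e] * bwd[t + 1])
        bwd[t] /= bwd[t].sum()
    smoothed = fwd[1:] * bwd[1:]
    smoothed /= smoothed.sum(axis=1, keepdims=True)
    return smoothed[:, 0]                     # P(Xt = T | e_1:n), t = 1, ..., n

if __name__ == "__main__":
    n = int(sys.argv[1])
    ev = [int(x) for x in sys.argv[2:2 + n]]
    print(list(smooth_hmm(ev)))

Similarly, a sketch of the most-likely-sequence computation using the Viterbi recursion (again with illustrative internal names) might look as follows:

import sys
import numpy as np

T = np.array([[0.7, 0.3], [0.4, 0.6]])        # transition: rows Xt, cols Xt+1; 0 = T, 1 = F
E = np.array([[0.9, 0.1], [0.3, 0.7]])        # sensor: rows Xt, cols Et; 0 = T, 1 = F
PRIOR = np.array([0.5, 0.5])                  # P(X0)

def max_seq(evidence):
    """Return the most likely state sequence X_1:n as 1 (T) / 0 (F) values."""
    n = len(evidence)
    logT, logE = np.log(T), np.log(E)
    m = np.zeros((n + 1, 2))                  # m[t, x]: best log-probability of a path ending in state x at time t
    back = np.zeros((n + 1, 2), dtype=int)    # back[t, x]: best predecessor of state x at time t
    m[0] = np.log(PRIOR)
    for t in range(1, n + 1):
        e = 0 if evidence[t - 1] == 1 else 1
        scores = m[t - 1][:, None] + logT     # scores[i, j]: come from state i, move to state j
        back[t] = np.argmax(scores, axis=0)
        m[t] = logE[:, e] + np.max(scores, axis=0)
    # Trace the best path backwards from the best final state.
    states = [int(np.argmax(m[n]))]
    for t in range(n, 1, -1):
        states.append(int(back[t][states[-1]]))
    states.reverse()
    return [1 if s == 0 else 0 for s in states]   # state index 0 = T -> output 1

if __name__ == "__main__":
    n = int(sys.argv[1])
    ev = [int(x) for x in sys.argv[2:2 + n]]
    print(max_seq(ev))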
2. (45 points) [Programming Assignment] Consider the umbrella network shown in Figure 2.
Let U1, U2, . . . denote the sequence of evidence variables (umbrella), where ∀i, Ui = T (true)
or F (false). Let Ri be the random variable corresponding to the hidden state (rain) at step
i. Assume the prior probability of rain P(R0 = T) = P(R0 = F) = 1/2. Consider the following
three evidence sequences:
(i) u1:10 = (T, T, T, T, T, F, F, F, F, F )
(ii) u1:10 = (F, F, F, F, F, F, F, T, T, T )
(iii) u1:10 = (F, T, F, T, F, T, F, T, F, T )
Figure 2: The Umbrella Network. The network repeats the pair (Raint, Umbrellat) over time; its CPTs are P(Rt = T | Rt−1 = T) = 0.7, P(Rt = T | Rt−1 = F) = 0.3, P(Ut = T | Rt = T) = 0.9, and P(Ut = T | Rt = F) = 0.2.
For each of the three choices of u1:10, your code should output the filtering probability P(R10|u1:10). We want to approximate this probability using two separate sampling methods, as follows:
(a) (20 points) Estimate P(R10|u1:10) using likelihood weighting with 100 and 1000 samples. You have to submit code for lwUmbrella, which takes 3 arguments: an integer numSamples, denoting the number of samples (set to 100 and 1000); an integer numSteps, denoting the number of steps (set to 10); and evidence, denoting the evidence u1:numSteps (T = 1, F = 0) of length numSteps. The output should be the estimate P(RnumSteps|u1:numSteps) and the variance of the estimate (a reference sketch appears after part (c)).
Sample input for Python 3.6 when numSamples = 100, numSteps = 10, and u1:10 = (T, T, T, T, T, F, F, F, F, F)
$python lwUmbrella.py 100 10 1 1 1 1 1 0 0 0 0 0
(b) (20 points) Estimate P(R10|u1:10) using particle filtering with 100 and 1000 particles. You have to submit code for pfUmbrella, which takes 3 arguments: an integer numSamples, denoting the number of particles (set to 100 and 1000); an integer numSteps, denoting the number of steps (set to 10); and evidence, denoting the evidence u1:numSteps (T = 1, F = 0) of length numSteps. The output should be the estimate P(RnumSteps|u1:numSteps) and the variance of the estimate (a sketch likewise appears after part (c)).
Sample input for Python 3.6 when numSamples = 100, numSteps = 10, and u1:10 = (T, T, T, T, T, F, F, F, F, F)
$python pfUmbrella.py 100 10 1 1 1 1 1 0 0 0 0 0
(c) (5 points) Comment on the relative performance of the two methods across the three evidence sequences and the two sample sizes. In particular, how close are the estimates to the true probabilities, and what is the variance of the estimates?
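For reference, a minimal sketch of a likelihood-weighting estimator for this network is given below, using the CPTs from Figure 2. The internal names (lw_estimate, NUM_RUNS) are illustrative; in particular, the problem does not say how the variance of the estimate should be computed, so the sketch makes the assumption that it is the empirical variance of the estimator over NUM_RUNS independent repetitions.

import sys
import random

# CPTs from Figure 2, stored as the probability that the child variable is T.
P_R0 = 0.5                                   # P(R0 = T)
P_R = {True: 0.7, False: 0.3}                # P(Rt = T | R_{t-1})
P_U = {True: 0.9, False: 0.2}                # P(Ut = T | Rt)

def lw_estimate(num_samples, evidence):
    """One likelihood-weighting estimate of P(Rn = T | u_1:n)."""
    total_w = 0.0
    rain_w = 0.0
    for _ in range(num_samples):
        r = random.random() < P_R0           # sample R0 from the prior
        w = 1.0
        for u in evidence:
            r = random.random() < P_R[r]     # sample Rt from the transition model
            w *= P_U[r] if u == 1 else 1.0 - P_U[r]   # weight by the umbrella evidence
        total_w += w
        if r:                                # r now holds R_n
            rain_w += w
    return rain_w / total_w if total_w > 0 else 0.0

if __name__ == "__main__":
    num_samples = int(sys.argv[1])
    num_steps = int(sys.argv[2])
    evidence = [int(x) for x in sys.argv[3:3 + num_steps]]
    NUM_RUNS = 30                            # assumed number of repetitions for the variance
    estimates = [lw_estimate(num_samples, evidence) for _ in range(NUM_RUNS)]
    mean = sum(estimates) / NUM_RUNS
    var = sum((x - mean) ** 2 for x in estimates) / (NUM_RUNS - 1)
    print("estimate:", mean, "variance:", var)

A corresponding sketch of a bootstrap particle filter (propagate, weight, resample at every step), with the same caveat about how the variance is reported:

import sys
import random

P_R0 = 0.5                                   # P(R0 = T)
P_R = {True: 0.7, False: 0.3}                # P(Rt = T | R_{t-1})
P_U = {True: 0.9, False: 0.2}                # P(Ut = T | Rt)

def pf_estimate(num_particles, evidence):
    """One particle-filtering estimate of P(Rn = T | u_1:n)."""
    particles = [random.random() < P_R0 for _ in range(num_particles)]
    for u in evidence:
        # Propagate each particle through the transition model.
        particles = [random.random() < P_R[r] for r in particles]
        # Weight each particle by the likelihood of the observed umbrella value.
        weights = [P_U[r] if u == 1 else 1.0 - P_U[r] for r in particles]
        # Resample with probability proportional to weight.
        particles = random.choices(particles, weights=weights, k=num_particles)
    return sum(particles) / num_particles

if __name__ == "__main__":
    num_particles = int(sys.argv[1])
    num_steps = int(sys.argv[2])
    evidence = [int(x) for x in sys.argv[3:3 + num_steps]]
    NUM_RUNS = 30                            # assumed number of repetitions for the variance
    estimates = [pf_estimate(num_particles, evidence) for _ in range(NUM_RUNS)]
    mean = sum(estimates) / NUM_RUNS
    var = sum((x - mean) ** 2 for x in estimates) / (NUM_RUNS - 1)
    print("estimate:", mean, "variance:", var)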
3. (15 points) This question considers the value of perfect information (VPI), VPI_E(Ej), which evaluates the value of additional information Ej given existing information E.
(a) (7 points) Show that VPI is non-negative, i.e., VPI_E(Ej) ≥ 0 for all j and E.
(b) (8 points) Show that VPI is order independent, i.e.,
VPI_E(Ej, Ek) = VPI_E(Ej) + VPI_{E,Ej}(Ek) = VPI_E(Ek) + VPI_{E,Ek}(Ej).
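Both parts can be argued directly from the standard definition of VPI (as in AIMA), which is worth writing out before starting. Below, α is the best action given only the current evidence E, α_{e_j} is the best action after additionally observing E_j = e_j, and EU denotes expected utility; the display is a reminder of the definition, not part of the required solution:

\mathrm{VPI}_E(E_j) \;=\; \Bigl(\sum_{e_j} P(E_j = e_j \mid E)\,\mathrm{EU}\bigl(\alpha_{e_j} \mid E,\ E_j = e_j\bigr)\Bigr) \;-\; \mathrm{EU}(\alpha \mid E).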
Instructions
Please follow these instructions carefully. Code submitted without adhering to these instructions
will not receive any credit.
For each programming assignment, you have to submit the code as required by the problem, and the algorithm must be implemented in a main file named as specified in the problem (e.g., SmoothHMM.py). Only Python 3.6 will be accepted; any other language will receive zero credit. The program must run on the CSE labs machines; code that does not run there will not receive credit.