Last Modified: April 20, 2021
CS 179: Introduction to Graphical Models: Spring 2021
Homework 3
Due Date: Wednesday, April 28th
The submission for this homework should be a single PDF file containing all of the relevant code, figures, and any
text explaining your results. When coding your answers, try to write functions to encapsulate and reuse code,
instead of copying and pasting the same code multiple times. This will not only reduce your programming efforts,
but also make it easier for us to understand and give credit for your work. Show and explain the reasoning
behind your work!
Problem 1 (adapted from R. Dechter) (40 points)
Consider the Bayesian network shown below.
(a) (8 points) Draw the Markov random field (undirected graph) corresponding to this model and factorization.
(b) (8 points) Compute the marginal probability p(F) by eliminating variables in the order [A, B, C, D, E, G]
(here meaning, A first, then B, then C, etc.). Report which functions are combined at each step (e.g., in each
bucket of bucket elimination), and show the scope (arguments) of, and the table for, the function produced
(λi). What is the induced width of this order (the largest scope of a function produced during the
computation)?
Note: you may check your answer easily using the pyGMs GraphModel.eliminate function, but please compute
the intermediate functions manually.
(c) (8 points) Without actually computing the tables, what is the induced width of the ordering [D, G, E, F, B, C, A]?
(D first, then G, etc.)
(d) (8 points) Suppose that we observe evidence D = 0, C = 1. First, simplify the graphical model by assigning
the values of these variables (this will remove those nodes from the graph); what is the new Markov random
field (undirected graph) associated with this conditional model?
(e) (8 points) Now, compute the probability of evidence p(D = 0, C = 1) by eliminating (summing out) the
other variables. Select a good order and compute the intermediate functions manually as in part (b); you do
not need to output the tables themselves for this part, just the scopes and the final probability. What was the
induced width of the order you selected for this problem? Again, you can check your result easily using the
pyGMs condition and eliminate functions; see lecture and slides for examples.
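For the checks in parts (b) and (e), a minimal pyGMs sketch is below, with the factor tables entered from the CPTs
that follow. It mirrors the usage shown in lecture, but the exact eliminate signature and the convention that Factor
tables are indexed with variables in increasing label order are assumptions to verify against your pyGMs version.

    import numpy as np
    import pyGMs as gm

    # One binary variable per node, labeled alphabetically: A=0, ..., G=6.
    A, B, C, D, E, F, G = [gm.Var(i, 2) for i in range(7)]

    model = gm.GraphModel([
        gm.Factor([A], [0.3, 0.7]),                               # p(A)
        gm.Factor([B], [0.6, 0.4]),                               # p(B)
        gm.Factor([D], [0.7, 0.3]),                               # p(D)
        gm.Factor([A, C], [[0.15, 0.85], [0.25, 0.75]]),          # p(C|A), table[a, c]
        gm.Factor([B, C, E], [[[0.40, 0.60], [0.45, 0.55]],
                              [[0.60, 0.40], [0.30, 0.70]]]),     # p(E|B,C), table[b, c, e]
        gm.Factor([D, E, F], [[[0.25, 0.75], [0.60, 0.40]],
                              [[0.10, 0.90], [0.20, 0.80]]]),     # p(F|D,E), table[d, e, f]
        gm.Factor([E, G], [[0.10, 0.90], [0.30, 0.70]]),          # p(G|E), table[e, g]
    ])

    model.eliminate([A, B, C, D, E, G], 'sum')  # part (b): sum out everything except F
    print(model.joint().table)                  # should match your hand-computed p(F)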
[Figure: Bayesian network over binary variables A-G, with nodes arranged A; B, C; D, E; F, G and directed edges
A→C, B→E, C→E, D→F, E→F, E→G, matching the factorization p(A) p(B) p(D) p(C|A) p(E|B,C) p(F|D,E) p(G|E).]

A  p(A)
0  0.3
1  0.7

B  p(B)
0  0.6
1  0.4

D  p(D)
0  0.7
1  0.3

A  C  p(C|A)
0  0  0.15
0  1  0.85
1  0  0.25
1  1  0.75

E  G  p(G|E)
0  0  0.10
0  1  0.90
1  0  0.30
1  1  0.70

B  C  E  p(E|B,C)
0  0  0  0.40
0  0  1  0.60
0  1  0  0.45
0  1  1  0.55
1  0  0  0.60
1  0  1  0.40
1  1  0  0.30
1  1  1  0.70

D  E  F  p(F|D,E)
0  0  0  0.25
0  0  1  0.75
0  1  0  0.60
0  1  1  0.40
1  0  0  0.10
1  0  1  0.90
1  1  0  0.20
1  1  1  0.80
Problem 2: Hidden Markov Model (40 points)
Consider a four-state finite state machine, with states Yt ∈ {0, 1, 2, 3}, whose state transition diagram is shown
below. We always start (at time t = 0) in state Y0 = 0. In this problem, we will compute marginal probabilities
and posterior marginal probabilities.
[Figure: state transition diagram over states {0, 1, 2, 3}. The extracted edge probabilities are 0.5, 0.5, 1.0, 0.3,
0.4, 0.3, 0.3, and 0.7; which edge carries which probability is only recoverable from the original figure.]
(a) (8 points) Create (as a numpy array) the transition matrix T associated with this
state transition diagram. For consistency in solutions, please make the first axis
correspond to the current time t, and the second axis to the next time (t + 1),
i.e., T[a,b] = p(Yt+1 = b | Yt = a). Print the array T. To verify your answer, calculate,
both by hand and using T, the values of
(1) p(Y2 = 0 | Y0 = 0)
(2) p(Y2 = 2 | Y0 = 0)
(both at time t = 2), and verify that the hand and matrix computations agree.
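A numpy sketch of this check is below. The entries of T must be read off the transition diagram, so the values here
are placeholders, but the two-step computation p0 T T is the same regardless:

    import numpy as np

    T = np.zeros((4, 4))       # fill in: T[a, b] = p(Y_{t+1} = b | Y_t = a); each row sums to 1
    # e.g., T[0, 1] = 0.5      # hypothetical: an edge 0 -> 1 with weight 0.5 in the diagram

    p0 = np.zeros(4); p0[0] = 1.0   # we always start in state Y0 = 0
    pY2 = p0 @ T @ T                # two transitions give p(Y2 | Y0 = 0)
    print(pY2[0], pY2[2])           # compare with the hand calculations (1) and (2)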
(b) (8 points) Our observations Xt will be deterministic given the current state Yt,
with Xt = 0 when Yt ∈ {0, 2} and Xt = 1 when Yt ∈ {1, 3}, i.e., Xt indicates
whether we are in an odd-numbered state. Create (as a numpy array) the
observation matrix O, with O[a,c] = p(Xt = c | Yt = a). Print the array O.
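Since the emissions are deterministic, each row of O is an indicator; a minimal encoding:

    import numpy as np

    # X_t = 1 exactly in the odd-numbered states {1, 3}.
    O = np.array([[1., 0.],    # Y = 0  ->  X = 0
                  [0., 1.],    # Y = 1  ->  X = 1
                  [1., 0.],    # Y = 2  ->  X = 0
                  [0., 1.]])   # Y = 3  ->  X = 1
    print(O)                   # O[a, c] = p(X_t = c | Y_t = a)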
(c) (16 points) Compute the following probability distributions, either by hand or
using your arrays (a forward-filtering sketch follows this list):
(1) p(Y1)
(2) p(Y1|X1 = 0)
(3) p(Y2)
(4) p(Y2|X1 = 0, X2 = 1)
(5) p(Y2|X1 = 0, X2 = 0)
(6) p(Y3|X1 = 0, X2 = 0, X3 = 1)
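The conditioned marginals above all follow the same forward-filtering pattern: propagate one step through T,
multiply in the likelihood of the observation, and renormalize. A minimal sketch, using the hypothetical helper
name filter_marginals and the T, O, p0 from parts (a)-(b):

    import numpy as np

    def filter_marginals(T, O, p0, xs):
        """Return [p(Y1|X1), p(Y2|X1,X2), ...] for observations xs = [X1, X2, ...]."""
        p, out = p0, []
        for x in xs:
            p = p @ T            # predict:   p(Y_t | X_1 .. X_{t-1})
            p = p * O[:, x]      # update:    multiply in the likelihood of X_t = x
            p = p / p.sum()      # normalize: posterior marginal p(Y_t | X_1 .. X_t)
            out.append(p)
        return out

    # e.g., quantity (6) is filter_marginals(T, O, p0, [0, 0, 1])[-1];
    # unconditional marginals like (1) and (3) are just p0 @ T and p0 @ T @ T.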
(d) (8 points) What is the distribution of states far in the future, e.g., p(Y100)?
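Numerically, you can simply push p0 through many transitions; if the chain mixes, p(Yt) converges to a stationary
distribution π with π = πT (a left eigenvector of T with eigenvalue 1). A short sketch:

    p100 = p0 @ np.linalg.matrix_power(T, 100)   # p(Y_100)
    p101 = p0 @ np.linalg.matrix_power(T, 101)   # one more step
    print(p100, p101)   # if these agree, the chain has effectively reached pi = pi @ T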
Problem 3 (Python): (20 points)
In this problem, we will use the same model as Problem 2, but compute the most likely state sequence, i.e., the (arg)
maximum over the Y variables, in Python. Write a python function or script that takes in: the transition probability
matrix T, where T[i, j] is the probability of transitioning from state i to state j; an observation probability matrix
O (where O[i, k] is the probability of observing value k when in state i); an initial state distribution vector p0;
and an observation sequence X = [X1, . . . , XT], where Xt ∈ {0, 1}. It should compute the most likely state sequence
S = [S1, . . . , ST] given X using dynamic programming. You may use either numpy arrays or pyGMs factors to
represent the messages.
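A minimal dynamic-programming (Viterbi) sketch in numpy is below, using the hypothetical function name viterbi
and working in log space so that long sequences do not underflow:

    import numpy as np

    def viterbi(T, O, p0, xs):
        """Most likely state sequence given observations xs, by max-product DP."""
        n, S = len(xs), len(p0)
        with np.errstate(divide='ignore'):      # log(0) -> -inf is intended here
            lT, lO, lp0 = np.log(T), np.log(O), np.log(p0)
        m = np.empty((n, S))                    # m[t, s]: best log-prob of a path ending in s
        back = np.zeros((n, S), dtype=int)      # back-pointers for recovering the path
        m[0] = lp0 + lO[:, xs[0]]
        for t in range(1, n):
            cand = m[t - 1][:, None] + lT       # cand[i, j]: best path ending in i, then i -> j
            back[t] = cand.argmax(axis=0)
            m[t] = cand.max(axis=0) + lO[:, xs[t]]
        path = [int(m[-1].argmax())]            # best final state, then trace back-pointers
        for t in range(n - 1, 0, -1):
            path.append(int(back[t, path[-1]]))
        return path[::-1]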
You can use the following test cases to help debug:
◦ argmaxS p(Y | X = [0, 0, 1]) → [0, 2, 3]
◦ argmaxS p(Y | X = [0, 1, 0, 0]) → [0, 1, 2, 2]
◦ argmaxS p(Y | X = [0, 0, 0, 0]) → [0, 2, 2, 2]
◦ argmaxS p(Y | X = [0, 1, 0, 0, 0, 1, 0, 0, 1]) → [0, 1, 2, 2, 0, 1, 2, 0, 1]
◦ argmaxS p(Y | X = [0, 0, 1, 1, 1, 0, 0]) → [0, 2, 3, 3, 3, 2, 2]
For your solution, provide a listing of your function, and your predictions on a few more test cases:
(a) argmaxS p(Y | X = [0, 0, 0, 1]) → ?
(b) argmaxS p(Y | X = [0, 0, 0, 1, 1]) → ?
(c) argmaxS p(Y | X = [0, 0, 0, 1, 0]) → ?
(d) argmaxS p(Y | X = [0, 0, 0, 1, 0, 0]) → ?