Example Questions
EEEE4119
The University of Nottingham
DEPARTMENT OF ELECTRICAL AND ELECTRONIC ENGINEERING
A LEVEL 4 MODULE, SPRING 2020-21
ARTIFICIAL INTELLIGENCE AND INTELLIGENT SYSTEMS
Time allowed: TWO hours
Candidates may complete the front cover of their answer book and sign their desk card but
must NOT write anything else until the start of the examination period is announced.
Answer ALL questions.
Only a calculator from approved list A (or one functionally equivalent) may be used in this
examination.
Dictionaries are not allowed with one exception. Those whose first language is not English may
use a standard translation dictionary to translate between that language and English provided
that neither language is the subject of this examination. Subject specific translation dictionaries
are not permitted.
No electronic devices capable of storing and retrieving text, including electronic dictionaries,
may be used.
DO NOT turn the examination paper over until instructed to do so
ADDITIONAL MATERIAL: [none]
INFORMATION FOR INVIGILATORS:
Question papers should be collected in at the end of the exam – do not allow candidates to
take copies from the exam room.
1. (a) Consider a dichotomisation problem defined as
C1 = {(a1, a2) ∈ R² : a1 ≤ 2 ∧ a2 ≤ 3} ∪ {(a1, a2) ∈ R² : a1 ≥ 2 ∧ a2 ≥ 3}
C0 = {(a1, a2) ∈ R² : (a1, a2) ∉ C1}
(i) Draw and annotate the dichotomisation problem in the input space. Then,
provide a description of the given dichotomisation problem. [3]
(ii) Using Lippmann's multilayer perceptron rule, describe and draw, with
detailed annotation, the dichotomies process tree. [7]
(iii) Based on the dichotomies process in (a)(ii), draw a neural network architecture
and find suitable values for the weights in order to perform the required
classification. [9]
(b) Now consider the following case. An engineer was given a classification problem
to be solved using a neural network system. The engineer proposed a neural network
with a single operational layer and then attempted to design such a network
using the Perceptron Learning Law. The engineer found that the Perceptron
Learning Law failed to converge.
(i) Describe the meaning of this finding. [3]
(ii) Give a recommendation, with its rationale, for how to solve the classification
problem. [3]
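
Note on parts (a)(iii) and (b): the region defined in (a) is XOR-like about the point (2, 3), so no single linear threshold unit can separate C1 from C0, and the Perceptron Learning Law applied to a single operational layer therefore need not converge on such data. The Python sketch below is one possible hand-crafted construction in the spirit of Lippmann's rule (half-planes, then AND, then OR); the particular weights, thresholds and function names are illustrative choices, not the only valid answer.

def step(z):
    # Hard-limit activation: 1 if z >= 0, else 0.
    return 1.0 if z >= 0 else 0.0

def classify(a1, a2):
    """Two-hidden-layer threshold network for the region in Q1(a).

    Layer 1 (half-planes):    h1 = [a1 >= 2],   h2 = [a2 >= 3]
    Layer 2 (convex regions): g1 = h1 AND h2,   g2 = (NOT h1) AND (NOT h2)
    Layer 3 (union):          y  = g1 OR g2
    Points exactly on the lines a1 = 2 or a2 = 3 may fall either side,
    depending on the step-function convention.
    """
    h1 = step(a1 - 2.0)           # half-plane a1 >= 2
    h2 = step(a2 - 3.0)           # half-plane a2 >= 3
    g1 = step(h1 + h2 - 1.5)      # upper-right quadrant: h1 AND h2
    g2 = step(0.5 - h1 - h2)      # lower-left quadrant: (NOT h1) AND (NOT h2)
    return step(g1 + g2 - 0.5)    # union of the two quadrants

# Quick check (1.0 means class C1, 0.0 means class C0):
for point in [(1, 1), (3, 4), (1, 4), (3, 1)]:
    print(point, classify(*point))
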
2. A Support Vector Machine (SVM) classifier is to be built for a two-class problem.
There are a total of m d-dimensional training samples x1 to xm with associated
labels y1 to ym, where yi ∈ {−1, 1}. Note that our notation assumes a final constant
term, i.e. x = [x1 · · · xd 1]⊤.
(a) The decision boundary for linear SVMs has the form
w⊤x = 0
How would you expand this model into higher dimensions without using a kernel
approach? Give an example of how you might add 1 extra dimension. [2]
Why do we need to use kernel functions to go to higher dimensions with
SVMs? [2]
(b) A polynomial kernel can be written in the form
κ(xk, xt) = (xk⊤xt)^c
(i) How does the parameter c impact the form of the decision boundary? [2]
(ii) Discuss how you select an appropriate value for c that provides maximum
accuracy while preventing overfitting. [4]
(iii) Consider the kernel κ(xk, xt) = (xk⊤xt − 1)³ with dimension d = 2. Show that
this is equivalent to training an SVM on the following 4-dimensional transformed
space:
Φ(x1, x2) = [x1³  √3 x1²x2  √3 x1x2²  x2³]⊤
[6]
(iv) If the kernel is modified to κ(xk, xt) = (xk⊤xt)³, how many dimensions are
in this new transformed space? [4]
(v) For a quadratic kernel, κ(xk, xt) = (xk⊤xt)², show that the effective dimensionality
of the space (including the constant term), in terms of d, the number
of dimensions of the data points, is given by:
(d² + 3d)/2 + 1
[5]
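
A quick numerical sanity check of part (b)(iii) is to compare κ(xk, xt) = (xk⊤xt − 1)³ with Φ(xk)⊤Φ(xt) for random points, remembering that each x carries a trailing constant 1 under the notation above; the same script can also count monomials to confirm the dimensionality formula in part (v). The Python sketch below is illustrative only, and the function names are my own.

import numpy as np
from itertools import combinations_with_replacement

def kappa(xk, xt):
    # Cubic kernel from part (b)(iii); xk and xt include the trailing constant 1.
    return (xk @ xt - 1.0) ** 3

def phi(x):
    # Candidate 4-D feature map for d = 2, where x = [x1, x2, 1].
    x1, x2 = x[0], x[1]
    return np.array([x1**3,
                     np.sqrt(3) * x1**2 * x2,
                     np.sqrt(3) * x1 * x2**2,
                     x2**3])

rng = np.random.default_rng(0)
for _ in range(5):
    a = np.append(rng.normal(size=2), 1.0)    # [a1, a2, 1]
    b = np.append(rng.normal(size=2), 1.0)
    print(np.isclose(kappa(a, b), phi(a) @ phi(b)))   # expect True every time

def quadratic_dim(d):
    # Distinct degree-2 monomials in the d + 1 entries of [x1, ..., xd, 1]:
    # the effective dimensionality asked about in part (v).
    return len(list(combinations_with_replacement(range(d + 1), 2)))

for d in (2, 3, 5):
    print(d, quadratic_dim(d), (d**2 + 3*d) // 2 + 1)  # the two counts agree
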
3. Consider the following Markov Decision Process (MDP):
Figure 1: Markov decision process
We have states S1, S2, S3, S4, and S5. We have actions left and right, and the chosen
action happens with probability 1. In S1 the only option is to go back to S2, and
similarly in S5 we can only go back to S4. The reward for taking any action is r = 1,
except for taking action right from state S4, which has a reward r = 10. For all parts
of this problem, assume that γ = 0.8.
(a) What is the optimal policy for this MDP? [5]
(b) What is the optimal value function of state S5, i.e. V*(S5)? It is acceptable to
state it in terms of γ, but not in terms of state values. [6]
(c) Consider executing Q-learning on this MDP. Assume that the Q values for all
(state, action) pairs are initialized to 0, that α = 0.5, and that Q-learning uses
a greedy exploration policy, meaning that it always chooses the action with
maximum Q value. The algorithm breaks ties by choosing left. What are
the first 10 (state, action) pairs if our robot learns using Q-learning and starts
in state S3? For example, a (not necessarily correct) sequence might read:
{S3, left}, {S2, right}, {S3, right}, . . . [7]
(d) How would you modify the Q-learning approach above to improve performance
in a realistic setting? [2]
(e) If this MDP represents a robot navigating in an unknown environment, would
it be better to use Q-learning, TD-learning or Monte-Carlo learning?
Discuss. [5]
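
For part (c), it can help to trace the tabular update Q(s, a) ← Q(s, a) + α[r + γ max_a' Q(s', a') − Q(s, a)] mechanically. The Python sketch below encodes the chain MDP exactly as described above (deterministic moves, r = 1 everywhere except r = 10 for right from S4, greedy selection with ties broken towards left, α = 0.5, γ = 0.8) and prints a ten-step (state, action) trajectory starting from S3. The helper names are illustrative, not taken from the paper.

GAMMA, ALPHA = 0.8, 0.5
STATES = [1, 2, 3, 4, 5]

def available(s):
    # Actions available in each state; S1 and S5 can only bounce back.
    if s == 1:
        return ["right"]
    if s == 5:
        return ["left"]
    return ["left", "right"]      # "left" listed first, so ties break towards left

def step(s, a):
    # Deterministic transitions: left decrements the state index, right increments it.
    return s - 1 if a == "left" else s + 1

def reward(s, a):
    # r = 1 for every move except right from S4, which gives r = 10.
    return 10.0 if (s == 4 and a == "right") else 1.0

Q = {(s, a): 0.0 for s in STATES for a in ["left", "right"]}

s, trajectory = 3, []
for _ in range(10):                                        # first 10 (state, action) pairs
    a = max(available(s), key=lambda act: Q[(s, act)])     # greedy choice, tie -> left
    s_next = step(s, a)
    target = reward(s, a) + GAMMA * max(Q[(s_next, b)] for b in available(s_next))
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])
    trajectory.append((f"S{s}", a))
    s = s_next

print(trajectory)

Restricting the bootstrap maximum to the actions actually available in S1 and S5 keeps the untaken-action entries, which stay at their initial value of 0, from distorting the target.
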
4. Consider the following Bayesian network developed for graduates from Nottingham:
Figure 2: Bayesian network
(a) What is the probability that an EEEE4119 student (W = true) had quality
instruction (I = true) and became successful in life (S = true), but did not have
raw talent (T = false), yet was hard-working (H = true) and confident (C = true)?
Leave your answer unsimplified, in terms of constants from the probability
tables. [6]
(b) What is the probability of success in life (S = true) given that a student has high-quality
instruction (I = true)? Express your final answer in terms of expressions
of probabilities that could be read off the Bayes Net. You do not need to simplify
down to constants defined in the Bayes Net tables. You may use summations
as necessary. [6]
(c) What is the probability a student is hardworking (H = true), given that she
was an EEEE4119 student (W = true)? Express your final answer in terms of
expressions of probabilities that could be read off the Bayes Net. You do not
need to simplify down to constants defined in the Bayes Net tables. You may
use summations as necessary. [6]
(d) A biased coin has a probability p of landing on heads. We flip the coin 10 times
and find that 4 times it lands on heads.
(i) Write down Bayes rule, relating the posterior distribution of p, P(p|D), where
D is the observed set of flips, to P(D|p). The prior distribution is P(p). [1]
(ii) Using the binomial distribution, write an expression for P(D|p) (where D is the
data set, i.e. 4 heads in a set of 10). Hint: if you have a binary variable with
probability p of success and you run n trials with k successes, the binomial
distribution is
P = C(n, k) p^k (1 − p)^(n−k)
[2]
(iii) Calculate the ratio P(p = 0.4|D) / P(p = 0.5|D), assuming a uniform distribution for the prior P(p).
Comment on whether or not you think the coin is truly biased. [4]
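
For part (d)(iii), a uniform prior P(p) means the posterior ratio equals the likelihood ratio P(D|p = 0.4) / P(D|p = 0.5), and the binomial coefficient C(10, 4) cancels. A short Python check of the arithmetic (the function name is my own):

from math import comb

n, k = 10, 4                                  # 10 flips, 4 heads observed

def binom_lik(p, n=n, k=k):
    # Binomial likelihood P(D | p) = C(n, k) p^k (1 - p)^(n - k).
    return comb(n, k) * p**k * (1 - p)**(n - k)

ratio = binom_lik(0.4) / binom_lik(0.5)       # C(10, 4) cancels in the ratio
print(ratio)                                  # about 1.22: the two values of p fit the data almost equally well
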
EEEE4119 END
