COMP9414 Review
Lectures
Artificial Intelligence and Agents
Problem Solving and Search
Constraint Satisfaction Problems
Logic and Knowledge Representation
Reasoning with Uncertainty
Machine Learning
Natural Language Processing
Knowledge Based Systems
Neural Networks and Reinforcement Learning
COMP9414: Artificial Intelligence
Lecture 10: Review
Wayne Wobcke
e-mail:[email protected]
Environment Types
Fully Observable vs Partially Observable
Agent’s sensors give access to complete state of environment (no internal state required)
Deterministic vs Stochastic
Next state of environment determined only by current state and agent’s choice of action
Episodic vs Sequential
Agent’s experience divided into “episodes”; agent doesn’t need to think ahead in episodic environment
Static vs Dynamic
Environment changes while agent deliberates
Discrete vs Continuous
Limited number of distinct, clearly defined percepts and actions
What is an Agent?
An entity
situated: operates in a dynamically changing environment
reactive: responds to changes in a timely manner
autonomous: can control its own behaviour
proactive: exhibits goal-oriented behaviour
communicating: coordinate with other agents??
Examples: humans, dogs, ..., insects, sea creatures, ..., thermostats?
Where do current robots sit on the scale?
Example Agents
Medical diagnosis system
  Percepts: symptoms, findings, patient responses
  Actions: questions, tests, treatments
  Goals: healthy patient, minimise costs
  Environment: patient, hospital

Satellite image system
  Percepts: pixels of varying intensity, colour
  Actions: print categorisation of scene
  Goals: correct categorisation
  Environment: images from orbiting satellite

Automated taxi driver
  Percepts: cameras, speedometer, GPS, sonar, microphone
  Actions: steer, accelerate, brake, talk to passenger
  Goals: safe, fast, legal, comfortable trip, maximise profits
  Environment: roads, other traffic, pedestrians, customers

Robocup robot
  Percepts: camera images, laser range finder readings, sonar readings
  Actions: move motors, “kick” ball
  Goals: score goals
  Environment: playing field with ball and other robots
Based on Russell and Norvig (2010) Figure 2.5.
Specifying Agents
percepts: inputs to the agent via sensors
actions: outputs available to the agent via effectors
goals: objectives or performance measure of the agent
environment: world in which the agent operates
Most generally, a function from percept sequences to actions
Ideally rational agent does whatever action is expected to maximize some
performance measure – the agent may not know the performance measure
(Russell and Norvig 2010)
Resource bounded agent must make “good enough” decisions based on its
perceptual, computational and memory limitations (design tradeoffs)
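As a toy illustration of an agent as a function from percept sequences to actions (a made-up sketch, not from the lectures; the temperature threshold is arbitrary):

```python
def thermostat_agent(percepts):
    """An agent, viewed abstractly, maps the percept sequence so far to an action."""
    current_temp = percepts[-1]   # a simple reflex agent reacts only to the latest percept
    return "heater_on" if current_temp < 19.0 else "heater_off"

print(thermostat_agent([21.5, 20.1, 18.2]))   # -> heater_on
```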
Example Problem — Romania Map
(Figure: the Romania road map used as the running route-finding example, showing road distances between cities and straight-line distances from each city to Bucharest.)
State Space Search Problems
State space — set of all states reachable from initial state(s) by any
action sequence
Initial state(s) — element(s) of the state space
Transitions
◮ Operators — set of possible actions at agent’s disposal; describe
state reached after performing action in current state, or
◮ Successor function — s(x)= set of states reachable from state x by
performing a single action
Goal state(s) — element(s) of the state space
Path cost — cost of a sequence of transitions, used to evaluate solutions (applies to optimization problems)
A∗ Search
Idea: Use both cost of path generated and estimate to goal to order
nodes on the frontier
g(n) = cost of path from start to n; h(n) = estimate from n to goal
Order priority queue using function f (n) = g(n) + h(n)
f (n) is the estimated cost of the cheapest solution extending this path
Expand node from frontier with smallest f -value
Essentially combines uniform-cost search and greedy search
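A minimal Python sketch of this ordering (an illustration, not the lecture code); `graph` maps each node to a dict of neighbours and step costs, and `h` is the heuristic estimate to the goal (for the Romania map, the straight-line distance to Bucharest):

```python
import heapq

def astar(start, goal, graph, h):
    """Expand the frontier node with the smallest f(n) = g(n) + h(n)."""
    frontier = [(h(start), 0, start, [start])]      # entries are (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g                          # solution path and its cost
        for nbr, step in graph[node].items():
            g2 = g + step
            if g2 < best_g.get(nbr, float("inf")):  # keep only the cheapest known path
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h(nbr), g2, nbr, path + [nbr]))
    return None, float("inf")
```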
Summary — Blind Search
Criterion   Breadth-First   Uniform-Cost   Depth-First   Depth-Limited    Iterative Deepening   Bidirectional
Time        b^d             b^d            b^m           b^l              b^d                   b^(d/2)
Space       b^d             b^d            bm            bl               bd                    b^(d/2)
Optimal     Yes             Yes            No            No               Yes                   Yes
Complete    Yes             Yes            No            Yes, if l ≥ d    Yes                   Yes

b = branching factor
d = depth of shallowest solution
m = maximum depth of tree
l = depth limit
Forward Checking
Idea: Keep track of remaining legal values for unassigned variables
Terminate search when any variable has no legal values
(Figure: remaining legal values for the variables WA, NT, Q, NSW, V, SA and T after each assignment in the Australia map-colouring example.)
Constraint Satisfaction Problems
Constraint Satisfaction Problems are defined by a set of variables Xi,
each with a domain Di of possible values, and a set of constraints C
Aim is to find an assignment to each of the variables Xi (a value from the domain Di) such that all of the constraints C are satisfied
Hill Climbing by Min-Conflicts
(Figure: an n-Queens board annotated with the number of conflicts for each candidate position of the selected queen.)
Variable selection: randomly select any conflicted variable
Value selection by min-conflicts heuristic
◮ Choose value that violates fewest constraints
◮ Can (often) solve n-Queens for n≈ 10,000,000
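A sketch of min-conflicts for n-Queens (an illustration, not the lecture code); queens are placed one per column and `rows[c]` is the row of the queen in column c:

```python
import random

def conflicts(rows, col, row):
    """Number of other queens attacking a queen placed at (col, row)."""
    return sum(1 for c, r in enumerate(rows)
               if c != col and (r == row or abs(r - row) == abs(c - col)))

def min_conflicts(n, max_steps=100_000):
    rows = [random.randrange(n) for _ in range(n)]            # random complete assignment
    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(rows, c, rows[c])]
        if not conflicted:
            return rows                                        # no constraint violated
        col = random.choice(conflicted)                        # variable selection: random conflicted variable
        rows[col] = min(range(n), key=lambda r: conflicts(rows, col, r))   # value selection: min-conflicts
    return None

print(min_conflicts(8))
```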
Arc Consistency
Simplest form of constraint propagation makes each arc consistent
X → Y is consistent if
for every value x in dom(X) there is some allowed y in dom(Y )
(Figure: remaining domains for WA, NT, Q, NSW, V, SA and T in the Australia map-colouring example.)
Make X → Y arc consistent by removing from dom(X) any value x for which there is no allowed y in dom(Y)
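A sketch of the revision step underlying AC-3 (an illustration under assumed data structures): `domains` maps each variable to a set of values and `allowed(x, y)` tests the binary constraint on the arc X → Y:

```python
def revise(domains, X, Y, allowed):
    """Make the arc X -> Y consistent: remove any x with no allowed y in dom(Y)."""
    removed = {x for x in domains[X]
               if not any(allowed(x, y) for y in domains[Y])}
    domains[X] -= removed
    return bool(removed)   # if dom(X) changed, AC-3 re-queues all arcs Z -> X
```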
Truth Table Semantics
The semantics of the connectives can be given by truth tables
P Q ¬P P∧Q P∨Q P→ Q P↔ Q
True True False True True True True
True False False False True False False
False True True False True True False
False False True False False True True
One row for each possible assignment of True/False to variables
Important: P and Q are any sentences, including complex sentences
Propositional Logic
Use letters to stand for “basic” propositions; combine them into more
complex sentences using operators for not, and, or, implies, iff
Propositional connectives:
¬ negation ¬P “not P”
∧ conjunction P∧Q “P and Q”
∨ disjunction P∨Q “P or Q”
→ implication P→ Q “If P then Q”
↔ bi-implication P↔ Q “P if and only if Q”
Conversion to Conjunctive Normal Form
Eliminate ↔ by rewriting P ↔ Q as (P → Q) ∧ (Q → P)
Eliminate → by rewriting P → Q as ¬P ∨ Q
Use De Morgan’s laws to push ¬ inwards (repeatedly)
◮ Rewrite ¬(P∧Q) as ¬P∨¬Q
◮ Rewrite ¬(P∨Q) as ¬P∧¬Q
Eliminate double negations: rewrite ¬¬P as P
Use the distributive laws to get CNF [or DNF] – if necessary
◮ Rewrite (P∧Q)∨R as (P∨R)∧ (Q∨R) [for CNF]
◮ Rewrite (P∨Q)∧R as (P∧R)∨ (Q∧R) [for DNF]
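A worked example of these steps (not from the slides), converting (P ↔ Q) → R to CNF:

```latex
\begin{align*}
(P \leftrightarrow Q) \to R
 &\equiv ((P \to Q) \land (Q \to P)) \to R
   && \text{eliminate } \leftrightarrow \\
 &\equiv \lnot((\lnot P \lor Q) \land (\lnot Q \lor P)) \lor R
   && \text{eliminate } \to \\
 &\equiv ((P \land \lnot Q) \lor (Q \land \lnot P)) \lor R
   && \text{De Morgan, double negation} \\
 &\equiv (P \lor Q \lor R) \land (\lnot P \lor \lnot Q \lor R)
   && \text{distribute } \lor \text{ over } \land \text{, drop tautologous clauses}
\end{align*}
```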
Definitions
A sentence is valid if it is True under all possible assignments of
True/False to its variables (e.g. P∨¬P)
A tautology is a valid sentence
Two sentences are equivalent if they have the same truth table, e.g.
P∧Q and Q∧P
◮ So P is equivalent to Q if and only if P↔ Q is valid
A sentence is satisfiable if there is some assignment of True/False to
its variables for which the sentence is True
A sentence is unsatisfiable if it is not satisfiable (e.g. P∧¬P)
◮ Sentence is False for all assignments of True/False to its variables
◮ So P is a tautology if and only if ¬P is unsatisfiable
Applying Resolution Refutation
Negate query to be proven (resolution is a refutation system)
Convert knowledge base and negated query into CNF
Repeatedly apply resolution until either the empty clause (contradiction) is derived or no more clauses can be derived
If the empty clause is derived, answer ‘yes’ (query follows from
knowledge base), otherwise answer ‘no’ (query does not follow from
knowledge base)
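A small sketch of this procedure for propositional clauses (an illustration, not the lecture code); a clause is a frozenset of literals and a literal is a string such as 'A' or '~A':

```python
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolvents(c1, c2):
    """All clauses obtained by resolving c1 and c2 on a complementary pair of literals."""
    return {frozenset((c1 - {lit}) | (c2 - {negate(lit)}))
            for lit in c1 if negate(lit) in c2}

def follows(kb, negated_query):
    """Resolution refutation: True iff the empty clause is derivable from KB plus the negated query."""
    clauses = set(kb) | set(negated_query)
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                for r in resolvents(c1, c2):
                    if not r:
                        return True        # empty clause: contradiction, so the query follows
                    new.add(r)
        if new <= clauses:
            return False                   # nothing new can be derived: the query does not follow
        clauses |= new

# e.g. from A∨B and ¬B, conclude A (negated query {¬A})
print(follows({frozenset({"A", "B"}), frozenset({"~B"})}, {frozenset({"~A"})}))   # True
```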
Resolution Rule of Inference
From A1 ∨ ··· ∨ Am ∨ B and ¬B ∨ C1 ∨ ··· ∨ Cn infer A1 ∨ ··· ∨ Am ∨ C1 ∨ ··· ∨ Cn

where B is a propositional variable and the Ai and Cj are literals
B and ¬B are complementary literals
A1 ∨ ··· ∨ Am ∨ C1 ∨ ··· ∨ Cn is the resolvent of the two clauses
Special case: if there are no Ai and Cj, the resolvent is the empty clause, denoted □
Conditional Probability by Enumeration
Full joint distribution:

           toothache           ¬toothache
           catch   ¬catch      catch   ¬catch
cavity     0.108   0.012       0.072   0.008
¬cavity    0.016   0.064       0.144   0.576

P(¬cavity|toothache) = P(¬cavity ∧ toothache) / P(toothache)
                     = (0.016 + 0.064) / (0.108 + 0.012 + 0.016 + 0.064)
                     = 0.4
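The same calculation as a short sketch (an illustration); the joint distribution is keyed by the truth values of (cavity, toothache, catch):

```python
# Full joint distribution P(Cavity, Toothache, Catch) from the table above
joint = {(True,  True,  True):  0.108, (True,  True,  False): 0.012,
         (True,  False, True):  0.072, (True,  False, False): 0.008,
         (False, True,  True):  0.016, (False, True,  False): 0.064,
         (False, False, True):  0.144, (False, False, False): 0.576}

p_toothache = sum(p for (cav, tooth, catch), p in joint.items() if tooth)
p_no_cavity_and_toothache = sum(p for (cav, tooth, catch), p in joint.items() if tooth and not cav)
print(p_no_cavity_and_toothache / p_toothache)   # -> 0.4
```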
Random Variables
Propositions are random variables that can take on several values
P(Weather = Sunny) = 0.8
P(Weather = Rain) = 0.1
P(Weather = Cloudy) = 0.09
P(Weather = Snow) = 0.01
Every random variable X has a domain of possible values
〈x1, x2, · · · ,xn〉
Probabilities of all possible values P(Weather)= 〈0.8, 0.1, 0.09, 0.01〉
is a probability distribution
P(Weather, Appendicitis) is a combination of random variables
represented by cross product (can also use logical connectives
P(A∧B) to represent compound events)
Bayesian Networks
Example (Pearl, 1988)
P(Burglary) = 0.001
P(Earthquake) = 0.002
Alarm depends on Burglary and Earthquake:
  P(Alarm | Burglary, Earthquake) = 0.95
  P(Alarm | Burglary, ¬Earthquake) = 0.94
  P(Alarm | ¬Burglary, Earthquake) = 0.29
  P(Alarm | ¬Burglary, ¬Earthquake) = 0.001
JohnCalls depends on Alarm: P(JohnCalls | Alarm) = 0.90, P(JohnCalls | ¬Alarm) = 0.05
MaryCalls depends on Alarm: P(MaryCalls | Alarm) = 0.70, P(MaryCalls | ¬Alarm) = 0.01
Probabilities summarize potentially infinite set of possible circumstances
Bayes’ Rule
P(B|A) = P(A|B)P(B) / P(A)

AI systems abandon joint probabilities and work directly with conditional probabilities using Bayes’ Rule
Deriving Bayes’ Rule:
  P(A∧B) = P(A|B)P(B) (Definition)
  P(B∧A) = P(B|A)P(A) (Definition)
  So P(A|B)P(B) = P(B|A)P(A) since P(A∧B) = P(B∧A)
  Hence P(B|A) = P(A|B)P(B)/P(A) if P(A) ≠ 0
Note: If P(A) = 0, P(B|A) is undefined
Bigram Model
Maximize P(w1, · · · ,wn|t1, · · · , tn).P(t1, · · · , tn)
Apply independence assumptions (Markov assumptions)
◮ P(w1, · · · ,wn|t1, · · · , tn) = ΠP(wi|ti)
◮ Observations (words) depend only on states (tags)
◮ P(t1, · · · , tn) = P(tn|tn−1). · · · .P(t1|φ), where φ = start
◮ Bigram model: state (tag) depends only on previous state (tag)
Estimate probabilities
◮ P(ti|tj) = #(tj ti occurs)/#(tj starts a bigram)
◮ Choose tag sequence that maximizes ΠP(wi|ti).P(ti|ti−1)
◮ Parts of speech generated by finite state machine
Example – Causal Inference
P(JohnCalls|Burglary)
P(J|B) = P(J|A∧B).P(A|B)+P(J|¬A∧B).P(¬A|B)
= P(J|A).P(A|B)+P(J|¬A).P(¬A|B)
= P(J|A).P(A|B)+P(J|¬A).(1−P(A|B))
Now P(A|B) = P(A|B∧E).P(E|B)+P(A|B∧¬E).P(¬E|B)
= P(A|B∧E).P(E)+P(A|B∧¬E).P(¬E)
= 0.95×0.002+0.94×0.998= 0.94002
Therefore P(J|B) = 0.90×0.94002+0.05×0.05998= 0.849017
Fact 3: P(X |Z) = P(X |Y ∧Z).P(Y |Z)+P(X |¬Y ∧Z).P(¬Y |Z), since
X ∧Z⇔ (X ∧Y ∧Z)∨ (X ∧¬Y ∧Z) (conditional version of Fact 2)
Viterbi Algorithm
1. Sweep forward (one word at a time) saving only the most likely
sequence (and its probability) for each tag ti of wi
2. Select highest probability final state
3. Follow chain backwards to extract tag sequence
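A compact sketch of these steps (an illustration under assumed inputs): `trans[t1][t2]` is P(t2 | t1) with a start symbol '<s>', `emit[t][w]` is P(w | t), and the best path to each tag is stored directly rather than via explicit backpointers:

```python
def viterbi(words, tags, trans, emit):
    """Most likely tag sequence for words under a bigram HMM."""
    # initialise with the first word
    best = {t: (trans["<s>"].get(t, 0.0) * emit[t].get(words[0], 0.0), [t]) for t in tags}
    # forward sweep: for each tag, keep only the most likely sequence ending in that tag
    for w in words[1:]:
        best = {t: max(((p * trans[prev].get(t, 0.0) * emit[t].get(w, 0.0), path + [t])
                        for prev, (p, path) in best.items()), key=lambda x: x[0])
                for t in tags}
    prob, path = max(best.values(), key=lambda x: x[0])   # highest-probability final state
    return path, prob
```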
Hidden Markov Model for POS Tagging
Restaurant Training Data
Alt Bar F/S Hun Pat Price Rain Res Type Est Wait?
X1 T F F T Some $$$ F T French 0–10 T
X2 T F F T Full $ F F Thai 30–60 F
X3 F T F F Some $ F F Burger 0–10 T
X4 T F T T Full $ F F Thai 10–30 T
X5 T F T F Full $$$ F T French >60 F
X6 F T F T Some $$ T T Italian 0–10 T
X7 F T F F None $ T F Burger 0–10 F
X8 F F F T Some $$ T T Thai 0–10 T
X9 F T T F Full $ T F Burger >60 F
X10 T T T T Full $$$ F T Italian 10–30 F
X11 F F F F None $ F F Thai 0–10 F
X12 T T T T Full $ F F Burger 30–60 T
Supervised Learning
Given a training set and a test set, each consisting of a set of items; for each item in the training set, a set of features and a target output
Learner must learn a model that can predict the target output for any
given item (characterized by its set of features)
Learner is given the input features and target output for each item in
the training set
◮ Items may be presented all at once (batch) or in sequence (online)
◮ Items may be presented at random or in time order (stream)
◮ Learner cannot use the test set at all in defining the model
Model is evaluated by its performance on predicting the output for
each item in the test set
Information Gain
(Figure: the 12 training examples split by Patrons (None/Some/Full) and by Type (French/Italian/Thai/Burger).)

For Patrons, Entropy = (1/6)(0) + (1/3)(0) + (1/2)[−(1/3)log(1/3) − (2/3)log(2/3)]
                     = 0 + 0 + (1/2)[(1/3)(1.585) + (2/3)(0.585)]
                     = 0.459

For Type, Entropy = (1/6)(1) + (1/6)(1) + (1/3)(1) + (1/3)(1) = 1
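The same numbers computed in a short sketch (an illustration); each split is given as a list of (negatives, positives) counts for the child nodes:

```python
from math import log2

def entropy(neg, pos):
    total = neg + pos
    return sum(-c / total * log2(c / total) for c in (neg, pos) if c)

def split_entropy(children):
    """Weighted average entropy of the child nodes after a split."""
    n = sum(neg + pos for neg, pos in children)
    return sum((neg + pos) / n * entropy(neg, pos) for neg, pos in children)

# Patrons: None (2-, 0+), Some (0-, 4+), Full (4-, 2+)
print(split_entropy([(2, 0), (0, 4), (4, 2)]))           # ~0.459
# Type: French (1-, 1+), Italian (1-, 1+), Thai (2-, 2+), Burger (2-, 2+)
print(split_entropy([(1, 1), (1, 1), (2, 2), (2, 2)]))   # 1.0
```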
Choosing an Attribute to Split
(Figure: the examples split by Patrons (None/Some/Full) and by Type (French/Italian/Thai/Burger).)
Patrons is a “more informative” attribute than Type, because it splits the
examples more nearly into sets that are “all positive” or “all negative”
This notion of “informativeness” can be quantified using the mathematical
concept of “entropy”
A parsimonious tree can be built by minimizing the entropy at each step
Laplace Error and Pruning
Following Ockham’s Razor, prune branches that do not provide much
benefit in classifying the items (aids generalization, avoids over-fitting)
For a leaf node, all items will be assigned the majority class at that node.
Estimate the error rate on (unseen) test items using the Laplace error
E = 1 − (n+1)/(N+k)
N = total number of (training) items at the node
n = number of (training) items in the majority class
k = number of classes
If the average Laplace error of the children exceeds that of the parent node,
prune off the children
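A quick worked example (invented numbers, not from the slides): a leaf with k = 2 classes, N = 6 training items and n = 4 of them in the majority class has estimated error

```latex
E = 1 - \frac{n+1}{N+k} = 1 - \frac{4+1}{6+2} = 0.375
```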
Induced Decision Tree
Patrons?
  None: No
  Some: Yes
  Full: Hungry?
    No: No
    Yes: Type?
      French: Yes
      Italian: No
      Thai: Fri/Sat? (No: No, Yes: Yes)
      Burger: Yes
Bernoulli Model
Maximize P(x1, · · · ,xn|c).P(c)
Features are presence or absence of word wi in document
Apply independence assumptions
◮ P(x1, · · · ,xn|c) = P(x1|c). · · · .P(xn|c)
◮ Probability of word w (not) in class c independent of context
Estimate probabilities
◮ P(w|c) = #(documents in class c containing w)/#(documents in class c)
◮ P(¬w|c) = 1−P(w|c)
◮ P(c) = #(documents in class c)/#documents
Text Classification
Input: A document (e-mail, news article, review, tweet)
Output: One class drawn from a fixed set of classes
◮ So text classification is a multi-class classification problem
◮ . . . and sometimes a multi-label classification problem
Learning Problem
◮ Input: Training set of labelled documents {(d1, c1), · · ·}
◮ Output: Learned classifier that maps d to predicted class c
Bag of Words Model
I love this movie! It’s sweet, but with satirical
humor. The dialogue is great and the adventure
scenes are fun... It manages to be whimsical and
romantic while laughing at the conventions of
the fairy tale genre. I would recommend it to
just about anyone. I’ve seen it several times,
and I’m always happy to see it again whenever
I have a friend who hasn’t seen it yet!
Word counts: it 6, I 5, the 4, to 3, and 3, seen 2, yet 1, would 1, whimsical 1, times 1, sweet 1, satirical 1, adventure 1, genre 1, fairy 1, humor 1, have 1, great 1
Naive Bayes Classification
w1 w2 w3 w4 Class
1 0 0 1 1
0 0 0 1 0
1 1 0 1 0
1 0 1 1 1
0 1 1 0 0
1 0 0 0 0
1 0 1 0 1
0 1 0 0 1
0 1 0 1 0
1 1 1 0 0
Class= 1 Class= 0
P(Class) 0.40 0.60
P(w1|Class) 0.75 0.50
P(w2|Class) 0.25 0.67
P(w3|Class) 0.50 0.33
P(w4|Class) 0.50 0.50
To classify document with w2, w3, w4
• P(Class= 1|¬w1,w2,w3,w4)
= ((1−0.75)∗0.25∗0.5∗0.5)∗0.4
= 0.00625
• P(Class= 0|¬w1,w2,w3,w4)
= ((1−0.5)∗0.5∗0.67∗0.33)∗0.6
= 0.03333
Natural Languages – Ambiguity
Natural languages exhibit ambiguity
“The fisherman went to the bank” (lexical)
“The boy saw a girl with a telescope” (structural)
“Every student took an exam” (semantic)
“The table won’t fit through the doorway because it is too
[wide/narrow]” (pragmatic)
Ambiguity makes it difficult to interpret meaning of phrases/sentences
◮ But also makes inference harder to define and compute
Resolve ambiguity by mapping to unambiguous representation
MNB Example
Words Class
d1 Chinese Beijing Chinese c
d2 Chinese Chinese Shanghai c
d3 Chinese Macao c
d4 Tokyo Japan Chinese j
d5 Chinese Chinese Chinese Tokyo Japan ?
P(Chinese|c) = (5+1)/(8+6) = 3/7
P(Tokyo|c) = (0+1)/(8+6) = 1/14
P(Japan|c) = (0+1)/(8+6) = 1/14
P(Chinese| j) = (1+1)/(3+6) = 2/9
P(Tokyo| j) = (1+1)/(3+6) = 2/9
P(Japan| j) = (1+1)/(3+6) = 2/9
To classify document d5
• P(c|d5) ∝ (3/7)^3 · (1/14) · (1/14) · (3/4) ≈ 0.0003
• P(j|d5) ∝ (2/9)^3 · (2/9) · (2/9) · (1/4) ≈ 0.0001
• Choose class c
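A sketch reproducing this calculation (an illustration, not the lecture code), with add-one smoothing over the 6-word vocabulary:

```python
from collections import Counter

train = [("Chinese Beijing Chinese", "c"), ("Chinese Chinese Shanghai", "c"),
         ("Chinese Macao", "c"), ("Tokyo Japan Chinese", "j")]
vocab = {w for doc, _ in train for w in doc.split()}        # 6 word types

prior, counts, totals = {}, {}, {}
for cls in {c for _, c in train}:
    docs = [d.split() for d, c in train if c == cls]
    prior[cls] = len(docs) / len(train)
    counts[cls] = Counter(w for d in docs for w in d)
    totals[cls] = sum(counts[cls].values())

def score(doc, cls):
    p = prior[cls]
    for w in doc.split():
        p *= (counts[cls][w] + 1) / (totals[cls] + len(vocab))   # add-one smoothed P(w | cls)
    return p

d5 = "Chinese Chinese Chinese Tokyo Japan"
print({cls: score(d5, cls) for cls in prior})   # c ~ 0.0003, j ~ 0.0001, so choose c
```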
Syntactic Structure
Syntactically ambiguous = more than one parse tree
Typical (Small) Grammar
S→ NP VP
NP→ [Det] Adj∗ N [AP | PP | Rel Clause]∗
VP→ V [NP] [NP] PP∗
AP→ Adj PP
PP→ P NP
Det→ a | an | the | . . .
N→ John | park | telescope | . . .
V→ saw | likes | believes | . . .
Adj→ hot | hotter | . . .
P→ in | . . .
Special notation: ∗ is “0 or more”; [ . . ] is “optional”
Example Chart
Chart Parsing
Use a chart to record parsed fragments and hypotheses
Hypotheses N → α • β where N → αβ is a grammar rule means
“trying to parse N as αβ and have so far parsed α”
One node in chart for each word gap, start and end
One arc in chart for each hypothesis
At each step, apply fundamental rule
◮ If chart has N→ α•Bβ from n1 to n2 and B→ γ• from n2 to n3
add N→ αB•β from n1 to n3
Accept sentence when S→ α• is added from start to end
Can produce any sort of derivation
Converting English into First-Order Logic
Everyone likes lying on the beach — ∀x likes lying on beach(x)
Someone likes Fido — ∃x likes(x, Fido)
No one likes Fido — ¬∃x likes(x, Fido) (or ∀x¬likes(x, Fido))
Fido doesn’t like everyone — ¬∀x likes(Fido, x)
All cats are mammals — ∀x(cat(x)→ mammal(x))
Some mammals are carnivorous — ∃x(mammal(x)∧carnivorous(x))
Note: ∀xA(x)⇔¬∃x¬A(x), ∃xA(x)⇔¬∀x¬A(x)
First-Order Logic
Terms: constants, variables, functions applied to terms (refer to
objects)
◮ e.g. a, f (a), mother of (Mary), . . .
Atomic formulae: predicates applied to tuples of terms
◮ e.g. likes(Mary, mother of (Mary)), likes(x, a)
Quantified formulae:
◮ e.g. ∀x likes(x, a), ∃x likes(x, mother of (y))
◮ here the second occurrences of x are bound by the quantifier (∀ in
the first case, ∃ in the second) and y in the second formula is free
First-Order Resolution
From A1 ∨ ··· ∨ Am ∨ B and ¬B′ ∨ C1 ∨ ··· ∨ Cn infer (A1 ∨ ··· ∨ Am ∨ C1 ∨ ··· ∨ Cn)θ

where B, B′ are positive literals, the Ai and Cj are literals, and θ is an mgu of B and B′
B and ¬B′ are complementary literals
(A1 ∨ ··· ∨ Am ∨ C1 ∨ ··· ∨ Cn)θ is the resolvent of the two clauses
Special case: if there are no Ai and Cj, the resolvent is the empty clause, denoted □
Defining Semantic Properties
Brothers are siblings
∀x∀y(brother(x, y)→ sibling(x, y))
“Sibling” is symmetric
∀x∀y(sibling(x, y)↔ sibling(y, x))
One’s mother is one’s female parent
∀x∀y(mother(x, y) ↔ (female(x) ∧ parent(x, y)))
A first cousin is a child of a parent’s sibling
∀x∀y(firstcousin(x, y) ↔ ∃p∃s(parent(p, x) ∧ sibling(p, s) ∧ parent(s, y)))
Examples
{P(x,a),P(b,c)} is not unifiable
{P( f (x),y),P(a,w)} is not unifiable
{P(x,c),P(b,c)} is unifiable by {x/b}
{P( f (x),y),P( f (a),w)} is unifiable by
σ = {x/a,y/w}, τ = {x/a,y/a,w/a}, υ = {x/a,y/b,w/b}
Note that σ is an mgu and τ = σθ where θ = . . .?
{P(x),P( f (x))} is not unifiable (c.f. occur check!)
Unification
A unifier of two atomic formulae is a substitution of terms for
variables that makes them identical
◮ Each variable has at most one associated term
◮ Substitutions are applied simultaneously
Unifier of P(x, f (a), z) and P(z, z, u) : {x/ f (a), z/ f (a), u/ f (a)}
Substitution σ1 is a more general unifier than a substitution σ2 if for
some substitution τ, σ2 = σ1τ (i.e. σ1 followed by τ)
Theorem. If two atomic formulae are unifiable, they have a most
general unifier (mgu).
Perceptron Learning Rule
Adjust the weights as each input is presented
Recall s = w1x1 + w2x2 + w0

If g(s) = 0 but should be 1:
  wk ← wk + ηxk
  w0 ← w0 + η
  so s ← s + η(1 + Σk xk²)
If g(s) = 1 but should be 0:
  wk ← wk − ηxk
  w0 ← w0 − η
  so s ← s − η(1 + Σk xk²)
Otherwise weights are unchanged (η > 0 is called the learning rate)
Theorem: This will eventually learn to classify the data correctly,
as long as they are linearly separable
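A sketch of the rule on 2-input binary data (an illustration, not the lecture code), with a step transfer function g(s) = 1 if s ≥ 0 and 0 otherwise:

```python
def g(s):
    return 1 if s >= 0 else 0

def train_perceptron(data, eta=0.1, epochs=100):
    """data is a list of ((x1, x2), target) pairs with targets 0 or 1."""
    w1 = w2 = w0 = 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = g(w1 * x1 + w2 * x2 + w0)
            if out != target:                    # adjust weights only on a misclassification
                delta = eta if target == 1 else -eta
                w1, w2, w0 = w1 + delta * x1, w2 + delta * x2, w0 + delta
    return w1, w2, w0

# AND is linearly separable, so the rule converges; XOR would not
print(train_perceptron([((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]))
```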
McCulloch & Pitts Model of a Single Neuron
(Figure: a single neuron with inputs x1, x2, weights w1, w2, bias weight w0 = −θ, summation s and transfer function g.)

s = w1x1 + w2x2 − θ = w1x1 + w2x2 + w0

x1, x2 are inputs
w1, w2 are synaptic weights
θ is a threshold
w0 is a bias weight
g is the transfer function
Reinforcement Learning Framework
Agent interacts with its environment
There is a set S of states and a set A of actions
At each time step t, the agent is in some state st and must choose an action at, whereupon it goes into state st+1 = δ(st, at) and receives reward r(st, at)
In general, r() and δ() can be multi-valued, with a random element
The aim is to find an optimal policy π : S → A which maximizes the cumulative reward
Multi-Layer Neural Networks
(Figure: a two-layer network of threshold units computing XOR, built by hand from an AND unit and a NOR unit feeding a NOR output unit, with fixed weights and thresholds.)
Question: Given an explicit logical function, we can design a multi-layer
neural network by hand to compute that function – but if we are just given
a set of training data, can we train a multi-layer network to fit this data?
Calculation
Theorem: In a deterministic environment, for an optimal policy, the value function V* satisfies the Bellman equations V*(s) = r(s, a) + γV*(δ(s, a)), where a = π*(s) is the optimal action at state s.

Let δ*(s) be the transition function for π*(s) and suppose γ = 0.9.

1. Suppose δ*(s1) = s1. Then V*(s1) = 5 + 0.9 V*(s1), so V*(s1) = 50.
   Suppose δ*(s2) = s2. Then V*(s2) = 10 + 0.9 V*(s2), so V*(s2) = 100.
2. Suppose δ*(s1) = s2. Then V*(s1) = 2 + 0.9 V*(s2), so V*(s1) = 92.
   Suppose δ*(s2) = s2. Then V*(s2) = 10 + 0.9 V*(s2), so V*(s2) = 100.
3. Suppose δ*(s1) = s2. Then V*(s1) = 2 + 0.9 V*(s2), so V*(s1) = 81.6.
   Suppose δ*(s2) = s1. Then V*(s2) = 15 + 0.9 V*(s1), so V*(s2) = 88.4.
So 2 is the optimal policy.
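A sketch that checks this by evaluating each stationary policy directly (an illustration; in this deterministic example a policy is just a choice of successor state, and the rewards r(s1, s1) = 5, r(s1, s2) = 2, r(s2, s2) = 10, r(s2, s1) = 15 are read off the working above):

```python
gamma = 0.9
reward = {("s1", "s1"): 5, ("s1", "s2"): 2, ("s2", "s2"): 10, ("s2", "s1"): 15}

def evaluate(policy):
    """Iterate V(s) = r(s, policy(s)) + gamma * V(policy(s)) to a fixed point."""
    V = {"s1": 0.0, "s2": 0.0}
    for _ in range(1000):
        V = {s: reward[(s, policy[s])] + gamma * V[policy[s]] for s in V}
    return V

for policy in ({"s1": "s1", "s2": "s2"}, {"s1": "s2", "s2": "s2"}, {"s1": "s2", "s2": "s1"}):
    print(policy, evaluate(policy))   # values ~ (50, 100), (92, 100), (81.6, 88.4)
```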
Example: Delayed Rewards
Examination Rules
Commencing Exam
If you are unwell do not commence this exam – contact an exam supervisor.
If you sit an exam you are declaring yourself fit to do so and cannot later apply for Special
Consideration.
If you become unwell during the exam, notify a supervisor immediately. You must leave
the exam and you have 24 hours to apply for Special Consideration, which must include a
medical certificate.
Submission
Submit your answers by pressing the “Submit” button on this application.
You may submit your answers as many times as you like. However, ONLY the latest
submission will be marked.
Make sure to submit your final answers before leaving the exam room.
Rough Working
You will be provided with paper that can be used for rough working. This will not be marked.
Write your name on the top of each sheet of rough working paper that you use.
You must hand in ALL rough working paper at the end of the exam.
Examination Instructions
(1) READING TIME – 10 MINUTES
(2) TIME ALLOWED – 2 HOURS
(3) TOTAL NUMBER OF PAGES – 10
(4) THIS EXAMINATION COUNTS FOR 60% OF THE FINAL MARK
(5) TOTAL NUMBER OF QUESTIONS – 35
(6) ANSWER ALL QUESTIONS
(7) ALL QUESTIONS ARE OF EQUAL WEIGHT
(8) CHOOSE ONE ANSWER PER QUESTION
(9) UNIVERSITY APPROVED CALCULATORS MAY BE USED
(10) THIS PAPER MAY NOT BE RETAINED BY THE CANDIDATE
Sample Questions
Question 3. Consider the following propositional knowledge base:
A∨B,¬A∨C∨D,B∨¬C,¬B∨¬D
Which of the following can be concluded using resolution?
(a) B∨C∨D
(b) · · ·
Question 4. Consider these sentences over proposition symbols K, L,M.
L→ K∧¬M, ¬M∧¬L→ K
What is the full list of models that satisfy both sentences?
(a) {},{L},{L,M},{K,L,M}
(b) · · ·
Sample Questions
Question 1. What does the PEAS model of an agent specify?
(a) Perception, Evaluation, Action, System
(b) Performance measure, Environment, Actuators, Sensors
(c) · · ·
Question 2. Suppose best-first search is applied using the evaluation
function f (n) =−g(n). The resulting search is equivalent to:
(a) Greedy Search
(b) Depth-First Search
(c) · · ·