ECS 170
Introduction to Artificial Intelligence
May 24, 2022
Administrative stuff
Final exam next week
Midterm exam grades – real soon now

Quiz 4 – Tuesday of next week
HW5 – out this week, due next week – it’s about…
Learning
Learning is the “holy grail” of artificial intelligence
because it is the essential element in intelligence
-- both natural and artificial. Why?
• People change as a result of their experiences. We adapt
to new situations and learn from our experiences. An
intelligent agent must be able to do the same.
• It’s probably impossible to build in by hand the large
amount of knowledge required for any realistic domain.
• Dealing with novel input inherently requires adaptation
and learning (otherwise the system will only be able to
deal with situations for which it was designed).
• Dealing with changing environments requires learning
(since the knowledge base may otherwise become obsolete).
• It’s the only way that artificially intelligent systems
will seem really intelligent to people.
Learning
Definition: learning is the adaptive changes that occur in
a system which enable that system to perform the same
task or similar tasks more efficiently or more effectively
over time.
This could mean:
• The range of behaviors is expanded: the agent can
do more
• The accuracy on tasks is improved: the agent can
do things better
• The speed is improved: the agent can do things faster
What kinds of learning do we do?
Here are some examples of the kinds of learning
that people do. This is not an exhaustive list...
What kinds of learning do we do?
Rote learning
“1 times 3 is 3, 2 times 3 is 6, 3 times 3 is 9,...”
Taking advice from others
“If you have a choice between sliding and jumping in the peg puzzle,
always jump.”
Learning from problem solving experiences
“I have to stack these blocks again...what do I know from last time that’ll
make this time easier so I don’t have to do the planning thing again?”
Learning from examples
“Hmmm, last time at the watering hole, Og was eaten. The time before
that, Zorg was eaten. I’m getting kind of thirsty, should I…”
Learning by experimentation and discovery
“I wonder what will happen if I move this pawn to that space?”
What kinds of learning do AI folks study?
supervised learning: given a set of pre-classified
examples, learn to classify a new instance into its
appropriate class
unsupervised learning: learning classifications when the
examples are not already classified
reinforcement learning: learning what to do based on
rewards and punishments
analytic learning: learning to reason faster
(again, this is not an exhaustive list)
Example: Supervised learning of concept
Say it’s important for your system to know what an arch is,
in a structural sense. You want to teach the program by
a series of examples. You tell your system that this is an
arch:
What does your system know
about “archness” now?
Example: Supervised learning of concept
Now you tell it that this isn’t an arch:
What does your system know
about “archness” now?
Example: Supervised learning of concept
And then you tell it that this isn’t an arch:
What does your system know
about “archness” now?
Example: Supervised learning of concept
This may not seem all that exciting, but consider the same
sort of task in a different domain....
Example: Supervised learning of concept
What about classifying chickens being processed for retail
sale? “They’ll buy this one, but they wouldn’t buy that one…”
Example: Supervised learning of concept
What does your system know about “winning horses” now?
Example: Supervised learning of concept
Let’s go back to the simpler arch problem and see how a
computer program could learn the concept
Example: Supervised learning of concept
So let’s say our arch-learning program doesn’t yet have
a concept for arch. We need to provide a representation
language for these arch examples. A semantic network
with nodes like “upright block” and “sideways block”
and relations like “supports” and “has_part” works.
This is now what it knows about
“archness”...its internalized arch
concept.
[Diagram: arch-1 has_part two upright blocks and a sideways block; each upright block supports the sideways block]
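To make that concrete: as a rough sketch (not the lecture’s own code),
the concept can be stored as part types plus relation triples; every
identifier below is made up for illustration.

# A minimal sketch of the initial arch concept as a semantic network:
# part types, plus (subject, relation, object) triples.
parts = {
    "a": "upright block",
    "b": "upright block",
    "c": "sideways block",
}
concept = {
    ("arch-1", "has_part", "a"),
    ("arch-1", "has_part", "b"),
    ("arch-1", "has_part", "c"),
    ("a", "supports", "c"),
    ("b", "supports", "c"),
}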
Example: Supervised learning of concept
Now we present the program with a negative example or
“near miss”...almost an arch, but not quite:
[Diagrams: arch-1 as above, with supports links; not-arch-2 has_part two upright blocks and a sideways block, but with no supports links]
Example: Supervised learning of concept
The program must now figure out what the difference is
between its arch concept and the near miss. What is it?
Example: Supervised learning of concept
The difference is that the support links are missing in the
negative example. Since that’s the only difference, the
support links must be required. That is, the upright blocks
must support the sideways block.
Example: Supervised learning of concept
The program revises its concept of the arch accordingly.
[Diagram: revised arch-1 — the supports links are now must_support links]
Example: Supervised learning of concept
Negative examples help the learning procedure specialize.
If the model of an arch is too general (too inclusive), the
negative examples will tell the procedure in what ways to
make the model more specific.
[Diagram: current arch concept — has_part two upright blocks and a sideways block; each upright block must_support the sideways block]
Example: Supervised learning of concept
Here comes another near miss. What’s the difference
between the near miss and the current concept of an arch?
[Diagrams: current arch concept; not-arch-3 — the upright blocks support the sideways block but also touch each other (touches links)]
Example: Supervised learning of concept
The difference is the existence of the touches links in the
near miss. That is, there’s no gap between the upright
blocks. Since that’s the only difference, the supporting
blocks in an arch must not touch.
Example: Supervised learning of concept
The program updates its representation to reflect that the
touches links between the upright blocks are forbidden.
[Diagram: revised arch concept — must_support links plus must_not_touch links between the upright blocks]
Example: Supervised learning of concept
Because of the second negative example, the concept of
the arch is even more specific than before.
[Diagram: current arch concept — has_part two upright blocks and a sideways block, with must_support and must_not_touch links]
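Both updates so far are instances of near-miss specialization. Here is
a sketch of how that might look in code over the triple representation
from the earlier sketch, assuming the near miss’s parts have already
been matched to the concept’s part names; the function name and the
must_/must_not_ link naming are assumptions of this sketch.

# Winston-style specialization from a near miss.
def specialize(concept, near_miss):
    missing = concept - near_miss   # links the near miss lacks...
    extra = near_miss - concept     # ...and links it wrongly adds
    new = set(concept)
    for s, r, o in missing:
        new.discard((s, r, o))
        new.add((s, "must_" + r, o))      # require what was missing
    for s, r, o in extra:
        new.add((s, "must_not_" + r, o))  # forbid what was added
    return new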
Example: Supervised learning of concept
Here’s yet another training example, but this time it’s a
positive example. What’s the difference between the new
positive example and the current concept of an arch?
[Diagrams: current arch concept; arch-4 — the upright blocks support a sideways wedge instead of a sideways block]
Example: Supervised learning of concept
The difference is that the block being supported has a
different shape: it’s a wedge. So the block being supported
can be either a rectangular block or a wedge. The model of
the arch is updated
accordingly.
[Diagram: revised arch concept — the supported part is now “sideways block or wedge”]
Example: Supervised learning of concept
Positive examples tell the learning procedure how to make
its model more general, to cover more instances with the
model.
[Diagram: current arch concept — must_support links, must_not_touch links, top part “sideways block or wedge”]
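The matching generalization step, as a sketch: here each part’s
allowed shapes are kept as a set, so widening the concept is just a
set union. The representation and names are assumptions of this
sketch, not the lecture’s code.

# Generalization from a positive example: widen each part's shapes.
def generalize(concept_shapes, example_shapes):
    return {part: concept_shapes[part] | example_shapes[part]
            for part in concept_shapes}

# The wedge-topped arch widens the top from {"block"} to {"block", "wedge"}.
concept = {"top": {"block"}, "left": {"block"}, "right": {"block"}}
example = {"top": {"wedge"}, "left": {"block"}, "right": {"block"}}
print(generalize(concept, example))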
Example: Supervised learning of concept
If we take the program out of learning mode and ask it to
classify a new input, what happens?
[Diagrams: current arch concept; maybe-arch-5 — two upright blocks supporting a sideways arc: does it match?]
Example: Supervised learning of concept
Warning: Do not learn the wrong things from this example.
It is not the case that negative examples are only about
links and positive examples are only about nodes!
Example: Supervised learning of concept
This is a very simple model of one specific kind of learning,
but it’s easy to understand and easy to implement. That’s
one reason it’s presented in just about every introductory
AI course. But it also presents many of the issues that are
common to all sorts of approaches to learning.
Example: Supervised learning of concept
Note that with this approach to learning, we have to begin
with a positive example. Also, the order in which the
training examples are presented may influence what’s
being learned.
What’s this guy teaching to his class?
What Is Artificial Intelligence?
Some of the workshop participants continued to
make significant contributions to AI research for
decades: John McCarthy, Marvin Minsky,
Herbert Simon, Allen Newell
Questions?
Learning is choosing the best representation
That’s certainly true in a knowledge-based AI world.
Our arch learner started with some internal representation
of an arch. As examples were presented, the arch learner
modified its internal representation to either make the
representation accommodate positive examples
(generalization) or exclude negative examples
(specialization).
There’s really nothing else the learner could modify...
the reasoning system is what it is. So a learning
problem can be mapped onto one of choosing the
best representation...
Learning is about search
...but wait, there’s more!
By now, you’ve figured out that the arch learner was
doing nothing more than searching the space of
possible representations, right?
So learning, like everything else, boils down to search.
Same problem - different representation
The arch learner could have represented the
arch concept as a decision tree if we wanted:

do upright blocks support sideways block?
  no:  not arch
  yes: do upright blocks touch each other?
    yes: not arch
    no:  is the top block either a rectangle or a wedge?
      no:  not arch
      yes: arch
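The same tree written as nested conditionals, a sketch with made-up
parameter names rather than anything from the course:

# The decision tree above as code; each argument answers one question.
def classify(supports, uprights_touch, top_is_rect_or_wedge):
    if not supports:              # do upright blocks support sideways block?
        return "not arch"
    if uprights_touch:            # do upright blocks touch each other?
        return "not arch"
    if not top_is_rect_or_wedge:  # is the top a rectangle or a wedge?
        return "not arch"
    return "arch"

print(classify(True, False, True))  # arch
print(classify(True, True, True))   # not arch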
Issues with learning by example
The process of supervised learning by example
requires that there is someone to say which
examples are positive and which are negative.
This approach must start with a positive example
to specialize or generalize from.
Learning by example is sensitive to the order in
which examples are presented.
Learning by example doesn’t work well with
noisy, randomly erroneous data.
More supervised learning by example
Here’s another type of learning that searches
for the best representation, but the representation
is very different from what you’ve seen so far.
Learning in neural networks
The perceptron is one of the earliest neural network
models, dating to the late 1950s.
[Diagram: perceptron — inputs x1 … xn, weights w1 … wn, a summing unit Σ, and a threshold on the output]
Learning in neural networks
The perceptron can’t compute everything, but what it can
compute it can learn to compute.
Here’s how it works.
Inputs are 1 or 0.
Weights are reals (-n to +n).
Each input is multiplied by
its corresponding weight.
If the sum of the products
is greater than the
threshold, then the
perceptron outputs 1,
otherwise the output
is 0.
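That rule in a few lines of Python, a sketch with made-up names:

# Perceptron forward pass: weighted sum compared against a threshold.
def perceptron_output(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0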
Learning in neural networks
The output, 1 or 0, is a guess or prediction about the input:
does it fall into the desired classification (output = 1)
or not (output = 0)?
Learning in neural networks
That’s it? Big deal. No, there’s more to it....
Say you wanted your perceptron to classify arches. That is,
you present inputs representing an arch, and the output
should be 1. You present inputs not representing an arch,
and the output should be 0. If your perceptron does that
correctly for all inputs, it knows the concept of arch.
Learning in neural networks
But what if you present inputs for an arch, and your
perceptron outputs a 0? What could be done to make it
more likely that the output will be 1 the next time the
‘tron sees those same inputs? You increase the weights.
Which ones? How much?
Learning in neural networks
But what if you present inputs for not an arch, and your
perceptron outputs a 1? What could be done to make it
more likely that the output will be 0 the next time the
‘tron sees those same inputs? You decrease the weights.
Which ones? How much?
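One standard answer to “which ones? how much?” is the classic
perceptron learning rule: adjust only the weights whose inputs were 1,
by a small fixed step. A sketch building on the forward pass above;
the rate value is an assumption.

# Perceptron learning rule: move the active weights toward the target.
def update_weights(inputs, weights, target, output, rate=0.1):
    error = target - output  # +1 raise weights, -1 lower them, 0 no change
    return [w + rate * error * x for w, x in zip(weights, inputs)]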
Let’s make one...
First we need to come up with a representation language.
We’ll abstract away most everything to make it simple.
All training examples have three blocks.
A and B are upright blocks. A is always left of B.
C is a sideways block. Our language will assume those
things always to be true. The only things our language
will represent are the answers to these five questions...
Let’s make one...
yes = 1, no = 0
Does A support C?
Does B support C?
Does A touch C?
Does B touch C?
Does A touch B?
Let’s make one...
yes = 1, no = 0
Does A support C? 1
Does B support C? 1
Does A touch C? 1
Does B touch C? 1
Does A touch B? 0
[Figure: A and B support C, with a gap between A and B] → arch
Let’s make one...
yes = 1, no = 0
Does A support C? 1
Does B support C? 1
Does A touch C? 1
Does B touch C? 1
Does A touch B? 1
[Figure: A and B support C but touch each other] → not arch
Let’s make one...
yes = 1, no = 0
Does A support C? 0
Does B support C? 0
Does A touch C? 1
Does B touch C? 1
Does A touch B? 0
[Figure: A and B touch C but do not support it] → not arch
and so on.....
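Under this encoding, the examples above become five-bit input vectors
with a 1/0 label; a sketch using just the examples shown:

# Training examples as (inputs, label) pairs; 1 = arch, 0 = not arch.
training_set = [
    ([1, 1, 1, 1, 0], 1),  # A,B support and touch C; A,B apart: arch
    ([1, 1, 1, 1, 1], 0),  # same, but A touches B: not arch
    ([0, 0, 1, 1, 0], 0),  # A,B touch C but don't support it: not arch
]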
Our very simple arch learner
[Diagram: a perceptron with five inputs x1 … x5, initial
weights w1 = -0.5, w2 = 0, w3 = 0.5, w4 = 0, w5 = -0.5,
and threshold 0.5]
Our very simple arch learner
Present the arch example: inputs 1, 1, 1, 1, 0.
sum = 1×(-0.5) + 1×0 + 1×0.5 + 1×0 + 0×(-0.5)
sum = -0.5 + 0 + 0.5 + 0 + 0 = 0, which is not > threshold, so output is 0
‘tron said no when it should say yes, so increase weights where input = 1:
each weight whose input was 1 goes up by 0.1, giving new weights
-0.4, 0.1, 0.6, 0.1, -0.5 (threshold still 0.5).
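Putting the pieces together: a sketch of the whole training loop,
whose first update reproduces the step above. The epoch cap and the
convergence test are assumptions of the sketch.

# Train the toy perceptron until every example is classified correctly.
def train(training_set, weights, threshold, rate=0.1, epochs=100):
    for _ in range(epochs):
        converged = True
        for inputs, target in training_set:
            total = sum(x * w for x, w in zip(inputs, weights))
            output = 1 if total > threshold else 0
            if output != target:
                converged = False
                error = target - output
                weights = [w + rate * error * x
                           for w, x in zip(weights, inputs)]
        if converged:
            break
    return weights

# First update matches the slide: weights become -0.4, 0.1, 0.6, 0.1, -0.5.
final = train(training_set, [-0.5, 0, 0.5, 0, -0.5], 0.5)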
