Recursive State Estimation
COMP 4500 – Reza Ahmadzadeh
So far
• Uncertainty is everywhere
• Probabilistic robotics considers uncertainty in robot perception and action
• We use the calculus of probability theory to represent uncertainty
Topics
• Robot-Environment Interaction
• Bayes Filter Algorithm
Probability Theory - Basics
A Reminder
Law of Total Probability
Discrete case
Σ_x p(x) = 1
p(x) = Σ_y p(x, y)
p(x) = Σ_y p(x|y) p(y)
Continuous case
∫ p(x) dx = 1
p(x) = ∫ p(x, y) dy
p(x) = ∫ p(x|y) p(y) dy
The sum rule: p(x) = Σ_y p(x, y)
The product rule: p(x, y) = p(x|y) p(y)
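The sum rule can be checked numerically. Below is a minimal sketch in Python; the joint table p(x, y) is an arbitrary made-up example, not from the slides:

```python
# Numeric check of the discrete sum rule p(x) = sum_y p(x, y).
# The joint table p(x, y) below is an arbitrary made-up example.
p_xy = {("a", 0): 0.1, ("a", 1): 0.3,
        ("b", 0): 0.2, ("b", 1): 0.4}

# Marginalize out y to obtain p(x).
p_x = {}
for (x, _y), pr in p_xy.items():
    p_x[x] = p_x.get(x, 0.0) + pr

print({x: round(v, 3) for x, v in p_x.items()})  # {'a': 0.4, 'b': 0.6}
print(round(sum(p_x.values()), 3))               # 1.0 (marginal still normalized)
```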
Bayes Rule
p(x|y) = p(y|x) p(x) / p(y) = likelihood · prior / evidence
Discrete case:
p(x|y) = p(y|x) p(x) / Σ_{x'} p(y|x') p(x')
Sometimes written as
p(x|y) = η p(y|x) p(x)
Common Mistake
Complement Rule for Probability:
p(¬x) = 1 − p(x)
Correct
p(x|y) = 1 − p(¬x|y)
Wrong
p(x|y) ≠ 1 − p(x|¬y)
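A quick numeric check makes the mistake concrete. The joint distribution below is an arbitrary made-up example chosen only so the two expressions produce different numbers:

```python
# Why p(x|y) = 1 - p(¬x|y) holds but 1 - p(x|¬y) does not:
# build conditionals from an arbitrary made-up joint p(x, y).
p = {(True, True): 0.30, (True, False): 0.10,
     (False, True): 0.20, (False, False): 0.40}

def cond(x, y):
    """p(X = x | Y = y) computed from the joint table."""
    p_y = p[(True, y)] + p[(False, y)]
    return p[(x, y)] / p_y

print(cond(True, True))       # 0.6  -> p(x|y)
print(1 - cond(False, True))  # 0.6  -> 1 - p(¬x|y): the same number
print(1 - cond(True, False))  # 0.8  -> 1 - p(x|¬y): a different number
```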
Robot-Environment
Interaction
Robot-Environment Interaction
Robot-Environment Interaction
• Environment (world) is a dynamical system that
possesses internal state
• Robot can acquire information about its
environment using its sensors
• Sensors are noisy and there are usually things
that cannot be sensed directly
• Therefore, robot maintains an internal belief
with regards to the state of its environment
Robot-Environment Interaction
• Robot can influence its environment through
its actuators
• Effect of doing so is often somewhat
unpredictable (uncertainty)
• Therefore, each control actuation affects both
the environment state, and the robot’s internal
belief with regards to this state
Notation
• State at time t: x_t
• Measurement data at time t: z_t
• Control data at time t: u_t
• Belief at time t: bel(x_t)
State
• Collection of all aspects of the robot and its environment that can
impact the future
• State at time t: x_t
Examples:
• Robot pose
• Robot velocity
• Location and features of surrounding objects in the environment
• Location and velocity of moving objects and people
• Many others
State probability
• The evolution of state and measurement is governed by probabilistic
laws.
• The state might be conditioned on all past states, measurements, and
controls:
p(x_t | x_{0:t-1}, z_{1:t-1}, u_{1:t})
Note: we assume that the robot executes a control action u_1 first and
then takes a measurement z_1
Conditional Independence
• Of all these variables, only the control u_t matters if we know the state
x_{t-1}, so
p(x_t | x_{0:t-1}, z_{1:t-1}, u_{1:t}) = p(x_t | x_{t-1}, u_t)
• Similarly, the state x_t is sufficient to predict the measurement z_t;
knowledge of any other variable, such as past measurements,
controls, or even past states, is irrelevant
p(z_t | x_{0:t}, z_{1:t-1}, u_{1:t}) = p(z_t | x_t)
Conditional Independence
p(x_t | x_{0:t-1}, z_{1:t-1}, u_{1:t}) = p(x_t | x_{t-1}, u_t)
p(z_t | x_{0:t}, z_{1:t-1}, u_{1:t}) = p(z_t | x_t)
• A.k.a. the Markov assumption
State Transition Probability
• Specifies how the environment state evolves over time as a function
of the robot's control:
p(x_t | x_{t-1}, u_t)
• Robot environments are stochastic, so we use a probability
distribution rather than a deterministic function
Belief
• Reflects the robot’s internal knowledge about the state of the
environment.
• For example, a robot’s pose might be
x_t = (x, y, θ)ᵀ = [14.12, 12.17, 45°]ᵀ in some global coordinate
frame
• But it usually cannot know its pose, since poses are not measurable
directly
• Instead, the robot must infer its pose from data
• We therefore distinguish the true state from its internal belief with
regards to that state
Measurement Probability
• Specifies the probabilistic law according to which the
measurements are generated from the environment state:
p(z_t | x_t)
• We can think of measurements as noisy projections of the state
Belief distribution
• Assigns a probability to each possible hypothesis w.r.t the true state
• Belief distributions are posterior probabilities over state variables
conditioned on the available data
bel(x_t) = p(x_t | z_{1:t}, u_{1:t})
• This assumes that we have gathered the measurement z_t
• The posterior before incorporating z_t is written as
bel̄(x_t) = p(x_t | z_{1:t-1}, u_{1:t})
bel̄(x_t) is called the prediction
Calculating bel(x_t) from bel̄(x_t) is called the measurement update
(correction)
Summary of terms
• State transition
p(x_t | x_{t-1}, u_t)
• Measurement
p(z_t | x_t)
• Belief
bel(x_t) = p(x_t | z_{1:t}, u_{1:t})
• Prediction
bel̄(x_t) = p(x_t | z_{1:t-1}, u_{1:t})
Bayes Filter Algorithm
Bayes Filter Algorithm
• The most general algorithm for calculating beliefs
• It calculates the belief distribution from measurement and control
data
• It is recursive (i.e. bel(x_t) is calculated from bel(x_{t-1}))
• The algorithm has two steps: Act (prediction), See (correction)
See & Act
• Robot is placed somewhere in the environment
See & Act
• See – the robot queries its sensors
See & Act
• Robot finds itself next to a pillar
See & Act
• Act – robot moves one meter forward
See & Act
• Motion estimated by wheel encoders, accumulation of uncertainty
See & Act
• See – robot queries its sensors again, finds itself near to a pillar
See & Act
• Belief update (information fusion)
Act – using motion model and its
uncertainties
• Robot moves and estimates its position through its proprioceptive
sensors (e.g. wheel encoders and odometry)
• During this step, robot’s state uncertainty grows
See – estimation of position based on
perception and map
• Robot makes an observation using its exteroceptive sensors
• Results in a second estimation of the current position
Belief update – fusion of prior belief with
observation
• Robot corrects its position by combining its belief before the
observation with the probability of making exactly that observation
• During this step, robot’s state uncertainty shrinks
Act-See cycle of localization
See – acquire observation
Act – move
See – acquire observation
Belief update (information fusion)

Probabilistic localization – Belief
representation
a. Continuous map with single-hypothesis
probability distribution p(x)
b. Continuous map with multiple-
hypothesis probability distribution p(x)
c. Discretized metric map (grid k) with
probability distribution p(k)
d. Discretized topological map (node n)
with probability distribution p(n)
Act
• Probabilistic estimation of the robot's new belief state bel̄(x_t) based on
the previous belief bel(x_{t-1}) and the probabilistic motion model (state
transition) p(x_t | u_t, x_{t-1}) with action u_t
• Application of the theorem of total probability / convolution
• For continuous probabilities
bel̄(x_t) = ∫ p(x_t | u_t, x_{t-1}) bel(x_{t-1}) dx_{t-1}
• For discrete probabilities
bel̄(x_t) = Σ_{x_{t-1}} p(x_t | u_t, x_{t-1}) bel(x_{t-1})
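For a finite state space, the discrete prediction above is just a matrix–vector product: stack bel(x_{t-1}) into a vector and write the transition probabilities as a matrix T with T[i][j] = p(x_t = i | u_t, x_{t-1} = j). A minimal sketch with a made-up 3-state transition model:

```python
# Discrete Act step as a matrix-vector product.
# T[i][j] = p(x_t = i | u_t, x_{t-1} = j); each column sums to 1.
# The numbers are made up for illustration.
T = [[0.8, 0.1, 0.0],
     [0.2, 0.8, 0.1],
     [0.0, 0.1, 0.9]]
bel = [0.5, 0.3, 0.2]  # bel(x_{t-1})

# bel_bar(x_t = i) = sum_j T[i][j] * bel[j]
bel_bar = [sum(T[i][j] * bel[j] for j in range(3)) for i in range(3)]
print([round(b, 2) for b in bel_bar])  # [0.43, 0.36, 0.21]
```

Note that the prediction step alone never renormalizes: if the columns of T each sum to 1, bel_bar automatically sums to 1 as well.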
See
• Probabilistic estimation of the robot's new belief state bel(x_t) as a
function of its measurement z_t and its former belief state bel̄(x_t)
• Application of Bayes rule
bel(x_t) = η p(z_t | x_t, M) bel̄(x_t)
where p(z_t | x_t, M) is the probabilistic measurement model, that is, the
probability of observing the measurement data z_t given the knowledge
of the map M and the robot's position x_t. Thereby η = p(z_t | z_{1:t-1}, u_{1:t})⁻¹ is the
normalization factor so that Σ_x bel(x_t) = 1
Bayes Filter Algorithm
The most general algorithm for calculating beliefs
1. Algorithm Bayes_filter( bel(x_{t-1}), u_t, z_t ):
2. for all x_t do
3. bel̄(x_t) = ∫ p(x_t | u_t, x_{t-1}) bel(x_{t-1}) dx_{t-1} (prediction update)
4. bel(x_t) = η p(z_t | x_t) bel̄(x_t) (measurement update)
5. end
6. return bel(x_t)
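For a finite state space the integral in line 3 becomes a sum, and one pass of the loop can be sketched directly in Python. This is a generic sketch, and the parameter names (`motion_model`, `sensor_model`) are our own, not from the slides:

```python
def bayes_filter(bel, u, z, motion_model, sensor_model, states):
    """One recursive step of the discrete Bayes filter: bel(x_{t-1}) -> bel(x_t).

    bel          : dict mapping state -> probability, bel(x_{t-1})
    motion_model : motion_model(x, u, xp) = p(x_t = x | u_t = u, x_{t-1} = xp)
    sensor_model : sensor_model(z, x) = p(z_t = z | x_t = x)
    """
    # Prediction update (Act): total probability over the previous state.
    bel_bar = {x: sum(motion_model(x, u, xp) * bel[xp] for xp in states)
               for x in states}
    # Measurement update (See): Bayes rule with normalizer eta.
    unnormalized = {x: sensor_model(z, x) * bel_bar[x] for x in states}
    eta = 1.0 / sum(unnormalized.values())
    return {x: eta * p for x, p in unnormalized.items()}
```

The normalizer η is computed last, after all unnormalized posteriors are known, which is why the pseudocode loops over all x_t before returning.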
Bayes Filter Algorithm Initialization
• Initialize the algorithm with bel(x_0) as follows:
• A point-mass distribution, when we are certain about x_0
• A uniform distribution, when we have no knowledge about x_0
Bayes Filter Algorithm - Example
• A robot estimating the state of a door using its camera
• Door states (open, closed)
• The robot can change the state of the door
• The robot does not know the state of the door internally
• It assigns equal prior probability to the two possible states
bel(X_0 = is_open) = 0.5
bel(X_0 = is_closed) = 0.5
Bayes Filter Algorithm - Example
• The robot's sensor is noisy, and the noise is characterized by
p(Z_t = sense_open | X_t = is_open) = 0.6
p(Z_t = sense_closed | X_t = is_open) = 0.4
p(Z_t = sense_open | X_t = is_closed) = 0.2
p(Z_t = sense_closed | X_t = is_closed) = 0.8
Bayes Filter Algorithm - Example
• The robot uses its manipulator to push the door open. If the door is
already open, it will remain open. If it is closed, there is a 0.8
chance that it will be open afterwards:
p(X_t = is_open | U_t = push, X_{t-1} = is_open) = 1
p(X_t = is_closed | U_t = push, X_{t-1} = is_open) = 0
p(X_t = is_open | U_t = push, X_{t-1} = is_closed) = 0.8
p(X_t = is_closed | U_t = push, X_{t-1} = is_closed) = 0.2
Bayes Filter Algorithm - Example
• The robot can also choose not to use its manipulator, in which case the
state of the world does not change:
p(X_t = is_open | U_t = do_nothing, X_{t-1} = is_open) = 1
p(X_t = is_closed | U_t = do_nothing, X_{t-1} = is_open) = 0
p(X_t = is_open | U_t = do_nothing, X_{t-1} = is_closed) = 0
p(X_t = is_closed | U_t = do_nothing, X_{t-1} = is_closed) = 1
Bayes Filter Algorithm - Example
• Since the state space is finite:
1. Algorithm Bayes_filter( bel(x_{t-1}), u_t, z_t ):
2. for all x_t do
3. bel̄(x_t) = Σ_{x_{t-1}} p(x_t | u_t, x_{t-1}) bel(x_{t-1}) (prediction update)
4. bel(x_t) = η p(z_t | x_t) bel̄(x_t) (measurement update)
5. end
6. return bel(x_t)
Bayes Filter Algorithm - Example
• Suppose at time t = 1 the robot takes no control action (U_1 = do_nothing)
but it senses an open door (Z_1 = sense_open).
bel̄(x_1) = Σ_{x_0} p(x_1 | U_1, x_0) bel(x_0)
= p(x_1 | U_1 = do_nothing, X_0 = is_open) bel(X_0 = is_open)
+ p(x_1 | U_1 = do_nothing, X_0 = is_closed) bel(X_0 = is_closed)
Bayes Filter Algorithm - Example
• We get
bel̄(X_1 = is_open)
= p(X_1 = is_open | U_1 = do_nothing, X_0 = is_open) bel(X_0 = is_open)
+ p(X_1 = is_open | U_1 = do_nothing, X_0 = is_closed) bel(X_0 = is_closed)
= 1 · 0.5 + 0 · 0.5 = 0.5
bel̄(X_1 = is_closed)
= p(X_1 = is_closed | U_1 = do_nothing, X_0 = is_open) bel(X_0 = is_open)
+ p(X_1 = is_closed | U_1 = do_nothing, X_0 = is_closed) bel(X_0 = is_closed)
= 0 · 0.5 + 1 · 0.5 = 0.5
Bayes Filter Algorithm - Example
The fact that bel̄(x_1) = bel(x_0) should not be surprising, since we did not
take any action.
• do_nothing does not change the state of the world
• The world does not change by itself (in this example)
• Incorporating the measurement, however, changes the belief:
bel(x_1) = η p(Z_1 = sense_open | x_1) bel̄(x_1)
Bayes Filter Algorithm - Example
• For the two possible cases,
bel(X_1 = is_open)
= η p(Z_1 = sense_open | X_1 = is_open) bel̄(X_1 = is_open)
= η · 0.6 · 0.5 = η · 0.3
bel(X_1 = is_closed)
= η p(Z_1 = sense_open | X_1 = is_closed) bel̄(X_1 = is_closed)
= η · 0.2 · 0.5 = η · 0.1
Bayes Filter Algorithm - Example
• Normalize the values
η = (0.3 + 0.1)⁻¹ = 2.5
bel(X_1 = is_open) = 0.75
bel(X_1 = is_closed) = 0.25
Bayes Filter Algorithm - Example
• Iterate: at t = 2 the robot pushes the door (U_2 = push) and again senses
an open door (Z_2 = sense_open)
bel̄(X_2 = is_open) = 1 · 0.75 + 0.8 · 0.25 = 0.95
bel̄(X_2 = is_closed) = 0 · 0.75 + 0.2 · 0.25 = 0.05
bel(X_2 = is_open) = η · 0.6 · 0.95 ≈ 0.983
bel(X_2 = is_closed) = η · 0.2 · 0.05 ≈ 0.017
At this point the robot believes with probability 0.983 that the door is open.
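The whole two-step run above can be reproduced in a few lines; the sketch below is self-contained and uses only the transition and sensor tables from the example:

```python
# Door example: two Bayes-filter steps with the tables from the slides.
STATES = ("is_open", "is_closed")

def motion(x, u, xp):
    """p(X_t = x | U_t = u, X_{t-1} = xp)."""
    if u == "do_nothing":
        return 1.0 if x == xp else 0.0
    # u == "push": an open door stays open; a closed one opens w.p. 0.8
    if xp == "is_open":
        return 1.0 if x == "is_open" else 0.0
    return 0.8 if x == "is_open" else 0.2

def sensor(z, x):
    """p(Z_t = z | X_t = x)."""
    table = {("sense_open", "is_open"): 0.6, ("sense_closed", "is_open"): 0.4,
             ("sense_open", "is_closed"): 0.2, ("sense_closed", "is_closed"): 0.8}
    return table[(z, x)]

def step(bel, u, z):
    """One Bayes-filter iteration: prediction, then normalized correction."""
    bel_bar = {x: sum(motion(x, u, xp) * bel[xp] for xp in STATES) for x in STATES}
    unnorm = {x: sensor(z, x) * bel_bar[x] for x in STATES}
    eta = 1.0 / sum(unnorm.values())
    return {x: eta * p for x, p in unnorm.items()}

bel = {"is_open": 0.5, "is_closed": 0.5}         # bel(x_0)
bel = step(bel, "do_nothing", "sense_open")      # t = 1
print({x: round(p, 3) for x, p in bel.items()})  # {'is_open': 0.75, 'is_closed': 0.25}
bel = step(bel, "push", "sense_open")            # t = 2
print(round(bel["is_open"], 3))                  # 0.983
```

Running more iterations of push/sense_open drives the belief in is_open arbitrarily close to 1, as expected.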
