ECE 250: Stochastic Processes: Week #3
Outline:
• Random Variables, Random Vector, and Random Processes
• Almost Sure Limit of Random Processes
• Distribution of Random Variables
• Independence, Independent Processes, and Independent Increment Processes
How to characterize a Random Variable
• Question: How to show that a function X : Ω → R is a random variable?
• There are several ways:
a. By definition: show that X−1(B) ∈ F for all B ∈ B. In fact, it suffices to show that
X−1((−∞, a]) ∈ F for all a ∈ R.
b. Practical way: Let g : R^n → R be a continuous mapping and let X1, . . . , Xn be
r.v.'s. Then X = g(X1, X2, . . . , Xn) is a r.v. This allows us to construct new random
variables from old ones: for example, if X, Y are random variables, then X + Y , X − Y ,
X × Y , etc. are all random variables.
Example 1. Let X, Y be r.v.'s. Then E = {ω | X(ω) = Y(ω)} is an event. Why?
– Let Z = X − Y .
– By property (b), Z is a random variable.
– {0} is a Borel set (as {0} = ∩_{i=1}^{∞} (−1/i, 1/i)).
– Therefore
Z−1({0}) = {ω | Z(ω) = X(ω) − Y(ω) = 0} ∈ F,
and hence E = Z−1({0}) is an event!
Limits of Stochastic Processes
• Motivation: Early-stage epidemic dynamics: We have an initial infected population
X0, and at each iteration (day) t ≥ 1 it gets multiplied by a positive random variable
wt, i.e., Xt+1 = wtXt, where the wt are independent and identically distributed random
variables. Then, if E[log(wt)] > 0, we have lim_{t→∞} Xt = ∞ almost surely (see the
simulation sketch below).
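To see where the condition E[log(wt)] > 0 comes from, write log Xt = log X0 + ∑_{s<t} log(ws); by the strong law of large numbers, log(Xt)/t → E[log(wt)] almost surely. A minimal simulation sketch (the lognormal weights and the parameters mu, sigma are hypothetical choices for illustration):

```python
import random

# Minimal sketch (hypothetical parameters): X_{t+1} = w_t * X_t with
# log(w_t) ~ Normal(mu, sigma^2), so E[log(w_t)] = mu > 0.
# Tracking log(X_t) avoids overflow; by the SLLN, log(X_t)/t -> mu a.s.
random.seed(0)
mu, sigma, T = 0.1, 0.5, 10_000
log_x = 0.0                              # log(X_0), with X_0 = 1
for t in range(T):
    log_x += random.gauss(mu, sigma)     # log(X_{t+1}) = log(X_t) + log(w_t)
print(f"log(X_T)/T = {log_x / T:.3f} (should be near mu = {mu})")
```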
• We say that b is an upper bound for a sequence {αk} if αk ≤ b for all k. The smallest
such b is called the supremum of {αk} and is denoted by sup_{k≥1} αk. We always allow
+∞ as an upper bound for a sequence, and hence the supremum always exists.
• Similarly, we say that b is a lower bound for a sequence {αk} if αk ≥ b for all k. The
largest such b is called the infimum of {αk} and is denoted by inf_{k≥1} αk.
• We define:
lim sup_{k→∞} αk = inf_{t≥1} sup_{k≥t} αk,
lim inf_{k→∞} αk = sup_{t≥1} inf_{k≥t} αk.
A concrete example is sketched below.
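As a concrete (numerically truncated) illustration, take αk = (−1)^k (1 + 1/k): the tail suprema sup_{k≥t} αk decrease to 1 and the tail infima inf_{k≥t} αk increase to −1, so lim sup αk = 1 and lim inf αk = −1. A minimal sketch:

```python
# Minimal sketch: tail suprema/infima of a_k = (-1)^k * (1 + 1/k),
# truncated at K terms to approximate sup_{k>=t} a_k and inf_{k>=t} a_k.
K = 10_000
a = [(-1) ** k * (1 + 1 / k) for k in range(1, K + 1)]
for t in (1, 10, 100, 1000):
    tail = a[t - 1:]                 # the terms a_t, a_{t+1}, ..., a_K
    print(t, max(tail), min(tail))   # sups decrease to 1, infs increase to -1
```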
• Note that for a sequence of r.v.'s {Xk} and for an ω ∈ Ω, {Xk(ω)} is a sequence in R.
Then
I. X(ω) = sup_{k≥1} Xk(ω) is a random variable,
II. X(ω) = inf_{k≥1} Xk(ω) is a random variable,
III. X̄(ω) := lim sup_{k→∞} Xk(ω) is a random variable,
IV. X̲(ω) := lim inf_{k→∞} Xk(ω) is a random variable,
V. if X̄(ω) = X̲(ω) for almost all ω ∈ Ω, then X defined by X(ω) = lim_{k→∞} Xk(ω)
is a random variable (HW 3).
Distributions of Random Variables
• For a r.v. X, we define the distribution function (or cumulative distribution function
(CDF)) of X to be the mapping FX : R → [0, 1] defined by FX(x) = Pr(X−1((−∞, x])) =
Pr(X ≤ x).
• Properties of Distribution Functions (see HW 3):
a. FX is non-decreasing.
b. lim_{x→−∞} FX(x) = 0, and lim_{x→∞} FX(x) = 1.
c. FX(·) is right-continuous, i.e., for any x ∈ R, lim_{y→x+} FX(y) = FX(x).
d. Define FX(x−) := lim_{y↑x} FX(y); then
FX(x−) = Pr(X < x) = Pr({ω ∈ Ω | X(ω) < x}).
e. For any x ∈ R, we have Pr(X = x) = FX(x) − FX(x−); properties (d) and (e) are
illustrated numerically below.
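A quick numerical illustration of properties (d) and (e), a minimal sketch for a hypothetical Bernoulli(0.3) variable, where the jump of the CDF at each atom equals its point mass:

```python
# Minimal sketch: for X ~ Bernoulli(p) with p = 0.3 (hypothetical choice),
# F_X is a right-continuous step function with jumps at 0 and 1, and the
# jump F_X(x) - F_X(x-) equals Pr(X = x) (properties (d) and (e)).
p = 0.3

def F(x):
    """CDF of Bernoulli(p): 0 below 0, 1 - p on [0, 1), 1 from 1 onward."""
    if x < 0:
        return 0.0
    return 1 - p if x < 1 else 1.0

eps = 1e-9   # F(x - eps) approximates the left limit F_X(x-)
for x in (0, 1):
    print(f"jump at {x}: {F(x) - F(x - eps):.3f}")   # 0.700, then 0.300
```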
• If we are given a distribution function F, the first question one may ask is: Does there
exist a probability space (Ω, F, Pr) and a function X : Ω → R such that X has the
given distribution F? The following theorem answers this question.
Theorem 1. Suppose that a function F : R → [0, 1] satisfies properties (a), (b), and
(c) above. Then there exists a probability space (Ω, F, Pr) and a r.v. X such that F is
the distribution function of X.
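The construction commonly used to prove results of this kind is the quantile (inverse) transform: take Ω = (0, 1) with the uniform probability and set X(ω) = inf{x | F(x) ≥ ω}. A minimal sketch of this idea, using the hypothetical choice F(x) = 1 − e^{−x} (x ≥ 0), whose generalized inverse is −log(1 − ω):

```python
import math
import random

# Minimal sketch of the quantile-transform construction: Omega = (0, 1)
# with the uniform probability, and X(w) = inf{x : F(x) >= w}. For the
# (hypothetical) choice F(x) = 1 - exp(-x), x >= 0, this inverse is
# -log(1 - w), so X should have distribution F (exponential).
random.seed(0)
samples = [-math.log(1 - random.random()) for _ in range(100_000)]

# Check that the empirical distribution of X matches F.
for x in (0.5, 1.0, 2.0):
    emp = sum(s <= x for s in samples) / len(samples)
    print(f"F({x}) = {1 - math.exp(-x):.3f}, empirical = {emp:.3f}")
```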
Expected Value of a Random Variable
• How to define expected value?
• Many of the constructions in probability theory start from simple functions (r.v.'s): we
say that a random variable X is a simple r.v. if X = ∑_{i=1}^{m} αi 1_{Ai} for some finite
m ≥ 1, where A1, . . . , Am ∈ F and α1, . . . , αm ∈ R.
• We define the expected value of a random variable using the following steps:
– Expected value of simple r.v.'s: For X = ∑_{i=1}^{m} αi 1_{Ai}, we define
E[X] := ∑_{i=1}^{m} αi Pr(Ai).
– For a non-negative random variable X (i.e., X ≥ 0 almost surely), we define:
E[X] := sup{E[Y ] | Y ≤ X and Y is a simple function}.
– Define the positive and negative parts of a random variable as X+ = 1_{X≥0} X and
X− = −1_{X≤0} X. Note that they are both non-negative r.v.'s.
– We say that the expected value of X exists if either E[X+] < ∞ or E[X−] < ∞
and we let it be
E[X] := E[X+] − E[X−].
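A numerical sketch of the first two steps above: approximate a non-negative X from below by the dyadic simple functions Yn = ∑_k (k/2^n) 1_{k/2^n ≤ X < (k+1)/2^n}, whose expected values increase to E[X]. Here X is taken uniform on [0, 1] (a hypothetical choice), so E[Yn] → 1/2:

```python
# Minimal sketch: approximate E[X] from below by simple functions
# Y_n = sum_k (k / 2^n) * 1_{ k/2^n <= X < (k+1)/2^n }.
# For X uniform on [0, 1], Pr(k/2^n <= X < (k+1)/2^n) = 2^{-n}, hence
# E[Y_n] = sum_k (k / 2^n) * 2^{-n} = (1 - 2^{-n}) / 2 -> 1/2 = E[X].
for n in (1, 4, 8, 12):
    E_Yn = sum((k / 2**n) * (1 / 2**n) for k in range(2**n))
    print(n, round(E_Yn, 6))   # increases towards 0.5
```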
Properties of Expected Value
• If X ≥ 0, then E[X] ≥ 0.
• Monotonicity: if X ≤ Y , then E[X] ≤ E[Y ].
• E[|X|] = 0 if and only if X = 0 almost surely.
• Markov Inequality: For a non-negative random variable X,
Pr(X ≥ α) ≤ E[X]/α
for any α > 0.
• Jensen's Inequality: For a convex function Φ : R → R,
Φ(E[X]) ≤ E[Φ(X)].
Since −Φ is a concave function, the reverse inequality holds for concave functions.
• Very important result: Monotone Convergence Theorem (MCT): Suppose that X1 ≤
X2 ≤ · · · and X = lim_{k→∞} Xk. Then
lim_{k→∞} E[Xk] = E[lim_{k→∞} Xk] = E[X].
Markov's inequality and the MCT are both checked numerically below.
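A minimal Monte Carlo sketch of the last two results, with X exponential of mean 1 (a hypothetical choice); for the MCT, the variables Xk = min(X, k) increase to X and E[Xk] = 1 − e^{−k} → E[X] = 1:

```python
import math
import random

# Minimal sketch (hypothetical distributions chosen for illustration):
# (1) Markov's inequality for X ~ Exp(1): Pr(X >= a) <= E[X] / a.
# (2) MCT with X_k = min(X, k): the X_k increase to X, and
#     E[X_k] = 1 - exp(-k) increases to E[X] = 1.
random.seed(0)
xs = [random.expovariate(1.0) for _ in range(100_000)]
mean = sum(xs) / len(xs)

for a in (1.0, 2.0, 4.0):
    tail = sum(x >= a for x in xs) / len(xs)      # empirical Pr(X >= a)
    print(f"Markov: Pr(X >= {a}) = {tail:.4f} <= {mean / a:.4f}")

for k in (1, 2, 4, 8):
    e_k = sum(min(x, k) for x in xs) / len(xs)    # empirical E[min(X, k)]
    print(f"MCT: E[X_{k}] = {e_k:.4f} (exact {1 - math.exp(-k):.4f}, limit 1)")
```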
PDF, PMF, Continuous, and Discrete Random Variables
• We say that X is a continuous r.v. if FX(x) is continuous. If, further, FX(x) is differen-
tiable, we refer to fX(x) := (d/dx) FX(x) as the probability density function (pdf).
• Very important: For a (continuous) r.v. X with pdf fX(x),
E[X] = ∫_{−∞}^{∞} x fX(x) dx.
• More generally (an important result), for any (sufficiently nice) integrable function
g : R → R and the random variable Z = g(X), we have
E[Z] = E[g(X)] = ∫_{−∞}^{∞} g(x) fX(x) dx,
as checked numerically below.
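A minimal sketch comparing both sides of this identity, for the hypothetical choice of X standard normal and g(x) = x², where the exact value is E[X²] = 1:

```python
import math
import random

# Minimal sketch: check E[g(X)] = integral of g(x) f_X(x) dx by Monte
# Carlo and by a crude Riemann sum, for X ~ N(0, 1) and g(x) = x^2.
random.seed(0)
mc = sum(random.gauss(0, 1) ** 2 for _ in range(100_000)) / 100_000

def f(x):
    """Standard normal pdf."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

quad, dx, x = 0.0, 1e-3, -8.0
while x < 8.0:                  # tails beyond |x| = 8 are negligible
    quad += x * x * f(x) * dx
    x += dx
print(f"Monte Carlo: {mc:.3f}, Riemann sum: {quad:.3f}, exact: 1")
```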
• We say that a random variable X is discrete if Pr(X ∈ B) = 1 for a (finite or)
countable set B = {bk | k ≥ 1}.
• We define the probability mass function pX : R → [0, 1] of a discrete random variable
X by:
pX(x) = Pr(X = bk) if x = bk for some k ≥ 1, and pX(x) = 0 otherwise.
• Very important: For a discrete r.v. X (see HW 3),
E[X] = ∑_{k=1}^{∞} bk pX(bk).
A numerical sketch follows.
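A minimal sketch of this formula for a hypothetical geometric variable on {1, 2, . . .} with pX(k) = (1 − p)^{k−1} p, whose exact mean is 1/p:

```python
# Minimal sketch: E[X] = sum_k b_k p_X(b_k) for X ~ Geometric(p) on
# {1, 2, ...}, with p_X(k) = (1 - p)^(k - 1) * p and E[X] = 1/p.
p = 0.25
E = sum(k * (1 - p) ** (k - 1) * p for k in range(1, 10_000))  # truncated sum
print(E)   # close to 1/p = 4
```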
Independent Random Variables and Processes
• Motivation: We have an initial infected population X0, and at each iteration (day)
t ≥ 1 it gets multiplied by a positive random variable wt, i.e., Xt+1 = wtXt, where the
wt are independent and identically distributed random variables. Then, if E[log(wt)] > 0,
we have lim_{t→∞} Xt = ∞ almost surely.
• We say two random variables X, Y are independent if X−1(B1) and Y−1(B2) are
independent for any Borel sets B1, B2 ∈ B, i.e.,
Pr(X ∈ B1 and Y ∈ B2) = Pr(X ∈ B1) Pr(Y ∈ B2).
• Important Fact (lemma): X, Y are independent if X−1((−∞, α]) and Y−1((−∞, β])
are independent for all α, β ∈ R, i.e., it suffices for the above to hold for sets of the
form (−∞, α]. In other words, X, Y are independent if and only if
Pr(X ≤ α, Y ≤ β) = FX(α)FY(β) for all α, β ∈ R.
• Similarly, we say that X1, . . . , Xn are independent if for any collection of Borel sets
B1, . . . , Bn, the events X_1^{−1}(B1), . . . , X_n^{−1}(Bn) are independent.
• Again, it follows from a result¹ that X1, . . . , Xn are independent iff for any selection of
real numbers α1, . . . , αn:
Pr(X1 ≤ α1, X2 ≤ α2, . . . , Xn ≤ αn) = FX1(α1) · · · FXn(αn).
¹If interested, look up Dynkin's π-λ Theorem.
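An empirical sketch of the two-variable version of this factorization, for hypothetical independent samples (X Gaussian, Y exponential): the joint empirical CDF should match the product of the marginals.

```python
import random

# Minimal sketch: for independent X, Y, Pr(X <= a, Y <= b) should factor
# as F_X(a) * F_Y(b). Here X ~ N(0, 1), Y ~ Exp(1) (hypothetical choices).
random.seed(0)
N = 100_000
xy = [(random.gauss(0, 1), random.expovariate(1.0)) for _ in range(N)]

a, b = 0.5, 1.0
joint = sum(x <= a and y <= b for x, y in xy) / N   # empirical joint CDF
fx = sum(x <= a for x, _ in xy) / N                 # empirical F_X(a)
fy = sum(y <= b for _, y in xy) / N                 # empirical F_Y(b)
print(f"joint = {joint:.4f}, product = {fx * fy:.4f}")   # should agree
```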
Independent and Independent Increment Processes
• We say that a DT or a CT random process {Xt} is
1. An independent process if any finite collection X_{t_1}, . . . , X_{t_n} is independent, for
any n ≥ 2.
2. An independent increment process if, for any n ≥ 2 and a1 < b1 ≤ a2 < b2 ≤
· · · ≤ an < bn (in the respective index set), the increments X_{b_1} − X_{a_1},
X_{b_2} − X_{a_2}, . . . , X_{b_n} − X_{a_n} are independent (see the random-walk
sketch below).
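The canonical example of an independent increment process is a random walk S_t = X_1 + · · · + X_t with i.i.d. steps: the increment S_b − S_a uses only the steps in (a, b], so increments over disjoint intervals are independent. A minimal sketch checking a necessary consequence (disjoint increments are uncorrelated):

```python
import random

# Minimal sketch: a random walk S_t = X_1 + ... + X_t with i.i.d. Gaussian
# steps has independent increments, since S_b - S_a depends only on the
# steps inside (a, b]. Empirical check of a necessary condition: the
# sample covariance of two disjoint increments should be near 0.
random.seed(0)
N, inc1, inc2 = 50_000, [], []
for _ in range(N):
    steps = [random.gauss(0, 1) for _ in range(20)]
    inc1.append(sum(steps[0:10]))    # S_10 - S_0
    inc2.append(sum(steps[10:20]))   # S_20 - S_10
m1, m2 = sum(inc1) / N, sum(inc2) / N
cov = sum((u - m1) * (v - m2) for u, v in zip(inc1, inc2)) / N
print(f"sample covariance of disjoint increments: {cov:.4f}")   # near 0
```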
