MATH4513 Tutorials
1. Radon-Nikodým density
Let P and Q be two equivalent probability measures on some space (Ω,F) where F
is an arbitrary σ-field.
Recall that the equivalence of P and Q means that for every event A ∈ F the equality P(A) = 0 holds if and only if Q(A) = 0. The Radon-Nikodým theorem then states that the Radon-Nikodým density of Q with respect to P is well defined and is a random variable η on (Ω,F), denoted as

dQ/dP (ω) = η(ω), P-a.s.,

which means that the following equality is satisfied for every A ∈ F

Q(A) = ∫_A η dP. (1)
Consequently, for any Q-integrable random variable X : (Ω,F) → R we have

∫_A X dQ = ∫_A Xη dP. (2)
It is easy to check that the Radon-Nikodým density η is unique (see below) and it is strictly positive P-a.s. since P and Q are equivalent probability measures. Furthermore, the random variable η is P-integrable with EP(η) = 1. It is also clear that the equality EQ(X) = EP(Xη) holds for any Q-integrable random variable X since it suffices to take A = Ω in (2). Finally, the Radon-Nikodým density of P with respect to Q is equal to η^{-1} since for every A ∈ F

P(A) = ∫_A η^{-1} dQ. (3)
For the uniqueness of the Radon-Nikodým density η, it suffices to argue by contradiction: if η and η̃ are any two different densities, then at least one of the events A := {η − η̃ > 0} ∈ F and B := {η − η̃ < 0} ∈ F has a positive probability under P, that is, P(A) + P(B) > 0. Suppose that P(A) > 0. Then we would get

0 = ∫_A dQ − ∫_A dQ = ∫_A η dP − ∫_A η̃ dP = ∫_A (η − η̃) dP > 0,
which is a contradiction. Of course, an analogous argument can be applied to the event B. Hence P(η = η̃) = 1, that is, η = η̃, P-a.s.
In fact, it is not essential to assume that Q and P are probability measures. It
suffices to assume that they are (possibly signed) finite measures on (Ω,F) and Q is
absolutely continuous with respect to P. Then the Radon-Nikodým density η of Q with respect to P is well defined and it is unique, P-a.s., and thus equalities (1) and (2) are
still valid.
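On a finite sample space all of these identities reduce to elementary sums, so they can be checked mechanically. A minimal Python sketch (the measures and the random variable below are illustrative):

```python
# Radon-Nikodym density on a finite sample space: eta(w) = Q({w})/P({w}).
# We verify E_P(eta) = 1 and the change-of-measure identity (2) with A = Omega,
# i.e. E_Q(X) = E_P(X * eta).
P = {"a": 0.2, "b": 0.5, "c": 0.3}            # probability measure P
Q = {"a": 0.1, "b": 0.6, "c": 0.3}            # equivalent measure Q (same null sets)
eta = {w: Q[w] / P[w] for w in P}             # Radon-Nikodym density dQ/dP

X = {"a": 1.0, "b": -2.0, "c": 4.0}           # an arbitrary random variable

E_P_eta = sum(P[w] * eta[w] for w in P)       # expectation of eta under P
E_Q_X = sum(Q[w] * X[w] for w in Q)           # E_Q(X)
E_P_X_eta = sum(P[w] * X[w] * eta[w] for w in P)  # E_P(X * eta)

print(E_P_eta, E_Q_X, E_P_X_eta)
```

The same pattern (build η pointwise, then compare sums) is a convenient sanity check for any discrete change of measure.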
2. Conditional expectation
Let X be a P-integrable random variable on (Ω,F,P), so that X : Ω → R is F-measurable and EP|X| < ∞. If G is a sub-σ-field of F, then the conditional expectation EP(X | G) is a G-measurable random variable such that for every A ∈ G the following equality holds

∫_A X dP = ∫_A EP(X | G) dP. (4)
Equality (4) and the property that EP(X | G) is G-measurable can be used to show that the conditional expectation EP(X | G) is unique (if it exists).
To establish the existence of EP(X | G), it suffices to consider the finite measure QX on (Ω,G), which is given by, for every A ∈ G,

QX(A) = ∫_A X dP. (5)
The finite measure QX is not necessarily a probability measure or a positive measure.
However, it is manifestly absolutely continuous with respect to P since if A ∈ G and
P(A) = 0 then QX(A) = 0. Therefore, the Radon-Nikodým density ηX of QX with respect to P is well defined and it is a G-measurable random variable. Then we set
EP(X| G) := ηX and thus we conclude that the conditional expectation EP(X| G) exists.
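When G is generated by a finite partition, the construction above is fully explicit: QX assigns to each block its P-weighted mass of X, and the density ηX is the block average. A sketch with illustrative numbers:

```python
# Conditional expectation E_P(X|G) on a finite space, G generated by a partition.
# On each block, eta_X = dQ_X/dP is the P-weighted average of X over the block,
# and the defining property (4) holds for every block A of the partition.
P = {1: 0.1, 2: 0.2, 3: 0.3, 4: 0.4}
X = {1: 5.0, 2: -1.0, 3: 2.0, 4: 0.0}
partition = [{1, 2}, {3, 4}]                   # blocks generating G

cond = {}                                      # E_P(X|G), constant on each block
for block in partition:
    QX_A = sum(P[w] * X[w] for w in block)     # Q_X(A) as in (5)
    P_A = sum(P[w] for w in block)
    for w in block:
        cond[w] = QX_A / P_A                   # eta_X = dQ_X/dP on the block

# verify the defining equality (4) on every generating block A
for block in partition:
    lhs = sum(P[w] * X[w] for w in block)      # integral of X over A under P
    rhs = sum(P[w] * cond[w] for w in block)   # integral of E_P(X|G) over A
    assert abs(lhs - rhs) < 1e-12
```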
3. Abstract Bayes formula
Assume that P and Q are equivalent probability measures on (Ω,F) and η is the Radon-Nikodým density of Q with respect to P. Let G be a sub-σ-field of F and let X be a random variable integrable with respect to Q. Show that the following abstract version of the Bayes formula holds

EQ(X | G) = EP(Xη | G) / EP(η | G). (6)
• It is easy to check that EP(η | G) is strictly positive so that the right-hand side in (6)
is well defined. By assumption, the random variable X is Q-integrable and thus the
random variable Xη is P-integrable. Therefore, it suffices to show that
EP(Xη | G) = EQ(X | G)EP(η | G).
Since the right-hand side of the last formula defines a G-measurable random variable
and we know that the conditional expectation EP(Xη | G) is unique, it suffices to show
that for any event A ∈ G we have

∫_A Xη dP = ∫_A EQ(X | G) EP(η | G) dP.
But for every A ∈ G, we obtain

∫_A Xη dP = ∫_A X dQ = ∫_A EQ(X | G) dQ = ∫_A EQ(X | G) η dP
          = ∫_A EP( EQ(X | G) η | G ) dP = ∫_A EQ(X | G) EP(η | G) dP,
as was required to demonstrate.
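The Bayes formula (6) can also be verified numerically on a finite space, where conditional expectations given a partition-generated G are block averages (the measures, partition and X below are illustrative):

```python
# Abstract Bayes formula (6) on a finite space, G generated by a partition.
P = {1: 0.1, 2: 0.2, 3: 0.3, 4: 0.4}
Q = {1: 0.25, 2: 0.25, 3: 0.25, 4: 0.25}      # equivalent to P (same support)
eta = {w: Q[w] / P[w] for w in P}             # dQ/dP
X = {1: 3.0, 2: 1.0, 3: -2.0, 4: 5.0}
partition = [{1, 2}, {3, 4}]

def cond_exp(measure, Y, block):
    """Conditional expectation given G, evaluated on a partition block."""
    mass = sum(measure[w] for w in block)
    return sum(measure[w] * Y[w] for w in block) / mass

for block in partition:
    lhs = cond_exp(Q, X, block)                               # E_Q(X|G)
    num = cond_exp(P, {w: X[w] * eta[w] for w in P}, block)   # E_P(X eta|G)
    den = cond_exp(P, eta, block)                             # E_P(eta|G)
    assert abs(lhs - num / den) < 1e-12
```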
4. Martingales and change of a probability measure
Let P and Q be equivalent probability measures on a filtered probability space (Ω,F,F,P) where F = (Ft)t∈[0,T] is an arbitrary filtration such that F0 is trivial. Let (ηt)t∈[0,T] be the Radon-Nikodým density process of Q with respect to P and F, so that, in particular, dQ = ηT dP on (Ω,FT), where T > 0 is a fixed date, and dQ = ηt dP on (Ω,Ft) for every t ∈ [0,T].
If a process (Mt)t∈[0,T ] is such that Mt is Ft-measurable for every t, then we say that
M is F-adapted. If for an F-adapted, Q-integrable process M we have that
EQ(Ms | Ft) = Mt, ∀ 0 ≤ t ≤ s ≤ T,
then we say that M is an F-martingale under Q or, briefly, that M is a (Q,F)-martingale.
(a) Show that (ηt)t∈[0,T ] is a strictly positive process such that η0 = 1 and EP(ηs | Ft) = ηt
for every 0 ≤ t ≤ s ≤ T. Hence the Radon-Nikodým density process η of Q with respect to P is a strictly positive F-martingale under P.
• To establish the equality EP(ηs | Ft) = ηt for every 0 ≤ t ≤ s ≤ T , we will first show
that the following equality holds
ηt = EP(ηT | Ft), (7)
which implies that the process η is a (P,F)-martingale. Notice that the process η−1 is
not a (P,F)-martingale but it is a (Q,F)-martingale.
To establish (7), it suffices to observe that the random variable EP(ηT | Ft) is Ft-measurable and for every A ∈ Ft we have that

∫_A dQ = ∫_A ηT dP = ∫_A EP(ηT | Ft) dP,
where the second equality follows from the definition of the conditional expectation.
From the uniqueness of the Radon-Nikodým density, we deduce that ηt = EP(ηT | Ft) where, by definition, ηt is the Radon-Nikodým density of Q with respect to P on (Ω,Ft).
For the uniqueness of the Radon-Nikodým density ηt, it suffices to argue by contradiction: if ηt and η̃t are any two different densities, then either the event A = {ηt − η̃t > 0} ∈ Ft or the event B = {ηt − η̃t < 0} ∈ Ft has a positive probability under P (or both). But then for A we get

0 = ∫_A dQ − ∫_A dQ = ∫_A ηt dP − ∫_A η̃t dP = ∫_A (ηt − η̃t) dP > 0,
which is a contradiction. Of course, an analogous argument can be applied to B. Hence we conclude that P(ηt = η̃t) = 1.
In the next step, we use the so-called tower property of conditional expectation: for
any dates t ≤ s ≤ T ,
EP(ηs | Ft) = EP( EP(ηT | Fs) | Ft ) = EP(ηT | Ft) = ηt
since Ft ⊂ Fs. Of course, this is simply the martingale property of η under P.
To show that the process η is strictly positive, we will argue by contradiction. It is
clear that η is a non-negative process since Q is a probability measure. Suppose that
there exists t ∈ [0, T ] such that the event A = {ηt = 0} ∈ Ft has a positive probability
P so that P(A) > 0. Then

Q(A) = ∫_A ηt dP = 0,

which contradicts our assumption that the probability measures P and Q are equivalent on (Ω,FT) (hence also on (Ω,Ft) for every t ∈ [0,T], since the equivalence of probability measures is inherited by sub-σ-fields). Of course, a similar argument can be applied to the event B = {ηt < 0} ∈ Ft.
(b) Let X be an FT -measurable random variable. Show that if X is Q-integrable, then
for every t ∈ [0, T ] the abstract Bayes formula holds
EQ(X | Ft) = EP(ηT X | Ft) / EP(ηT | Ft). (8)
• It is clear that (8) is a special case of the abstract Bayes formula since ηT is the
Radon-Nikodým density of Q with respect to P on (Ω,FT) and Ft is a sub-σ-field of FT.
(c) Let X be an Fs-measurable and Q-integrable random variable. Using (a) and (b),
establish the following equalities, for every 0 ≤ t ≤ s ≤ T ,
EQ(X | Ft) = ηt^{-1} EP(ηsX | Ft) = EP(ηt^{-1}ηsX | Ft).
• By applying the abstract Bayes formula to the Fs-measurable and Q-integrable random variable X, we obtain, for every 0 ≤ t ≤ s ≤ T,

EQ(X | Ft) = EP(ηsX | Ft) / EP(ηs | Ft)    (by (b))
           = ηt^{-1} EP(ηsX | Ft)    (by (a))
           = EP(ηt^{-1}ηsX | Ft),

where the last equality holds since ηt^{-1} is Ft-measurable.
(d) Let (Mt)t∈[0,T ] be an arbitrary Q-integrable and F-adapted process. Show that the
following conditions are equivalent:
(i) EQ(Ms | Ft) = Mt for every 0 ≤ t ≤ s ≤ T ,
(ii) EP(ηsMs | Ft) = ηtMt for every 0 ≤ t ≤ s ≤ T .
Hence the following conditions are equivalent for any F-adapted, Q-integrable pro-
cess M :
(i) the process M is an F-martingale under Q,
(ii) the process ηM is an F-martingale under P, where η is the Radon-Nikodým density process of Q with respect to P and F.
• It suffices to observe that, for every 0 ≤ t ≤ s ≤ T,

EQ(Ms | Ft) = Mt  ⇔  ηt^{-1} EP(Msηs | Ft) = Mt  ⇔  EP(Msηs | Ft) = Mtηt,

where the first equivalence follows from part (c). This shows that conditions (i) and (ii) are equivalent.
5. Change of a numeraire
Consider an arbitrary family of strictly positive, F-adapted processes Z1, Z2, . . . , Zn
on (Ω,F ,F,P) where F = (Ft)t∈[0,T ] is a filtration. Assume that P is any probability
measure on (Ω,F) such that the processes Z2/Z1, . . . , Zn/Z1 are (P,F)-martingales.
For a fixed i, we define the probability measure Pi, equivalent to P on (Ω,FT), by postulating that the Radon-Nikodým density of Pi with respect to P on (Ω,FT) is given by

η^i_T = dPi/dP = (Z^1_0 Z^i_T)/(Z^1_T Z^i_0), P-a.s.
Then the Radon-Nikodým density of Pi with respect to P on (Ω,Ft) equals

η^i_t = EP(η^i_T | Ft) = (Z^1_0 Z^i_t)/(Z^1_t Z^i_0), P-a.s.,

where the second equality holds since the process Z^i/Z^1 is a (P,F)-martingale.
Hence, from part (d) in Exercise 4, we deduce that the processes M^1 := Z^1/Z^i, M^2 := Z^2/Z^i, . . . , M^n := Z^n/Z^i are (Pi,F)-martingales. Indeed, for every j = 1, 2, . . . , n we obtain, for every t ∈ [0,T],
M^j_t η^i_t = (Z^j_t/Z^i_t) · (Z^1_0 Z^i_t)/(Z^1_t Z^i_0) = (Z^1_0/Z^i_0) · (Z^j_t/Z^1_t) = c_{1,i} Z^j_t/Z^1_t,

where c_{1,i} := Z^1_0/Z^i_0 is a constant (recall that i is fixed). Since the processes Z^j/Z^1 are (P,F)-martingales, the processes M^j η^i, j = 1, 2, . . . , n, are (P,F)-martingales, and thus, by part (d) in Exercise 4, the processes M^j are (Pi,F)-martingales.
Notice that the assumption that the family Z^1, Z^2, . . . , Z^n is finite is not essential, and thus it can be relaxed.
6. Forward martingale measures
In particular, if we consider the family (Bt, B(t,T), T ∈ [0,T∗]), then we may define the forward martingale measure PT for any maturity T. To this end, we assume that the spot martingale measure P∗ exists, so that the processes (B(t,T)/Bt)t∈[0,T] are (P∗,F)-martingales. Then we set on (Ω,FT), for any fixed T ∈ [0,T∗],
dPT/dP∗ := (B0 B(T,T))/(BT B(0,T)), P∗-a.s.,
so that for every t ∈ [0, T ] we have
dPT/dP∗ |Ft = (B0 B(t,T))/(Bt B(0,T)), P∗-a.s.
We deduce that the processes (Bt/B(t, T ))t∈[0,T ] and (B(t, U)/B(t, T ))t∈[0,T∧U ] for every
U ∈ [0, T ∗] are (PT ,F)-martingales. Furthermore, using the Bayes formula (or part (c)
in Exercise 4), we obtain for every t ∈ [0,T]

Bt EP∗(B_T^{-1}X | Ft) = B(t,T) EPT(X | Ft)

for any random variable X such that B_T^{-1}X is integrable.
It is easy to find the Radon-Nikodým density of PT with respect to PU on (Ω,FT∧U),
namely, for every t ∈ [0, T ∧ U ],
dPT/dPU |Ft = (dPT/dP∗ |Ft)(dP∗/dPU |Ft) = (B0 B(t,T))/(Bt B(0,T)) · (Bt B(0,U))/(B0 B(t,U)) = (B(t,T) B(0,U))/(B(0,T) B(t,U)), PU-a.s.
7. Options to exchange assets
Let us consider two assets with the non-vanishing value processes V^1 and V^2 defined on a probability space (Ω,F,F,P). Consider the option with the payoff at maturity T given by

C_T = (V^1_T − K V^2_T)^+ = V^1_T 1_D − K V^2_T 1_D
where K > 0 is a constant and D = {V^1_T > K V^2_T} is the exercise set. It is easy to check using the abstract Bayes rule that the equality

dP1/dP2 = (V^2_0/V^1_0)(V^1_T/V^2_T), P2-a.s., (9)

gives a link between the martingale measures P1 and P2 associated with the choice of the value processes V^1 and V^2 as numeraires, where the probability measures P1 and P2 are considered on (Ω,FT).
Assume that the process V^1/V^2 satisfies under P2

d(V^1_t/V^2_t) = (V^1_t/V^2_t) γ^{1,2}_t dW^{1,2}_t

for some bounded function γ^{1,2} : [0,T] → R^d, where W^{1,2} is a Brownian motion under P2.
(a) Assume that the option can be replicated. Show that the arbitrage-free price of
the option has the following representation, for every t ∈ [0,T],

C_t = V^1_t P1(D | Ft) − K V^2_t P2(D | Ft).
(b) Using (9), show that the Radon-Nikodým density of P1 with respect to P2 equals

dP1/dP2 = E_T( ∫_0^· γ^{1,2}_u dW^{1,2}_u ), P2-a.s.,
and the process

W^{2,1}_t := W^{1,2}_t − ∫_0^t γ^{1,2}_u du, ∀ t ∈ [0,T],

is a Brownian motion under P1.
(c) Show that

C_t = V^1_t N(d_1(t,T)) − K V^2_t N(d_2(t,T)) (10)

where

d_{1,2}(t,T) = ( ln(V^1_t/(K V^2_t)) ± (1/2) v^2_{1,2}(t,T) ) / v_{1,2}(t,T)

and

v^2_{1,2}(t,T) = ∫_t^T |γ^{1,2}_u|^2 du.
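Under the additional assumption that γ^{1,2} is a constant γ, so that v^2_{1,2}(t,T) = γ^2(T−t), formula (10) is straightforward to implement (the numerical inputs below are illustrative):

```python
# Exchange-option price (10) with constant gamma: v_{1,2}(t,T) = gamma*sqrt(T-t).
from math import log, sqrt, erf

def norm_cdf(x):
    """Standard Gaussian cdf N(x), expressed via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def exchange_option_price(V1, V2, K, gamma, t, T):
    """C_t = V1 N(d1) - K V2 N(d2), d_{1,2} = (ln(V1/(K V2)) +/- v^2/2)/v."""
    v = gamma * sqrt(T - t)
    d1 = (log(V1 / (K * V2)) + 0.5 * v * v) / v
    d2 = d1 - v
    return V1 * norm_cdf(d1) - K * V2 * norm_cdf(d2)

price = exchange_option_price(V1=100.0, V2=95.0, K=1.0, gamma=0.2, t=0.0, T=1.0)
print(price)
```

Note that no interest rate enters: V^2 serves as the numeraire, so only the relative volatility γ of V^1/V^2 matters.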
8. Change of a numeraire
Let Z^1, Z^2, . . . , Z^n be Itô processes on some probability space (Ω,F,F,P), so that, for every i = 1, 2, . . . , n,

dZ^i_t = α^i_t dt + β^i_t dW_t

for some Brownian motion W. Assume that the process ϕ = (ϕ^1, . . . , ϕ^n) is a self-financing strategy, so that the wealth process V(ϕ) satisfies V_t(ϕ) = Σ_{i=1}^n ϕ^i_t Z^i_t and

dV_t(ϕ) = Σ_{i=1}^n ϕ^i_t dZ^i_t.
(a) Let X be an arbitrary strictly positive Itô process on (Ω,F,F,P). Let us define Ṽ_t(ϕ) = Σ_{i=1}^n ϕ^i_t Z̃^i_t where, by definition, Z̃^i_t = Z^i_t/X_t. Show that

dṼ_t(ϕ) = Σ_{i=1}^n ϕ^i_t dZ̃^i_t.
(b) Assume that a strictly positive Itô process X represents the wealth process of some self-financing trading strategy ψ. Assume that there exists a martingale measure for the relative prices Z^2/Z^1, . . . , Z^n/Z^1. Show that there exists a martingale measure for Z̃^1, Z̃^2, . . . , Z̃^n.
(c) Are your arguments used in part (b) still valid if the process X is assumed to be an arbitrary strictly positive Itô process, rather than the wealth process of a self-financing trading strategy?
9. Futures strategies
Let us fix a time horizon T ≤ T∗. We consider a European contingent claim X which settles at time T. By a futures strategy we mean a pair ϕ_t = (ϕ^1_t, ϕ^2_t) of real-valued adapted stochastic processes, defined on the probability space (Ω,F,F,P). The wealth process V^f(ϕ) of a futures strategy ϕ equals, for every t ∈ [0,T],

V^f_t(ϕ) = ϕ^2_t B_t,

and we say that a futures strategy ϕ = (ϕ^1, ϕ^2) is self-financing if for every t ∈ [0,T]

V^f_t(ϕ) = V^f_0(ϕ) + ∫_0^t ϕ^1_u df_u + ∫_0^t ϕ^2_u dB_u.
A probability measure P̃ equivalent to P is called the futures martingale measure if the discounted wealth Ṽ^f_t(ϕ) := V^f_t(ϕ)/B_t of any self-financing futures strategy ϕ is a (local) martingale under P̃.
(a) Show that for any self-financing futures strategy ϕ we have dṼ^f_t(ϕ) = ϕ^1_t B_t^{-1} df_t.
(b) Let P̃ be a probability measure on (Ω,FT) equivalent to P. Show that P̃ is a futures martingale measure if and only if the futures price f is a (local) martingale under P̃.
(c) Show that if P∗ is a unique spot martingale measure in an arbitrage-free model
M and X is a bounded, FT -measurable contingent claim that settles at T , then its
futures price equals ft = EP∗(X | Ft) for every t ∈ [0, T ].
10. Multivariate Gaussian distribution
Assume that ζ = (ζ^1, ζ^2, . . . , ζ^n) has the Gaussian distribution N(0,Γ) where Γ is the variance-covariance matrix, that is, γ_{i,j} = Cov(ζ^i, ζ^j) = EP(ζ^i ζ^j) for every i, j = 1, 2, . . . , n. Recall that the correlation matrix [ρ_{i,j}] can be obtained from Γ since

ρ_{i,j} = γ_{i,j}/√(γ_{i,i} γ_{j,j}) = Cov(ζ^i, ζ^j)/√(Var(ζ^i) Var(ζ^j)).
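The passage from Γ to the correlation matrix is elementwise. In the sketch below, Γ is built as A Aᵀ (A is illustrative) so that it is a valid positive semidefinite covariance matrix:

```python
# Correlation matrix from a covariance matrix: rho_ij = gamma_ij/sqrt(gamma_ii*gamma_jj).
from math import sqrt

A = [[2.0, 0.0, 0.0],
     [1.0, 2.0, 0.0],
     [0.0, 1.0, 1.0]]
n = len(A)
# Gamma = A A^T is positive semidefinite by construction
Gamma = [[sum(A[i][k] * A[j][k] for k in range(n)) for j in range(n)]
         for i in range(n)]

rho = [[Gamma[i][j] / sqrt(Gamma[i][i] * Gamma[j][j]) for j in range(n)]
       for i in range(n)]
print(rho)
```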
Our goal is to show that there exist some k ≤ n and vectors a_1, a_2, . . . , a_n ∈ R^k such that (ζ^1, ζ^2, . . . , ζ^n) ∼ (a_1ψ, a_2ψ, . . . , a_nψ) where ψ = (ψ^1, ψ^2, . . . , ψ^k) and ψ^1, ψ^2, . . . , ψ^k are i.i.d. random variables with the standard Gaussian distribution N(0,1), so that ψ has the k-dimensional standard Gaussian distribution N(0, I_k) where I_k is the identity matrix. Notice that a_1ψ, a_2ψ, . . . , a_nψ are inner products, that is, a_iψ = Σ_{j=1}^k a^j_i ψ^j.
(a) Show that there exists a k × n matrix Θ = [θ_1, θ_2, . . . , θ_n] such that Γ = Θ^TΘ where Θ^T is the transpose of Θ. Let δ_1, δ_2, . . . , δ_n be the eigenvalues of Γ and v_1, v_2, . . . , v_n ∈ R^n the corresponding orthonormal eigenvectors, so that Γv_i = δ_i v_i for i = 1, 2, . . . , n. Show that if we set D = diag(δ_1, δ_2, . . . , δ_n) and V = [v_1, v_2, . . . , v_n], then Γ = V D V^T = V D^{1/2}(V D^{1/2})^T where V^T is the transpose of V.
(b) Let the number 1 ≤ k ≤ n be such that δ_1 ≥ δ_2 ≥ · · · ≥ δ_k are strictly positive numbers and δ_{k+1} = · · · = δ_n = 0. Show that if we set

V D^{1/2} = [√δ_1 v_1, √δ_2 v_2, . . . , √δ_k v_k, 0, . . . , 0]

and

Θ^T = [√δ_1 v_1, √δ_2 v_2, . . . , √δ_k v_k],

then we obtain Γ = Θ^TΘ.
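Parts (a)–(b) can be checked numerically with an eigendecomposition; the sketch below uses numpy and a deliberately singular Γ of rank k = 2 < n = 3 (the matrix A is illustrative):

```python
# Factorization Gamma = Theta^T Theta via the spectral decomposition of parts (a)-(b),
# keeping only the k strictly positive eigenvalues.
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, 1.0]])
Gamma = A @ A.T                               # rank-2, 3x3 covariance matrix

delta, V = np.linalg.eigh(Gamma)              # eigenvalues ascending, V orthonormal
order = np.argsort(delta)[::-1]               # reorder descending as in part (b)
delta, V = delta[order], V[:, order]

k = int(np.sum(delta > 1e-10))                # number of strictly positive eigenvalues
ThetaT = V[:, :k] * np.sqrt(delta[:k])        # n x k matrix [sqrt(delta_i) v_i]

print(k, np.allclose(ThetaT @ ThetaT.T, Gamma))
```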
(c) Deduce from part (b) that (ζ1, ζ2, . . . , ζn) has the same joint distribution as the
random variable (a1ψ, a2ψ, . . . , anψ) where a1, a2, . . . , an are some vectors in Rk and the
random variable ψ = (ψ1, ψ2, . . . , ψk) where ψ1, ψ2, . . . , ψk are i.i.d. random variables
with the standard Gaussian distribution N(0, 1) so that ψ ∼ N(0, Ik).
(d) Let n_k be the standard k-dimensional Gaussian density

n_k(x) = (2π)^{-k/2} e^{-|x|^2/2}, ∀x ∈ R^k.
Assume that (ζ^1, ζ^2, . . . , ζ^n) has the Gaussian distribution N(0,Γ). Find the representation of the expected value EP(g(ζ^1, ζ^2, . . . , ζ^n)) in terms of a k-dimensional integral with respect to the Gaussian density n_k.
(e) Assume that the matrix Γ is non-singular. Show that the joint density function of the random variable ζ = (ζ^1, ζ^2, . . . , ζ^n) equals

f_ζ(x) = (2π)^{-n/2}(det Γ)^{-1/2} e^{-x^TΓ^{-1}x/2}, ∀x ∈ R^n,

and the characteristic function equals

ϕ_ζ(u) = EP(e^{iuζ}) = e^{-u^TΓu/2}, ∀u ∈ R^n.
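The characteristic function in part (e) admits a quick Monte Carlo sanity check: sampling ζ = Aψ with Γ = A Aᵀ and averaging e^{iuζ} should reproduce e^{−uᵀΓu/2} up to sampling error (the matrix, the vector u and the sample size are illustrative):

```python
# Monte Carlo check of the characteristic function phi(u) = exp(-u^T Gamma u / 2)
# for zeta ~ N(0, Gamma).
import numpy as np

A = np.array([[1.0, 0.0],
              [0.5, 1.0]])
Gamma = A @ A.T                                    # covariance of zeta = A psi

rng = np.random.default_rng(2)
zeta = rng.standard_normal((1_000_000, 2)) @ A.T   # rows are samples of zeta

u = np.array([0.3, -0.7])
phi_mc = np.mean(np.exp(1j * zeta @ u))            # Monte Carlo E(exp(i u.zeta))
phi_exact = np.exp(-u @ Gamma @ u / 2.0)

print(abs(phi_mc - phi_exact))
```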
11. Correlated Brownian motions
Recall that any positive definite matrix can be a covariance matrix and any positive
definite matrix with ones on the diagonal can be a correlation matrix. It is often
convenient to specify the correlations and the variances separately.
(a) If random variables Y_1, Y_2, . . . , Y_n have correlation matrix [ρ^Y_{jk}] and if X_j = c_j Y_j for some positive real numbers c_1, c_2, . . . , c_n, then the correlations are the same, that is, [ρ^X_{jk}] = [ρ^Y_{jk}]. More generally, for any non-zero real numbers c_1, c_2, . . . , c_n we have that ρ^X_{jk} = sgn(c_j c_k) ρ^Y_{jk}.
(b) If [ρ_{jk}] is a positive definite matrix with ρ_{jj} = 1 for all j = 1, 2, . . . , n (hence a correlation matrix), then we may use the Cholesky factorization LL^T = ρ where L is a lower triangular matrix with positive diagonal. Let W^1, W^2, . . . , W^n be independent standard Brownian motions. Show that if W̃_t = LW_t for all t ∈ [0,T], then the processes W̃^1, W̃^2, . . . , W̃^n are standard Brownian motions with the desired correlations, in the sense that [ρ^{W̃_t}_{jk}] = [ρ_{jk}] for every t ∈ [0,T]. They are called correlated Brownian motions and it is easy to show that their quadratic covariations satisfy ⟨W̃^j, W̃^k⟩_t = ρ_{jk} t for all j, k = 1, 2, . . . , n.
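Part (b) can be illustrated by simulation: Cholesky-correlated increments have the target empirical correlation matrix up to Monte Carlo error (the matrix ρ and the step count below are illustrative):

```python
# Correlated Brownian increments via Cholesky: if L L^T = rho, then W~ = L W
# has the target correlation matrix rho at every fixed time.
import numpy as np

rho = np.array([[1.0, 0.5, 0.2],
                [0.5, 1.0, 0.3],
                [0.2, 0.3, 1.0]])
L = np.linalg.cholesky(rho)                   # lower triangular, L @ L.T == rho

rng = np.random.default_rng(0)
n_steps, dt = 200_000, 1.0 / 200_000
dW = rng.standard_normal((3, n_steps)) * np.sqrt(dt)   # independent BM increments
dW_corr = L @ dW                                       # correlated increments

sample_corr = np.corrcoef(dW_corr)            # empirical correlation matrix
print(np.max(np.abs(sample_corr - rho)))
```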
(c) Suppose that we are given real-valued, deterministic volatilities σ^1_t, σ^2_t, . . . , σ^n_t and a deterministic correlation matrix [ρ_{jk}(t)] for every t ∈ [0,T]. Let W̃^1, W̃^2, . . . , W̃^n be correlated Brownian motions such that the instantaneous correlations satisfy [ρ_{jk}(t)], in the sense that

d⟨W̃^j, W̃^k⟩_t = ρ_{jk}(t) dt.

Show that W̃^1, W̃^2, . . . , W̃^n can be constructed from independent standard Brownian motions W^1, W^2, . . . , W^n by setting dW̃^i_t = Σ_{j=1}^i α_{ij}(t) dW^j_t for i = 1, 2, . . . , n and identifying by recurrence the coefficients α_{ij}(t) such that Σ_{j=1}^i α^2_{ij}(t) = 1.
Hint. Start by postulating that dW̃^1_t = dW^1_t and dW̃^2_t = ρ_{21}(t) dW^1_t + √(1 − ρ^2_{21}(t)) dW^2_t. Next, set

dW̃^3_t = ρ_{31}(t) dW^1_t + α_{32}(t) dW^2_t + α_{33}(t) dW^3_t

and identify the coefficients α_{32}(t) and α_{33}(t) using the equalities d⟨W̃^3, W̃^2⟩_t = ρ_{32}(t) dt and Σ_{j=1}^3 α^2_{3j}(t) = 1. Show that this procedure can be extended to W̃^4, W̃^5, . . . , W̃^n.
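The recursion in the hint is nothing but the Cholesky factorization of [ρ_{jk}(t)] computed row by row: because the diagonal entries of a correlation matrix equal one, each row automatically has unit norm, which is exactly the constraint Σ_j α²_{ij}(t) = 1. A sketch at a fixed time with illustrative constant correlations:

```python
# Recursive identification of the coefficients alpha_ij from the hint (n = 3).
from math import sqrt

rho21, rho31, rho32 = 0.4, 0.2, 0.5

alpha = [[1.0, 0.0, 0.0],                     # dW~1 = dW1
         [rho21, sqrt(1.0 - rho21**2), 0.0],  # dW~2 = rho21 dW1 + sqrt(1-rho21^2) dW2
         [rho31, 0.0, 0.0]]                   # dW~3 = rho31 dW1 + a32 dW2 + a33 dW3
# d<W~3, W~2> = rho32 dt forces rho31*rho21 + a32*sqrt(1-rho21^2) = rho32:
alpha[2][1] = (rho32 - rho31 * rho21) / sqrt(1.0 - rho21**2)
# the unit-norm constraint sum_j alpha_3j^2 = 1 then fixes a33:
alpha[2][2] = sqrt(1.0 - alpha[2][0]**2 - alpha[2][1]**2)

rho = [[1.0, rho21, rho31],
       [rho21, 1.0, rho32],
       [rho31, rho32, 1.0]]
# alpha alpha^T reproduces the full correlation matrix
for i in range(3):
    for k in range(3):
        dot = sum(alpha[i][j] * alpha[k][j] for j in range(3))
        assert abs(dot - rho[i][k]) < 1e-12
```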
(d) Show that the following model is consistent with deterministic volatilities and instantaneous correlations of asset prices

dS^j_t = μ^j_t S^j_t dt + σ^j_t S^j_t dW̃^j_t,

in the sense that

d⟨S^j, S^k⟩_t = σ^j_t σ^k_t ρ_{jk}(t) S^j_t S^k_t dt.
Equivalently, the log-prices Y^j_t = ln(S^j_t) define the Gaussian process Y = (Y^1, Y^2, . . . , Y^n), which satisfies

d⟨Y^j, Y^k⟩_t = ρ_{jk}(t) σ^j_t σ^k_t dt.

In particular, if ρ_{jk}(t), σ^j_t and σ^k_t are constant, then [ρ^{Y_t}_{jk}] = [ρ_{jk}] for every t ∈ [0,T].
(e) Represent the dynamics of S^1, S^2, . . . , S^n in terms of independent standard Brownian motions W^1, W^2, . . . , W^n, that is,

dS^j_t = μ^j_t S^j_t dt + S^j_t Σ_{i=1}^n λ^{j,i}_t dW^i_t = μ^j_t S^j_t dt + λ^j_t S^j_t dW_t

where W = (W^1, W^2, . . . , W^n) and the functions λ^j for j = 1, 2, . . . , n are R^n-valued.
12. Dynamics of LIBORs under P_{Tn}
Consider a collection of dates 0 < T_0 < T_1 < · · · < T_n. Assume that each forward LIBOR L_j(t) = L(t,T_j), j = 0, 1, . . . , n−1, satisfies under P

dL_j(t) = L_j(t)( μ_j(t) dt + σ_j(L_j(t),t) dW^j_t ) (11)

where W^0, W^1, . . . , W^{n−1} are correlated Brownian motions with the instantaneous correlations given by

d⟨W^j, W^i⟩_t = ρ_{j,i}(t) dt

for j, i = 0, 1, . . . , n−1. Recall that the relative bond prices D_j(t) = B(t,T_j)/B(t,T_n) are martingales under P_{Tn}.
(a) Recall that the forward LIBOR L(t,T_j) equals, for every t ∈ [0,T_j],

L(t,T_j) = (B(t,T_j) − B(t,T_{j+1}))/(δ_{j+1} B(t,T_{j+1})).

Check that D_j(t) satisfies, for every j = 0, 1, . . . , n−1 and t ∈ [0,T_j],

D_j(t) = Π_{i=j}^{n−1} (1 + δ_{i+1} L_i(t)).
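The telescoping identity in part (a) is easy to verify numerically from a set of bond prices (the prices and accrual fractions δ below are illustrative):

```python
# Check of the product formula: D_j(t) = B(t,T_j)/B(t,T_n) telescopes into
# prod_{i=j}^{n-1} (1 + delta_{i+1} L_i(t)) once L_i is defined from bond prices.
B = [0.99, 0.97, 0.94, 0.90]        # B(t,T_0), ..., B(t,T_n) with n = 3
delta = [0.5, 0.5, 0.5]             # delta[i] stands for delta_{i+1}

n = len(B) - 1
# forward LIBORs L_i(t) = (B(t,T_i) - B(t,T_{i+1})) / (delta_{i+1} B(t,T_{i+1}))
L = [(B[i] - B[i + 1]) / (delta[i] * B[i + 1]) for i in range(n)]

for j in range(n):
    D_j = B[j] / B[n]               # relative bond price
    prod = 1.0
    for i in range(j, n):
        prod *= 1.0 + delta[i] * L[i]
    assert abs(D_j - prod) < 1e-12  # 1 + delta L_i = B(t,T_i)/B(t,T_{i+1}) telescopes
```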
(b) Show that the drift term μ̂_j(t) in the dynamics of the forward LIBOR L_j(t) = L(t,T_j) under the forward measure P_{Tn} equals

μ̂_j(t) = − Σ_{i=j+1}^{n−1} [δ_{i+1}L_i(t)/(1 + δ_{i+1}L_i(t))] σ_j(L_j(t),t) σ_i(L_i(t),t) ρ_{j,i}(t) (12)

and thus the joint dynamics of the forward LIBORs L_0, L_1, . . . , L_{n−1} under the forward measure P_{Tn} are

dL_j(t) = L_j(t)( − Σ_{i=j+1}^{n−1} [δ_{i+1}L_i(t)/(1 + δ_{i+1}L_i(t))] σ_j(L_j(t),t) σ_i(L_i(t),t) ρ_{j,i}(t) dt + σ_j(L_j(t),t) dŴ^j_t )

where Ŵ^0, Ŵ^1, . . . , Ŵ^{n−1} are correlated Brownian motions under P_{Tn} with the instantaneous correlations given by

d⟨Ŵ^i, Ŵ^j⟩_t = ρ_{i,j}(t) dt (13)

for every i, j = 0, 1, . . . , n−1.
• Using the classical Girsanov theorem, we obtain from (11), for j = 0, 1, . . . , n−1,

dL_j(t) = L_j(t)( μ̂_j(t) dt + σ_j(L_j(t),t) dŴ^j_t ),

where Ŵ^0, Ŵ^1, . . . , Ŵ^{n−1} are Brownian motions under P_{Tn} with the instantaneous correlations given by (13). Recall that an equivalent change of probability measure preserves the correlations between Brownian motions. The drift coefficients μ̂_j(t) are not yet specified, however.
The derivation of the drift coefficient μ̂_j(t) is based on the requirement that for every j the process D_j has the martingale property under the forward measure P_{Tn}. Applying the Itô formula to the equality D_j(t) = D_{j+1}(t)(1 + δ_{j+1}L_j(t)), we obtain

dD_j(t) = (1 + δ_{j+1}L_j(t)) dD_{j+1}(t) + δ_{j+1}D_{j+1}(t) dL_j(t) + δ_{j+1} d⟨D_{j+1}, L_j⟩_t.

Since the processes D_j and D_{j+1} are martingales under P_{Tn} and the finite variation terms in the Itô differential dD_j(t) should vanish, we find that the drift μ̂_j(t) should satisfy

D_{j+1}(t) μ̂_j(t) L_j(t) dt = −d⟨D_{j+1}, L_j⟩_t. (14)
To establish equality (12), it now suffices to compute the cross-variation ⟨D_{j+1}, L_j⟩. To this end, we shall find the martingale component in the canonical decomposition of D_{j+1}. Since

D_{j+1}(t) = Π_{i=j+1}^{n−1} (1 + δ_{i+1}L_i(t)),
we have

dD_{j+1}(t) = Σ_{i=j+1}^{n−1} Π_{k=j+1, k≠i}^{n−1} (1 + δ_{k+1}L_k(t)) d(1 + δ_{i+1}L_i(t)) + A_t
            = Σ_{i=j+1}^{n−1} Π_{k=j+1, k≠i}^{n−1} (1 + δ_{k+1}L_k(t)) δ_{i+1} dL_i(t) + A_t
            = D_{j+1}(t) Σ_{i=j+1}^{n−1} [δ_{i+1}L_i(t)/(1 + δ_{i+1}L_i(t))] σ_i(L_i(t),t) dŴ^i_t + B_t
where by A and B we denote some continuous processes of finite variation. Therefore,

d⟨D_{j+1}, L_j⟩_t = D_{j+1}(t) L_j(t) Σ_{i=j+1}^{n−1} [δ_{i+1}L_i(t)/(1 + δ_{i+1}L_i(t))] σ_j(L_j(t),t) σ_i(L_i(t),t) ρ_{j,i}(t) dt.
By combining the last equality with (14), we conclude that

μ̂_j(t) = − Σ_{i=j+1}^{n−1} [δ_{i+1}L_i(t)/(1 + δ_{i+1}L_i(t))] σ_j(L_j(t),t) σ_i(L_i(t),t) ρ_{j,i}(t),

as was required to show.
13. Girsanov’s theorem for the Poisson process
Let N be a standard Poisson process with intensity λ on (Ω,G,G,P) and let N̂_t = N_t − λt for t ∈ R_+. For a fixed T > 0, we introduce a probability measure Q on (Ω,G_T) by setting

dQ/dP |G_T = η_T, P-a.s., (15)

where the Radon-Nikodým density process (η_t, t ∈ [0,T]) satisfies

dη_t = η_{t−} κ dN̂_t, η_0 = 1, (16)

for some constant κ > −1. We first aim to show that (16) has a unique solution, which is denoted as E_t(κN̂).
(a) Assume that κ > −1. Show that the unique solution η to the SDE (16) equals

η_t = e^{N_t ln(1+κ) − κλt} = e^{N̂_t ln(1+κ) − λt(κ − ln(1+κ))}. (17)

Hint. Show by direct calculations that

η_t = e^{−κλt} Π_{0<u≤t} (1 + κΔN_u) = e^{−κλt}(1+κ)^{N_t} = e^{N_t ln(1+κ) − κλt}. (18)

More generally, show that if Y is a process of finite variation with Y_0 = 0, then the unique solution to the SDE dη_t = η_{t−} dY_t equals

η_t = η_0 e^{Y_t} Π_{0<u≤t} (1 + ΔY_u) e^{−ΔY_u} = η_0 e^{Y^c_t} Π_{0<u≤t} (1 + ΔY_u) (19)

where Y^c_t := Y_t − Σ_{0<u≤t} ΔY_u. Then consider the special case of Y := κN̂.
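Formula (17)–(18) implies EP(η_T) = 1, since the Poisson moment generating function gives EP[(1+κ)^{N_T}] = e^{λTκ}. A Monte Carlo check of this normalization (the parameters are illustrative):

```python
# Check that eta_T = e^{-kappa*lambda*T} (1+kappa)^{N_T} from (18) has mean one
# under P, where N_T ~ Poisson(lambda*T).
import numpy as np

lam, kappa, T = 2.0, 0.7, 1.0
rng = np.random.default_rng(1)
N_T = rng.poisson(lam * T, size=1_000_000)             # terminal Poisson counts
eta_T = np.exp(-kappa * lam * T) * (1.0 + kappa) ** N_T

print(eta_T.mean())
```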
(b) Let us denote a = ln(1+κ). Show that the process η is a strictly positive G-martingale under P and EP(η_T) = 1.

Hint. Show that η = M^a and use part (iii) in Proposition 9.20 from the course notes. Deduce that the process M^a solves the following SDE

dM^a_t = M^a_{t−}(e^a − 1) dN̂_t, M^a_0 = 1. (20)
(c) Assume that under P a process N is a Poisson process with intensity λ with respect to the filtration G. Suppose that the probability measure Q is defined on (Ω,G_T) through (15)–(16) for some κ > −1. Show that the process (N_t, t ∈ [0,T]) is a Poisson process under Q with respect to G with intensity λ∗ = (1+κ)λ, and thus the process (N∗_t, t ∈ [0,T]) given by

N∗_t = N_t − λ∗t = N_t − (1+κ)λt = N̂_t − κλt

is a G-martingale under Q.

Hint. It suffices to find a positive constant λ∗ such that for every a ∈ R the process M^a given by

M^a_t := e^{aN_t − λ∗t(e^a − 1)}, ∀ t ∈ [0,T], (21)

is a G-martingale under Q.
14. Girsanov’s Theorem for the Brownian-Poisson model
We assume that the following conditions are satisfied under the spot martingale
probability measure Q:
(i) the process N is a time-homogeneous Poisson process with intensity λ > 0;
(ii) the process W ∗ is a standard Brownian motion;
(iii) the processes W ∗ and N are independent.
Let G = (Gt)t∈R+ denote the joint filtration generated by the processes W ∗ and N .
We postulate that the firm's value process satisfies under Q

V_t = V_0 e^{σW∗_t − N_t − ct}, ∀ t ∈ [0,T],

where σ > 0 and c ∈ R are constants. We assume that the short-term interest rate r is constant, so that dB_t = rB_t dt, and we postulate that any defaultable contingent claim is priced through the risk-neutral valuation formula under Q.
(a) Compute the value of a constant c for which the discounted value of the firm V∗_t := V_t B_t^{-1} is a G-martingale under Q.
Hint. Use first the independence of W∗ and N under Q to find a constant c ∈ R using the equality

EQ(V∗_t) = V∗_0 = V_0, ∀ t ∈ [0,T],

which is a necessary condition for the martingale property of the process V∗. To this end, observe that

EQ(V∗_t) = e^{−rt} EQ(V_t) = V_0 e^{−(r+c)t} EQ(e^{σW∗_t}) EQ(e^{−N_t}).

Next show that for the unique solution c of the above equation the process V∗ is a martingale under Q.
(b) Assume that under the real-world probability P the firm's value process satisfies

V_t = V_0 e^{σW_t − N_t}, ∀ t ∈ [0,T],
where W is a standard Brownian motion and N is a standard Poisson process. Examine the range of intensities of N under P and give explicit expressions for the Radon-Nikodým densities

ζ_t = dP/dQ |G_t,  η_t = dQ/dP |G_t.
Hint. To describe the class of all probability measures equivalent to Q on (Ω,GT ) such
that N is a Poisson process under P you may use Proposition 9.24 from the course
notes.
15. Azéma supermartingale of a random time
Let τ be a non-negative, finite random variable on a probability space (Ω,G,Q) endowed with a filtration F = (Ft)t∈R+. We assume that Q(τ = 0) = 0 and Q(τ > t) > 0 for every t ∈ R_+. We define the processes F_t = Q(τ ≤ t | Ft) and G_t = Q(τ > t | Ft). Show that the process F (respectively, G) is a bounded, non-negative F-submartingale (respectively, F-supermartingale) under Q and admits a right-continuous modification with left-hand limits. The process G is called the Azéma supermartingale of τ with respect to F.
16. Key lemma for conditional expectations
Assume that τ is a random time such that the Azéma supermartingale of τ with respect to F is a positive process.
(a) Show that for any G-measurable and Q-integrable random variable X we have, for every t ∈ R_+,

EQ(1_{τ>t} X | Gt) = 1_{τ>t} EQ(1_{τ>t} X | Gt) = 1_{τ>t} EQ(1_{τ>t} X | Ft) / Q(τ > t | Ft). (22)
• Since Ft ⊆ Gt, it suffices to check that

EQ( 1_C X Q(C | Ft) | Gt ) = EQ( 1_C EQ(1_C X | Ft) | Gt )

where we denote C = {τ > t}. Hence it suffices to show that for every A ∈ Gt we have

∫_A 1_C X Q(C | Ft) dQ = ∫_A 1_C EQ(1_C X | Ft) dQ. (23)
It is known that for any A ∈ Gt there exists an event B ∈ Ft such that A ∩ C = B ∩ C.
Therefore,

∫_A 1_C X Q(C | Ft) dQ = ∫_{A∩C} X Q(C | Ft) dQ = ∫_{B∩C} X Q(C | Ft) dQ
= ∫_B 1_C X Q(C | Ft) dQ = ∫_B EQ(1_C X | Ft) Q(C | Ft) dQ
= ∫_B EQ( 1_C EQ(1_C X | Ft) | Ft ) dQ = ∫_{B∩C} EQ(1_C X | Ft) dQ
= ∫_{A∩C} EQ(1_C X | Ft) dQ = ∫_A 1_C EQ(1_C X | Ft) dQ.

We thus conclude that (23) is satisfied.
(b) Deduce that, for every t ≤ s,

Q(t < τ ≤ s | Gt) = 1_{τ>t} Q(t < τ ≤ s | Ft) / Q(t < τ | Ft) = 1_{τ>t} EQ(1 − e^{Γt−Γs} | Ft)

and

Q(τ > s | Gt) = 1_{τ>t} Q(τ > s | Ft) / Q(τ > t | Ft) = 1_{τ>t} EQ(e^{Γt−Γs} | Ft).
17. Pricing of the promised payoff at maturity
Show that if X is an FT-measurable and Q-integrable random variable then, for every t ≤ T,

EQ(X 1_{T<τ} | Gt) = 1_{τ>t} EQ(X e^{Γt−ΓT} | Ft),

where the hazard process Γ is given by Γt = −ln(Gt).
18. Pricing of the recovery payoff at default
Let Z be an F-predictable (for instance, F-adapted and continuous) process such that the random variable Zτ 1_{τ≤T} is Q-integrable. Assume that F is a continuous, increasing process; then Ft = 1 − e^{−Γt}, so that the equality dFt = e^{−Γt} dΓt is valid. Show that

1_{τ>t} EQ(Zτ 1_{τ≤T} | Gt) = 1_{τ>t} EQ( ∫_t^T Zu e^{Γt−Γu} dΓu | Ft ).
Hint. Show first that the asserted equality holds when Z is an F-predictable stepwise process on ]t,T], so that, for every t < u ≤ T,

Zu = Σ_{i=0}^n Z_{ti} 1_{ti < u ≤ t_{i+1}}

where t_0 = t < t_1 < · · · < t_n < t_{n+1} = T and Z_{ti} is an F_{ti}-measurable random variable for every i = 0, . . . , n.
19. Pricing of dividends before default
Assume that A is a bounded, F-predictable process of finite variation, which is right-continuous and with left-hand limits. Show that

EQ( ∫_{]t,T]} (1 − Hu) dAu | Gt ) = 1_{τ>t} EQ( ∫_{]t,T]} e^{Γt−Γu} dAu | Ft ).
Hint. Use the following version of the Itô integration by parts formula, which is valid for arbitrary right-continuous processes A and B of finite variation: for every t ≤ s,

A_s B_s = A_t B_t + ∫_{]t,s]} A_{u−} dB_u + ∫_{]t,s]} B_u dA_u.

Equivalently,

A_s B_s = A_t B_t + ∫_{]t,s]} A_{u−} dB_u + ∫_{]t,s]} B_{u−} dA_u + Σ_{t<u≤s} ΔA_u ΔB_u

where we may also denote [A,B]_t = Σ_{0<u≤t} ΔA_u ΔB_u, the quadratic covariation of A and B.
20. Conditional independence of σ-fields
Let F1, F2 and F3 be arbitrary σ-fields. We say that the σ-fields F1 and F2 are conditionally independent given F3 if
EQ(ξη | F3) = EQ(ξ | F3)EQ(η | F3) (24)
for any bounded, F1-measurable random variable ξ and any bounded, F2-measurable
random variable η. Equivalently, for any A ∈ F1 and B ∈ F2
Q(A ∩B | F3) = Q(A | F3)Q(B | F3).
We will show that F1 and F2 are conditionally independent given F3 if and only if
EQ(η | F1 ∨ F3) = EQ(η | F3) (25)
for any bounded, F2-measurable random variable η.
Proof of the equivalence (24)⇔ (25).
(⇒) We first show that (24) implies (25). We assume that (24) holds and we take any
bounded, F2-measurable random variable η. From general properties of conditional
expectation, it is known that to show that (25) is valid, it suffices to verify that the
equality
EQ( ξψ EQ(η | F1 ∨ F3) ) = EQ( ξψ EQ(η | F3) )
is satisfied for any bounded, F1-measurable random variable ξ and any bounded, F3-
measurable random variable ψ. We have
EQ( ξψ EQ(η | F1 ∨ F3) ) = EQ( EQ(ξψη | F1 ∨ F3) ) = EQ(ξψη)
= EQ( EQ(ξψη | F3) ) = EQ( ψ EQ(ξη | F3) )
= EQ( ψ EQ(ξ | F3) EQ(η | F3) )    (by (24))
= EQ( EQ(ψξ EQ(η | F3) | F3) ) = EQ( ξψ EQ(η | F3) ),
which shows that the desired equality is valid.
(⇐) We now show that (25) implies (24). If (25) is satisfied, then for any bounded,
F1-measurable random variable ξ and any bounded, F2-measurable random variable
η we obtain
EQ(ξη | F3) = EQ( EQ(ξη | F1 ∨ F3) | F3 ) = EQ( ξ EQ(η | F1 ∨ F3) | F3 )
= EQ( ξ EQ(η | F3) | F3 )    (by (25))
= EQ(ξ | F3) EQ(η | F3),
as was required to show.
21. Hypothesis (H) and equivalent conditions
Let G = F ∨ H where F is an arbitrary reference filtration and the filtration H is
generated by the default indicator process Ht = 1{t≥τ}. We say that the hypothesis
(H) is satisfied if every F-local martingale is a G-local martingale. That property is
also known as the immersion property between F and G where F ⊂ G are arbitrary
filtrations. If G = F ∨ H then G is called the progressive enlargement of F with the
filtration H (that is, with observations of the default indicator process H).
Our goal is to show that each of the following seven conditions is equivalent to the
hypothesis (H) when G = F ∨H.
(a) For any t ∈ R+ and any bounded, F∞-measurable random variable ξ
EQ(ξ | Gt) = EQ(ξ | Ft). (26)
(b) For any t ∈ R+, the σ-fields F∞ and Gt are conditionally independent given Ft,
that is: for any bounded, F∞-measurable random variable ξ and any bounded, Gt-
measurable random variable η
EQ(ξη | Ft) = EQ(ξ | Ft)EQ(η | Ft) (27)
or, equivalently, for every A ∈ F∞ and B ∈ Gt
Q(A ∩B | Ft) = Q(A | Ft)Q(B | Ft).
(c) For any t ∈ R+ and any u ≥ t, the σ-fields Fu and Gt are conditionally independent
given Ft, that is,
EQ(ξη | Ft) = EQ(ξ | Ft)EQ(η | Ft) (28)
for any bounded, Fu-measurable random variable ξ and any bounded, Gt-measurable
random variable η.
(d) For any t ∈ R+ and any bounded, Gt-measurable random variable η
EQ(η | Ft) = EQ(η | F∞). (29)
(e) For any t ∈ R+, the σ-fields F∞ and Ht are conditionally independent given Ft,
that is: for any bounded, F∞-measurable random variable ξ and any bounded, Ht-
measurable random variable ϕ we have
EQ(ξϕ | Ft) = EQ(ξ | Ft)EQ(ϕ | Ft). (30)
(f) For any t ∈ R+, we have
Q(τ ≤ t | Ft) = Q(τ ≤ t | F∞). (31)
(g) For any t, h ∈ R+, we have
Q(τ ≤ t | Ft) = Q(τ ≤ t | Ft+h). (32)
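Condition (f) holds, for instance, in the canonical Cox construction, where τ = inf{t ∈ R+ : Γt ≥ θ} for an F-adapted increasing process Γ and a unit exponential random variable θ independent of F∞; then Q(τ ≤ t | F∞) = 1 − e^{−Γt}, which is already Ft-measurable. The Monte Carlo sketch below (the hazard path is an arbitrary illustrative choice) fixes one realisation of Γ, which plays the role of conditioning on F∞, and checks this identity empirically.

```python
import math
import random

random.seed(0)

# Hypothetical Cox-type example: one fixed increasing hazard path Gamma
# (conditioning on F_infinity), theta ~ Exp(1) drawn independently, and
# tau = inf{t : Gamma_t >= theta}, so that {tau <= t} = {theta <= Gamma_t}.
Gamma = lambda t: 0.5 * t + 0.2 * (1.0 - math.cos(t))   # increasing, Gamma(0) = 0

t0 = 2.0
N = 200_000
hits = sum(1 for _ in range(N) if random.expovariate(1.0) <= Gamma(t0))

empirical = hits / N
exact = 1.0 - math.exp(-Gamma(t0))  # Q(tau <= t0 | F_infinity), F_{t0}-measurable
assert abs(empirical - exact) < 5e-3
```

The empirical default frequency along the fixed path matches 1 − e^{−Γ(t0)}, illustrating equality (31).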
The proof of the equivalence of the hypothesis (H) and each of conditions (a)–(g)
will be done in seven steps. Notice that the equivalences (b) ⇔ (c) and (f) ⇔ (g) are
rather clear (see steps 3 and 7). Except for step 1, all other equivalences are obtained
from the equivalence (24)⇔ (25) by using a suitable choice of σ-fields F1, F2 and F3.
1. Equivalence of (H) and (a).
(⇒) Assume first that the hypothesis (H) is satisfied. Consider an arbitrary bounded,
F∞-measurable random variable ξ. Let Mt := EQ(ξ | Ft) be the F-martingale associated
with ξ. Notice that M∞ = ξ since ξ is F∞-measurable. Then the hypothesis (H) implies
that M is a local martingale with respect to G and thus a G-martingale since M is
bounded and any bounded local martingale is a martingale. We conclude that the
equality Mt = EQ(ξ | Gt) is satisfied and thus condition (a) is valid.
(⇐) Suppose now that (a) holds. First, we note that the standard truncation ar-
gument shows that the boundedness of a random variable ξ in condition (a) can be
replaced by the assumption that ξ is Q-integrable. Hence any F-martingale M is a
G-martingale since M is clearly G-adapted and we have, for every t ≤ s,
Mt = EQ(Ms | Ft) = EQ(Ms | Gt)
where the second equality is an immediate consequence of (a).
Assume that M is an F-local martingale. Then there exists an increasing sequence
of F-stopping times τn such that limn→∞ τn = ∞ and, for any n, the stopped process
Mτn is a uniformly integrable F-martingale. Hence Mτn is also a uniformly integrable
G-martingale, which means that M is a G-local martingale.
2. Equivalence of (a) and (b).
We are going to use Exercise 20 with F3 = Ft for a fixed t ∈ R+ by selecting
particular σ-fields related to the filtrations F, H and G as F1 and F2. Recall also that
we deal with the case where G = F ∨H and thus Ft ⊆ Gt. Hence to show that (a) and (b)
are equivalent, it suffices to use the equivalence of (24) and (25) with F1 = Gt, F2 = F∞
and F3 = Ft.
3. Equivalence of (b) and (c).
It is easy to see that condition (b) is also equivalent to condition (c): since Fu ⊆ F∞,
condition (b) immediately implies (c) and, conversely, (c) yields (b) by letting u→∞
and using a standard monotone class argument.
4. Equivalence of (b) and (d).
By taking F1 = F∞, F2 = Gt, F3 = Ft and noting that Ft ⊆ F∞, we see that (b) is
equivalent to: for every t ∈ R+, and any bounded, Gt-measurable random variable η
we have EQ(η | Ft) = EQ(η | F∞), that is, to condition (d).
5. Equivalence of (b) and (e).
Since condition (b) is manifestly stronger than (e), it is enough to check that condi-
tion (e) implies (b). From the equivalence of (24) and (25) with F1 = Ht, F2 = F∞ and
F3 = Ft, we deduce that condition (e) is equivalent to the following condition: for any
bounded, F∞-measurable random variable ξ we have
EQ(ξ |Ht ∨ Ft) = EQ(ξ | Ft).
Since Gt = Ht ∨ Ft, the last equality gives condition (a), which is already known to be
equivalent to (b) and thus we conclude that condition (e) implies (b).
6. Equivalence of (e) and (f).
We first observe that (31) is equivalent to the property: for any fixed t ∈ R+ and all
s ∈ [0, t] we have that
Q(τ ≤ s | Ft) = Q(τ ≤ s | F∞).
Furthermore, the σ-field Ht is generated by the class of events
{{τ ≤ s}, s ≤ t}.
Hence to prove that (f) is equivalent to the property that the σ-fields F∞ and Ht
are conditionally independent given Ft (that is, to condition (e)) it suffices to take
F1 = F∞, F2 = Ht and F3 = Ft.
7. Equivalence of (f) and (g).
It is easy to see that conditions (f) and (g) are equivalent. Indeed, if (f) holds then,
by the tower property, Q(τ ≤ t | Ft+h) = EQ(Q(τ ≤ t | F∞) | Ft+h) = EQ(Q(τ ≤ t | Ft) | Ft+h)
= Q(τ ≤ t | Ft), which is (g). Conversely, if (g) holds, then it suffices to let h → ∞ in
(32) and apply the martingale convergence theorem.
22. Representation theorem for G-martingales
Assume that all F-martingales are continuous and the F-hazard process Γ of τ is
increasing and continuous. Show that the martingale Mht = EQ(hτ | Gt) where h is an
F-predictable process such that EQ|hτ | <∞ can be represented as follows
Mht = mh0 + ∫_0^{t∧τ} eΓu dmhu + ∫_{]0,t∧τ]} (hu − Mhu−) dMu (33)
where the continuous F-martingale mh is given by
mht = EQ(∫_0^∞ hu dFu | Ft)
and the discontinuous G-martingale M equals Mt = Ht − Γt∧τ .
Hint. From the key lemma (see Exercise 16), we obtain
Mht = EQ(hτ | Gt) = 1{t≥τ}hτ + 1{t<τ} eΓt EQ(∫_t^∞ hu dFu | Ft)
= 1{t≥τ}hτ + 1{t<τ} eΓt (mht − ∫_0^t hu dFu)
so that, in particular, Mh0 = mh0 . We note that the last equality can be rewritten as
follows
Mht = ∫_0^t hu dHu + (1−Ht) eΓt (mht + ∫_0^t hu d(e^{−Γu})),
since dFu = −d(e^{−Γu}).
Hence equality (33) can be obtained using a suitable version of the Itô integration by
parts formula.
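Before carrying out the integration by parts, the pre-default formula above can be sanity-checked numerically in the simplest possible setting: a trivial reference filtration F (so conditional expectations given Ft are plain expectations), Γt = t (so τ has the unit exponential law), and the illustrative payoff h(u) = e^{−u}. Then mht = 1/2 for all t, and on {τ > t} the key lemma gives Mht = eΓt ∫_t^∞ hu dFu = e^t · e^{−2t}/2 = e^{−t}/2. A minimal Monte Carlo sketch under these assumptions:

```python
import math
import random

random.seed(1)

# Sanity check of the pre-default formula: trivial F, Gamma_t = t
# (tau ~ Exp(1)) and illustrative payoff h(u) = exp(-u).  On {tau > t}:
#   M^h_t = e^{Gamma_t} * int_t^infty h_u dF_u = e^t * e^{-2t}/2 = e^{-t}/2.
t0 = 1.0
N = 400_000
num = den = 0.0
for _ in range(N):
    tau = random.expovariate(1.0)
    if tau > t0:                  # restrict to the event {tau > t0}
        num += math.exp(-tau)     # h(tau)
        den += 1.0

monte_carlo = num / den           # estimates E_Q(h_tau | tau > t0)
closed_form = math.exp(-t0) / 2.0
assert abs(monte_carlo - closed_form) < 5e-3
```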
The Itô integration by parts formula for the (possibly discontinuous) semimartin-
gales X and Y reads (see also Exercise 19 for the special case where X and Y are
assumed to be processes of finite variation), for every 0 ≤ t ≤ s,
XsYs = XtYt + ∫_{]t,s]} Yu− dXu + ∫_{]t,s]} Xu− dYu + [X, Y]s − [X, Y]t
where the quadratic covariation of the semimartingales X and Y is given by
[X, Y]t = ⟨Xc, Yc⟩t + Σ_{0<u≤t} ΔXu ΔYu
and the continuous martingales Xc, Yc are the unique continuous local martingale
parts of X and Y, respectively. Recall that, by definition, any semimartingale is a
càdlàg process, that is, an RCLL (right-continuous with left-hand limits) process.
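In discrete time the integration by parts formula is an exact algebraic identity, with the sum of products of increments playing the role of the quadratic covariation, which makes it easy to verify on arbitrary paths. A minimal sketch (the simulated paths are arbitrary illustrative choices):

```python
import random

random.seed(2)

# Discrete-time analogue of integration by parts: for any paths,
#   X_n Y_n - X_0 Y_0
#     = sum_k Y_{k-1} dX_k + sum_k X_{k-1} dY_k + sum_k dX_k dY_k,
# the last sum being the discrete quadratic covariation [X, Y].
n = 1000
X, Y = [0.0], [0.0]
for _ in range(n):
    X.append(X[-1] + random.choice([-1.0, 0.0, 1.0]))   # path with jumps
    Y.append(Y[-1] + random.gauss(0.0, 0.1))            # another arbitrary path

lhs = X[-1] * Y[-1] - X[0] * Y[0]
rhs = sum(Y[k - 1] * (X[k] - X[k - 1])
          + X[k - 1] * (Y[k] - Y[k - 1])
          + (X[k] - X[k - 1]) * (Y[k] - Y[k - 1])
          for k in range(1, n + 1))
assert abs(lhs - rhs) < 1e-8      # exact identity up to rounding
```

The identity follows by telescoping XkYk − Xk−1Yk−1 = Yk−1ΔXk + Xk−1ΔYk + ΔXkΔYk over k, which mirrors the continuous-time formula term by term.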
If we denote (recall that Ft = 1 − e^{−Γt})
Xt = 1 − Ht, Yt = eΓt (mht + ∫_0^t hu d(e^{−Γu}))
then it is clear that X is a process of finite variation with Xc = 0 and Y is a
continuous semimartingale, so that [X, Y] = 0. Therefore, the Itô integration by
parts formula becomes
d(XtYt) = Yt dXt + Xt− dYt = −Yt dHt + (1−Ht−) dYt
where
dYt = eΓt (dmht − ht e^{−Γt} dΓt) + (mht + ∫_0^t hu d(e^{−Γu})) eΓt dΓt
= eΓt dmht − ht dΓt + Yt dΓt = eΓt dmht + (Yt − ht) dΓt.
Consequently,
dMht = ht dHt + d(XtYt) = (ht − Yt) dHt + (1−Ht−) eΓt dmht + (1−Ht−) (Yt − ht) dΓt
= (1−Ht−)(ht − Yt) d(Ht − Γt) + (1−Ht−) eΓt dmht
= (1−Ht−)(ht −Mht−) dMt + (1−Ht−) eΓt dmht
since 1−Ht− = 1{t≤τ} and thus
(1−Ht−) d(Ht − Γt) = (1−Ht−) dMt.
We conclude that
Mht = mh0 + ∫_0^{t∧τ} eΓu dmhu + ∫_{]0,t∧τ]} (hu − Mhu−) dMu,
which is the desired result.
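As a final consistency check of (33), consider again the degenerate case of a trivial reference filtration F (so that mht = mh0 for all t), Γt = t and the illustrative payoff h(u) = e^{−u}. On the pre-default interval dHu = 0 and dMu = −dΓu = −du, so (33) reduces to the ODE dMht = −(ht − Mht) dt with Mh0 = mh0 = 1/2, whose solution must coincide with the key-lemma value Mht = e^{−t}/2. A crude Euler scheme, sketched under these assumptions, confirms this:

```python
import math

# Check of (33) on {t < tau} for trivial F (m^h_t = m^h_0), Gamma_t = t and
# h(u) = exp(-u).  Then m^h_0 = 1/2 and (33) reduces to the ODE
#   dM^h_t = -(h_t - M^h_t) dt,  M^h_0 = 1/2,
# whose solution must match the key-lemma value M^h_t = e^{-t}/2.
dt, T = 1e-4, 1.0
M = 0.5                               # m^h_0
steps = int(round(T / dt))
for k in range(steps):
    t = k * dt
    M += -(math.exp(-t) - M) * dt     # Euler step for the reduced (33)

closed_form = math.exp(-T) / 2.0
assert abs(M - closed_form) < 1e-3    # Euler error is O(dt)
```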