
Name:
MATH 21100: Basics of Numerical Analysis — Practice Final Exam
2021
The exam consists of five problems of different lengths, and is closed book, notes, computer, and phone. You are allowed a calculator. Note that not all questions are worth the same number of points. Be careful with your time; don't linger too long on problems that give you trouble. None of the questions require excessive calculation, so be careful about unnecessarily long calculations.
Newton's Method: $p_n = p_{n-1} - \dfrac{f(p_{n-1})}{f'(p_{n-1})}$.
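As a reminder of how this iteration runs in practice, here is a minimal sketch in Python (my own illustration, not part of the exam; the root-finding problem $f(x) = x^2 - 2$ is an arbitrary choice):

```python
# Newton's method: p_n = p_{n-1} - f(p_{n-1}) / f'(p_{n-1}).
# Example (arbitrary): approximate sqrt(2) as the root of f(x) = x^2 - 2.
f = lambda x: x * x - 2.0
fprime = lambda x: 2.0 * x

p = 1.0                        # initial guess p_0
for _ in range(6):
    p = p - f(p) / fprime(p)   # one Newton step
print(p)                       # -> 1.414213562373095, close to sqrt(2)
```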
Taylor Expansion: for $f \in C^{n+1}[a, b]$, $x_0 \in (a, b)$, then for some $\xi$ between $x_0$ and $x$ we can write
$$f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(x_0)}{k!}(x - x_0)^k + \frac{f^{(n+1)}(\xi)}{(n+1)!}(x - x_0)^{n+1}.$$
Truncation Error: If $\omega_{i+1} = \omega_i + h\phi(t_i, \omega_i)$ is an algorithm approximating an IVP solution $y(t)$ with $y_i = y(t_i)$, then the local truncation error is defined as
$$\tau_{i+1}(h) = \frac{y_{i+1} - y_i - h\phi(t_i, y_i)}{h}.$$
Taylor method of order $n$: $\omega_0 = y_0$, $\omega_{i+1} = \omega_i + hT^{(n)}(t_i, \omega_i)$, where
$$T^{(n)}(t_i, \omega_i) = \sum_{k=1}^{n} \frac{h^{k-1}}{k!}\, f^{(k-1)}(t_i, \omega_i),$$
and it has local truncation error $O(h^n)$.
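For concreteness, here is a minimal sketch of the order-2 case (my own illustration, not part of the exam; the sample problem $y' = y$, $y(0) = 1$ is an arbitrary choice, with $f^{(1)} = f_t + f_y \cdot f = y$ worked out by hand for it):

```python
import math

# Taylor method of order 2 for y' = f(t, y) = y, y(0) = 1 (exact solution: e^t).
# For this sample problem, f^{(1)} = f_t + f_y * f = y, computed by hand.
def taylor2(h, t_end=1.0):
    w = 1.0                          # omega_0 = y_0
    for _ in range(round(t_end / h)):
        T2 = w + (h / 2.0) * w       # T^{(2)} = f + (h/2) f^{(1)}
        w = w + h * T2               # omega_{i+1} = omega_i + h T^{(2)}
    return w

for h in [0.1, 0.01, 0.001]:
    print(h, abs(taylor2(h) - math.e))   # error shrinks ~100x per 10x smaller h
```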
Problem 1. Gaussian Elimination / Determinants (15 Points)
Consider the matrix
$$A = \begin{pmatrix} 1 & 2 & -1 & 0 \\ 0 & 1 & -1 & -1 \\ 1 & 2 & 1 & 0 \\ 0 & 0 & 1 & 1 \end{pmatrix}.$$
(a) Find the LU decomposition; in other words, find a lower triangular matrix $L$ and an upper triangular matrix $U$ such that $A = LU$. Notice that no pivoting is necessary here.
$$E_1 A = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ -1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} A = \begin{pmatrix} 1 & 2 & -1 & 0 \\ 0 & 1 & -1 & -1 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 1 & 1 \end{pmatrix},$$
$$E_2 E_1 A = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & -1/2 & 1 \end{pmatrix} \begin{pmatrix} 1 & 2 & -1 & 0 \\ 0 & 1 & -1 & -1 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 1 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 2 & -1 & 0 \\ 0 & 1 & -1 & -1 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
Hence
$$L = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 0 & 0 & 1/2 & 1 \end{pmatrix}, \qquad U = \begin{pmatrix} 1 & 2 & -1 & 0 \\ 0 & 1 & -1 & -1 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
Note that we can easily check our answer by verifying that $LU = A$.
(b) Find det(A).
$$\det(A) = \det(L)\det(U) = (1 \cdot 1 \cdot 1 \cdot 1) \cdot (1 \cdot 1 \cdot 2 \cdot 1) = 2,$$
since the determinant of an upper or lower triangular matrix is the product of the diagonal elements.
(c) Find $\det(A^3)$.
$$\det(A^3) = (\det A)^3 = 8.$$
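As a quick sanity check outside the exam setting, the factorization and determinants can be verified numerically; this is a sketch assuming NumPy is available:

```python
import numpy as np

# Verify the factors found above: LU = A, det(A) = 2, det(A^3) = 8.
A = np.array([[1., 2., -1., 0.],
              [0., 1., -1., -1.],
              [1., 2., 1., 0.],
              [0., 0., 1., 1.]])
L = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 0., 0.5, 1.]])
U = np.array([[1., 2., -1., 0.],
              [0., 1., -1., -1.],
              [0., 0., 2., 0.],
              [0., 0., 0., 1.]])

assert np.allclose(L @ U, A)
assert np.isclose(np.linalg.det(A), 2.0)
assert np.isclose(np.linalg.det(np.linalg.matrix_power(A, 3)), 8.0)
print("all checks passed")
```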
Problem 2. Norms (15 Points)
Suppose we have an n × n matrix A such that ‖A‖ < 1/2 for some induced norm ‖ · ‖. Show
that for λ > 1/2, A − λI is invertible, where I is the n × n identity matrix. Show your answer
rigorously.
Hint: You can suppose $A - \lambda I$ is not invertible, and then derive a contradiction.
Suppose $A - \lambda I$ is not invertible. Then its kernel contains a nonzero vector; normalizing, there exists $x$ such that $\|x\| = 1$ and $Ax = \lambda x$. Hence $\|Ax\| = \lambda \|x\| = \lambda$. Now since $\|\cdot\|$ is an induced norm,
$$\|A\| := \max_{y : \|y\| = 1} \|Ay\| \ge \|Ax\| = \lambda > 1/2,$$
which contradicts $\|A\| < 1/2$. Hence $A - \lambda I$ must be invertible.
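A small numerical illustration of this fact (my own sketch, not part of the exam; the random matrix and the values of $\lambda$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
A *= 0.4 / np.linalg.norm(A, 2)      # rescale so ||A||_2 = 0.4 < 1/2

for lam in [0.6, 1.0, 5.0]:          # any lambda > 1/2
    M = A - lam * np.eye(n)
    sigma_min = np.linalg.svd(M, compute_uv=False).min()
    # sigma_min > 0 means M is invertible; indeed sigma_min >= lam - ||A||_2 > 0.
    print(lam, sigma_min)
```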
Problem 3. Numeric Integration (25 Points)
Consider the trapezoidal rule
$$\int_x^{x+h} f(s)\,ds = \frac{h}{2}\left[f(x) + f(x+h)\right] - \frac{h^3}{12} f''(\xi),$$
for some $\xi \in [x, x+h]$. Suppose we want to use the trapezoidal rule to approximate $\int_0^1 f(x)\,dx$. We let $h = \frac{1}{N}$, $x_j = jh$, and write
$$T(h) = \frac{h}{2} \sum_{j=0}^{N-1} \left[f(x_j) + f(x_{j+1})\right].$$
(a) Find the best $n$ such that $T(h) - \int_0^1 f(x)\,dx = O(h^n)$ if $f$ is smooth. Show your work.
$$T(h) - \int_0^1 f(x)\,dx = \sum_{j=0}^{N-1} \left( \frac{h}{2}\left[f(x_j) + f(x_{j+1})\right] - \int_{x_j}^{x_{j+1}} f(s)\,ds \right) = \sum_{j=0}^{N-1} \frac{h^3}{12} f''(\xi_j) = O(h^2),$$
since the sum has $N = 1/h$ terms, each of size $O(h^3)$. Hence we have an error of $O(h^2)$, so $n = 2$.
(b) Suppose $f(x) = \cos(x)$ if $x > 1/2$ and $f(x) = 0$ otherwise. Explain why $n$ might be smaller than the $n$ you found in part (a).
Here $f$ is not continuous at $x = 1/2$, so we don't have the guaranteed error bound above, as it required a continuous second derivative. If we guaranteed a grid point at $x = 1/2$, however, this would not be a problem.
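To see the loss of order empirically, here is a short experiment (my own sketch, assuming NumPy; the odd values of $N$ are chosen so that no grid point lands on $x = 1/2$). The composite trapezoidal rule converges at $O(h^2)$ for the smooth integrand $\cos(x)$ but only at roughly $O(h)$ for the discontinuous $f$ above:

```python
import numpy as np

def trap(f, N):
    """Composite trapezoidal rule T(h) on [0, 1] with h = 1/N."""
    x = np.linspace(0.0, 1.0, N + 1)
    y = f(x)
    return (y[0] + 2.0 * y[1:-1].sum() + y[-1]) / (2.0 * N)

smooth = np.cos                                        # exact integral: sin(1)
jumpy = lambda x: np.where(x > 0.5, np.cos(x), 0.0)    # exact: sin(1) - sin(1/2)

for N in [11, 101, 1001]:
    e_smooth = abs(trap(smooth, N) - np.sin(1.0))
    e_jumpy = abs(trap(jumpy, N) - (np.sin(1.0) - np.sin(0.5)))
    print(f"N={N:5d}  smooth: {e_smooth:.2e}  discontinuous: {e_jumpy:.2e}")
```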
(c) Suppose we have a smooth function $f$, and let
$$M(h) = \frac{h}{2} \sum_{j=0}^{N-1} \left[ f(x_j) \oplus f(x_j \oplus h) \right],$$
$$\delta := M(h) - T(h),$$
$$E(h) = \left| \int_0^1 f(s)\,ds - M(h) \right|,$$
where $\oplus$ denotes the floating-point version of the corresponding operation.
$M(h)$ is thus the computer-generated approximation of the trapezoidal rule, and $E(h)$ is the total error from implementing the trapezoidal rule on a computer. Find $\alpha$ such that if $h = O(\delta^\alpha)$, we minimize the possible error while not doing unnecessary work. In other words, find the smallest $\alpha > 0$ that still minimizes $E(h)$ asymptotically.
The total error is $E(h) = O(h^2 + |\delta|)$, so taking $h \sim |\delta|^{1/2}$, i.e. $\alpha = 1/2$, brings the error down to $O(\delta)$. Any smaller $h$ won't improve the error estimate, since the $O(\delta)$ roundoff term then dominates, while any larger $h$ would not guarantee as small an error.
Problem 4. IVP (25 Points)
Suppose we have the IVP $y' = f(y)$, $y(0) = 0$. Consider the numeric scheme of the form
$$\omega_0 = 0, \qquad \omega_{i+1} = \omega_i + h\phi(\omega_i).$$
(a) Find φ such that this is the second order Taylor Method.
$$f^{(1)}(y(t)) = \frac{d}{dt}\left[f(y(t))\right] = f(y(t))\,f'(y(t)).$$
Hence
$$\phi(\omega) = T^{(2)}(\omega) = f(\omega) + \frac{h}{2} f^{(1)}(\omega) = f(\omega)\left(1 + \frac{h}{2} f'(\omega)\right).$$
(b) Now consider $\psi(\omega) = f(\omega + \alpha f(\omega))$, and the numeric scheme
$$\omega_0 = 0, \qquad \omega_{i+1} = \omega_i + h\psi(\omega_i).$$
Find α such that the numeric scheme coming from this ψ is also order 2. Show that this
scheme has local truncation error order 2 to verify your answer.
Here we need $\psi(\omega) - T^{(2)}(\omega) = O(h^2)$. Now for small $\alpha$, Taylor expanding gives
$$\psi(\omega) = f(\omega + \alpha f(\omega)) = f(\omega) + \alpha f(\omega) f'(\omega) + O(\alpha^2).$$
To match this scheme with the Taylor Method, we pick $\alpha = \frac{h}{2}$, so that the $O(1)$ and $O(h)$ terms in $\psi(\omega)$ and $T^{(2)}(\omega)$ match; then $\psi(\omega) - T^{(2)}(\omega) = O(h^2)$, so this scheme has the same $O(h^2)$ local truncation error as the second order Taylor method.
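As a concrete check (my own sketch, not part of the exam), the test problem $y' = \cos(y)$, $y(0) = 0$, whose exact solution is the Gudermannian function, shows both schemes converging at second order:

```python
import numpy as np

f = np.cos                                            # y' = cos(y), y(0) = 0
fp = lambda y: -np.sin(y)                             # f'
exact = lambda t: 2.0 * np.arctan(np.tanh(t / 2.0))   # Gudermannian function

def taylor2(N, t_end=1.0):
    h, w = t_end / N, 0.0
    for _ in range(N):
        w += h * f(w) * (1.0 + (h / 2.0) * fp(w))     # phi from part (a)
    return w

def modified_euler(N, t_end=1.0):
    h, w = t_end / N, 0.0
    for _ in range(N):
        w += h * f(w + (h / 2.0) * f(w))              # psi with alpha = h/2
    return w

y1 = exact(1.0)
for N in [10, 100, 1000]:
    print(N, abs(taylor2(N) - y1), abs(modified_euler(N) - y1))
# Both errors drop ~100x per 10x more steps, i.e. second order globally.
```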
Problem 5. Jacobi Iterative Method (20 Points)
Suppose we have the matrix
$$A = \begin{pmatrix} d & 1 & 0 & 0 & 0 \\ 1 & d & 0 & 0 & 0 \\ 0 & 0 & d & 2 & 0 \\ 0 & 0 & 2 & d & 0 \\ 0 & 0 & 0 & 0 & d \end{pmatrix}.$$
(a) What are the eigenvalues and eigenvectors of this matrix?
Since this matrix is block diagonal, we only need to find the eigenvalues of the separate blocks. For the first $2 \times 2$ block, we see the eigenvalues are $d \pm 1$ with eigenvectors
$$v_\pm = (1, \pm 1, 0, 0, 0)^T.$$
Likewise for the second block we have eigenvalues $d \pm 2$ and
$$\tilde{v}_\pm = (0, 0, 1, \pm 1, 0)^T.$$
The last eigenvalue is $d$ with eigenvector $(0, 0, 0, 0, 1)^T$.
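These eigenvalues are easy to confirm numerically (a sketch assuming NumPy; $d = 3$ is an arbitrary sample value):

```python
import numpy as np

d = 3.0  # arbitrary sample value
A = np.array([[d, 1, 0, 0, 0],
              [1, d, 0, 0, 0],
              [0, 0, d, 2, 0],
              [0, 0, 2, d, 0],
              [0, 0, 0, 0, d]])
print(np.linalg.eigvalsh(A))   # ascending: [d-2, d-1, d, d+1, d+2] = [1, 2, 3, 4, 5]
```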
(b) Let $L$, $U$, and $D$ be lower triangular, upper triangular, and diagonal matrices such that $A = D - L - U$. Denote the initial guess $x_0$, and successive steps as $x_n$. The Jacobi iterative algorithm is then
$$x_n = T x_{n-1} + D^{-1} b$$
for $T = D^{-1}(L + U)$. Find $\rho(T)$, the spectral radius. Use this to find a criterion on $d > 0$ that guarantees the algorithm will converge.
$T = d^{-1}(L + U)$, and by the same computation we see $L + U$ has maximum eigenvalue $2$ in absolute value. Hence $\rho(T) = \frac{2}{d}$. Therefore Jacobi converges when $\rho(T) < 1$, and hence when $d > 2$.
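A quick demonstration (my own sketch, assuming NumPy; the right-hand side $b$ and initial guess are arbitrary) confirms convergence for $d = 3 > 2$:

```python
import numpy as np

d = 3.0                                  # d > 2, so rho(T) = 2/d < 1
A = np.array([[d, 1, 0, 0, 0],
              [1, d, 0, 0, 0],
              [0, 0, d, 2, 0],
              [0, 0, 2, d, 0],
              [0, 0, 0, 0, d]])
b = np.ones(5)                           # arbitrary right-hand side
D = np.diag(np.diag(A))
T = np.linalg.solve(D, D - A)            # T = D^{-1}(L + U), since L + U = D - A
print("rho(T):", max(abs(np.linalg.eigvals(T))))   # 2/d = 2/3 here

x = np.zeros(5)                          # initial guess x_0
c = np.linalg.solve(D, b)                # D^{-1} b
for _ in range(50):
    x = T @ x + c                        # one Jacobi step
print("residual:", np.linalg.norm(A @ x - b))  # tiny: error decays like (2/3)^n
```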