Introduction to Computational Mathematics
(AMATH 242/CS 371)
University of Waterloo
Spring 2021
AMATH 242/CS 371
Condition Number
Definition
A problem is well conditioned with respect to the absolute error if small
changes in the input result in small changes in the output.
A problem is ill conditioned with respect to the absolute error if small
changes in the input result in large changes in the output.
Condition number with respect to the absolute error is defined as:
κA = ‖∆~z‖ / ‖∆~x‖, where ∆~x is the change in the input and ∆~z is the change in the output.
Condition number with respect to the relative error is defined as:
κR = (‖∆~z‖/‖~z‖) / (‖∆~x‖/‖~x‖).
For 0.1 ≤ κA, κR < 10 the problem is well conditioned, and as κA, κR → ∞ the problem is ill conditioned.
Vector norms
Vector norms are a useful tool for providing a measure of the magnitude of
a vector, and are particularly applicable to derivations for the condition
number.
Definition
Suppose V is a vector space over R (e.g. V = Rn). Then ‖ · ‖ is a vector norm on V if and only if:
‖~v‖ ≥ 0, and ‖~v‖ = 0 if and only if ~v = ~0.
‖λ~v‖ = |λ| ‖~v‖ ∀~v ∈ V and λ ∈ R.
‖~u + ~v‖ ≤ ‖~u‖ + ‖~v‖ ∀~u, ~v ∈ V (triangle inequality).
Vector norms
There are three standard vector norms, known as the 2-norm, the 1-norm and the ∞-norm.
Definition: The 2-norm over Rn is defined as
‖~x‖2 = ( ∑_{i=1}^n x_i^2 )^{1/2}
Definition: The ∞-norm over Rn is defined as
‖~x‖∞ = max_{1 ≤ i ≤ n} |x_i|
Definition: The 1-norm over Rn is defined as
‖~x‖1 = ∑_{i=1}^n |x_i|
Theorem (Cauchy–Schwarz inequality). Let ‖ · ‖ be a vector norm on a vector space V induced by an inner product. Then
|~x · ~y| ≤ ‖~x‖ ‖~y‖
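The three norms above are straightforward to compute directly; a minimal sketch (the function names are my own, not from the notes):

```python
import math

def norm_1(x):
    # 1-norm: sum of absolute values of the components
    return sum(abs(xi) for xi in x)

def norm_2(x):
    # 2-norm: square root of the sum of squares
    return math.sqrt(sum(xi * xi for xi in x))

def norm_inf(x):
    # infinity-norm: largest absolute component
    return max(abs(xi) for xi in x)

v = [3.0, -4.0, 0.0]
print(norm_1(v), norm_2(v), norm_inf(v))  # 7.0 5.0 4.0
```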
Condition Number (continued), Stability of a Numerical
Algorithm
• Consider the problem y = x/(1 − x). Is this a well conditioned problem?
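One standard way to answer (taking the small-perturbation limit of the relative condition number defined earlier, a step the slide leaves to the reader):

```latex
\kappa_R = \lim_{\Delta x \to 0} \frac{|\Delta y|/|y|}{|\Delta x|/|x|}
         = \left| \frac{x\,y'(x)}{y(x)} \right|,
\qquad
y(x) = \frac{x}{1-x},\quad y'(x) = \frac{1}{(1-x)^2}
\;\Longrightarrow\;
\kappa_R = \left| x \cdot \frac{1}{(1-x)^2} \cdot \frac{1-x}{x} \right|
         = \frac{1}{|1-x|}.
```

So the problem is well conditioned away from x = 1 but becomes ill conditioned as x → 1.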
Determining the stability of an algorithm is important because, even for a well-conditioned problem, some algorithms may be numerically unstable, i.e. they may produce large errors, while other algorithms are stable.
Stability of a Numerical Algorithm
If an algorithm propagates the error and produces large errors, it is called
an unstable algorithm.
If E0 > 0 denotes an error introduced at some step in the calculations and En represents the magnitude of the error after n subsequent operations:
If En ≈ C n E0 (C a constant), then the growth of error is linear.
If En ≈ C^n E0 for some C > 1, then the growth of error is called exponential.
Stability, Representing numbers on a computer
Linear growth of error is usually unavoidable, and when C and E0 are small the results are generally acceptable. Therefore an algorithm with linear growth of error is stable, whereas an algorithm exhibiting exponential error growth is unstable.
• For any constants c1 and c2, pn = c1(1/3)^n + c2 3^n is a solution to the recursive equation
pn = (10/3) pn−1 − pn−2, n = 2, 3, · · ·
Investigate the stability of this procedure if p0 = 1, p1 = 1/3, and 5-digit rounding arithmetic is used to compute the terms of the sequence.
What about pn = 2pn−1 − pn−2 with pn = c1 + c2 n as the solution; is this a stable procedure? (p0 = 1 and p1 = 1/3)
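The first recursion can be simulated directly. The sketch below imitates 5-digit rounding arithmetic by rounding every intermediate result to 5 significant figures (the rounding helper `fl` is my own device, not part of the notes):

```python
def fl(x):
    # round to 5 significant decimal digits, imitating 5-digit arithmetic
    return float(f"{x:.5g}")

p = [fl(1.0), fl(1.0 / 3.0)]   # p0 = 1, p1 = 1/3 rounded to 5 digits
for n in range(2, 13):
    # pn = (10/3) pn-1 - pn-2, rounding after each operation
    p.append(fl(fl(fl(10.0 / 3.0) * p[n - 1]) - p[n - 2]))

exact = [(1.0 / 3.0) ** n for n in range(13)]  # with these p0, p1 we have c2 = 0
for n in (4, 8, 12):
    print(n, p[n], exact[n])
```

The computed values are soon dominated by the parasitic 3^n mode: the tiny rounding error in p1 is multiplied by roughly 3 at each step, so the error grows exponentially even though the exact solution decays. The second recursion only grows errors linearly, so by the criterion above it is stable.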
Chapter 2: Root Finding
The growth of a population can often be modeled over short periods of
time by assuming that the population grows continuously with time at a
rate proportional to the number present at that time. Suppose that N(t)
denotes the number in the population at time t and λ denotes the
constant birth rate of the population. Then the population satisfies the differential equation
dN(t)/dt = λN(t),
whose solution is N(t) = N0 e^{λt}. If immigration is permitted at a constant rate ν, then the differential equation becomes
dN(t)/dt = λN(t) + ν,
with
N(t) = N0 e^{λt} + (ν/λ)(e^{λt} − 1)
as its solution.
Root Finding
If N0 is given and we also know ν and N(1), then finding the birth rate of this population is not easy: we must solve
N(1) = N0 e^{λ} + (ν/λ)(e^{λ} − 1).
It is not possible to solve explicitly for λ in this equation, but numerical methods can help us find the solution, i.e. a root of the given equation.
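For concreteness, here is a sketch that recovers λ from the equation above by repeatedly halving a sign-changing interval (the bisection idea developed later in these notes). The numbers N0 = 1,000,000, ν = 435,000, N(1) = 1,564,000 are illustrative values of my own, not taken from the slides:

```python
import math

N0, nu, N1 = 1_000_000.0, 435_000.0, 1_564_000.0

def f(lam):
    # residual of N(1) = N0 e^lam + (nu/lam)(e^lam - 1)
    return N0 * math.exp(lam) + nu / lam * (math.exp(lam) - 1.0) - N1

a, b = 0.05, 0.15          # f(a) < 0 < f(b), so a root lies in between
for _ in range(60):
    m = 0.5 * (a + b)
    if f(a) * f(m) <= 0:
        b = m              # root lies in [a, m]
    else:
        a = m              # root lies in [m, b]
lam = 0.5 * (a + b)
print(f"birth rate lambda = {lam:.4f}")
```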
Now, let's define the problem in general:
Given any function f(x), find x∗ such that f(x∗) = 0. The value x∗ is called a root of the equation f(x) = 0.
Even if the existence of a root of a function is guaranteed, there is no method guaranteed to find x∗ in general; rather, there are several techniques that work in many cases.
Root Finding
• Since x∗ may not be defined in a floating point number system, we will
not be able to find x∗ exactly. Therefore, we consider a computational
version of the same problem:
Definition
Given any f(x) and some error tolerance ε > 0, find x∗ such that |f(x∗)| < ε.
Root Finding
Example:
• Find roots of the following functions:
f (x) = x2 − x − 12
f (x) = x3 − x − 1
f (x) = cos(ex − 2)
Can you easily find the roots of the second and third functions without a calculator?
In finding roots of a function numerically, a sequence of iterates (xk) is generated, and we require that the iterates eventually converge to the actual root, i.e. x∗ = lim_{k→∞} xk.
• An important question to answer is: "how do we know where a root of a function may approximately be located?"
Intermediate value theorem
If f(x) is continuous on a closed interval [a, b] and c lies between f(a) and f(b), then ∃x∗ ∈ [a, b] such that f(x∗) = c.
Four Algorithms for root finding: Bisection Method
Thus if we can find [a, b] such that f (a) · f (b) < 0 then by the
Intermediate Value Theorem, [a, b] will contain at least one root x∗ as
long as f (x) is continuous.
Bisection Method
A simple method to find roots
Convergence is guaranteed as long as its initial conditions are met
f (x) should be continuous on [a, b] and f (a) · f (b) < 0
This method works by bisecting the interval and recursively using the
Intermediate Value Theorem
Four Algorithms for root finding: Bisection Method
Theorem
If f (x) is a continuous function on the interval [a0, b0] such that
f (a0) · f (b0) < 0 then the interval [ak , bk ], defined by
ak = ak−1 if f(ak−1) · f((ak−1 + bk−1)/2) ≤ 0, and ak = (ak−1 + bk−1)/2 otherwise;
bk = bk−1 if f(ak−1) · f((ak−1 + bk−1)/2) > 0, and bk = (ak−1 + bk−1)/2 otherwise,
fulfils the property f(ak) · f(bk) < 0, ∀k.
Four Algorithms for root finding: Bisection Method
As k increases, the interval containing the root becomes smaller; therefore convergence of the sequence is guaranteed.
Question: How many steps does the bisection method take to reach the desired tolerance around the root?
[a, b] = [a0, b0] is the initial interval containing the solution. In each iteration the interval containing x∗ is halved, so we can define the stopping criterion as |bn − an| ≤ t. To determine the number of steps required to reach this condition we have:
|bn − an| = (1/2)|bn−1 − an−1| = (1/2^2)|bn−2 − an−2| = · · · = 2^{−n}|b − a| ≤ t
=⇒ n ≥ (1/log 2) · log(|b − a|/t)
This is the number of steps the bisection method needs to converge.
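The iteration and the step-count bound n ≥ log(|b − a|/t)/log 2 can be sketched directly; the function and variable names, and the test problem f(x) = x² − 2, are my own illustrative choices:

```python
import math

def bisect(f, a, b, t=1e-8):
    # assumes f is continuous on [a, b] with f(a) * f(b) < 0
    assert f(a) * f(b) < 0
    while b - a > t:
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:
            b = m          # root lies in [a, m]
        else:
            a = m          # root lies in [m, b]
    return 0.5 * (a + b)

f = lambda x: x * x - 2.0                # root x* = sqrt(2)
a, b, t = 1.0, 2.0, 1e-8
n_bound = math.ceil(math.log((b - a) / t) / math.log(2))
root = bisect(f, a, b, t)
print(root, n_bound)                     # root near 1.41421356; 27 steps suffice
```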
Four Algorithms for root finding: Bisection Method
• Determine the number of iterations necessary to solve f(x) = x^3 + 4x^2 − 10 = 0 with accuracy 10^{−3} using a = 1 and b = 2.
• Let f(x) = (x + 2)(x + 1)x(x − 1)^3(x − 2). To which zero of f(x) does the bisection method converge when applied on [−3, 2.5]?
Four Algorithms for root finding: Fixed Point Iteration
A fixed point for a function is a number at which the value of the function
does not change when the function is applied.
Definition
x∗ is a fixed point of g(x) if g(x∗) = x∗ i.e. if x∗ is mapped to itself under
g .
We can define the root-finding problem in another way. Consider the real-valued function g defined by g(x) = x − f(x), so that f(x) = x − g(x). We observe that a fixed point of g(x) is a root of f(x).
The fixed-point iteration method is convergent if g(x) satisfies certain properties.
g(x) = x − f(x) is not the only way to define g(x). In general we can write g(x) = x − H(f(x)), as long as we choose H such that H(0) = 0.
Fixed Point Iteration
It is not always straightforward to find the fixed point of a function explicitly. So, we start with an initial approximation x0 and generate the sequence {xn}∞n=0 by letting xn = g(xn−1). If this sequence converges to x∗ and g is continuous, then
x∗ = lim_{n→∞} xn = lim_{n→∞} g(xn−1) = g(x∗).
Question: Does g have any effect on convergence of {xn}∞n=0?
• The equation x^3 + 4x^2 − 10 = 0 has a unique root in [1, 2]. There are many ways to change the equation to the fixed-point form x = g(x):
(a) g1(x) = x − x^3 − 4x^2 + 10
(b) g2(x) = (10/x − 4x)^{1/2}
(c) g3(x) = (1/2)(10 − x^3)^{1/2}
(d) g4(x) = (10/(x + 4))^{1/2}
How do these functions affect the convergence of fixed-point iteration if x0 = 1.5?
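As a sketch of the iteration itself, here is choice (d), g4(x) = (10/(x + 4))^{1/2}, started from x0 = 1.5; this particular choice turns out to converge quickly, which the reader can contrast with the other choices:

```python
def g4(x):
    # fixed-point form (d): x = (10 / (x + 4)) ** (1/2)
    return (10.0 / (x + 4.0)) ** 0.5

x = 1.5                     # initial approximation x0
for n in range(30):
    x = g4(x)               # x_n = g4(x_{n-1})
print(x)                    # converges to the root of x^3 + 4x^2 - 10 in [1, 2]
```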
Four Algorithms for root finding: Newton’s Method
• Find a root of f(x) = x^4 − x − 10 on [0, 2] by fixed-point iteration.
Newton's (or the Newton–Raphson) method is one of the most powerful and well-known numerical methods for solving a root-finding problem. Since extra information, f′(x), is used, we can get more accurate solutions.
If f(x) is a continuous and twice-differentiable function on [a, b], then we can use a Taylor series expansion around x0 to derive Newton's method:
f(x) = f(x0) + f′(x0)(x − x0) + (f″(ξ(x))/2!)(x − x0)^2 =⇒
f(x∗) = 0 = f(x0) + f′(x0)(x∗ − x0) + (f″(ξ(x∗))/2!)(x∗ − x0)^2
Four Algorithms for root finding: Newton’s Method
Assuming that |x∗ − x0| is small, the last term is negligible. So,
0 ≈ f(x0) + f′(x0)(x∗ − x0) =⇒ x∗ ≈ x0 − f(x0)/f′(x0) ≡ x1
This sets the stage for Newton's method, which starts with an initial approximation x0 and generates the sequence
xn+1 = xn − f(xn)/f′(xn)
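The iteration takes only a few lines; here it is applied to f(x) = x^2 − 2 with f′(x) = 2x (my own illustrative example), converging to √2:

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    # Newton's method: x_{n+1} = x_n - f(x_n) / f'(x_n)
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - fx / df(x)   # assumes df(x) != 0 at the iterates
    return x

root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
print(root)                  # close to sqrt(2) = 1.41421356...
```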
Four Algorithms for root finding: Newton’s Method
The Taylor series derivation of Newton's method points out the importance of an accurate initial approximation. The crucial assumption is that the term involving (x∗ − x0)^2 is small enough to be neglected. This will clearly be false unless x0 is a good approximation to x∗. If x0 is not sufficiently close to the actual root, there is little reason to expect that Newton's method will converge to the root. However, in some instances even poor initial approximations produce convergence. Another problem with Newton's method is that if f′(xn) = 0, the next iterate is undefined.
Four Algorithms for root finding: Newton’s Method
• Use Newton's method to approximate the value of x that produces the point on the graph of y = x^2 that is closest to (1, 0).
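One way to set this up (the reduction below is mine; the slide leaves it to the reader): the squared distance from (x, x^2) to (1, 0) is d(x) = (x − 1)^2 + x^4, so the minimizer satisfies d′(x) = 4x^3 + 2x − 2 = 0, i.e. 2x^3 + x − 1 = 0, and Newton's method applies with derivative 6x^2 + 1:

```python
x = 1.0                          # initial guess
for _ in range(50):
    g = 2.0 * x**3 + x - 1.0     # stationarity condition for the closest point
    dg = 6.0 * x**2 + 1.0        # dg is never 0, so the iteration is well defined
    if abs(g) < 1e-14:
        break
    x = x - g / dg
print(x)                         # minimizing x is roughly 0.59
```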
