HOMEWORK 1 SOLUTIONS (MATH 170B, WINTER 2022)

DUE DATE: SEE CANVAS
REVISION NUMBER: 1.0
INSTRUCTOR: Prof. Michael Holst

BOOKS:
[1] D. Kincaid and W. Cheney. Numerical Analysis: Mathematics of Scientific Computing. Third edition. Providence, RI: American Mathematical Society, 2017.

MATERIAL COVERED BY HOMEWORK 1: This homework covers mainly material from the lectures in weeks one and two, corresponding roughly to the following sections of [1]: 1.1, 1.2, 3.1, 3.2, 3.3, 3.4, 3.6.

SUBMITTING HOMEWORK ON GRADESCOPE: For non-computer problems, create a PDF file of your work by whatever means you prefer (scanning handwritten work with your phone, or using LaTeX to typeset your mathematics; either is fine), and upload that PDF to Gradescope. For computer problems, create a PDF containing screenshots of both your MATLAB functions (the code you write) and the output they produce, and upload that PDF to Gradescope.

OUR HOMEWORK RULES ALWAYS APPLY: As discussed in detail on the syllabus, you are allowed (and encouraged) to discuss homework problems with other students in our class, but you must write up your own solutions yourself. You are not allowed to post these questions, or their solutions, on homework help websites (such as Chegg.com) or other websites.

[Q1] (Section 1.1/1.2: Review: Taylor's Theorem and Related; Similar to Exercise 1.1.5)
Let f(x) = cos(x).
(1) Derive the Taylor series for f(x) at x = 0. (Use summation notation.)
(2) Write down the Taylor remainder for the series when truncating the series at n terms.
(3) Find the minimum number of terms needed to compute f(1) with error < 10^(-4).
(Hint: This is all in Section 1.1, and in your Calculus book from your first quarter of calculus.)

Solution:
(1) We have f'(x) = -sin(x), f''(x) = -cos(x), f^(3)(x) = sin(x), f^(4)(x) = cos(x), so when we expand f(x) at x = 0 we get

    f(x) = 1 - x^2/2! + x^4/4! - x^6/6! + x^8/8! - ··· = Σ_{k=0}^∞ (-1)^k x^(2k)/(2k)!.
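As a quick numerical sanity check of the series and of the term count found in part (3) below, here is a minimal Python sketch (not part of the assigned MATLAB work; the helper name cos_series is ours):

```python
import math

def cos_series(x, terms):
    """Sum the first `terms` nonzero terms of the Taylor series of cos about 0."""
    return sum((-1)**k * x**(2*k) / math.factorial(2*k) for k in range(terms))

# Degree-7 Taylor polynomial (n = 7, i.e. the first four nonzero terms):
# error at x = 1 is below 1e-4, matching part (3).
print(abs(cos_series(1.0, 4) - math.cos(1.0)))   # ~2.5e-5

# Degree-5 polynomial: error is still above 1e-4, so n = 7 really is needed.
print(abs(cos_series(1.0, 3) - math.cos(1.0)))   # ~1.4e-3
```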
(2) For ξ ∈ [0, h],

    E_n(h) = h^(n+1)/(n+1)! · f^(n+1)(ξ) = { (-1)^(i+1) h^(2i+1)/(2i+1)! · sin(ξ)   when n = 2i,
                                            { (-1)^(i+1) h^(2i+2)/(2i+2)! · cos(ξ)   when n = 2i+1.

(3) We have |E_n(1)| ≤ 1/(n+1)!, and we want 1/(n+1)! < 10^(-4), which is equivalent to (n+1)! > 10^4. The minimum n that satisfies this inequality has n+1 = 8, that is, n = 7.

[Q2] (Section 3.1: The Bisection Method; Same as Exercise 3.1.7)
If the bisection method is used starting with the interval [2, 3], how many steps must be taken to compute a root with absolute accuracy < 10^(-6)? Answer the same question for the relative accuracy.
(Hint: This is a variation of an example in the book.)

Solution: By Theorem 1 on page 79 of [1], after n steps of the bisection method we have

    |r - c_n| ≤ 2^(-(n+1)) (b_0 - a_0) = 2^(-(n+1)) (3 - 2) = 2^(-(n+1)).

Thus, we want 2^(-(n+1)) < 10^(-6). This gives

    -(n+1) < log_2 10^(-6) ≈ -19.9   ⟹   n+1 > 19.9   ⟹   n > 18.9,

so we need 19 steps to get absolute accuracy < 10^(-6). For relative error, since |r| ≥ 2 we have

    |r - c_n| / |r| ≤ |r - c_n| / 2 ≤ 2^(-(n+2)).

Solving for n as above, we need 18 steps to get relative accuracy < 10^(-6).

[Q3] (Section 3.2: Newton's Method Error Analysis; Related to Theorem 1)
Let f : R → R be analytic. The key step in proving that Newton's method is (locally) quadratically convergent to the root r of f(x) (i.e., f(r) = 0) is to show the error e_n = x_n - r at step n satisfies:

    e_{n+1} = (1/2) f''(ξ_n)/f'(x_n) e_n^2 ≈ (1/2) f''(x_n)/f'(x_n) e_n^2 = C e_n^2.   (1.1)

Give a very short (1-2 line) derivation of (1.1) by using the Taylor expansion:

    0 = f(r) = f(x_n - e_n) = f(x_n) - e_n f'(x_n) + (1/2) e_n^2 f''(ξ_n).

(Hint: This very simple but really important idea underlying Newton's method was derived carefully in class, and in the book.)

Solution:

    e_{n+1} = x_{n+1} - r = x_n - f(x_n)/f'(x_n) - r = e_n - f(x_n)/f'(x_n) = (e_n f'(x_n) - f(x_n)) / f'(x_n).

From the Taylor expansion above, we have e_n f'(x_n) - f(x_n) = (1/2) f''(ξ_n) e_n^2. Combining these two equations yields

    e_{n+1} = (1/2) f''(ξ_n)/f'(x_n) e_n^2.
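The error relation (1.1) is easy to observe numerically. As an illustration, here is a short Python sketch (the function f(x) = x^2 - 2 is our example choice, not from the assignment); the ratio e_{n+1}/e_n^2 settles near the constant C = f''(r)/(2 f'(r)):

```python
import math

f  = lambda x: x**2 - 2      # illustrative example; root r = sqrt(2)
df = lambda x: 2*x

r, x = math.sqrt(2), 2.0
ratios = []
for _ in range(4):
    e = x - r                # current error e_n
    x = x - f(x) / df(x)     # one Newton step
    ratios.append((x - r) / e**2)

# The ratios approach C = f''(r)/(2 f'(r)) = 1/(2*sqrt(2)) ≈ 0.3536.
print(ratios)
```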
[Q4] (Section 3.2: Newton's Method for Systems; Related to Exercise 3.2.14)
(a) Starting with the multidimensional Taylor expansion, derive Newton's method for a general system: F(x) = 0, where F : R^n → R^n.
(b) Starting with x^(0) = (0, 1)^T, perform two iterations of Newton's method (by hand) as a step toward solving the system F(x) = 0, where F : R^2 → R^2 has component functions defined as:

    F_1(x_1, x_2) = 4 x_1^2 - x_2^2,
    F_2(x_1, x_2) = 4 x_1 x_2^2 - x_1 - 1.

(Hint: I have basically done all parts of this problem in class, and it is also in the book.)

Solution:
(a) Starting from an approximate root x, we add a small correction h ∈ R^n to get a better solution. Using an order-1 Taylor expansion,

    0 = F(x + h) ≈ F(x) + F'(x) h,

where F'(x) ∈ R^(n×n) is the Jacobian matrix of F. Solving for h gives h = -F'(x)^(-1) F(x), so Newton's method is

    x^(k+1) = x^(k) - F'(x^(k))^(-1) F(x^(k)).

(b) The Jacobian matrix of F at x = (x_1, x_2) is

    F'(x) = [ ∂F_1/∂x_1  ∂F_1/∂x_2 ]   [ 8 x_1         -2 x_2    ]
            [ ∂F_2/∂x_1  ∂F_2/∂x_2 ] = [ 4 x_2^2 - 1   8 x_1 x_2 ].

For step 1, we have

    h^(0) = -F'(x^(0))^(-1) F(x^(0)) = -[ 0  -2 ]^(-1) [ -1 ]   [  1/3 ]
                                        [ 3   0 ]      [ -1 ] = [ -1/2 ],

and therefore x^(1) = x^(0) + h^(0) = (1/3, 1/2)^T. For step 2, we have

    h^(1) = -F'(x^(1))^(-1) F(x^(1)) = -[ 8/3  -1  ]^(-1) [ 7/36 ]   [ 5/24 ]
                                        [ 0    4/3 ]      [ -1   ] = [ 3/4  ],

and therefore x^(2) = x^(1) + h^(1) = (13/24, 5/4)^T.

[Q5] (Section 3.2: Newton's Method for Systems; Related to Computer Exercise 3.2.14)
(a) Create a simple MATLAB routine that implements Newton's method for F(x) = 0, where F : R^n → R^n. Your routine will take x0, F, F', ε, and M as input, where M is the max number of iterations allowed, and ε is the tolerance for |F(x^(k))| < ε, where x^(k) is the current Newton iterate at step k of your algorithm.
(b) Apply your routine to the problem in [Q4] for a few iterations to confirm that your hand calculations were correct. (We will also make use of your Newton routine in later homeworks.)
(Hint: I have basically done all parts of this problem in class, and it is also in the book.)
Solution:
(a)

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Routine: go_newton.m
% Purpose: Illustrate how to call Newton Routine
%
% Author:  Michael Holst
%          Mon Jan 24 11:23:20 PST 2022
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% Set up the calling parameters
rtol = 1.0e-9;
dtol = 1.0e-9;
x0 = [0; 1];

% Define the functions (F and DF) to pass to Newton
F  = @my_f;
DF = @my_df;

% Call Newton with itmax=2 to compare with [Q4]
fprintf('\n');
fprintf('*** First Compare to [Q4] ***\n');
itmax = 2;
[xfinal] = newton(F,DF,x0,rtol,dtol,itmax);
fprintf('Result from Newton (itmax=2): ');
disp(xfinal');
fprintf('Result from Q4: [13/24,5/4]=[0.54167,1.25]\n');

% Call Newton with itmax=100 to let it converge
fprintf('\n');
fprintf('*** Do again letting Newton converge ***\n');
itmax = 100;
[xfinal] = newton(F,DF,x0,rtol,dtol,itmax);
fprintf('Result from Newton (itmax=100): ');
disp(xfinal');

function [oF] = my_f(x)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Routine: my_f.m
% Purpose: Define components of F(x)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
oF = [4*x(1)^2 - x(2)^2; 4*x(1)*x(2)^2 - x(1) - 1];
end

function [oDF] = my_df(x)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Routine: my_df.m
% Purpose: Define components of DF(x)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
oDF = [8*x(1), -2*x(2); 4*x(2)^2 - 1, 8*x(1)*x(2)];
end

function [xx] = newton(F,DF,x0,rtol,dtol,itmax)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Routine: newton.m
% Purpose: Newton's Method for F(x)=0
%
% Input:
%   F     = function handle: "function [b] = F(x)"
%           --> provides [b] containing residual F(x)
%   DF    = function handle: "function [A] = DF(x)"
%           --> provides [A] containing jacobian DF(x)
%   x0    = vector containing initial guess
%   rtol  = tolerance for residual
%   dtol  = tolerance for Newton update
%   itmax = the max number of iterations allowed
%
% Output:
%   xx    = final iterate from Newton's method
%
% Author:  Michael Holst
%          Mon Jan 24 11:23:20 PST 2022
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% Check input parameters
fprintf('Checking Input parameters..');
[n,one] = size(x0);
gchk = F(x0);
if ((one ~= 1) || (rtol <= 0) || (dtol <= 0))
    fprintf('.incorrect or inconsistent. Exiting...\n');
    return;
else
    fprintf('.OK. Continuing.\n');
end

% Check input functions
fprintf('Checking Input functions..');
[m,one] = size(F(x0));
[p,q]   = size(DF(x0));
if ((one ~= 1) || (m ~= n) || (p ~= n) || (q ~= n))
    fprintf('.incorrect or inconsistent. Exiting...\n');
    return;
else
    fprintf('.OK. Continuing.\n');
end

% Set up for the iteration
x = x0;
resid = norm(F(x),2);
d = zeros(n,1);
diff = dtol + 1;   % ensures we can do at least one step

% Do the Newton iteration
it = 0;
while ((resid > rtol) && (diff > dtol) && (it < itmax))
    it = it + 1;

    b = -F(x);   % Set up RHS for Newton
    A = DF(x);   % Set up Jacobian matrix for Newton
    d = A \ b;   % Solve the Newton linear system
    x = x + d;   % Update solution for next step

    resid = norm(F(x),2);   % Compute norm of residual
    diff  = norm(d,2);      % Compute norm of correction

    fprintf('it=%d resid=%e diff=%e\n',it,resid,diff);
end

% Return the final iterate
xx = x;
end

(b) Comparing the first two iterations in the output below with the results in Q4(b), they agree.

>> go_newton

*** First Compare to [Q4] ***
Checking Input parameters...OK. Continuing.
Checking Input functions...OK. Continuing.
it=1 resid=1.018729e+00 diff=6.009252e-01
it=2 resid=1.884316e+00 diff=7.783976e-01
Result from Newton (itmax=2): 0.5417 1.2500

Result from Q4: [13/24,5/4]=[0.54167,1.25]

*** Do again letting Newton converge ***
Checking Input parameters...OK. Continuing.
Checking Input functions...OK. Continuing.
it=1 resid=1.018729e+00 diff=6.009252e-01
it=2 resid=1.884316e+00 diff=7.783976e-01
it=3 resid=3.344790e-01 diff=2.825019e-01
it=4 resid=2.224655e-02 diff=7.561523e-02
it=5 resid=1.277543e-04 diff=5.771838e-03
it=6 resid=4.355141e-09 diff=3.345802e-05
it=7 resid=0.000000e+00 diff=1.138527e-09
Result from Newton (itmax=100): 0.4491 0.8982

>>

[Q6] (Section 3.4: Fixed-Point Iteration; Related to Exercise 3.4.33)
We wish to find a solution to f(x) = 0, where f(x) = 2x^2 + 6e^(-x) - 4. Determine two different maps g(x) such that:

    g(x) = x   ⟹   f(x) = 0.

(This would allow us to use fixed-point iteration for g(x) = x as a way to solve f(x) = 0.)

Solution: Let g(x) = f(x) + x = 2x^2 + 6e^(-x) + x - 4. Then

    g(x) = x   ⟹   g(x) - x = 0   ⟹   f(x) = 0.

For the second map, let g(x) = -f(x) + x = -2x^2 - 6e^(-x) + x + 4. Then

    g(x) = x   ⟹   x - g(x) = 0   ⟹   f(x) = 0.

[Q7] (Section 3.4: Fixed-Point Iteration; Related to Exercise 3.4.3)
Prove that if f : [a, b] → [a, b] is continuous on [a, b], then f must have a fixed point.
(Hint: This is a one-dimensional version of the Brouwer Fixed Point Theorem, which itself is a finite-dimensional version of the Schauder Fixed Point Theorem. A hint here is to consider g(x) = f(x) - x, and then use the Intermediate Value Theorem on g.)

Solution: Consider g(x) = f(x) - x. Then g is also continuous on [a, b], and since f maps [a, b] into [a, b],

    g(a) = f(a) - a ≥ 0,   g(b) = f(b) - b ≤ 0.

By the Intermediate Value Theorem, there must exist some point x_0 ∈ [a, b] such that g(x_0) = 0, i.e., f(x_0) = x_0.

[Q8] (Section 3.4: Fixed-Point Iteration; Related to Exercise 3.4.18)
Prove that if f : [a, b] → [a, b], if f' is continuous on [a, b], and if |f'(x)| < 1 on [a, b], then f has a unique fixed point in [a, b].
(Hint: This is a one-dimensional version of what is sometimes called Ostrowski's Theorem. A hint is to use a Taylor expansion and then the Contraction Mapping Theorem.)

Solution: Let x, y ∈ [a, b] with y < x.
By Taylor's Theorem, there is ξ ∈ [y, x] such that f(x) = f(y) + f'(ξ)(x - y). Since f' is continuous on the closed interval [a, b] and |f'(x)| < 1 there, |f'| attains its maximum on [a, b], so there exists λ < 1 such that |f'(x)| ≤ λ on [a, b]. Thus,

    |f(x) - f(y)| = |f'(ξ)(x - y)| = |f'(ξ)| |x - y| ≤ λ |x - y|,

so f is a contractive map. By the Contraction Mapping Theorem, f has a unique fixed point in [a, b].
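To see the Contraction Mapping Theorem at work on the function from [Q6], here is a minimal Python sketch (the starting guess 1.0 is our choice, not from the assignment). For the second map g(x) = -f(x) + x, one can check that |g'(x)| < 1 near the root, so the fixed-point iteration converges:

```python
import math

f = lambda x: 2*x**2 + 6*math.exp(-x) - 4
g = lambda x: x - f(x)   # the second map from [Q6]: g(x) = -f(x) + x

x = 1.0                  # starting guess; anything close enough to the root works
for _ in range(50):
    x = g(x)             # fixed-point iteration x_{n+1} = g(x_n)

print(x, f(x))           # x is (numerically) a fixed point of g, hence a root of f
```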