School of Mathematical Sciences
MTH4321, MTH5321, MTH5530
2021
Assignment 2
Due date: Thursday 6 May, 2021, 7pm (submit the electronic copy of your assignment and the
code via Moodle). Late submissions will be subject to late penalties (see the unit guide for full
details).
This assignment contains seven questions for a total of 100 marks (equal to 10% of the final mark for this unit).
The relevant outputs of the programming exercises are to be submitted as part of your electronic submission.
You can either include the outputs as part of your PDF file (if you can edit the PDF document), or include
them in a separate Word document (label the figures and outputs consistently with the question
numbers). In addition, collect all your MATLAB m-files in one zip file and submit this file through Moodle.
Part I: Theoretical Work
1. (10 points) Consider the N × N linear system Au⃗ = f⃗ in the form of
$$
\frac{1}{h^2}
\begin{pmatrix}
 2 & -1 &  0 & \cdots & 0 \\
-1 &  2 & -1 & \cdots & 0 \\
 0 & -1 &  2 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
 0 &  0 &  0 & \cdots & 2
\end{pmatrix}
\begin{pmatrix} u_1 \\ u_2 \\ u_3 \\ \vdots \\ u_N \end{pmatrix}
=
\begin{pmatrix} f_1 \\ f_2 \\ f_3 \\ \vdots \\ f_N \end{pmatrix},
$$
arising from the finite difference discretisation of Poisson's equation −u′′(x) = f(x), x ∈ [0, 1], with
h = 1/(N + 1) and boundary conditions u(0) = u(1) = 0.
a) Split the matrix A as A = D−L−L⊤, where D is diagonal and L is strictly lower triangular. Derive
the iteration matrix Rω and the vector g⃗ in terms of D, L and f⃗ , so one sweep of the weighted
Jacobi method can be written as v⃗(new) = Rω v⃗(old) + g⃗.
b) Verify that the eigenvalues and eigenvectors of Rω take the form
$$
\lambda_k(R_\omega) = 1 - 2\omega \sin^2\!\left(\frac{k\pi}{2(N+1)}\right)
\quad\text{and}\quad
\vec{w}_k(j) = \sin\!\left(\frac{jk\pi}{N+1}\right),
$$
for k = 1, . . . , N.
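Note (illustration only, not part of the required submission): the claimed eigenpairs can be checked numerically for a small N. The Python/NumPy sketch below assumes the standard weighted Jacobi form Rω = I − ωD⁻¹A, which part a) asks you to derive.

```python
import numpy as np

N, omega = 8, 2/3
h = 1.0 / (N + 1)
A = (2*np.eye(N) - np.eye(N, k=-1) - np.eye(N, k=1)) / h**2
D = np.diag(np.diag(A))
R = np.eye(N) - omega * np.linalg.solve(D, A)   # R_omega = I - omega D^{-1} A

j = np.arange(1, N + 1)
for k in range(1, N + 1):
    lam = 1 - 2*omega*np.sin(k*np.pi/(2*(N + 1)))**2
    w = np.sin(j*k*np.pi/(N + 1))
    assert np.allclose(R @ w, lam * w)          # (lam, w_k) is an eigenpair
print("all eigenpairs verified")
```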
c) Show that the eigenvalues can be approximated as
$$
\lambda_k(R_\omega) \approx 1 - \frac{\omega k^2 \pi^2 h^2}{2}
$$
for k ≪ N. Hint: use linearisation (truncated Taylor series expansion) around 0.
d) Work out approximately the number of iterations, m, that is required to reduce the relative error
$$
\frac{\|\vec{e}^{\,(m)}\|}{\|\vec{e}^{\,(0)}\|}
$$
by a factor of 10⁻ᵈ. You should express m as a function of either h or N.
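Note (a sketch of one possible setup, under the assumption ω ∈ (0, 1] so that the k = 1 mode decays slowest; not the required derivation):

```latex
\frac{\|\vec e^{\,(m)}\|}{\|\vec e^{\,(0)}\|} \approx \rho(R_\omega)^m,
\qquad
\rho(R_\omega) \approx 1 - \frac{\omega \pi^2 h^2}{2},
\qquad
\rho(R_\omega)^m \le 10^{-d}
\;\Longrightarrow\;
m \approx \frac{2\, d \ln 10}{\omega \pi^2 h^2} = O(h^{-2}) = O(N^2).
```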
2. (20 points) Consider the N ×N linear system Au⃗ = f⃗ in the previous question.
a) Split the matrix A as A = D−L−L⊤, where D is diagonal and L is strictly lower triangular. Derive
the iteration matrix RG and the vector g⃗ in terms of D, L and f⃗ , so one sweep of Gauss-Seidel can
be written as v⃗(new) = RGv⃗(old) + g⃗.
b) Verify that the eigenvalues and eigenvectors of RG take the form
$$
\lambda_k(R_G) = \cos^2\!\left(\frac{k\pi}{N+1}\right)
\quad\text{and}\quad
\vec{w}_k(j) = \cos^j\!\left(\frac{k\pi}{N+1}\right) \sin\!\left(\frac{jk\pi}{N+1}\right),
$$
for k = 1, . . . , N.
c) Assuming the result of part b), justify that Gauss-Seidel can converge faster (in terms of error bounds)
than weighted Jacobi with ω ∈ (0, 1] on this problem.
Hint: You can assume h is sufficiently small, so the truncated Taylor series expansion around 0 can
be used to approximate the spectral radius.
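Note (orientation only, not the required justification): with the eigenvalues from Questions 1 b) and 2 b), the spectral radii satisfy

```latex
\rho(R_G) = \cos^2\!\left(\frac{\pi}{N+1}\right) \approx 1 - \pi^2 h^2,
\qquad
\rho(R_\omega) = 1 - 2\omega\sin^2\!\left(\frac{\pi h}{2}\right)
             \approx 1 - \frac{\omega \pi^2 h^2}{2}
\quad (\omega \in (0,1]),
```

so ρ(R_G) < ρ(R_ω) for small h; in particular, for ω = 1 one has ρ(R_G) = ρ(R₁)² exactly, i.e. one Gauss-Seidel sweep reduces the asymptotic error bound about as much as two unweighted Jacobi sweeps.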
d) Now consider the successive over-relaxation (SOR) scheme
$$
\vec{v}^{\,(\text{new})} = \big((1-\omega)I + \omega R_G\big)\,\vec{v}^{\,(\text{old})} + \vec{g}_\omega,
$$
for some vector g⃗ω. Derive the eigenvalues of the iteration matrix of the above SOR scheme and
show that SOR converges for 0 < ω < 2.
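Note (a sketch assuming the result of part b)): the SOR iteration matrix is an affine function of R_G, so it shares its eigenvectors, with eigenvalues

```latex
\mu_k = (1-\omega) + \omega\,\lambda_k(R_G)
      = 1 - \omega \sin^2\!\left(\frac{k\pi}{N+1}\right),
\qquad
0 < \omega \sin^2\!\left(\tfrac{k\pi}{N+1}\right) < 2
\;\text{ for } 0 < \omega < 2
\;\Longrightarrow\; |\mu_k| < 1 .
```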
3. (10 points) Given a linear system Ax⃗ = b⃗, where A is a symmetric positive definite matrix. Recall the
A-inner product and the A-norm
$$
\langle \vec{u}, \vec{v} \rangle_A = \vec{u}^{\top} A \vec{v}
\quad\text{and}\quad
\|\vec{u}\|_A = \sqrt{\langle \vec{u}, \vec{u} \rangle_A}.
$$
a) Show that these are acceptable definitions for an inner product and a norm.
b) Given residual r⃗ = b⃗ − Av⃗ and error e⃗ = x⃗ − v⃗, show that ‖r⃗‖ = ‖e⃗‖_{A²}.
c) The error norm ‖e⃗‖ is generally not computable. Are ‖e⃗‖_A and ‖e⃗‖_{A²} computable? Justify your
answer.
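Note (illustration only, not part of the required submission): the identity in part b) can be checked numerically on a small random SPD system, using r⃗ = Ae⃗ and the symmetry of A.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = M @ M.T + 6*np.eye(6)        # symmetric positive definite by construction
x = rng.standard_normal(6)       # exact solution
b = A @ x
v = rng.standard_normal(6)       # some approximation to x
r, e = b - A @ v, x - v
lhs = np.linalg.norm(r)                 # ||r||_2
rhs = np.sqrt(e @ (A @ (A @ e)))        # ||e||_{A^2} = sqrt(e^T A^2 e)
print(abs(lhs - rhs))            # agrees to rounding error
```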
Part II: Computational Work
Note: Use sparse matrices if possible in solving the computational questions. Otherwise, your multigrid
code can be very slow.
4. (10 points) Suppose you have a general square sparse matrix. Implement the weighted Jacobi algorithm
in MATLAB using only primitive commands, i.e. do not appeal to the built-in linear algebra functionality
in MATLAB (for example, “\” and “inv(A)”). You should use the function headers given below.
a) function vnew = wJacobi(A,vold,f,omega)
% Performs one weighted Jacobi iteration on Au=f with weight omega.
% vold is the approximation before the Jacobi iteration, and vnew after.
% Make sure to implement the method in matrix form (this is much
% faster than implementing it per component), and to use
% sparse matrices for all steps.
b) function u = WJSequence(A, u0, f, omega, tol, maxit)
% Usage: u = WJSequence(A, u0, f, omega, tol, maxit)
% Perform a sequence of weighted Jacobi iterations on
% A u = f using the initial estimate u0, until the 2-norm
% of the residual has been reduced by a factor
% of at least tol, or until the number of iterations
% reaches maxit.
You can download VerifyWJ.m to check your implementation. The function wJacobi.m will be used in
later parts of this assignment.
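Note (illustration only, not part of the required submission — the assignment requires MATLAB): the matrix-form update v⃗(new) = v⃗(old) + ωD⁻¹(f⃗ − Av⃗(old)) can be sketched in Python/NumPy, here applied to the 1D model problem with an oscillatory initial error.

```python
import numpy as np
import scipy.sparse as sp

def w_jacobi(A, v_old, f, omega):
    # One weighted Jacobi sweep in matrix form: v_new = v_old + omega D^{-1} r.
    d = A.diagonal()          # diagonal of A as a vector (D = diag(d))
    r = f - A @ v_old         # residual
    return v_old + omega * r / d

# Demo on the 1D model problem with f = 0 and an oscillatory initial error:
N = 63
h = 1.0 / (N + 1)
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(N, N), format="csr") / h**2
f = np.zeros(N)
k = 40                        # an oscillatory mode, damped fast with omega = 2/3
v = np.sin(np.arange(1, N + 1) * k * np.pi / (N + 1))
for _ in range(20):
    v = w_jacobi(A, v, f, 2/3)
print(np.linalg.norm(v))      # the oscillatory error is strongly damped
```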
5. (10 points) Suppose you have a general square sparse matrix. Implement the Gauss-Seidel algorithm in
MATLAB using only primitive commands. You should use the function headers given below.
a) function u = Gauss(A, u0, f)
% Usage: u = Gauss(A, u0, f)
% Perform one Gauss-Seidel iteration on
% A u = f using initial guess u0.
b) function u = GSSequence(A, u0, f, tol, maxit)
% Usage: u = GSSequence(A, u0, f, tol, maxit)
% Perform a sequence of Gauss-Seidel iterations on
% A u = f using the initial estimate u0, until the 2-norm
% of the residual has been reduced by a factor
% of at least tol, or until the number of iterations
% reaches maxit.
You can download VerifyGS.m from the course website to check your implementation. The function
Gauss.m will be used in later parts of this assignment.
6. (10 points) Consider the N × N linear system Au⃗ = f⃗ in Question 1. With the right hand side f⃗ = 0 (so
we can compute the error exactly), carry out the following tasks:
a) With N = 127, compute the solution to this system using your implementations of weighted Jacobi
(with ω = 1 and ω = 2/3) and Gauss-Seidel from Questions 4 and 5, with the initial guess
v⃗(0)(i) = sin(ikπ/(N + 1)). For k = 1, 10, 20, 40, 80, 100, 120, perform 100 iterations of the weighted
Jacobi and Gauss-Seidel methods and compute the 2-norm of the error at each iteration. Compare
the results of the different methods (by plotting the error against iteration number for different k)
and comment on your findings.
b) With N = 63, repeat part (a) with k = 1, 10, 20, 40. For each k, compare the convergence of different
methods using different N . Justify the numerical results.
c) For weighted Jacobi, with N = 15, 31, 63, 127, k = 2 and tol = 10⁻⁶, determine the number
of iterations required to obtain (approximate) solutions. How many iterations do you need to
achieve convergence as a function of N? Compare this with the estimate in Question 1 d).
7. (30 points) You are asked to implement the multigrid method for Au⃗ = f⃗ in MATLAB. In particular,
you are asked to implement the method for the 2D model problem described in the lecture slides
−uxx − uyy = 2[(1− 6x2)y2(1− y2) + (1− 6y2)x2(1− x2)],
and to produce tables similar to Table 4.1 of the textbook – note that the analytical solution is u(x, y) =
(x2 − x4)(y4 − y2).
To generate the A matrix, you may use the following function (provided)
function A=matPoisson(N)
% Builds the problem matrix A for NxN unknowns
% (interior points) on a grid with grid spacing h=1/(N+1).
The lexicographic ordering is used as discussed in class. You also need your wJacobi.m function and
Gauss.m function from previous questions.
Implement the following m-files (Note: make sure to use Matlab’s sparse matrix storage! Type ‘help
sparse’ for an introduction. The ‘spalloc’ and ‘spdiags’ commands may come in handy. Test your program
for small values of N, and use ‘full(A)’ and ‘spy(A)’ to display A.):
a) function f=fPoisson(N)
% Builds the right hand side column vector f for NxN unknowns
% (interior points) on a grid with grid spacing h=1/(N+1).
b) function I2hh=interpolate(N,N2h)
% Builds the interpolation matrix from level 2h to level h.
% There are N2hxN2h interior points on the coarse
% level (level 2h), and NxN interior points on the fine
% level (level h). This means that I2hh is a
% (NxN) x (N2hxN2h) matrix (many more rows than columns).
% I2hh is again a sparse matrix.
(The interpolation matrix is I^h_{2h}. Note that the restriction matrix, I^{2h}_h, is given by I^{2h}_h = 0.25 (I^h_{2h})⊤.)
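Note (illustration only, not part of the required submission): in 2D, bilinear interpolation from the 2h grid to the h grid is the Kronecker product of the 1D linear-interpolation matrix with itself; a Python/SciPy sketch:

```python
import scipy.sparse as sp

def interpolate(N, N2h):
    # 1D linear interpolation: coarse point j sits at fine index 2j+1 and
    # contributes with weight 1 there and 1/2 to its two fine neighbours.
    P = sp.lil_matrix((N, N2h))
    for j in range(N2h):
        P[2*j, j] = 0.5
        P[2*j + 1, j] = 1.0
        P[2*j + 2, j] = 0.5
    P = P.tocsr()
    return sp.kron(P, P).tocsr()    # (N*N) x (N2h*N2h), sparse

I2hh = interpolate(7, 3)
print(I2hh.shape)                   # (49, 9)
```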
c) function vnew=mgVcycle(vold,f,alpha1,alpha2,omega,Nmin,N,smoother)
% This is the V-cycle implementation, the central part of the code.
% You should base your implementation on the recursive implementation.
%
% In this routine, you will first do relaxation on the current level,
% which has NxN interior points, and then recursively call
% the next level, with (N-1)/2 x (N-1)/2 interior points, followed
% by the coarse grid correction and another relaxation.
% > vold is the approximation on the current level before the V-cycle,
% and vnew is the new approximation after the V-cycle.
% > f is the right hand side on the current level (remember, on all
% levels but the finest, this is the residual vector).
% > alpha1 and alpha2 are the number of pre- and post-relaxations.
% > omega is the weight for the weighted Jacobi relaxation.
% > if N <= Nmin you will do a direct solve (use Matlab's v=A\f).
% > smoother=1 means weighted Jacobi and smoother=2 means Gauss-Seidel.
d) function mgMain
% This is the main driver routine of your code.
% You will first set parameters (including omega, alpha1, alpha2, Nmin,
% number of V-cycles iter, smoother, fine level problem size N, etc.)
% You will create the grid points, the exact PDE solution interpolated
% on the grid, and set up the fine level matrix A and right hand side f.
% Build a for loop for the V-cycles, calling mgVcycle iter times.
% At the end, plot, on the unit square grid, the exact solution of the
% PDE, your multigrid solution for Au=f, and the difference between them.
% (The `reshape' command may be helpful to go from lexicographic vectors
% to 2D arrays.)
% During the V-cycle iterations, keep track of the norms of residuals
% and errors, and output a table similar to Table 4.1 of the textbook.
Make sure to test all routines separately before attempting to put them together. (For each m-file, try
to come up with some small and simple tests, for example, use small test problems for which you know
and can write out the solution.) If you want to find out about bottlenecks in the algorithm or in your
implementation, you can use ‘profile on’ and ‘profile report’, and then try to optimize your code.
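Note (illustration only, not part of the required submission): the recursive structure of the V-cycle can be sketched in Python/SciPy. The helpers below (mat_poisson, interpolate, weighted Jacobi smoothing) are this sketch's own assumptions, not the provided MATLAB code, and operators are rebuilt on each level for simplicity rather than cached.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def mat_poisson(N):
    # 2D Poisson matrix, lexicographic ordering, h = 1/(N+1).
    h = 1.0 / (N + 1)
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(N, N))
    I = sp.identity(N)
    return ((sp.kron(I, T) + sp.kron(T, I)) / h**2).tocsr()

def interpolate(N, N2h):
    # Bilinear interpolation from the 2h grid to the h grid.
    P = sp.lil_matrix((N, N2h))
    for j in range(N2h):
        P[2*j, j], P[2*j + 1, j], P[2*j + 2, j] = 0.5, 1.0, 0.5
    return sp.kron(P.tocsr(), P.tocsr()).tocsr()

def smooth(A, v, f, omega, sweeps):
    # Weighted Jacobi relaxation sweeps in matrix form.
    d = A.diagonal()
    for _ in range(sweeps):
        v = v + omega * (f - A @ v) / d
    return v

def mg_vcycle(v, f, alpha1, alpha2, omega, Nmin, N):
    A = mat_poisson(N)
    if N <= Nmin:
        return spsolve(A.tocsc(), f)           # direct solve on coarsest grid
    v = smooth(A, v, f, omega, alpha1)         # pre-relaxation
    N2h = (N - 1) // 2
    P = interpolate(N, N2h)
    r2h = 0.25 * (P.T @ (f - A @ v))           # restrict the residual
    e2h = mg_vcycle(np.zeros(N2h * N2h), r2h, alpha1, alpha2, omega, Nmin, N2h)
    v = v + P @ e2h                            # coarse grid correction
    return smooth(A, v, f, omega, alpha2)      # post-relaxation

# Driver: V(2,1)-cycles on the model problem with a random initial guess.
N = 31
x = np.arange(1, N + 1) / (N + 1)
X, Y = np.meshgrid(x, x)
f = (2*((1 - 6*X**2)*Y**2*(1 - Y**2) + (1 - 6*Y**2)*X**2*(1 - X**2))).ravel()
A = mat_poisson(N)
v = np.random.default_rng(0).random(N * N)
r0 = np.linalg.norm(f - A @ v)
for _ in range(15):
    v = mg_vcycle(v, f, 2, 1, 2/3, 3, N)
r = np.linalg.norm(f - A @ v)
print(r0, r)    # the residual norm drops by many orders of magnitude
```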
You are asked to submit the following for this question:
• Submit your m-files fPoisson.m, interpolate.m, mgVcycle.m, and mgMain.m, and provide a printout
in your assignment package.
• Provide a table similar to Table 4.1 of the textbook for 15 V(2,1)-cycles using weighted Jacobi
relaxation with ω = 2/3, using Nmin=3 and a random initial guess (each component a random
number between 0 and 1) on the finest level. You should at least do N=31 and N=63, and if possible
also N=127. Make sure to use sparse matrices throughout! The numbers you get should be
close to those in the slides, with small discrepancies due to differences in the algorithm and the
initial guess.
– What are the total numbers of unknowns for the problem sizes you tried, and how many nonzeros
are there in the fine level matrices A (approximately)?
– Explain how your numbers show that the multigrid method has grid-independent convergence
factors.
– Briefly discuss the memory and execution time complexities of the algorithm as a function of the
total number of unknowns.
Hint: Say something about the number of iterations you need as a function of grid size, and the
amount of work per V-cycle as a function of the total number of unknowns. What is the order
of complexity as a function of the total number of unknowns? Don’t worry about convergence
to discretization error in your discussion; you can just focus on solving the linear systems until
the residual has been driven down by a certain factor.
• Redo your simulations with your Gauss-Seidel relaxation, and provide a table similar to that of the
above. Briefly compare your two tables. You will see that Gauss-Seidel gives different convergence
factors. What does this mean for the memory and execution time complexities?