Numerical Methods II
CSCI-GA.2421-001, MATH-GA.2020-001
Spring 2013, Thursdays 7:00-9:00pm, WWH 512
Instructor: Olof B. Widlund
Coordinates
Office: WWH 612
Telephone: 998-3110
Office Hours: Drop by any time, or send email or call for an appointment
Email: widlund at cims.nyu.edu
Course URL: http://cs.nyu.edu/courses/spring13/CSCI-GA.2421-001/index.html
Final Exam
The final will be on May 16 from 7:00 to 9:00pm in WWH 512.
You may bring two normal-sized pages covered with notes on both sides
to the final.
NYU Classes
A NYU Classes account has been activated and you can use it to keep
track of homework scores. It will also be used to send e-mail occasionally.
The assignments will only be available, in pdf format, via this course
home page.
Requirements
Regular homework assignments, including programming assignments,
using Matlab or any reasonable alternative.
It is important that you do the homework yourself, but when you get stuck,
I encourage you to consult with other students,
or me, to get help when necessary. However, when you get help,
it's important to acknowledge it in writing. Passing off other people's
work as your own is not acceptable.
Homework
Homework 1 Assigned February 15, due
on Friday March 1 at 4:00 pm.
Homework 2 Assigned February 25, due
on Friday March 11 at 4:00 pm.
Homework 3 Assigned March 25, due
on Friday April 5 at 4:00 pm.
Homework 4 Assigned March 29, due
on Friday April 19 at 4:00 pm.
Homework 5 Assigned April 20, due
on Friday May 3 at 4:00 pm.
Main Text Book
A First Course in the Numerical Analysis of Differential Equations by Arieh Iserles. Cambridge University Press. Second Edition.
Two Additional Books of Interest
Applied Numerical Linear Algebra by James W. Demmel. SIAM.
Computer Methods for Ordinary Differential Equations and Differential-Algebraic
Equations by Uri M. Ascher and Linda R. Petzold. SIAM.
Lectures
There will be descriptions of the content of each lecture, after the fact.
The semester will begin by covering material in chapters 8, 11, 12, 14, and
15 in that order.
- 1/31: A finite difference approximation of second order
for a boundary value problem for a simple second order ordinary
differential equation. Computing the truncation error by using
Taylor series. Increasing the accuracy by using a five-point stencil
instead of one based on three points; it is inconvenient since we
need to do something special at the two points next to the boundary. A fourth order
scheme based on three-point stencils for both the solution and the
right hand side called the "Mehrstellenverfahren". The rate of convergence
of these schemes by using an expansion in eigenfunctions and also by using
a maximum principle. Richardson extrapolation and the basic idea of
deferred corrections. The standard five-point approximation of Poisson's
equation on a rectangle and on a general domain. The truncation error can be
computed by essentially the same formulas as for the one-dimensional case.
Two nine-point formulas in an attempt to increase the rate of convergence
past second order.
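As a small illustration of the three-point scheme (a sketch in Python, one "reasonable alternative" to Matlab; the function name is mine), here is the model problem -u'' = pi^2 sin(pi x) with zero boundary values, solved with the tridiagonal system eliminated by the Thomas algorithm. Halving the mesh size reduces the maximum error by roughly a factor of four, consistent with second-order accuracy.

```python
import math

def solve_bvp(n):
    """Solve -u'' = pi^2 sin(pi x) on (0,1), u(0)=u(1)=0, with the
    standard three-point stencil on n interior points; return the max
    error against the exact solution u(x) = sin(pi x)."""
    h = 1.0 / (n + 1)
    # Tridiagonal system: -u_{i-1} + 2 u_i - u_{i+1} = h^2 f_i.
    a = [-1.0] * n          # sub-diagonal
    b = [2.0] * n           # diagonal
    c = [-1.0] * n          # super-diagonal
    f = [h * h * math.pi**2 * math.sin(math.pi * (i + 1) * h) for i in range(n)]
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        f[i] -= m * f[i - 1]
    u = [0.0] * n
    u[-1] = f[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (f[i] - c[i] * u[i + 1]) / b[i]
    return max(abs(u[i] - math.sin(math.pi * (i + 1) * h)) for i in range(n))

e1, e2 = solve_bvp(19), solve_bvp(39)
print(e1 / e2)   # close to 4: second-order convergence
```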
- 2/7: The maximum principle for the five-point formula approximating the
Poisson equation. This analysis extends to one of the nine-point formulas,
at least for rectangular regions, namely the one with a stencil for which
all off-diagonal terms have the same sign. Gerschgorin's theorem; it allows
us to conclude that the relevant matrices are invertible for the cases
where the maximum principle holds. The cost of solving the linear systems
of equations which arise from these elliptic difference equations by
Gaussian elimination. For one-dimensional problems, we have tridiagonal
matrices and a linear algorithm. For two dimensions, the five point formula
gives us a matrix which can be given a band matrix form. Almost all zeros
inside the band are lost to fill-in when these matrices are factored.
This results in a factorization with a cost that is quadratic, n^2, in the
number of unknowns. For three dimensions, the cost is n^{7/3}.
The minimum degree and
nested dissection algorithms. For two dimensions the factorization cost
is only on the order of n^{3/2} and the number of nonzeros of the triangular
factors grows in proportion to n log(n). For three dimensions, the best we
can do is to factor at a cost proportional to n^2. A first few words on
classical iterative methods.
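The fill-in phenomenon is easy to observe numerically. The sketch below (a dense, unoptimized illustration; the helper names are mine) builds the five-point matrix on an 8-by-8 interior grid in the natural ordering, factors it by Gaussian elimination without pivoting, and counts nonzeros: the triangular factors are far denser than the original matrix.

```python
def five_point_matrix(m):
    """Five-point Laplacian on an m-by-m interior grid, natural ordering."""
    n = m * m
    A = [[0.0] * n for _ in range(n)]
    for i in range(m):
        for j in range(m):
            k = i * m + j
            A[k][k] = 4.0
            if j > 0: A[k][k - 1] = -1.0
            if j < m - 1: A[k][k + 1] = -1.0
            if i > 0: A[k][k - m] = -1.0
            if i < m - 1: A[k][k + m] = -1.0
    return A

def lu_nonzeros(A):
    """Gaussian elimination without pivoting; count nonzeros of L + U."""
    n = len(A)
    A = [row[:] for row in A]
    for k in range(n):
        for i in range(k + 1, n):
            if A[i][k] != 0.0:
                m_ik = A[i][k] / A[k][k]
                A[i][k] = m_ik           # store the multiplier (entry of L)
                for j in range(k + 1, n):
                    A[i][j] -= m_ik * A[k][j]
    return sum(1 for row in A for x in row if x != 0.0)

A = five_point_matrix(8)
nz_before = sum(1 for row in A for x in row if x != 0.0)
print(nz_before, lu_nonzeros(A))  # the factors are much denser: fill-in
```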
- 2/14: A few additional remarks on solving large linear systems of algebraic
equations by direct methods; diagonally dominant and symmetric, positive
definite problems do not require pivoting. Even in other cases, one can
try to avoid pivoting and monitor the possible growth of the elements on
the triangular factors; growth can indicate loss of accuracy. The classical
iterative methods: Jacobi, Gauss-Seidel, and SOR. A proof that Jacobi and
Gauss-Seidel converge for strictly row diagonal dominant matrices; the same
result holds for weakly row diagonally dominant, irreducible matrices. A proof
that SOR diverges for all omega > 2 and a proof that SOR converges for all
symmetric, positive definite matrices for 0 < omega < 2. Derivation of a formula
which expresses the eigenvalues of SOR as a function of omega and the
eigenvalues of the Jacobi operator for any matrix which is consistently
ordered. This formula makes it possible to establish that optimal SOR
is considerably faster than Gauss-Seidel.
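The speedup is striking even on a small model problem. The sketch below (my own illustration in Python) applies SOR to the n-point 1D Laplacian system A u = 0, which is consistently ordered, so the classical optimal relaxation parameter omega = 2/(1 + sqrt(1 - rho_J^2)), with rho_J = cos(pi h) the spectral radius of the Jacobi operator, applies; omega = 1 is Gauss-Seidel.

```python
import math

def sor_iterations(n, omega, tol=1e-6):
    """Apply SOR to the n-point 1D Laplacian system A u = 0 starting
    from u = 1; count sweeps until max|u| < tol (exact solution is 0)."""
    u = [1.0] * n
    for k in range(1, 100000):
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0     # already updated: Gauss-Seidel sweep
            right = u[i + 1] if i < n - 1 else 0.0
            gs = 0.5 * (left + right)             # solves 2 u_i - u_{i-1} - u_{i+1} = 0
            u[i] = (1 - omega) * u[i] + omega * gs
        if max(abs(x) for x in u) < tol:
            return k
    return None

n = 50
rho_jacobi = math.cos(math.pi / (n + 1))
omega_opt = 2.0 / (1.0 + math.sqrt(1.0 - rho_jacobi**2))
print(sor_iterations(n, 1.0), sor_iterations(n, omega_opt))
```

For n = 50, Gauss-Seidel needs a few thousand sweeps while optimal SOR needs on the order of a hundred.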
- 2/21: A few words on block variants of classical iterative methods. A
derivation of the conjugate gradient method by using Krylov spaces, Lanczos
vectors, and a variational formulation. How to derive preconditioned conjugate
gradient methods and the general idea of preconditioners. A simple iterative
scheme due to Richardson. A bound for the rate of convergence if the matrix
has real and positive eigenvalues. A bound for arbitrary matrices which have
a positive definite symmetric part; the lower bound for the symmetric part
and the norm of the operator enter.
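For reference, here is a minimal sketch of plain (unpreconditioned) conjugate gradients in Python, applied matrix-free to the 1D Laplacian; the function names are mine. In exact arithmetic the iterates stay in the Krylov spaces built from the initial residual, and the method terminates in at most n steps.

```python
def conjugate_gradient(matvec, b, tol=1e-10, maxit=1000):
    """Conjugate gradients for A x = b, A symmetric positive definite;
    matvec(x) returns A x."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                       # residual b - A x for x = 0
    p = r[:]
    rr = sum(ri * ri for ri in r)
    for _ in range(maxit):
        Ap = matvec(p)
        alpha = rr / sum(pi * qi for pi, qi in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, Ap)]
        rr_new = sum(ri * ri for ri in r)
        if rr_new < tol * tol:
            break
        p = [ri + (rr_new / rr) * pi for ri, pi in zip(r, p)]
        rr = rr_new
    return x

def laplacian_1d(u):
    """Apply tridiag(-1, 2, -1) without forming the matrix."""
    n = len(u)
    return [2 * u[i] - (u[i - 1] if i else 0) - (u[i + 1] if i < n - 1 else 0)
            for i in range(n)]

x = conjugate_gradient(laplacian_1d, [1.0] * 10)
```

A preconditioned variant replaces r by M^{-1} r in the search-direction update, with M an approximation of A that is cheap to invert.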
- 2/28: Handout on conjugate gradients in the style of my lecture on 2/21.
The rate of convergence of the conjugate gradient method; to be repaired by
a handout on 3/7. Comments on Krylov space methods for symmetric indefinite
coefficient matrices. The GMRES algorithm for nonsymmetric matrices. Separation
of variables, Fourier series and reduction of simple elliptic problems to
sets of ordinary differential equations. Solving Poisson's equation on a
circular disc. Fast Poisson solvers.
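The idea behind fast Poisson solvers can be shown in one dimension: the eigenvectors of T = tridiag(-1, 2, -1) are discrete sine vectors, so T is diagonalized by a sine transform. The sketch below (my own illustration; the plain transforms cost O(n^2), and replacing them by a fast sine transform based on the FFT gives the O(n log n) fast solver).

```python
import math

def poisson_solve_sine(f):
    """Solve T u = f, T = tridiag(-1, 2, -1), by expanding in the sine
    eigenvectors of T with eigenvalues 2 - 2 cos(k pi/(n+1))."""
    n = len(f)
    h = math.pi / (n + 1)
    # Forward sine transform of the right-hand side.
    fhat = [sum(f[j] * math.sin((j + 1) * (k + 1) * h) for j in range(n))
            for k in range(n)]
    # Divide by the eigenvalues of T.
    uhat = [fhat[k] / (2.0 - 2.0 * math.cos((k + 1) * h)) for k in range(n)]
    # Inverse transform; the sine matrix squares to ((n+1)/2) I.
    return [2.0 / (n + 1) *
            sum(uhat[k] * math.sin((j + 1) * (k + 1) * h) for k in range(n))
            for j in range(n)]

u = poisson_solve_sine([1.0] * 9)
```

In two dimensions the same transform is applied line by line, reducing the problem to independent tridiagonal systems.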
- 3/7: Buneman's method: the basic idea is simple but a non-obvious variant
is required to make it numerically stable. A good presentation is given in
a textbook by Stoer and Bulirsch, pp. 576-584. Introduction to the finite
element method. A variational reformulation of Poisson's equation to a
minimization problem over functions which have square integrable first derivatives
using Green's formula. Continuous piece-wise linear functions on a triangulation
of the given domain; these functions have square integrable first derivatives.
The sparsity of the resulting linear system of equations and the symmetry and
positive definiteness of the stiffness matrix. Higher order methods based
on quadratic and cubic polynomials. (There are many books on finite elements;
my favorite is written by Dietrich Braess.)
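The sparsity and symmetry of the stiffness matrix are easy to see in one dimension. The sketch below (my own illustration) assembles the stiffness matrix for -u'' = f on (0,1) with zero boundary values from the 2x2 element matrices of the continuous piecewise linear space on a uniform mesh.

```python
def assemble_1d(n):
    """Stiffness matrix for -u'' = f on (0,1), zero boundary values,
    piecewise linear elements on n interior nodes.  Each element of
    length h contributes the local matrix (1/h) [[1, -1], [-1, 1]]."""
    h = 1.0 / (n + 1)
    K = [[0.0] * n for _ in range(n)]
    local = [[1.0 / h, -1.0 / h], [-1.0 / h, 1.0 / h]]
    for e in range(n + 1):                    # elements [x_e, x_{e+1}]
        for a in range(2):
            for b in range(2):
                i, j = e - 1 + a, e - 1 + b   # indices -1 and n are boundary nodes
                if 0 <= i < n and 0 <= j < n:
                    K[i][j] += local[a][b]
    return K

K = assemble_1d(4)   # (1/h) tridiag(-1, 2, -1) with h = 1/5
```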
- 3/14: More on finite elements. The use of barycentric coordinates; they
are closely connected to the standard basis functions for the continuous
piece-wise linear finite element space. They can be used to express the
basis functions for higher order finite elements as well. Assembling stiffness
and mass matrices from matrices defined on individual elements. Cea's lemma
which reduces the error bounds for the finite element solutions to a best
approximation problem. The error in the latter problem is often bounded by
estimating an interpolation error, which can be done on individual elements.
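Since the gradients of the barycentric coordinates are constant on each triangle, the local stiffness matrix for the Laplacian can be written down directly from the vertex coordinates; a sketch (my own illustration):

```python
def triangle_stiffness(p0, p1, p2):
    """Local stiffness matrix for the Laplacian on a triangle with
    vertices p0, p1, p2, using the linear barycentric basis functions:
    K_ab = area * grad(phi_a) . grad(phi_b)."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    det = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)   # twice the signed area
    area = abs(det) / 2.0
    # Gradients of the barycentric coordinates, constant on the triangle.
    grads = [((y1 - y2) / det, (x2 - x1) / det),
             ((y2 - y0) / det, (x0 - x2) / det),
             ((y0 - y1) / det, (x1 - x0) / det)]
    return [[area * (ga[0] * gb[0] + ga[1] * gb[1]) for gb in grads]
            for ga in grads]

K = triangle_stiffness((0, 0), (1, 0), (0, 1))
```

Each row sums to zero, since the basis functions sum to one and constants lie in the null space of the gradient.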
- 3/28: Sobolev spaces H^k based on the L_2-norm. The norms and semi-norms
for H^k. Poincare's inequality and extensions to H^k for k > 1; there are
additional terms on the right hand side. Bounds for the interpolation operators
into finite element spaces; Sobolev's inequality is needed since functions
in H^1 can be unbounded in two and three dimensions. Error bounds for finite
element approximations. There are three main ingredients: the interpolation
operators map certain polynomials into themselves; Sobolev's inequality and
the generalized Poincare inequalities; an argument on what happens to
the Sobolev seminorms under dilation. Cea's lemma and Aubin-Nitsche's
bound of the L_2-norm of the finite element error; Aubin-Nitsche works
and provides a better L_2-bound than the H^1-bound obtained by using Cea's
lemma provided that the solution of Poisson's equation always has an H^2-bound
for any L_2 right-hand side.
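The interpolation error bound at the heart of this argument is easy to check numerically: for a smooth function, piecewise linear interpolation is second order in the mesh size. A small sketch (my own illustration, in the max norm rather than a Sobolev norm for simplicity):

```python
import math

def interp_error(n, f=math.sin):
    """Max-norm error of piecewise linear interpolation of f on [0, pi]
    with n equal subintervals, sampled finely on each subinterval."""
    h = math.pi / n
    err = 0.0
    for i in range(n):
        a, b = i * h, (i + 1) * h
        fa, fb = f(a), f(b)
        for m in range(21):
            x = a + (b - a) * m / 20
            lin = fa + (fb - fa) * (x - a) / h
            err = max(err, abs(f(x) - lin))
    return err

e1, e2 = interp_error(16), interp_error(32)
print(e1 / e2)   # close to 4: the interpolation error is O(h^2)
```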
- 4/4: Introduction to numerical methods for initial value problems for
ordinary differential equations. Euler's method and the trapezoidal rule with
error bounds. Adams-Bashforth and Adams-Moulton. A first introduction to linear
multi-step schemes, the root condition, and Dahlquist's first barrier theorem.
A handout related to the fourth homework problem.
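To see the difference in order between the two basic schemes, the sketch below (my own illustration) integrates y' = -y to t = 1; for this linear test equation the implicit trapezoidal step can be solved in closed form. Euler's method is first order, the trapezoidal rule second order, so the latter error is far smaller at the same step size.

```python
import math

def euler(f, y0, t_end, n):
    """Forward Euler with n steps for y' = f(t, y)."""
    h, y, t = t_end / n, y0, 0.0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

def trapezoid(lam, y0, t_end, n):
    """Trapezoidal rule for y' = lam*y: each implicit step reduces to
    multiplication by (1 + h lam/2)/(1 - h lam/2)."""
    h, y = t_end / n, y0
    factor = (1 + 0.5 * h * lam) / (1 - 0.5 * h * lam)
    for _ in range(n):
        y *= factor
    return y

exact = math.exp(-1.0)
e_euler = abs(euler(lambda t, y: -y, 1.0, 1.0, 100) - exact)
e_trap = abs(trapezoid(-1.0, 1.0, 1.0, 100) - exact)
print(e_euler, e_trap)
```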
- 4/11: More on linear multi-step schemes. How to generate the full schemes
if one of the two polynomials is given. Backward differentiation formulas.
Stiff systems, A-stability, and stability diagrams. The weak instability
of the midpoint rule and Milne's method; they both have two roots on the
unit circle, and a parasitic root ends up outside when the ODE has decaying solutions.
Error control a la Milne.
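The weak instability is easy to reproduce. The sketch below (my own illustration) applies the explicit midpoint (leapfrog) rule to y' = -y: the true solution decays, but the parasitic root has magnitude greater than one for decaying solutions, so its initially tiny component eventually dominates as a growing oscillation.

```python
import math

def midpoint_rule(lam, y0, h, n):
    """Explicit midpoint rule y_{k+1} = y_{k-1} + 2 h lam y_k for
    y' = lam*y, started with one exact step; returns all iterates."""
    ys = [y0, y0 * math.exp(lam * h)]
    for _ in range(n - 1):
        ys.append(ys[-2] + 2 * h * lam * ys[-1])
    return ys

ys = midpoint_rule(-1.0, 1.0, 0.1, 300)
# Roots of r^2 = 1 + 2 h lam r: one near e^{h lam}, one near -e^{-h lam},
# the latter outside the unit circle when lam < 0.
print(abs(ys[100]), abs(ys[300]))   # the oscillation grows
```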
- 4/18: Runge-Kutta schemes. Gaussian quadrature and how to select the
nodes. Implicit Runge-Kutta schemes; they lead to large non-linear systems
of algebraic equations but some have very favorable stability regions.
Introduction to finite difference equations for partial differential
equations. Simple advection and the wave equation. Examples of instability
and how it is revealed by using Fourier analysis. L_2 stability of first
order systems with symmetric coefficient matrices and how it can be
extended to variable coefficients.
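For reference, the classical four-stage explicit Runge-Kutta scheme, in a short Python sketch (my own illustration); halving the step size reduces the error by roughly 2^4 = 16, confirming fourth order.

```python
import math

def rk4(f, y0, t_end, n):
    """Classical fourth-order Runge-Kutta method with n steps."""
    h, y, t = t_end / n, y0, 0.0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

f = lambda t, y: -y
exact = math.exp(-1.0)
e1 = abs(rk4(f, 1.0, 1.0, 10) - exact)
e2 = abs(rk4(f, 1.0, 1.0, 20) - exact)
print(e1 / e2)   # close to 16: fourth order
```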
- 4/25: General initial value problems for PDE which are first order in
the time derivative: u_t= P(x,t,\delta_x) u. Calculating the "symbol" by
letting P operate on a complex exponential. A condition on the real part
of the eigenvalues of the symbol. This does not give us a sufficient condition
for wellposedness. Wellposedness in the sense of Hadamard; this also has
problems. Wellposedness in the sense of L_p. Simple examples of scalar
equations which are wellposed including first order symmetric systems.
The Courant-Friedrichs-Lewy condition dating back to 1928. Two examples
due to Strang and Kreiss, respectively, showing that looking at problems
with frozen coefficients can be completely misleading. Bounds for problems
with forcing terms. Stability and strong stability; they are fully equivalent.
Insensitivity to lower order terms. The von Neumann condition. Leapfrog
and Milne's schemes; a connection to the theory of multi-step schemes for
ODEs. Stability of a three-level scheme for a convection-diffusion problem.
Alternating direction methods and how to prove their stability.
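The von Neumann condition and the CFL restriction can be checked concretely for the upwind scheme for the advection equation u_t + a u_x = 0 (a > 0). The sketch below (my own illustration) samples the amplification factor g(theta) = 1 - lambda + lambda e^{-i theta}, lambda = a k/h, over the unit circle: |g| stays at most 1 exactly when lambda <= 1.

```python
import math

def max_amplification(lam, samples=400):
    """Largest |g(theta)| for the upwind scheme
    u_j^{n+1} = u_j^n - lam (u_j^n - u_{j-1}^n), lam = a k/h."""
    worst = 0.0
    for s in range(samples + 1):
        theta = 2 * math.pi * s / samples
        g = complex(1 - lam + lam * math.cos(theta), -lam * math.sin(theta))
        worst = max(worst, abs(g))
    return worst

print(max_amplification(0.8), max_amplification(1.2))
# <= 1 when the CFL condition lam <= 1 holds, > 1 when it is violated
```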
- 5/2: The heat equation in R^n; many bounds on the solutions can be
derived including those that show that the solution becomes smoother
for increasing t. Solving the heat equation on a bounded domain and with
zero Dirichlet boundary condition on the boundary of the domain in terms of
eigenfunctions of the Laplace operator. Using these techniques to recover
some of the same bounds as for R^n. Deriving bounds for one space variable
on an interval with more complicated boundary conditions; the one-dimensional
Sobolev inequality is then of importance. Note that this and the next lecture
are to a large extent based on a book by Larson and Thomee, published by
Springer.
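The eigenfunction expansion on (0,1) with zero Dirichlet data, and the smoothing it implies, can be sketched directly (my own illustration): each mode sin(k pi x) decays like exp(-(k pi)^2 t), so high modes die off almost immediately while the lowest mode dominates for larger t.

```python
import math

def heat_series(x, t, coeffs):
    """Eigenfunction solution of u_t = u_xx on (0,1), zero Dirichlet
    data: u(x,t) = sum_k b_k exp(-(k pi)^2 t) sin(k pi x)."""
    return sum(b * math.exp(-(k * math.pi) ** 2 * t) * math.sin(k * math.pi * x)
               for k, b in enumerate(coeffs, start=1))

# Fourier sine coefficients of the rough initial datum u0 = 1:
# b_k = 4/(k pi) for odd k, 0 for even k.
coeffs = [4 / (k * math.pi) if k % 2 == 1 else 0.0 for k in range(1, 60)]
u0 = heat_series(0.5, 0.0, coeffs)
u1 = heat_series(0.5, 0.1, coeffs)
print(u0, u1)   # decay at the midpoint; by t = 0.1 only the first mode matters
```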
- 5/9: Finite element methods for the heat equation. A priori
bounds similar to those that can be obtained by energy methods
for the continuous problem. Error bounds based on such bounds and
error bounds for elliptic finite element problems. The wave equation
on bounded domains; we can again use the eigenfunctions of the
Laplace operator to write down the solution. Formulating finite
element problems for the wave equation. A finite element solver
for the advection equation on a bounded interval; an energy estimate.
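A minimal sketch of a finite element heat solver in one space dimension (my own illustration, with backward Euler in time rather than one of the schemes analyzed in lecture): piecewise linear elements on a uniform mesh give mass matrix M = (h/6) tridiag(1, 4, 1) and stiffness matrix K = (1/h) tridiag(-1, 2, -1), and each step solves (M + dt K) u_new = M u_old.

```python
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-, main-, super-diagonals a, b, c."""
    n = len(b)
    b, d = b[:], d[:]
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    x = [0.0] * n
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

def heat_fem_backward_euler(n, dt, steps):
    """u_t = u_xx on (0,1), zero Dirichlet data, initial data sin(pi x);
    piecewise linear elements and backward Euler in time."""
    h = 1.0 / (n + 1)
    u = [math.sin(math.pi * (i + 1) * h) for i in range(n)]
    sub = [h / 6 - dt / h] * n          # off-diagonal of M + dt K
    mid = [4 * h / 6 + 2 * dt / h] * n  # diagonal of M + dt K
    sup = [h / 6 - dt / h] * n
    for _ in range(steps):
        rhs = [(h / 6) * (u[i - 1] if i else 0.0)
               + (4 * h / 6) * u[i]
               + (h / 6) * (u[i + 1] if i < n - 1 else 0.0) for i in range(n)]
        u = thomas(sub, mid, sup, rhs)
    return u

u = heat_fem_backward_euler(49, 0.001, 100)    # t = 0.1
print(u[24], math.exp(-math.pi ** 2 * 0.1))    # value at x = 0.5 vs exact
```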
- 5/16: Final exam. If possible, the exam will start at 7:00pm.