Numerical Methods II
CSCI-GA.2421-001, MATH-GA.2020-001
Spring 2015, Thursdays 5:10-7:00pm, WWH 512
Instructor: Olof B. Widlund
Coordinates
Office: WWH 612
Telephone: 998-3110
Office Hours: Drop by any time, or send email or call for an appointment
Email: widlund at cims.nyu.edu
Course URL: http://cs.nyu.edu/courses/spring13/CSCI-GA.2421-001/index.html
Final Exam
The course will NOT end with an in-class final. Instead, there will be
individual oral exams. You can sign up now and have an exam any time
on a weekday May 1-15 or during the first half of June.
NYU Classes
An NYU Classes account has been activated and you can use it to keep
track of homework scores. It will also be used to send e-mail occasionally.
The assignments will only be available, in PDF format, via this course
home page.
Requirements
Regular homework assignments, including programming assignments,
using Matlab or any reasonable alternative.
It is important that you do the homework yourself, but when you get stuck,
I encourage you to consult with other students,
or me, to get help when necessary. However, when you get help,
it's important to acknowledge it in writing. Passing off other people's
work as your own is not acceptable.
Homework
- Homework 1 Assigned February 4, due
on Thursday February 19 at 5:00 pm.
- Homework 2 Assigned February 26, due
on Thursday March 12 at 5:00 pm.
- Homework 3 Assigned March 26, due
on Thursday April 9 at 5:00 pm.
- Homework 4 Assigned April 9, due
on Thursday April 23 at 5:10pm.
- Main Textbook
Finite Difference Methods for Ordinary and Partial Differential Equations,
Steady-State and Time-Dependent Problems by Randall J. LeVeque.
SIAM 2007. Note that you can become a student member of SIAM, a very fine
organization. One of the benefits is discounts on all SIAM books. As a student
there is no membership fee.
- Two Additional Books of Interest
Applied Numerical Linear Algebra by James W. Demmel. SIAM.
Computer Methods for Ordinary Differential Equations and Differential-Algebraic
Equations by Uri M. Ascher and Linda R. Petzold. SIAM.
- Course Content.
For each week, there will be a short description of the content
of the lectures. Make sure that you obtain and save all handouts;
they are an important part of your reading.
Request copies if you miss any of them.
- 1/29: Simple finite difference approximations of scalar second order
equations. Central differences and truncation errors. The discretization
error satisfies the same difference equation with a right hand side equal
to the truncation error. Several ways of estimating the norm of the inverse
including the use of Fourier analysis and the maximum principle. For the
simplest type of discretization, we obtain O(h^2) convergence. How to improve
the rate of convergence by a simple change in the right hand side of the
difference equation, resulting in fourth order convergence, if the solution
is smooth enough, and how to further improve the convergence rates by deferred
corrections. Both these devices involve the elimination of leading terms in
the truncation error. Remark on the use of piecewise linear continuous
functions and a variational reformulation of the ODE and how we can obtain
error bounds in this case; this is the finite element method in its simplest
form.
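The O(h^2) convergence of the simplest discretization can be checked numerically. A minimal sketch in Python (a "reasonable alternative" to Matlab, per the requirements above) for the model problem -u'' = f on (0,1) with homogeneous Dirichlet conditions; the exact solution sin(pi x) is an illustrative choice:

```python
import numpy as np

def solve_bvp(n):
    """Solve -u'' = f on (0,1), u(0) = u(1) = 0, by central differences."""
    h = 1.0 / n
    x = np.linspace(0, 1, n + 1)[1:-1]           # interior grid points
    f = np.pi**2 * np.sin(np.pi * x)             # exact solution is sin(pi x)
    # (1/h^2) * tridiag(-1, 2, -1): the standard second-order stencil
    A = (np.diag(2 * np.ones(n - 1)) - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h**2
    u = np.linalg.solve(A, f)
    return np.max(np.abs(u - np.sin(np.pi * x)))  # max-norm error

# Halving h should cut the error by about a factor of 4: O(h^2) convergence.
e1, e2 = solve_bvp(20), solve_bvp(40)
print(e1 / e2)   # close to 4
```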
- 2/5: Motivated by spectral methods, a study of polynomial interpolation.
Equidistant points lead to poor performance; this is seen by looking at the
Lebesgue constants. Chebyshev points are much to be preferred. By using
barycentric interpolation formulas, we obtain linear complexity. (There was a
handout on these topics given out on 2/12.) Moving on to two and three
dimensional elliptic problems: solving the resulting linear systems of
equations is generally much more costly. For special simple geometries and
very regular meshes, fast Poisson solvers based on the FFT can be developed.
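The barycentric formula at Chebyshev points can be sketched as follows; a minimal Python illustration, with Runge's function 1/(1+25x^2) as an illustrative test case (equidistant points fail badly for it, Chebyshev points do not):

```python
import numpy as np

def cheb_barycentric(f, n, xx):
    """Interpolate f at the n+1 Chebyshev points of the second kind and
    evaluate at the points xx via the barycentric formula: O(n) per point."""
    x = np.cos(np.pi * np.arange(n + 1) / n)   # Chebyshev points on [-1, 1]
    fx = f(x)
    w = (-1.0) ** np.arange(n + 1)             # barycentric weights (up to scaling)
    w[0] *= 0.5
    w[-1] *= 0.5
    num = np.zeros_like(xx)
    den = np.zeros_like(xx)
    exact = np.full(xx.shape, -1)              # evaluation points that hit a node
    for j in range(n + 1):
        diff = xx - x[j]
        hit = diff == 0
        exact[hit] = j
        diff[hit] = 1.0                        # avoid division by zero; fixed below
        num += w[j] * fx[j] / diff
        den += w[j] / diff
    p = num / den
    p[exact >= 0] = fx[exact[exact >= 0]]      # interpolation is exact at nodes
    return p

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)        # Runge's function
xx = np.linspace(-1, 1, 1001)
err = np.max(np.abs(cheb_barycentric(f, 50, xx) - f(xx)))
```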
- 2/12: Solving two and three dimensional elliptic problems discretized
by finite differences using band Cholesky. There are better orderings of
the equations and unknowns, such as nested dissection orderings, which
provide a factorization cost of C N^{3/2} and C N^2 for two and three
dimensions, respectively;
here C is a constant and N the number of unknowns. For two dimensions,
the number of nonzeros in the Cholesky factors grows only in proportion to
N log(N), which makes direct solvers competitive, in particular in case we
have several or many right hand sides.
Buneman's method as an alternative fast Poisson solver. Solving
Poisson's equation on a circle using Fourier analysis for the angular variable.
A few first words on classical iterative methods: Jacobi, Gauss-Seidel, and
successive overrelaxation.
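A minimal Python sketch of the Jacobi and Gauss-Seidel iterations on the one-dimensional model problem; the matrix size and iteration count are illustrative choices, not from the lecture:

```python
import numpy as np

def jacobi(A, b, x, iters):
    """Jacobi: update all unknowns simultaneously from the old iterate."""
    D = np.diag(A)
    R = A - np.diag(D)
    for _ in range(iters):
        x = (b - R @ x) / D
    return x

def gauss_seidel(A, b, x, iters):
    """Gauss-Seidel: sweep through the unknowns, using new values at once."""
    n = len(b)
    for _ in range(iters):
        for i in range(n):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

n = 20
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D Laplacian model problem
b = np.ones(n)
exact = np.linalg.solve(A, b)
ej = np.max(np.abs(jacobi(A, b, np.zeros(n), 400) - exact))
egs = np.max(np.abs(gauss_seidel(A, b, np.zeros(n), 400) - exact))
# For matrices with property A, rho(Gauss-Seidel) = rho(Jacobi)^2, so
# Gauss-Seidel needs roughly half as many iterations for the same accuracy.
```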
- 2/19: A variety of results for classical iterative methods; to be
continued.
- 2/26: Property A and a classical result which relates the rate of
convergence of the SOR algorithm and that of Jacobi's algorithm for matrices
which satisfy property A. Krylov spaces and a simple iterative method due
to Richardson; it can be shown to converge even for nonsymmetric matrices
provided that the symmetric part of the matrix is positive definite. The
conjugate gradient method and how its rate of convergence can be estimated
in terms of the square root of the condition number. The idea of preconditioning by using a second positive definite matrix M; this makes sense if we can
solve the linear system with the matrix M relatively inexpensively. The role
of Chebyshev polynomials in the analysis of the conjugate gradient algorithm.
There is also an iterative method based directly on the use of Chebyshev
polynomials; it requires knowledge of the largest and smallest eigenvalues
of the matrix of the linear system we wish to solve; such information is not
required for the conjugate gradient algorithm.
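The conjugate gradient iteration can be sketched as follows; a minimal, unpreconditioned Python version, tested on the one-dimensional Laplacian (the test matrix, tolerance, and random right hand side are illustrative assumptions):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, maxit=1000):
    """Plain conjugate gradients for a symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x            # residual
    p = r.copy()             # search direction
    rs = r @ r
    for _ in range(maxit):
        Ap = A @ p
        alpha = rs / (p @ Ap)   # the "alphas" of the lecture
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        beta = rs_new / rs      # the "betas" of the lecture
        p = r + beta * p
        rs = rs_new
    return x

n = 50
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # SPD 1D Laplacian
b = np.random.default_rng(0).standard_normal(n)
x = conjugate_gradient(A, b)
err = np.max(np.abs(x - np.linalg.solve(A, b)))
```

The error bound from the lecture, in terms of the square root of the condition number kappa, is that the A-norm error decays at least like 2((sqrt(kappa)-1)/(sqrt(kappa)+1))^k after k steps.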
- 3/5: Lanczos vectors, which provide a convenient basis for Krylov spaces.
For symmetric matrices, they are defined by a three-term recurrence relation.
The Lanczos vectors are essentially normalized residuals in a related conjugate
gradient iteration. A few words on the Lanczos eigenvalue algorithm; the
approximate eigenvalues are obtained by solving a symmetric tridiagonal
eigenvalue problem; the matrix elements can be computed from the parameters
of the conjugate gradient algorithm, the alphas and betas.
The case of a symmetric but indefinite matrix: the minimal
residual algorithm, also based on Krylov spaces.
Initial value problems for ordinary differential equations and how to turn
any such problem into a first order system of ODEs, y' = f(y,t).
Existence and uniqueness
and the importance of a Lipschitz condition both for the ODE theory and for
the convergence of numerical approximations. The forward Euler method and error
bounds for it; the Lipschitz constant enters, which makes the bound far from
sharp in many cases. Instead, we can understand the error propagation
in terms of
a linear ODE, the variational equation, obtained by linearizing around the
solution of the ODE. Methods based on Taylor series; the analysis is basically
identical to that of the forward Euler method. Backward Euler; this method and
other implicit methods require the solution of a non-linear equation or system
of equations in each time step. A first look at Runge-Kutta methods; they do not
require formulas for derivatives of f(y,t), but for higher order methods,
several evaluations of f(y,t).
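The forward Euler method and its first order convergence can be sketched in a few lines of Python; the test problem y' = -y, with exact solution e^{-t}, is an illustrative choice:

```python
import numpy as np

def forward_euler(f, y0, t0, t1, n):
    """Forward Euler with n steps for y' = f(y, t)."""
    h = (t1 - t0) / n
    y, t = y0, t0
    for _ in range(n):
        y = y + h * f(y, t)
        t += h
    return y

f = lambda y, t: -y                               # exact solution e^{-t}
e1 = abs(forward_euler(f, 1.0, 0.0, 1.0, 100) - np.exp(-1))
e2 = abs(forward_euler(f, 1.0, 0.0, 1.0, 200) - np.exp(-1))
# Halving h halves the error: first order convergence.
print(e1 / e2)   # close to 2
```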
- 3/12: More on Runge-Kutta schemes. Computing the order of the truncation
error gets to be difficult for methods of higher order. For autonomous systems,
and in particular for y' = lambda y, it is easier. There are popular explicit
fourth order schemes requiring four function evaluations, but to obtain a
k-th order scheme for larger k, more than k function evaluations are required.
There are also implicit Runge-Kutta schemes that for higher order require
the solution of large non-linear systems of algebraic equations. More about
multistep schemes, in particular of the BDF family. The root condition.
Dahlquist's first barrier. The midpoint rule and the Simpson rule. They
are stable but not suitable for ODEs with decaying solutions. Stiff ODEs
and A-stability and Dahlquist's second barrier. A(alpha) stability; a
number of the BDF schemes belong to this family; alpha decreases with the
order of the schemes. Stability diagrams.
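The popular explicit fourth order scheme with four function evaluations per step is the classical Runge-Kutta method; a minimal Python sketch, with a convergence check on the illustrative test problem y' = -y:

```python
import numpy as np

def rk4_step(f, y, t, h):
    """One step of the classical fourth order Runge-Kutta scheme:
    four evaluations of f per step."""
    k1 = f(y, t)
    k2 = f(y + 0.5*h*k1, t + 0.5*h)
    k3 = f(y + 0.5*h*k2, t + 0.5*h)
    k4 = f(y + h*k3, t + h)
    return y + (h/6) * (k1 + 2*k2 + 2*k3 + k4)

def rk4(f, y0, t0, t1, n):
    h = (t1 - t0) / n
    y, t = y0, t0
    for _ in range(n):
        y = rk4_step(f, y, t, h)
        t += h
    return y

f = lambda y, t: -y                          # exact solution e^{-t}
e1 = abs(rk4(f, 1.0, 0.0, 1.0, 10) - np.exp(-1))
e2 = abs(rk4(f, 1.0, 0.0, 1.0, 20) - np.exp(-1))
# Halving h should reduce the error by about 2^4: fourth order convergence.
print(e1 / e2)   # close to 16
```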
- 3/26: More on Runge-Kutta schemes, in particular implicit Runge-Kutta.
Some of them can be derived by using collocation. They can have high accuracy
if the c parameters are chosen as the zeros of a particular Legendre polynomial;
there is a relation to Gauss-Legendre quadrature. Adaptive pairs of Runge-Kutta
methods, especially one due to Fehlberg. Introduction to finite difference
approximations to the heat equation. Restriction on the time step for an
explicit method. L_2-stability of Crank-Nicolson.
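The time step restriction for the explicit scheme can be illustrated in Python: a minimal sketch of the forward-in-time, centered-in-space scheme for u_t = u_xx, run with k = 0.4 h^2, just inside the stability limit k <= h^2/2 (the grid and final time are illustrative choices):

```python
import numpy as np

def heat_explicit(u0, h, k, steps):
    """Forward-in-time, centered-in-space scheme for u_t = u_xx with
    homogeneous Dirichlet conditions; stable only for k <= h^2/2."""
    u = u0.copy()
    r = k / h**2
    for _ in range(steps):
        u[1:-1] = u[1:-1] + r * (u[2:] - 2*u[1:-1] + u[:-2])
    return u

n = 50
h = 1.0 / n
x = np.linspace(0, 1, n + 1)
u0 = np.sin(np.pi * x)                  # exact solution e^{-pi^2 t} sin(pi x)
k = 0.4 * h**2                          # satisfies the stability restriction
steps = int(round(0.1 / k))             # integrate to t = 0.1
u = heat_explicit(u0, h, k, steps)
err = np.max(np.abs(u - np.exp(-np.pi**2 * steps * k) * np.sin(np.pi * x)))
```

Taking k a fixed multiple of h (rather than of h^2) violates the restriction and the computed solution blows up, consistent with the nonconvergence result discussed on 4/2.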
- 4/2: Linear initial value problems for PDEs. The use of Fourier analysis,
which is successful for problems with constant coefficients. Typically, we
have to establish well-posedness for the underlying PDE before we start
considering the stability of difference approximations. When the coefficients
are not constant, relying on related problems with "frozen" coefficients can
be misleading. An example due to Gilbert Strang, which shows that a problem
which is perfectly well posed in L_2 gives rise to an ill-posed problem when
we freeze the coefficients. A first order system with two-component
vector-valued solutions demonstrates that even when all problems with
frozen coefficients are well posed, the problem can in fact be ill posed.
This is demonstrated by a change of variables which turns the problem into
one with constant coefficients; this example is due to Heinz-Otto Kreiss.
First order PDE with symmetric coefficient matrices; it is easy to prove that
these problems are well posed in L_2 even for variable coefficients and in
the presence of a zero order term. It is known that such systems are not
well posed in L_p, with p not equal to 2, unless the coefficient matrices
commute.
Examples of such systems can be obtained from the wave equation in two and three
dimensions. A proof of nonconvergence of any explicit finite difference
approximation of the heat equation if the time step k is proportional to the
mesh size of the spatial variable. How to establish well-posedness of the heat
equation, defined on an interval, and equipped with boundary conditions at the
end points of the interval; the key is a Sobolev inequality which in one
variable allows us to estimate the maximum of a function u in terms of the
L_2-norms of u_x and u.
- 4/9: The equation u_t = u_x on a finite interval and how to assign
boundary conditions. An unstable scheme based on a central difference.
Friedrichs' scheme and Lax-Wendroff's. The CFL condition and von Neumann
analysis. Dufort-Frankel's scheme for the heat equation; it is consistent
for k/h approaching 0. Stability conditions and how the error accumulates.
How to switch to an equivalent norm for which any stable scheme becomes
a strongly stable scheme for which we have a local (1+Ck) bound.
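The Lax-Wendroff scheme for u_t = u_x can be sketched as follows; a minimal Python version with periodic boundary conditions (an assumption made here to sidestep the boundary condition question) and a Courant number within the CFL limit:

```python
import numpy as np

def lax_wendroff(u, lam, steps):
    """Lax-Wendroff for u_t = u_x with periodic boundary conditions;
    lam = k/h must satisfy the CFL condition lam <= 1."""
    for _ in range(steps):
        up = np.roll(u, -1)   # u_{j+1}
        um = np.roll(u, 1)    # u_{j-1}
        u = u + 0.5*lam*(up - um) + 0.5*lam**2*(up - 2*u + um)
    return u

n = 200
h = 1.0 / n
x = np.arange(n) * h
u0 = np.sin(2*np.pi*x)
lam = 0.8                      # k = 0.8 h, within the CFL limit
k = lam * h
steps = int(round(1.0 / k))    # integrate to t = 1
u = lax_wendroff(u0, lam, steps)
# the exact solution travels to the left: u(x,t) = u0(x + t)
err = np.max(np.abs(u - np.sin(2*np.pi*(x + steps*k))))
```

The second order term in the scheme comes from replacing u_tt by u_xx in the Taylor expansion in time; a von Neumann analysis shows second order accuracy and stability for lam <= 1.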
- 4/16: Stability conditions revisited and a proof that a strongly
stable scheme remains strongly stable if we add a term of the form k times
a bounded operator. Using the energy method to prove L_2 stability first
for a number of problems formulated as PDE and then for several families
of finite difference approximations including one which is three-level.
Finite element methods for Poisson's equation. Integrating by parts to
obtain a variational problem posed in H^1(Omega). Triangulations using
triangles or quadrilaterals in 2D and tetrahedra, hexahedra, or prisms in 3D.
An outline of a proof that our finite element functions must be continuous
if they are to be conforming, which means that they form
a subspace of H^1(Omega).
Several finite element spaces: P_1, piecewise linear functions determined
by their values at the triangle vertices; P_2, piecewise quadratics with six
nodes; and two P_3 spaces, one Lagrangian and one Hermitian.
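The element-by-element assembly of a P_1 finite element system is easiest to see in one dimension; a minimal Python sketch for -u'' = f on (0,1) with homogeneous Dirichlet conditions, where the load vector is computed by the trapezoidal rule (the quadrature and the test problem are illustrative assumptions):

```python
import numpy as np

def fem_p1(f, n):
    """P_1 finite elements for -u'' = f on (0,1), u(0) = u(1) = 0,
    on a uniform mesh with n elements."""
    h = 1.0 / n
    x = np.linspace(0, 1, n + 1)
    A = np.zeros((n + 1, n + 1))
    ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h   # element stiffness matrix
    for e in range(n):                               # assemble element by element
        A[e:e+2, e:e+2] += ke
    b = h * f(x)                                     # trapezoidal-rule load vector
    # impose the Dirichlet conditions by deleting the boundary rows/columns
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A[1:-1, 1:-1], b[1:-1])
    return x, u

f = lambda x: np.pi**2 * np.sin(np.pi * x)           # exact solution sin(pi x)
x, u = fem_p1(f, 40)
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

With this quadrature, the resulting linear system coincides with the central difference scheme of the first lecture, which illustrates the close relation between the two methods in the simplest setting.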
- 4/23: More finite element spaces in particular some complicated
ones which belong to C^1, the space of continuously differentiable functions.
Affine mappings as an aid in constructing the matrix of the linear system of
equations whose solution gives us the nodal values of the finite
element approximation. From these nodal values, the solution at any point in
the domain can easily be constructed. Error bounds and the role of the
interpolation error. The main result shows that in the energy norm,
the error will decay as h^k if the finite element space is based on the
P_k space. A bound of the L_2 norm which is valid if we can establish
a certain regularity theorem. The bilinear forms for linear elasticity and
the biharmonic problem.
- 4/30: Poincaré's inequality and the second eigenvalue of the Laplace
operator for the Neumann problem. Using similar techniques to obtain error
bounds for elliptic finite element problems; the problem can be reduced to
a problem on an individual element and interpolation. Various bounds for the
heat equation using the exact solution for the pure Cauchy problem posed in
R^d, using eigenvalues of the Laplacian, and energy methods.
- 5/7: A priori estimates for parabolic problems and related estimates
for semi-discrete finite element problems. Fully discretized parabolic problems.
Solving the wave equation using the eigenvalues and eigenvectors of the Laplace
operator. Finite element methods for the wave equation. A comment on finite
element solutions of first order hyperbolic problems with symmetric coefficient
matrices.