Numerical Methods I
Fall 2013, Thursdays 5:10-7:00pm, WWH Room 101
Office: WWH 612
Office Hours: Tuesdays and Thursdays, 4:00 - 5:00 pm.
e-mail: widlund at cims.nyu.edu
Regular homework assignments, including programming
using MATLAB or any reasonable alternative.
It is important that you do the homework yourself, but when you get stuck,
I encourage you to consult other students
or me for help. However, when you get help,
it's important to acknowledge it in writing. Passing off other people's
work as your own is called plagiarism and is not acceptable.
Please staple everything together and arrange the problems
in the order given.
There will be an in-class final exam in WWH 101 on Thursday December 19
at 5:00pm. You will not be able to bring any books or electronic devices to the final, but you can bring two regular-sized sheets of paper covered on both sides with notes written by yourself. Here is the final
given two years ago.
Homework scores will be made available through the NYU Classes system.
You can download the assignments in pdf format from this home
page. Some of the problems involve MATLAB programming.
Homework 1 Assigned September 5,
due September 20, at 12 noon.
Homework 2 Assigned September 13,
due September 27, at 12 noon.
Homework 3 Assigned September 30,
due October 11, at 12 noon.
Homework 4 Assigned October 11,
due October 22, at 12 noon.
Homework 5 Assigned October 29,
due November 8, at 12 noon.
Homework 6 Assigned November 14,
due November 22, at 12 noon.
Homework 7 Assigned December 3,
due December 13, at 12 noon.
September 5: Singular value decomposition; cf. Lectures 4 and 5 in the
Trefethen-Bau book. The geometry of the singular value decomposition. The
full and reduced singular value decomposition. Existence. Uniqueness of
the reduced SVD if the singular values are all different. Determining the
rank of a matrix and how to approximate a matrix by a matrix of fixed lower
rank. Connection to two Hermitian eigenvalue problems.
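For illustration, a minimal MATLAB sketch of the best rank-k approximation obtained by truncating the SVD; the test matrix and the rank are made up for the example.
  % Best rank-k approximation of A in the 2-norm via the truncated SVD.
  A = randn(8,5);                     % an arbitrary test matrix
  k = 2;                              % target rank (illustrative)
  [U,S,V] = svd(A);                   % full SVD: A = U*S*V'
  Ak = U(:,1:k)*S(1:k,1:k)*V(:,1:k)'; % keep the k largest singular values
  err = norm(A - Ak);                 % equals the (k+1)st singular value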
September 12: Handout on IEEE arithmetic.
Another look at the singular value decomposition and
the information on the given matrix A contained in its SVD. Projections.
Classical and modified Gram-Schmidt seen as QR factorizations of matrices.
Householder reflections and how to use them to obtain a QR factorization
of a matrix. A few words on linear least squares problems.
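A sketch, in MATLAB, of modified Gram-Schmidt viewed as a QR factorization; each column is normalized and then immediately removed from all remaining columns, which is what distinguishes it numerically from the classical version.
  % Modified Gram-Schmidt QR of an m-by-n matrix A with m >= n.
  function [Q,R] = mgs(A)
    [~,n] = size(A);
    Q = A; R = zeros(n);
    for j = 1:n
      R(j,j) = norm(Q(:,j));
      Q(:,j) = Q(:,j)/R(j,j);                       % normalize column j
      R(j,j+1:n) = Q(:,j)'*Q(:,j+1:n);              % components along q_j
      Q(:,j+1:n) = Q(:,j+1:n) - Q(:,j)*R(j,j+1:n);  % remove them at once
    end
  end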
September 19: Guest lecture by Professor Michael Overton on IEEE floating point arithmetic.
September 26: Absolute and relative errors. How to identify functions
which are hard to compute accurately, with examples. Wilkinson's polynomial
and the sensitivity of its roots to changes in the coefficients; we cannot
reliably compute the eigenvalues of matrices via their characteristic
polynomials. Conditioning of
matrix-vector multiplication and linear systems of algebraic equations.
Linear least squares problems. The normal equations and why forming them
is a source of round-off which can be bypassed by using a QR factorization
or an SVD of the matrix. How to solve least squares problems with QR and SVD.
Least squares fits of polynomials as an alternative to polynomial interpolation.
A few words on basic arithmetic using IEEE floating point arithmetic.
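A small MATLAB experiment contrasting the three approaches to the least squares problem; the polynomial fitting matrix is made up to be moderately ill-conditioned.
  % Solving min ||A*x - b||_2 three ways; forming A'*A squares the
  % condition number, which QR and SVD avoid.
  m = 50; t = linspace(0,1,m)';
  A = t.^(0:10);                     % polynomial fitting matrix, kappa large
  b = exp(t);
  x_ne  = (A'*A)\(A'*b);             % normal equations: fast, less accurate
  [Q,R] = qr(A,0);                   % reduced QR factorization
  x_qr  = R\(Q'*b);
  x_svd = pinv(A)*b;                 % SVD-based pseudoinverse solution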
October 3: A discussion of the Fibonacci number homework problem. Forward
and backward stability. The backward stability of QR factorizations based on
Householder reflections. Solving linear systems of algebraic equations using
QR; there are three steps, each of which is backward stable. A proof that
the resulting three-step algorithm is backward stable. A proof that the
final step of that algorithm, namely the solution of a linear system with
an upper triangular matrix, is backward stable; it all depends on bounds
on the basic arithmetic
operations of any IEEE floating point system. Gaussian elimination and
the LU factorization of matrices.
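The triangular solve mentioned above, as a MATLAB sketch; this is the step whose backward stability follows directly from the bounds on the basic floating point operations.
  % Back substitution for an upper triangular system R*x = y.
  function x = backsub(R,y)
    n = length(y); x = zeros(n,1);
    for i = n:-1:1
      x(i) = (y(i) - R(i,i+1:n)*x(i+1:n))/R(i,i);   % solve row i
    end
  end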
October 10: A comparison between LU and QR factorization of matrices;
the resulting triangular matrices can be expected to maintain less sparsity
for QR than for LU. The need for pivoting to obtain LU factorizations of
matrices; exceptions are the symmetric, positive definite and diagonally
dominant matrices for which we know a priori that pivoting is not required.
Partial pivoting. An attempt to prove backward stability of Gaussian
elimination; a growth factor, measuring the relative size of the elements
of the upper triangular matrix, enters the bound. An example with a large
growth factor, which, however, is quite untypical. Cholesky's algorithm.
A data structure for sparse matrices and how the LU factorization of a
sparse matrix can be preceded by a symbolic factorization step using, for
example, the minimal degree algorithm, in which we always choose the next
variable to eliminate among those associated with the smallest number of
edges of the graph that represents the matrix that remains to be factored.
A few words about BLAS (basic linear algebra subroutines);
see the discussion in Demmel's book, listed on this course home page. A few
words on the benefits of using one loop rather than three nested loops when
implementing the LU factorization of a matrix by Gaussian elimination in
MATLAB; always try to use vector and matrix operations, as in the sketch below.
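A sketch of Gaussian elimination with partial pivoting in this one-loop, vectorized style; it is meant to illustrate the point above, not to replace MATLAB's built-in lu.
  % LU factorization with partial pivoting: A(p,:) = L*U.
  function [L,U,p] = lupp(A)
    n = size(A,1); p = (1:n)';
    for k = 1:n-1
      [~,m] = max(abs(A(k:n,k))); m = m+k-1;        % pivot row
      A([k m],:) = A([m k],:); p([k m]) = p([m k]); % swap rows
      A(k+1:n,k) = A(k+1:n,k)/A(k,k);               % multipliers
      A(k+1:n,k+1:n) = A(k+1:n,k+1:n) ...
                     - A(k+1:n,k)*A(k,k+1:n);       % rank-one update
    end
    L = tril(A,-1) + eye(n); U = triu(A);
  end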
October 17: Polynomial interpolation of continuous functions on [-1,+1].
Existence and uniqueness and Lagrange's formula. The barycentric formula, which
provides a stable and faster algorithm for evaluating the interpolating
polynomial. Chebyshev points obtained from roots of unity. Chebyshev
polynomials, which provide the first example of a system of polynomials
orthogonal in the right inner product; this is all about cosine Fourier
series in disguise. Hermite
interpolation where we match both function values and values of the first
derivatives. A proof that any polynomial of an orthogonal system of polynomials
has all its roots in the interval [a,b], over which the system is defined, and
that the roots are all simple. A derivation of Gauss quadrature rules with
n+1 points and a proof that any such rule is exact for all polynomials of
degree 2n+1; the proof uses the Hermite interpolation formula.
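A MATLAB sketch of the barycentric formula at the Chebyshev points; the simple weights (-1)^j, halved at the endpoints, are special to these points, and the function being interpolated is just an example.
  % Barycentric interpolation at Chebyshev points of the second kind.
  n = 20; j = (0:n)';
  x = cos(pi*j/n);                          % Chebyshev points in [-1,1]
  f = 1./(1 + 25*x.^2);                     % sample function values
  w = (-1).^j; w([1 end]) = w([1 end])/2;   % barycentric weights
  xx = linspace(-1,1,1001)';                % evaluation points
  num = zeros(size(xx)); den = num;
  for k = 1:n+1
    d = 1./(xx - x(k));
    num = num + w(k)*f(k)*d; den = den + w(k)*d;
  end
  p = num./den;
  [hit,loc] = ismember(xx,x);               % patch up exact node hits
  p(hit) = f(loc(hit));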
October 24: Handouts on Barycentric Interpolation, discussed on October 17,
and Adaptive Simpson quadrature. Three-term recurrence relation for orthogonal
polynomials. Gauss-Lobatto quadrature; with two of the nodes fixed at the
endpoints of the interval, the rule is exact for polynomials of degree at
most 2n-1, while Gaussian quadrature is exact up to degree 2n+1. Error
formula for
polynomial interpolation. Clenshaw-Curtis integration: interpolate at
Chebyshev points to obtain a series in the Chebyshev polynomial basis and
integrate the result. Newton-Cotes rules, based on interpolating at
equidistant points and integrating the resulting polynomial. Composite
integration: split the interval into a union of subintervals, use the
numerical quadrature rule of your choice in each subinterval and, finally,
add up the results. Simpson's rule and
adaptive Simpson. Runge's example and a study of the norm of the polynomial
interpolation operator; it grows rapidly with n if the nodes are equidistant
but behaves much better if the roots of a Chebyshev polynomial are used as
nodes. Two books that provide material considered in the October 17 and 24
lectures are now listed under recommended texts and available as reserve books
in the CIMS library.
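A composite Simpson sketch in MATLAB (the integrand is just an example); adaptive Simpson applies the same rule recursively where an error estimate says it is needed.
  % Composite Simpson's rule on [a,b] with n subintervals, n even.
  f = @(x) exp(-x.^2); a = 0; b = 1; n = 100;
  h = (b-a)/n; x = a + h*(0:n);
  w = 2*ones(1,n+1); w(2:2:n) = 4; w([1 end]) = 1;  % 1,4,2,4,...,4,1 pattern
  I = (h/3)*(w*f(x).');                             % the approximation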
October 31: The trapezoidal rule. End point corrections of the trapezoidal
rule. Euler-Maclaurin expansions and why the trapezoidal rule is very effective
for periodic smooth functions. More details on Clenshaw-Curtis integration;
use FFT and watch the decay of the coefficients of Chebyshev series when
n is doubled.
Piecewise linear continuous approximation; a very simple local procedure for
which an error bound is available immediately by considering what is happening
in a single subinterval. A related minimization problem posed in the set
of functions which have square-integrable first derivatives and also satisfy
the interpolation conditions. Cubic spline interpolation; piecewise cubics
which interpolate at a set of nodes and have two continuous derivatives.
A variational formulation in terms of the L^2-norm of the second derivative;
this means that the spline interpolant has the minimum energy among all
functions which satisfy the interpolation conditions.
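A quick MATLAB experiment with the trapezoidal rule on a smooth periodic integrand whose exact integral happens to be known; the error drops far faster than the usual O(h^2), as the Euler-Maclaurin expansion predicts.
  % Trapezoidal rule for a smooth 2*pi-periodic function.
  f = @(t) exp(cos(t));
  exact = 2*pi*besseli(0,1);              % closed form of the integral
  for n = [4 8 16 32]
    t = 2*pi*(0:n-1)/n;                   % equally spaced nodes
    T = (2*pi/n)*sum(f(t));               % all weights equal by periodicity
    fprintf('n = %2d   error = %.2e\n', n, abs(T - exact));
  end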
November 7: Review of the algebraic eigenvalue problem. Schur normal form.
A proof that a matrix can be diagonalized using a unitary matrix if and only
if the matrix is normal. Well-posedness of the algebraic eigenvalue problem;
the Bauer-Fike theorem with a proof for the case of normal matrices. The power
method and inverse iterations. Rayleigh quotient iterations. Jacobi's method
for symmetric eigenvalue problems. Reduction of
any matrix to an upper Hessenberg matrix with the same eigenvalues.
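A MATLAB sketch of the power method on a made-up symmetric matrix; inverse iteration and Rayleigh quotient iteration replace the multiplication by A with shifted solves.
  % Power method for the dominant eigenpair of a symmetric matrix A.
  n = 50; A = randn(n); A = A + A';       % a symmetric test matrix
  v = randn(n,1); v = v/norm(v);
  for k = 1:300
    w = A*v;
    v = w/norm(w);                        % new iterate
  end
  lambda = v'*A*v;                        % Rayleigh quotient estimate
  % Inverse iteration: replace w = A*v by w = (A - mu*eye(n))\v for a shift mu.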
November 14: More on Gerschgorin's theorem and how diagonal scaling
can be used to obtain refined information on the location of individual
eigenvalues. Cauchy's interlace theorem for symmetric tridiagonal matrices.
Sturm sequences and how the changes in sign of a sequence of determinants
of principal minors of a tridiagonal matrix, shifted by a multiple of the
identity matrix, will tell us how many eigenvalues exceed the shift.
Approximations of the eigenvalues can then be obtained by using a bisection
algorithm. A divide and conquer algorithm; to be revisited 11/21. The basic
idea of the QR algorithm with shift; to be continued.
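A sketch of the Sturm sequence count in MATLAB, for a symmetric tridiagonal matrix with diagonal a and off-diagonal b; it returns the number of eigenvalues below the shift, which is exactly what the bisection algorithm needs (no safeguard against a zero pivot here).
  % Number of eigenvalues of the tridiagonal matrix below sigma.
  function count = sturmcount(a,b,sigma)
    n = length(a); count = 0;
    d = a(1) - sigma;                     % pivots of T - sigma*I
    if d < 0, count = count + 1; end
    for i = 2:n
      d = (a(i) - sigma) - b(i-1)^2/d;
      if d < 0, count = count + 1; end
    end
  end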
November 21: Conclusion of the discussion of the divide and conquer
algorithm for solving symmetric tridiagonal eigenvalue problems. Simultaneous
iteration and unshifted QR; part of the long story about their relationship.
An introduction to iterative methods for large linear systems of algebraic
equations and algebraic eigenvalue problems. Krylov spaces. Arnoldi's
algorithm and its relation to Gram-Schmidt. A few words on GMRES.
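A MATLAB sketch of Arnoldi's algorithm; each new Krylov vector A*q_k is orthogonalized against the existing basis by modified Gram-Schmidt, and the coefficients fill in an upper Hessenberg matrix H.
  % m steps of Arnoldi on A starting from b: A*Q(:,1:m) = Q*H.
  function [Q,H] = arnoldi(A,b,m)
    n = length(b);
    Q = zeros(n,m+1); H = zeros(m+1,m);
    Q(:,1) = b/norm(b);
    for k = 1:m
      v = A*Q(:,k);
      for j = 1:k
        H(j,k) = Q(:,j)'*v;               % Gram-Schmidt coefficient
        v = v - H(j,k)*Q(:,j);
      end
      H(k+1,k) = norm(v);
      Q(:,k+1) = v/H(k+1,k);              % breaks down if norm(v) = 0
    end
  end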
November 28: Thanksgiving.
December 5: A few words about solving very large algebraic systems;
nonlinear problems are often solved using Newton's method, reducing the
problem to a sequence of linear systems. When the linear systems are
ill-conditioned and we need to use a Krylov space method, a preconditioner,
i.e., an approximate inverse, is often required. Analysis of Arnoldi's method;
it is related to a polynomial approximation problem. GMRES: an algorithm
related to Arnoldi's method. A disadvantage of GMRES: the work and storage
requirements increase with the iteration count. Richardson's method and
an error bound for system matrices which have a positive definite
symmetric part. Krylov spaces and an orthogonal basis for the case of
symmetric matrices. There is then a three-term recurrence relation, just
as for systems of orthogonal polynomials.
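A minimal MATLAB sketch of Richardson's method on a standard test matrix; the step length omega is made up for the example and must satisfy 0 < omega < 2/lambda_max for convergence.
  % Richardson iteration x <- x + omega*(b - A*x).
  A = gallery('poisson',10);              % symmetric positive definite test
  b = ones(size(A,1),1);
  omega = 0.2; x = zeros(size(b));
  for k = 1:5000
    r = b - A*x;                          % current residual
    x = x + omega*r;
    if norm(r) <= 1e-8*norm(b), break; end
  end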
December 12: Why many linear systems in applications have symmetric
positive definite system matrices. Examples of families of problems which
have symmetric indefinite matrices: finding minima of problems with positive
definite energy form subject to linear constraints, e.g., from Stokes' equations
for incompressible fluids and problems arising from time-harmonic wave
equations. An extra complication when using GMRES for problems with an
ill-conditioned matrix of eigenvectors; its condition number enters the
error bound. The
conjugate gradient method; it provides a best approximation in the A-norm
where A is the system matrix and we minimize over a Krylov space with the
first element being the right hand side of the system or the initial residual.
The standard theory of the conjugate gradient method as told in the T&B text.
A bound on the convergence rate derived by using Chebyshev polynomials with
a shifted argument; in fact, this provides the best possible bound if we
only have information on the smallest and largest eigenvalues. The bound is
given in terms of the square root of the condition number.
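The conjugate gradient iteration in a few lines of MATLAB, in its standard form; the test matrix is again gallery('poisson',10), made up for the example.
  % Conjugate gradients for A*x = b, A symmetric positive definite.
  A = gallery('poisson',10); b = ones(size(A,1),1);
  x = zeros(size(b)); r = b; p = r; rho = r'*r;
  for k = 1:numel(b)
    Ap = A*p;
    alpha = rho/(p'*Ap);                  % step length
    x = x + alpha*p;
    r = r - alpha*Ap;                     % new residual
    rho_new = r'*r;
    if sqrt(rho_new) <= 1e-10*norm(b), break; end
    p = r + (rho_new/rho)*p;              % new A-conjugate direction
    rho = rho_new;
  end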
December 19: In-class final.
Recommended texts:
Numerical Linear Algebra by Lloyd N. Trefethen and David Bau, III. SIAM.
Numerical Computing with IEEE Floating Point Arithmetic by Michael L. Overton. SIAM.
Applied Numerical Linear Algebra by James W. Demmel. SIAM.
Demmel's book also contains a lot of material relevant to Numerical Methods II.
Approximation Theory and Approximation Practice by Lloyd N. Trefethen. SIAM.
An Introduction to Numerical Analysis by Endre Suli and David Mayers.
Cambridge University Press.