Numerical Methods I
G22.2420, G63.2010
Fall 2003, Mondays 5:00-7:00pm, WWH Room 101
Instructor: Olof B. Widlund
Coordinates
Office: WWH 712
Telephone: 998-3110
Office Hours: Drop by any time, or send email or call for appointment
Email: widlund@cs.nyu.edu
Midterm Grade Reports
This is a new requirement in the graduate school. The reports were filed
October 24 and the grades should soon be available on Albert.
The grades, which of course are very preliminary, were based exclusively
on the point count of the first three homework sets. For your information,
17 out of 36 students had accumulated 27 or more points, out of 30,
and they received A or A- as a midterm grade. There were 9 C+ and C and
the rest of the students received B+ or B as a midterm grade.
Examination and end of semester.
The grade sheets were turned in to the departmental offices
in the afternoon of December 23. It is unclear if the grades
will be posted before January 5 when the NYU offices open again.
The grades were based to a great extent on the homework scores.
If you have not taken your exam yet or scheduled one for later, please
contact me by e-mail as soon as possible. A few old homeworks
are also available for pickup in WWH 712 at your convenience.
Running Matlab on Sun workstations.
Make sure your path includes /opt/bin/matlab.
Requirements
Regular homework assignments, including programming
using Matlab or any reasonable alternative.
It is important that you do the homework yourself, but when you get stuck,
I encourage you to consult other students or me for help. However, when
you get help, it is important to acknowledge it in writing. Passing off
other people's work as your own is plagiarism and is not acceptable.
Please staple everything together and order the problems in the
same order as given.
Homework
Lectures
- September 8. Introductory remarks: In addition to much of the material
in the Trefethen-Bau text, the course will cover IEEE Floating Point
Arithmetic, polynomial interpolation, numerical quadrature, and cubic
spline interpolation. The main topic of this first lecture was the
Singular Value
Decomposition; see chapters 4 and 5 of the required text. (Read chapters
1, 2, and 3 by yourself.)
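For illustration, the SVD of a small matrix can be computed with Matlab's svd or, as in this NumPy sketch (the example matrix is my own choice, not one from the lecture), with numpy.linalg.svd:

```python
import numpy as np

# A small example matrix whose singular values happen to be 5 and 3.
A = np.array([[3.0, 2.0, 2.0],
              [2.0, 3.0, -2.0]])

# Reduced SVD: A = U * diag(s) * V^T, with orthonormal columns in U and V
# and the singular values s sorted in decreasing order.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

A_rebuilt = U @ np.diag(s) @ Vt
print(s)                          # singular values, largest first
print(np.allclose(A, A_rebuilt))  # the factors reproduce A
```

The rank of A is the number of nonzero singular values, and s[0] equals the 2-norm of A.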
- September 15. IEEE Floating Point Systems.
Slides on Floating Point Systems (pdf file).
Second set of slides on Floating Point Systems (pdf file).
Errors in adding three or more numbers of different signs.
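The effect of the order of summation can be seen in a few lines of Python (IEEE double precision; the specific numbers below are my own illustration, not taken from the slides):

```python
# The spacing between adjacent doubles near 1e16 is 2.0, so adding 1.0
# to 1e16 is lost to rounding -- unless the small terms are added first.
big, small = 1.0e16, 1.0

left_to_right = (big + small) + small   # each +1.0 rounds away
small_first = (small + small) + big     # 2.0 survives

print(left_to_right == big)             # True: the ones were lost
print(small_first - big)                # 2.0
```

Summing the small terms before the large one preserves information that left-to-right summation destroys.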
- September 22. The effects of rounding errors when computing
the variance of a set of numbers using two alternative formulas.
How catastrophic cancellation can sometimes be avoided, e.g., when
solving a quadratic equation. A numerically unstable recurrence and
a remedy. Lectures 6 and 7 of the text: projectors and the QR factorization
of matrices.
- September 29. Gram-Schmidt, classical and modified as in Lectures 7
and 8. Householder triangularization as in Lecture 9. A few words on
least squares problems. A few words on function and script files in MATLAB.
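The contrast between classical and modified Gram-Schmidt shows up clearly on nearly dependent columns. A NumPy sketch (the Lauchli-type test matrix is my own choice of example):

```python
import numpy as np

def gram_schmidt(A, modified=False):
    """Orthonormalize the columns of A.  Classical Gram-Schmidt projects
    the original column against each q_i; the modified variant projects
    the current residual instead, which is far more stable."""
    m, n = A.shape
    Q = np.zeros((m, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            r = Q[:, i] @ (v if modified else A[:, j])
            v -= r * Q[:, i]
        Q[:, j] = v / np.linalg.norm(v)
    return Q

# Nearly rank-deficient columns: classical Gram-Schmidt loses
# orthogonality almost completely, modified Gram-Schmidt much less so.
eps = 1.0e-8
A = np.array([[1.0, 1.0, 1.0],
              [eps, 0.0, 0.0],
              [0.0, eps, 0.0],
              [0.0, 0.0, eps]])

err = lambda Q: np.linalg.norm(Q.T @ Q - np.eye(3))
print(err(gram_schmidt(A)))                  # large, order 1
print(err(gram_schmidt(A, modified=True)))   # tiny, order eps
```

The only change between the two variants is which vector is projected, yet the loss of orthogonality differs by many orders of magnitude.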
- October 6. Solving linear least squares problems using various matrix
factorizations (as in Lecture 11 of the textbook). Interpolation using
polynomials. Divided differences a la Newton.
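Newton's divided differences can be sketched in a few lines of Python (the interpolation points below are an illustrative choice, not the lecture's example):

```python
def divided_differences(xs, ys):
    """Coefficients c[k] = f[x0, ..., xk] of the Newton form of the
    interpolating polynomial, computed in place column by column."""
    c = list(ys)
    n = len(xs)
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - k])
    return c

def newton_eval(xs, c, t):
    """Evaluate the Newton form by nested multiplication (Horner-like)."""
    p = c[-1]
    for k in range(len(c) - 2, -1, -1):
        p = p * (t - xs[k]) + c[k]
    return p

# Interpolate f(x) = x^3 at four points; the cubic interpolant is then
# exact, so evaluating at 2.5 must give 2.5^3 = 15.625.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [x ** 3 for x in xs]
c = divided_differences(xs, ys)
print(c)                          # the top row of the table
print(newton_eval(xs, c, 2.5))    # 15.625
```

Adding one more data point only appends one coefficient; the earlier ones are unchanged, which is the main attraction of the Newton form.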
- October 13. An example of absurdly poor results when interpolating
data with high order polynomials. Divided difference tables a la Newton.
Hermite cubic interpolation derived using a divided difference table.
Cubic spline interpolation: derivation and the tridiagonal structure
of the resulting linear system of algebraic equations. Numerical
quadrature: the rectangular rule, the midpoint rule, the trapezoidal
rule, and Simpson's rule. Error bounds for all these methods.
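The error bounds can be checked experimentally: the composite trapezoidal rule has error O(h^2) and Simpson's rule O(h^4), so halving h should cut the errors by about 4x and 16x respectively. A Python sketch (the test integral of e^x over [0,1] is my own choice):

```python
import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def simpson(f, a, b, n):          # n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return h * s / 3.0

exact = math.e - 1.0              # integral of e^x over [0, 1]
for n in (8, 16):
    et = abs(trapezoid(math.exp, 0.0, 1.0, n) - exact)
    es = abs(simpson(math.exp, 0.0, 1.0, n) - exact)
    print(n, et, es)              # halving h: et drops ~4x, es ~16x
```

The observed ratios match the exponents in the error bounds, which is a cheap way to check both the code and the theory.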
- October 20. Adaptive Simpson quadrature as in the final pages
of the printed handout of October 13. Conditioning of various problems
such as finding roots of polynomials and its impact on the choice of
algorithms for the algebraic eigenvalue problem. Conditioning of
matrix-vector multiplication and the solution of linear systems of
algebraic equations. Accuracy, stability, and backward stability as
in Lecture 14 of the text.
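Adaptive Simpson quadrature can be sketched as below. This is a common textbook variant (interval halving with the standard factor-15 error estimate), not necessarily the exact scheme of the October 13 handout:

```python
import math

def adaptive_simpson(f, a, b, tol):
    """Recursive adaptive Simpson quadrature: split each interval until
    the two-panel estimate agrees with the one-panel estimate to within
    the local tolerance, then apply the usual Richardson correction."""
    def simpson(fa, fm, fb, h):
        return h * (fa + 4.0 * fm + fb) / 6.0

    def recurse(a, b, fa, fm, fb, whole, tol):
        m = 0.5 * (a + b)
        lm, rm = 0.5 * (a + m), 0.5 * (m + b)
        flm, frm = f(lm), f(rm)
        left = simpson(fa, flm, fm, m - a)
        right = simpson(fm, frm, fb, b - m)
        if abs(left + right - whole) <= 15.0 * tol:
            return left + right + (left + right - whole) / 15.0
        return (recurse(a, m, fa, flm, fm, left, 0.5 * tol) +
                recurse(m, b, fm, frm, fb, right, 0.5 * tol))

    fa, fb, fm = f(a), f(b), f(0.5 * (a + b))
    whole = simpson(fa, fm, fb, b - a)
    return recurse(a, b, fa, fm, fb, whole, tol)

print(adaptive_simpson(math.sin, 0.0, math.pi, 1e-8))   # ~2.0
```

The recursion concentrates function evaluations where the integrand misbehaves, instead of refining uniformly.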
- October 27. Lectures 16, 17, and part of 18: Stability of Householder
triangularization, backward stability of back substitution, and the
beginning of the story on the conditioning of least squares problems.
- November 3. Additional remarks on the conditioning and accuracy
of least squares problems. Gaussian elimination, the need to pivot, partial
and complete pivoting. Possible rapid growth of elements under partial
pivoting. Loss of sparsity and elimination graphs.
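The rapid element growth under partial pivoting can be demonstrated on Wilkinson's classical example. A NumPy sketch (the factorization routine below is a plain textbook LU, written for illustration):

```python
import numpy as np

def lu_partial_pivot(A):
    """LU factorization with partial pivoting: returns P, L, U with
    P A = L U, where the pivot row at each step maximizes |A[p, k]|."""
    A = A.astype(float).copy()
    n = A.shape[0]
    piv = np.arange(n)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))
        A[[k, p]] = A[[p, k]]
        piv[[k, p]] = piv[[p, k]]
        A[k+1:, k] /= A[k, k]
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])
    return np.eye(n)[piv], np.tril(A, -1) + np.eye(n), np.triu(A)

# Wilkinson's worst case for partial pivoting: 1 on the diagonal,
# -1 below it, 1 in the last column.  No rows are ever swapped, and the
# last column of U doubles at every step, so max|U| = 2^(n-1).
n = 10
A = np.eye(n) - np.tril(np.ones((n, n)), -1)
A[:, -1] = 1.0
P, L, U = lu_partial_pivot(A)
print(np.abs(U).max())            # 2^9 = 512
print(np.allclose(P @ A, L @ U))
```

Such exponential growth is essentially never seen on matrices arising in practice, which is why partial pivoting remains the standard choice.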
- November 10. Backward error analysis for Gaussian elimination; the bounds
depend on the growth of the elements of the triangular factors. Diagonally
dominant matrices; no pivoting necessary. Cholesky's algorithm and properties
of Hermitian, positive definite matrices. Stability of Cholesky's algorithm.
Using Courant-Fischer's theorem to show that the Schur complement of a
symmetric, positive definite matrix is better conditioned than the matrix
from which it originated. Algebraic eigenvalue problems; iterative algorithms
are necessary.
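Cholesky's algorithm is short enough to write out in full; a NumPy sketch (the tridiagonal test matrix is my own choice of SPD example):

```python
import numpy as np

def cholesky(A):
    """Cholesky factorization A = L L^T of a symmetric positive definite
    matrix.  Positive definiteness is exactly what keeps the arguments
    of the square roots positive, so no pivoting is needed."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for j in range(n):
        d = A[j, j] - L[j, :j] @ L[j, :j]
        L[j, j] = np.sqrt(d)
        L[j+1:, j] = (A[j+1:, j] - L[j+1:, :j] @ L[j, :j]) / L[j, j]
    return L

# A small SPD example: the 4x4 second-difference (tridiagonal) matrix.
n = 4
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L = cholesky(A)
print(np.allclose(L @ L.T, A))
print(np.allclose(L, np.linalg.cholesky(A)))
```

The same triangular factor comes out of the library routine, at half the work and storage of a general LU factorization.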
- November 17. Review of the algebraic eigenvalue problem with special
emphasis on the Schur normal form. Perturbation theory for the algebraic
eigenvalue problem; eigenvalues are continuous functions of the matrix
entries but not necessarily differentiable; in the symmetric case the
problem is very well behaved.
Unitary similarity transformations to upper Hessenberg form. Rayleigh
quotients, inverse iterations, and Rayleigh quotient iterations.
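Rayleigh quotient iteration takes only a few lines; a NumPy sketch (the test matrix and starting vector are illustrative choices of mine):

```python
import numpy as np

def rayleigh_quotient_iteration(A, v, iters=5):
    """Rayleigh quotient iteration for a symmetric matrix: shift by the
    Rayleigh quotient and take one inverse-iteration step.  Convergence
    to an eigenpair is locally cubic."""
    v = v / np.linalg.norm(v)
    for _ in range(iters):
        mu = v @ A @ v                     # Rayleigh quotient
        if np.linalg.norm(A @ v - mu * v) < 1e-12:
            break                          # converged; stop before the
                                           # shifted system turns singular
        w = np.linalg.solve(A - mu * np.eye(len(v)), v)
        v = w / np.linalg.norm(w)
    return v @ A @ v, v

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
lam, v = rayleigh_quotient_iteration(A, np.array([1.0, 1.0, 1.0]))
print(lam)                                 # an eigenvalue of A
print(np.linalg.norm(A @ v - lam * v))     # tiny residual
```

The nearly singular shifted system is a feature, not a bug: the solve greatly amplifies the eigenvector component belonging to the eigenvalue closest to the shift.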
- November 24. The unshifted QR algorithm and its connections to
simultaneous and inverse iterations. Conditions on the eigenvalues to
guarantee convergence of these algorithms. Using simultaneous iterations
for large eigenvalue problems. The QR algorithm with shifts. Jacobi's
method. The bisection algorithm and how to compute the number of eigenvalues
of a tridiagonal symmetric matrix in a given interval of the real axis.
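The eigenvalue count behind the bisection algorithm can be sketched via the Sturm sequence; a pure-Python illustration (the second-difference test matrix is my own choice, and the code assumes the shift x avoids exact eigenvalues of leading submatrices, where the recurrence would divide by zero):

```python
import math

def count_eigenvalues_below(d, e, x):
    """Number of eigenvalues of the symmetric tridiagonal matrix with
    diagonal d and off-diagonal e that are less than x.  The recurrence
    computes the pivots of T - x*I; by Sylvester's law of inertia the
    number of negative pivots equals the number of eigenvalues below x."""
    count, q = 0, 1.0
    for i in range(len(d)):
        q = (d[i] - x) - (e[i - 1] ** 2 / q if i > 0 else 0.0)
        if q < 0.0:
            count += 1
    return count

# The n x n second-difference matrix (2 on the diagonal, -1 off it) has
# the known eigenvalues 2 - 2*cos(k*pi/(n+1)), k = 1..n.
n = 8
d = [2.0] * n
e = [-1.0] * (n - 1)
exact = [2.0 - 2.0 * math.cos(k * math.pi / (n + 1)) for k in range(1, n + 1)]

# Count the eigenvalues in [0.9, 3.1); bisection on this count can then
# locate each one to any desired accuracy.
inside = count_eigenvalues_below(d, e, 3.1) - count_eigenvalues_below(d, e, 0.9)
print(inside, sum(1 for lam in exact if 0.9 <= lam < 3.1))
```

Each count costs only O(n) work, so individual eigenvalues in any interval can be isolated cheaply without touching the others.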
- December 1. Divide and conquer algorithms for symmetric tridiagonal
matrices and the secular equation. Computing the SVD. Krylov spaces, Lanczos
vectors and a derivation of the conjugate gradient method for positive
definite symmetric linear systems of equations.
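The conjugate gradient method itself is only a dozen lines; a NumPy sketch (the second-difference test system is an illustrative choice of mine):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-12):
    """Conjugate gradient method for symmetric positive definite A.
    The k-th iterate minimizes the A-norm of the error over the Krylov
    space span{b, Ab, ..., A^(k-1) b}."""
    x = np.zeros_like(b)
    r = b - A @ x                  # residual
    p = r.copy()                   # search direction
    rs = r @ r
    for _ in range(len(b)):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # A-conjugate to all earlier directions
        rs = rs_new
    return x

n = 10
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD tridiagonal
b = np.ones(n)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))   # tiny residual
```

In exact arithmetic the method terminates in at most n steps; in practice it is used as an iteration, stopped when the residual is small enough.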
- December 8. No class. Instructor in Korea at a conference.
- December 15. (Make up final class.) Orthogonal
polynomials. Roots of orthogonal polynomials. Best approximation in
the maximum norm; characteristics of the error. Chebyshev polynomials
as the solution of a particular best approximation problem. An error
bound for the conjugate gradient method expressed in terms of the
condition number of the matrix.
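The Chebyshev polynomials behind that error bound are easy to generate and check numerically; a pure-Python sketch (the sample points are my own choice):

```python
import math

def chebyshev(n, x):
    """T_n(x) via the three-term recurrence T_{k+1} = 2x T_k - T_{k-1}."""
    t0, t1 = 1.0, x
    if n == 0:
        return t0
    for _ in range(n - 1):
        t0, t1 = t1, 2.0 * x * t1 - t0
    return t1

# On [-1, 1], T_n(cos(theta)) = cos(n*theta), so |T_n| <= 1 there.
# This equioscillation is what makes 2^(1-n) T_n the monic polynomial of
# degree n with the smallest maximum on [-1, 1], and it is the source of
# the condition-number bound for the conjugate gradient method.
theta = 0.3
print(chebyshev(5, math.cos(theta)), math.cos(5 * theta))   # equal values
```

Outside [-1, 1] the same recurrence grows rapidly, which is exactly the behavior the conjugate gradient bound exploits.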
Required Text
Numerical Linear Algebra by Lloyd N. Trefethen and David Bau, III. SIAM.
Recommended Texts
Numerical Computing with IEEE Floating Point Arithmetic by Michael L. Overton,
SIAM.
You can buy Overton's book at a special student price of $20.
Applied Numerical Linear Algebra by James W. Demmel. SIAM.
Demmel's book also contains a lot of material relevant to Numerical Methods II.
Try out
A Free Matlab Online Tutorial or look for others by a web
search.
An outdated but still very useful Introductory Matlab Primer (3rd edition,
in pdf format).
Sun Account
If you don't have a Sun workstation account, but you want one,
send me email.
Don't Hesitate to Ask for Help
If you have questions, send me email, give me a call, or drop by my office.
Don't wait until it's too late!