Spring 2004, Tuesdays and Thursdays, 11:00 am - 12:15 pm. WWH 101
Office: WWH 712
Office Hours: Tuesdays 12:30 - 1:30 pm or drop by any time,
or send email or call for an appointment.
A Free Matlab Online Tutorial, or look for others with a web search.
An outdated but still very useful Introductory Matlab Primer (3rd edition,
in pdf format).
There will be regular homework assignments, including programming
using Matlab or any reasonable alternative.
When turning in programming assignments, please provide a listing of
your program and output from runs with actual data that shows that
your program works as intended.
It is important that you do the homework yourself, but when you get stuck,
I encourage you to consult other students or me to get help. However,
when you get help, it is important to acknowledge it in writing. Passing
off other people's work as your own is called plagiarism and is not
acceptable.
Please staple everything together and arrange the problems in the
order assigned. The best way of turning in homework is to give it to me
personally, in class or in my office. If I am not in my office, you can
slide it under my door. If you put your homework in my mailbox, it is at
your own risk.
Homework 1 Assigned February 2, due, with
full credit, February 13 at midnight.
Note on Exercise 1.4
Handouts; if you miss any, send an e-mail request.
Homework 2 Assigned February 18, due with
full credit, March 2 at 11:00 am.
Homework 3 Assigned March 9, due with
full credit, March 30 at 11:00 am.
Homework 4 Assigned April 3, due with
full credit, April 15 at 11:00 am.
Homework 5 Assigned April 7, due with
full credit, April 27 at 11:00 am.
Chapter 6: Linear systems: pp. 194-197 and pp. 208-211.
The Error of Interpolating Polynomials: pp. 50-53.
Piecewise-Polynomial Approximation: pp. 284-291.
Differentiation and Integration: pp. 294-309 and pp. 328-331.
The solution of nonlinear equations: pp. 72-81, pp. 88-93, and pp. 100-109.
Vector and matrix norms: pp. 170-173.
January 20. General comments about numerical computing and
its role in some other areas of computer science such as graphics,
geometric modeling, etc. Gaussian elimination. Partial pivoting
and how to tell if a matrix is nonsingular. An example with
three-decimal-digit arithmetic which leads to failure without pivoting.
January 22. Gaussian elimination in terms of elementary row
operations and factorization of the coefficient matrix as a product
of a unit lower triangular matrix and an upper triangular matrix.
Permutation of the rows in between the basic steps of the algorithm.
Introduction to IEEE Floating Point Systems.
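For the January 22 material, here is a minimal illustration of this
factorization with row exchanges, using Matlab's built-in lu (the small
matrix and right hand side are my own example, not from the lecture):

    % LU factorization with partial pivoting: P*A = L*U, with L unit
    % lower triangular and U upper triangular. The small (1,1) entry
    % of this example forces a row exchange.
    A = [1e-4 1; 1 1];
    [L, U, P] = lu(A);           % Matlab's built-in factorization
    b = [1; 2];
    x = U \ (L \ (P*b));         % two triangular solves give A*x = b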
January 27. More about IEEE Floating Point. Symmetric positive
definite matrices. Cholesky's algorithm.
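A minimal sketch of Cholesky's algorithm as one might write it in
Matlab (the file name mychol.m and the loop arrangement are my own
choices, not the textbook's):

    function L = mychol(A)
    % Cholesky factorization A = L*L' of a symmetric positive
    % definite matrix; the attempt fails if A is not positive definite.
    n = size(A,1);
    L = zeros(n);
    for j = 1:n
        % diagonal entry: remove contributions of earlier columns
        d = A(j,j) - L(j,1:j-1)*L(j,1:j-1)';
        if d <= 0
            error('A is not positive definite');
        end
        L(j,j) = sqrt(d);
        for i = j+1:n       % entries below the diagonal in column j
            L(i,j) = (A(i,j) - L(i,1:j-1)*L(j,1:j-1)') / L(j,j);
        end
    end

As discussed later in the course, running this to completion is itself
a practical test of positive definiteness.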
January 29. Demonstration of MATLAB. Computing Fibonacci numbers;
learning something about floating point computations from the success
and failure in recovering the Fibonacci sequence by running the
recursion backwards (a sketch of this demonstration appears below).
Three vector norms and their associated matrix norms.
If you did not get the handout about norms, send an e-mail request.
Slides on Floating Point Systems (pdf file).
Second set of slides on Floating Point Systems (pdf file).
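A sketch of the backward-recursion demonstration from January 29 (the
choice of n and the printing are mine): the forward values are exact
integers in double precision until they pass 2^53, after which rounding
errors enter and are amplified on the way back.

    n = 100;                 % try n = 30 (exact) versus n = 100 (fails)
    f = zeros(1,n);
    f(1) = 1; f(2) = 1;
    for k = 3:n              % forward recursion
        f(k) = f(k-1) + f(k-2);
    end
    a = f(n); b = f(n-1);
    for k = n-2:-1:1         % backward recursion: f(k) = f(k+2) - f(k+1)
        c = a - b;
        a = b; b = c;
    end
    fprintf('recovered f(1) = %g (should be 1)\n', b);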
February 3, 5, 10, and 12. Exercise 1.3 which shows that diagonal
dominance is inherited by the matrix that remains to be factored after
one step of Gaussian elimination. The same result for Cholesky's algorithm
for symmetric, positive definite matrices; cf. the proof of Theorem 1.10.
The sensitivity of the solution of
linear systems of equations. Bounds on the relative error in terms of
the condition number of the matrix and the relative perturbation of
the right hand side and the matrix. Geometric meaning of ill-conditioning
of linear systems; cf. Figures 2.2 and 2.3. Avoiding subtraction of nearly
equal numbers: quadratic equation solving and the use of power series.
The use of multivariable calculus and vector and matrix norms
to derive formula (2.1).
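A small illustration of avoiding the cancellation in quadratic equation
solving (the coefficients are my own example): compute the
larger-magnitude root from the standard formula, where no subtraction
of nearly equal numbers occurs, and the other root from the product of
the roots.

    a = 1; b = 1e8; c = 1;
    x1 = -(b + sign(b)*sqrt(b^2 - 4*a*c)) / (2*a);  % no cancellation
    x2 = c / (a*x1);                                % since x1*x2 = c/a
    x2naive = (-b + sqrt(b^2 - 4*a*c)) / (2*a);     % severe cancellation
    fprintf('stable: %.16e   naive: %.16e\n', x2, x2naive)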
Introduction to forward and backward error
analysis. Backward stability of triangular linear system solving. The
more modest results on LU factorizations of matrices.
Introduction to linear least squares
problems. The normal equations. Risk of loss of information if the
normal equations are formed. The alternative given by the QR
factorization of matrices. Solving least squares problems once the QR
factorization is available.
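A minimal sketch of the QR approach in Matlab (the random test problem
is my own choice):

    m = 50; n = 4;
    A = randn(m, n); b = randn(m, 1);  % some overdetermined system
    [Q, R] = qr(A, 0);                 % economy-size QR; Q is m by n
    x = R \ (Q'*b);                    % triangular solve of R*x = Q'*b
    xne = (A'*A) \ (A'*b);             % normal equations, for comparison;
                                       % note cond(A'*A) = cond(A)^2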
February 17. Givens and Householder transformations to compute QR
factorizations of matrices.
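A minimal sketch of the Householder approach (my own arrangement;
Matlab's qr does this robustly):

    function R = house_qr(A)
    % Reduce A to upper triangular R by Householder reflections
    % I - beta*v*v'; accumulating the reflections would give Q.
    [m, n] = size(A);
    R = A;
    for k = 1:min(m-1, n)
        x = R(k:m, k);
        nx = norm(x);
        if nx == 0, continue; end
        s = sign(x(1)); if s == 0, s = 1; end
        v = x;
        v(1) = x(1) + s*nx;       % sign chosen to avoid cancellation
        beta = 2 / (v'*v);
        R(k:m, k:n) = R(k:m, k:n) - beta*v*(v'*R(k:m, k:n));
    end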
February 19. Sections 3.1.3 and 3.1.4: What can be said about the
condition of least squares problems? What do these bounds mean when we select
an algorithm? The problem with using the normal equations; we will have to
give up accuracy in many cases.
Setting up least squares problems for fitting polynomials
of degree higher than one; see Example 3.1 for the case of linear functions.
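A minimal sketch of the setup for a higher-degree fit (the data and
degree are my own choices): each column of the matrix holds a power of
the abscissas.

    d = 3;                           % degree of the fitting polynomial
    t = (0:0.1:1)'; y = exp(t);      % sample data
    A = ones(length(t), d+1);
    for j = 1:d
        A(:, j+1) = A(:, j).*t;      % columns 1, t, t^2, ..., t^d
    end
    coef = A \ y;                    % backslash solves the least squares
                                     % problem via QR, not the normal
                                     % equations, when A is tall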
February 24. Comments on some aspects of Homework set 2. Topics from
chapter 7: Unique solvability of the classical polynomial interpolation
problem. Vandermonde matrices. Lagrange polynomials to provide the solution
of the problem. Lemma 7.4 on the recursive computation of the interpolating
polynomials; Neville's scheme. Lemma 7.11 in the special case when the
interpolation points are distinct.
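A minimal sketch of Neville's scheme in Matlab (function and variable
names are mine):

    function p = neville(x, y, t)
    % Evaluate at t the polynomial interpolating (x(i), y(i)) by
    % Neville's scheme of repeated linear interpolation.
    n = length(x);
    P = y(:);                        % degree-0 interpolants
    for k = 1:n-1
        for i = 1:n-k
            % P(i) becomes the interpolant on the nodes x(i..i+k)
            P(i) = ((t - x(i))*P(i+1) - (t - x(i+k))*P(i)) / (x(i+k) - x(i));
        end
    end
    p = P(1);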
February 26. Verification that Newton's scheme to compute interpolating
polynomials is correct. Review of the recursive definition of divided
differences. Comments on homework. Handout and discussion of the use of
matrix operations in Matlab to eliminate ``for'' loops. Solving
triangular systems by rows and by columns.
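A minimal sketch of the column-oriented forward substitution (my own
version, to illustrate eliminating an inner loop with one vector
operation):

    function x = forward_subst(L, b)
    % Solve L*x = b, L lower triangular, column by column: once x(j)
    % is known, a single vector operation subtracts its contribution
    % from all remaining equations.
    n = length(b);
    x = b(:);
    for j = 1:n
        x(j) = x(j) / L(j,j);
        x(j+1:n) = x(j+1:n) - L(j+1:n, j)*x(j);
    end

The row-oriented variant instead computes
x(i) = (b(i) - L(i,1:i-1)*x(1:i-1)) / L(i,i) for i = 1, ..., n.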
March 2. Hermite cubic interpolation. Existence by an explicit
construction. Uniqueness by a linear algebra argument. Construction of
Hermite interpolating polynomials by Newton's method. Review of Taylor's
formula. Error bounds for the Lagrange interpolation as in a hand-out.
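A sketch of the Newton-style construction from the March 2 lecture,
using divided differences on the doubled nodes x0, x0, x1, x1 (the
sample function and variable names are mine):

    x0 = 0; x1 = 1;
    y0 = 0; d0 = 1;              % f(x0), f'(x0): here f = sin
    y1 = sin(1); d1 = cos(1);    % f(x1), f'(x1)
    h = x1 - x0;
    s  = (y1 - y0)/h;            % divided difference f[x0,x1]
    c2 = (s - d0)/h;             % f[x0,x0,x1]
    c3 = ((d1 - s)/h - c2)/h;    % f[x0,x0,x1,x1]
    p = @(x) y0 + d0*(x-x0) + c2*(x-x0).^2 + c3*(x-x0).^2.*(x-x1);
    % p matches f and f' at both endpoints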
March 4. Midterm exam.
March 9. Discussion of homework problem 2.6 and of four of the five
midterm questions.
March 11. Discussion of the remaining midterm question. Interpolation
by polynomials, piecewise cubic Hermite polynomials, and cubic splines.
(Handout about cubic Hermite polynomials and cubic splines.)
March 23. The dimension of the space of cubic splines. Derivation
of the tridiagonal linear system of equations which determines the
value of the first derivative of the spline interpolant at the interior
nodes. Different ways of selecting the additional two conditions. How
to make certain tridiagonal matrices symmetric. How to compute approximations
of first derivatives by using difference quotients and the effect of round off.
(Handout about numerical differentiation and numerical integration
including adaptive numerical quadrature.)
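A minimal sketch of the round-off effect in difference quotients (the
test function and point are my own): the error first decreases with h,
then round off, of size roughly eps/h, takes over.

    f = @(x) sin(x); x = 1.0; fp = cos(x);  % exact derivative
    for h = 10.^(-(1:15))
        d1 = (f(x+h) - f(x)) / h;           % one-sided quotient
        d2 = (f(x+h) - f(x-h)) / (2*h);     % centered quotient
        fprintf('h=%8.1e  err1=%9.2e  err2=%9.2e\n', ...
                h, abs(d1-fp), abs(d2-fp));
    end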
March 25. Experiments with two formulas used for numerical differentiation.
Deriving numerical quadrature formulas by using interpolating polynomials
of different degrees. Error bounds for numerical quadrature formulas using
the standard error bound for polynomial interpolation.
March 30. Simpson's rule and the trapezoidal rule with end point
corrections. How to determine the coefficients of a quadrature formula
from the requirements that it should integrate a certain class of polynomials
exactly. Composite integration. Adaptive Simpson quadrature.
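A minimal sketch of adaptive Simpson quadrature along these lines (the
function name and the error-estimation constant follow the standard
derivation, not necessarily the handout):

    function I = adaptsimp(f, a, b, tol)
    % Compare the one-panel Simpson value with the two-panel value;
    % accept if they agree to the tolerance, otherwise bisect.
    c = (a + b)/2;
    S1 = (b - a)/6  * (f(a) + 4*f(c) + f(b));
    S2 = (b - a)/12 * (f(a) + 4*f((a+c)/2) + 2*f(c) + 4*f((c+b)/2) + f(b));
    if abs(S2 - S1) < 15*tol        % |I - S2| is about |S2 - S1|/15
        I = S2 + (S2 - S1)/15;      % Richardson-type correction
    else
        I = adaptsimp(f, a, c, tol/2) + adaptsimp(f, c, b, tol/2);
    end

For example, adaptsimp(@(x) exp(-x.^2), 0, 1, 1e-8) integrates
exp(-x^2) over [0,1].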
April 1. Section 7.1.4 of the text book. Chebyshev polynomials and
their role in selecting good interpolation points for polynomial interpolation;
cf. Theorem 7.16. Theorem 7.19 which shows that the roots of the Chebyshev
polynomials provide the best selection. A few words on trigonometric
interpolation; cf. Section 7.2 of the text book.
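A small numerical comparison of the two choices of interpolation points
(my own experiment): the node polynomial w(x) = (x - x_1)...(x - x_n)
that enters the interpolation error bound stays far smaller on the
Chebyshev points.

    n = 12;
    xc = cos((2*(1:n) - 1)*pi/(2*n));  % roots of T_n in [-1,1]
    xe = linspace(-1, 1, n);           % equidistant points, for contrast
    t = linspace(-1, 1, 1001);
    wc = ones(size(t)); we = ones(size(t));
    for k = 1:n
        wc = wc .* (t - xc(k));        % node polynomial, Chebyshev
        we = we .* (t - xe(k));        % node polynomial, equidistant
    end
    fprintf('max|w|: Chebyshev %.2e, equidistant %.2e\n', ...
            max(abs(wc)), max(abs(we)));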
April 6. Interpolation with trigonometric polynomials. Unique solvability
follows from theory for polynomial interpolation or, alternatively from the
invertibility of the Vandermonde matrix. Fast computation of the trigonometric
interpolant on equidistant points by the fast Fourier Transform. Fast
computation of the coefficients by the same algorithm. Reference is pp. 197-203
of the text book.
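A minimal sketch of the coefficient computation with Matlab's built-in
fft (the sample periodic function is my own choice):

    % With data y_j = f(2*pi*j/N), j = 0..N-1, fft(y)/N gives the
    % discrete Fourier coefficients of the trigonometric interpolant.
    N = 16;
    xj = 2*pi*(0:N-1)'/N;        % equidistant points on [0, 2*pi)
    y = exp(sin(xj));            % sample periodic function
    c = fft(y)/N;                % c(k+1) = (1/N)*sum_j y_j*exp(-2i*pi*j*k/N)
    % the interpolant is sum_k c_k exp(i*k*x), k interpreted mod N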
April 8. Solving one nonlinear equation by bisection, Newton's method,
the secant method, and regula falsi. Newton's method for systems of
nonlinear equations.
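A minimal sketch of Newton's method for one equation (the example
equation and starting guess are mine); the error roughly squares at
each step near a simple root.

    f = @(x) x^2 - 2; fp = @(x) 2*x;   % solve x^2 = 2
    x = 1;                             % starting guess
    for k = 1:6
        x = x - f(x)/fp(x);            % Newton step
        fprintf('k=%d  x=%.16f  err=%.2e\n', k, x, abs(x - sqrt(2)));
    end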
April 13. Second midterm.
April 15. More about the convergence of iterative methods for nonlinear
equations. Quadratic convergence of Newton's method and the almost quadratic
convergence of the secant method. Problems 4 and 5 of the second midterm.
April 20. Problems 1, 2, and 3 of the second midterm. Solving large
systems of linear algebraic equations with the Jacobi and Gauss-Seidel algorithms.
The use of eigenvalues and eigenvectors to estimate the rate of convergence
of such iterative methods. The convergence of Jacobi's method for diagonally
dominant matrices. See Section 8.1 of the text book.
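A minimal sketch of Jacobi's method in Matlab (function and variable
names are mine):

    function x = jacobi(A, b, x, nsteps)
    % Jacobi iteration for A*x = b: with D = diag(diag(A)),
    % x <- D \ (b - (A - D)*x). Converges, for example, when A is
    % strictly diagonally dominant.
    d = diag(A);
    for k = 1:nsteps
        x = (b - A*x + d.*x) ./ d;     % note (A - D)*x = A*x - d.*x
    end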
April 22. Convergence of Gauss-Seidel's method for positive definite,
symmetric matrices. Approximation of Laplace's equations by finite differences.
Solvability of the resulting linear system.
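A minimal sketch of Gauss-Seidel, for contrast with the Jacobi sketch
above (again with my own names): each new component is used as soon as
it is available.

    function x = gauss_seidel(A, b, x, nsteps)
    n = length(b);
    for k = 1:nsteps
        for i = 1:n
            % A(i,:)*x includes A(i,i)*x(i), so add that term back
            x(i) = (b(i) - A(i,:)*x + A(i,i)*x(i)) / A(i,i);
        end
    end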
April 27. Review of positive definite symmetric matrices: a symmetric
matrix is positive definite if Cholesky succeeds. Eigenvalues are real and positive.
Basic result on Gauss elimination. Why is Cholesky stable? Householder transformations.
Orthogonal matrices and their use in least squares problems. The use of
Chebyshev polynomials to accelerate the convergence of a basic
iterative method for large linear systems.
April 29. The Chebyshev and conjugate gradient methods for solving
large linear systems of algebraic equations with positive definite symmetric
coefficient matrices. Bounds in terms of the condition number of the matrices;
cf. sections 8.2 and 8.3 in the textbook. Practical aspects: the matrix needs
only be available in terms of a matrix-vector multiplication; there are
short recurrence relations which make it possible to save just a few vectors
throughout the iteration.
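A minimal sketch of the conjugate gradient method along these lines
(my own arrangement, not the textbook's presentation); note that A
enters only through the product A*p, and only a few vectors are kept.

    function x = cg(A, b, tol, maxit)
    x = zeros(size(b));
    r = b;                        % residual b - A*x for x = 0
    p = r;                        % first search direction
    rho = r'*r;
    for k = 1:maxit
        Ap = A*p;
        alpha = rho / (p'*Ap);    % step length along p
        x = x + alpha*p;
        r = r - alpha*Ap;
        rho_new = r'*r;
        if sqrt(rho_new) < tol, break; end
        p = r + (rho_new/rho)*p;  % new A-conjugate search direction
        rho = rho_new;
    end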
Numerical Analysis in Modern Scientific Computing: An Introduction,
by Peter Deuflhard and Andreas Hohmann, Second Edition, Springer, 2003.
Class attendance is required.
Attendance will be taken occasionally.
If you must miss class, you must email me in advance with an
explanation.
The first midterm was given on Thursday March 4.
Old midterm from 2001 (pdf file).
March 4, 2004 midterm (pdf file).
A second midterm was given on April 13.
April 13, 2004 midterm (pdf file).
The midterm grade will be the
maximum score of the two exams. The second midterm covered topics
beginning with polynomial interpolation (first covered on February 24)
up to and including the fast Fourier transform (covered on April 6).
Midterm grades were due in the department on Friday
Spring break this year was March 15-20.
March 29 was the last day for undergraduates to withdraw with a ``W''.
All students remaining will receive a grade.
The last class will be on April 29.
The final will be on
Thursday May 6, 10:00-11:50 am in WWH 101. It will cover topics from
the whole semester.
Closed book, but you may bring one sheet of paper with your own notes.
Graded homework can be picked up in my office
now through 6:00 pm on Saturday May 1. Thereafter, look for instructions
on my office door.