Class meetings: Tues-Thurs, 3:30 - 4:45pm, in Warren Weaver Hall (CIWW) 102.
Last day of class: Thursday, May 3, 2018.
Final examination: Thursday, May 10, 2018, 4:00-5:50pm, in Warren Weaver 317. **Note the different room for the final exam!**
This class will cover several topics, including: one-dimensional nonlinear equations; understanding and dealing with sources of error; linear equations and linear least-squares; data fitting; splines; numerical integration; ordinary differential equations; and elementary eigenvalue problems. As much as possible, numerical methods will be presented in the context of real-world applications.
The final grade will be computed as a weighted average of three elements (Homework, Midterm, Final Project/Final Examination), with weights of 40%, 35%, and 25% assigned in whichever order maximizes each student's grade.
Students without this background should check with the instructor for permission to take the class.
Professor Gilbert Strang's famous linear algebra courses at MIT can be found on the MIT OpenCourseWare website or on YouTube, with search terms "Strang linear algebra MIT".
Matlab tutorials are available online from several sites; for example, the MathWorks website offers an array of tutorial and other educational resources for students.
HW2, due February 23, 2018 (24-hour extension of original deadline).
HW3, due March 4, 2018.
HW4, due April 5, 2018.
HW5, due April 17, 2018.
due May 1, 2018.
A brief LaTeX document containing a template for projects is available.
|Introduction. Overview of numerical computing. Nonlinear equations in one dimension. Bisection. Chapter 3 of textbook (Ascher and Greif).|
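Bisection is easy to state and to code. Here is a minimal Python sketch (the course's own examples are in Matlab), assuming f is continuous and changes sign on [a, b]:

```python
def bisect(f, a, b, tol=1e-10):
    """Find a root of f in [a, b]; f(a) and f(b) must have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f must change sign on [a, b]")
    while b - a > tol:
        m = 0.5 * (a + b)        # midpoint: halves the bracket each step
        fm = f(m)
        if fa * fm <= 0:         # root lies in [a, m]
            b, fb = m, fm
        else:                    # root lies in [m, b]
            a, fa = m, fm
    return 0.5 * (a + b)

root = bisect(lambda x: x * x - 2.0, 0.0, 2.0)   # converges to sqrt(2)
```

Each iteration halves the bracket, so an interval of length 2 drops below 1e-10 in about 35 steps: reliable, but only linearly convergent.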
|Newton's method. Rate of convergence. Pure secant method.|
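The two iterations can be contrasted in a short Python sketch (function names here are illustrative, not the course's Matlab codes):

```python
def newton(f, fprime, x0, tol=1e-12, maxit=50):
    """Newton's method: quadratic convergence near a simple root."""
    x = x0
    for _ in range(maxit):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def secant(f, x0, x1, tol=1e-12, maxit=50):
    """Secant method: no derivative needed; superlinear convergence."""
    f0, f1 = f(x0), f(x1)
    for _ in range(maxit):
        if f1 == f0:             # flat secant line: cannot proceed
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        f0, f1 = f1, f(x1)
        if abs(x1 - x0) < tol:
            break
    return x1

f = lambda x: x * x - 2.0
root_newton = newton(f, lambda x: 2.0 * x, 1.5)
root_secant = secant(f, 1.0, 2.0)
```

The secant update replaces the derivative in Newton's formula with the slope of the line through the two most recent iterates.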
|Properties of the secant method. Regula falsi and Wheeler's method. Inverse quadratic interpolation. Brent's method (zeroin). Starting and stopping.|
|More details about Wheeler's method, inverse quadratic interpolation, and Brent's method (fzero). Starting and stopping.|
|Chapters 1 and 2 of Ascher and Greif. General ideas in numerical computing. A generic model of floating-point computation. IEEE arithmetic.|
|IEEE double-precision arithmetic.|
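A few one-liners in Python (whose float type is an IEEE double) illustrate the key facts about binary64 arithmetic:

```python
import sys

eps = sys.float_info.epsilon        # machine epsilon for binary64: 2**-52
next_to_one = 1.0 + eps             # the double immediately above 1.0
absorbed = 1.0 + eps / 2.0          # rounds back to 1.0 (round-to-nearest-even)
decimal_exact = (0.1 + 0.2 == 0.3)  # False: none of these is an exact binary fraction
```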
|Review of HW1. More on IEEE double-precision arithmetic. Introduction to numerical linear algebra. Ascher and Greif, Chapters 4 and 5.|
|More discussion of issues in graded HW1. Matrix norms. Diagonal linear systems. Triangular linear systems. Analysis of solving a lower triangular system, showing backward stability. Condition number of a matrix.|
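Forward substitution for a lower triangular system, the kind of algorithm analyzed for backward stability, takes only a few lines of Python (lists of lists stand in for Matlab matrices):

```python
def forward_solve(L, b):
    """Solve L x = b, with L lower triangular (given as a list of rows)."""
    n = len(b)
    x = [0.0] * n
    for i in range(n):
        # subtract off the already-computed unknowns, then divide by the diagonal
        s = sum(L[i][j] * x[j] for j in range(i))
        x[i] = (b[i] - s) / L[i][i]
    return x

x = forward_solve([[2.0, 0.0], [1.0, 3.0]], [4.0, 7.0])
```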
|Gaussian elimination with elementary matrices. LU factorization. Pivot size and its effects. Growth.|
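The elimination steps with partial pivoting fit in a short Python sketch; as in most library implementations, L (the multipliers) and U share one array:

```python
def lu_partial_pivot(A):
    """LU factorization with partial pivoting; returns packed L\\U and the row permutation."""
    n = len(A)
    A = [row[:] for row in A]      # work on a copy
    p = list(range(n))
    for k in range(n):
        # pivot: bring the largest-magnitude entry in column k to the diagonal
        m = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[m] = A[m], A[k]
        p[k], p[m] = p[m], p[k]
        for i in range(k + 1, n):
            A[i][k] /= A[k][k]     # multiplier, stored below the diagonal
            for j in range(k + 1, n):
                A[i][j] -= A[i][k] * A[k][j]
    return A, p

LU, p = lu_partial_pivot([[2.0, 1.0], [4.0, 3.0]])
# rows were swapped (p == [1, 0]); multipliers sit below the diagonal
```

Pivoting keeps every multiplier at most 1 in magnitude, which is what bounds element growth in practice.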
|Variations on partial pivoting. Symmetric positive definite matrices. The Cholesky factorization and its favorable numerical properties. Failure when the matrix is not "sufficiently" positive definite.|
|The singular value decomposition (SVD) and what it reveals. Connections with the 2-norm of Ax and of the inverse of A; maximizing and minimizing the 2-norm. Significance of the columns of V and the columns of U. Estimating the rank of a matrix. Liberal and conservative rank-estimation strategies and their consequences.|
|Linear least-squares problems. Linear least squares in mathematical modeling. Properties of the least-squares residual. The normal equations and concerns about conditioning. Orthogonal triangularization and its differences from Gaussian elimination.|
|Householder transformations and their properties. Using Householder transformations to reduce an m times n matrix to upper triangular form. QR factorization.|
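The reduction can be sketched in pure Python (unoptimized; production codes store the Householder vectors rather than forming Q explicitly):

```python
import math

def householder_qr(A):
    """Factor A (m x n, m >= n, list of rows) as A = Q R with R upper triangular."""
    m, n = len(A), len(A[0])
    R = [row[:] for row in A]
    Q = [[float(i == j) for j in range(m)] for i in range(m)]
    for k in range(n):
        # Householder vector v annihilating R[k+1:, k]; sign chosen to avoid cancellation
        x = [R[i][k] for i in range(k, m)]
        alpha = -(1.0 if x[0] >= 0 else -1.0) * math.sqrt(sum(t * t for t in x))
        v = x[:]
        v[0] -= alpha
        vnorm2 = sum(t * t for t in v)
        if vnorm2 == 0.0:
            continue
        # apply H = I - 2 v v^T / (v^T v) to rows k..m-1 of R
        for j in range(n):
            dot = sum(v[i] * R[k + i][j] for i in range(m - k))
            c = 2.0 * dot / vnorm2
            for i in range(m - k):
                R[k + i][j] -= c * v[i]
        # accumulate Q <- Q H (H acts on columns k..m-1)
        for i in range(m):
            dot = sum(Q[i][k + l] * v[l] for l in range(m - k))
            c = 2.0 * dot / vnorm2
            for l in range(m - k):
                Q[i][k + l] -= c * v[l]
    return Q, R

Q, R = householder_qr([[3.0, 1.0], [4.0, 1.0]])
```

Each reflection is orthogonal, so the factorization inherits the excellent stability properties discussed in lecture.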
|March 12-16 (Spring break)|
|Solving least-squares problems using the QR factorization. Backward error analysis of the QR factorization. Some effects of perturbations. QR with column interchanges. Estimating rank with the QR factorization. Condition of the least-squares problem.|
|Introduction to interpolation and approximation in one dimension. Ascher and Greif, Chapters 10 and 11. Functional norms. Horner's method for evaluating a polynomial. Uniqueness of the interpolating polynomial. The Vandermonde matrix. Lagrange and Newton forms of the interpolating polynomial. Weierstrass's Theorem.|
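Horner's method evaluates a degree-(n-1) polynomial with n-1 multiplications and n-1 additions; a Python sketch, with coefficients listed from highest degree down:

```python
def horner(coeffs, x):
    """Evaluate c[0]*x**(n-1) + ... + c[n-1] by nested multiplication."""
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

value = horner([2.0, -3.0, 1.0], 2.0)   # 2x^2 - 3x + 1 at x = 2
```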
|The error factor polynomial. Runge's function. The Runge phenomenon. Polynomial approximations based on function norms. Quick look at Chebyshev polynomials and Chebyshev points. Hermite osculating polynomials.|
|Splines. Piecewise linear interpolation. Piecewise cubic splines. Conditions that define a spline. Natural splines, "clamped" splines, and "not a knot" splines. Piecewise cubic Hermite interpolation a la Matlab (pchip). Comparing different interpolants.|
|Numerical integration (quadrature). The midpoint and trapezoid rules, and their connection with polynomial interpolation. Error estimates. Simpson's rule. The roles of Legendre polynomials and the Lagrange form of the interpolating polynomial. Clenshaw-Curtis quadrature.|
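A minimal Python sketch of the trapezoid and Simpson rules, applied on n equal subintervals:

```python
def trapezoid(f, a, b, n):
    """Composite trapezoid rule on n subintervals: O(h^2) error."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def simpson(f, a, b, n):
    """Composite Simpson's rule (n must be even): O(h^4) error."""
    assert n % 2 == 0, "Simpson's rule needs an even number of subintervals"
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2.0 * sum(f(a + i * h) for i in range(2, n, 2))
    return h * s / 3.0
```

Simpson's rule integrates cubics exactly, one degree more than its three-point interpolation suggests.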
|Composite quadrature; adaptive quadrature. Romberg quadrature. Introduction to initial value problems in ordinary differential equations. The test equation. The forward Euler method and its error estimate. Introduction to Runge-Kutta methods.|
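Forward Euler on the test equation y' = -y makes the first-order error estimate concrete; a Python sketch:

```python
import math

def forward_euler(f, t0, y0, h, nsteps):
    """March y' = f(t, y) forward with steps y <- y + h*f(t, y)."""
    t, y = t0, y0
    for _ in range(nsteps):
        y += h * f(t, y)
        t += h
    return y

# test equation y' = -y, y(0) = 1; exact solution is exp(-t)
approx = forward_euler(lambda t, y: -y, 0.0, 1.0, 0.001, 1000)
exact = math.exp(-1.0)
```

Halving h roughly halves the error at t = 1, the signature of a first-order method.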
|Runge-Kutta methods. Transforming problems with higher derivatives into systems of first-order equations. Implicit methods. Backward Euler. Multistep methods. Adams-Bashforth and Adams-Moulton methods. Backward differentiation methods.|
|Derivation of backward differentiation methods. Predictor-corrector methods. Stiffness. Stability. The Milne-Simpson method. Linear difference equations. Numerical issues. ODE software.|
|Background theorems and results about eigenvalues and eigenvectors. The power method.|
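The power method needs only matrix-vector products; a pure-Python sketch with max-norm scaling and a final Rayleigh-quotient estimate of the dominant eigenvalue:

```python
def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def power_method(A, x, iters=100):
    """Iterate x <- A x (rescaled); return the Rayleigh quotient x.Ax / x.x."""
    for _ in range(iters):
        y = matvec(A, x)
        norm = max(abs(v) for v in y)   # rescale to avoid overflow/underflow
        x = [v / norm for v in y]
    Ax = matvec(A, x)
    num = sum(a * b for a, b in zip(x, Ax))
    den = sum(a * a for a in x)
    return num / den

lam = power_method([[2.0, 1.0], [1.0, 2.0]], [1.0, 0.0])   # dominant eigenvalue is 3
```

Convergence is linear with ratio |lambda_2 / lambda_1|, here 1/3 per iteration.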
|The power method with shifts. The inverse power method (inverse iteration). The Rayleigh quotient; Rayleigh quotient iteration. The QR method. The pedagogical QR method.|
|More on the pedagogical QR method. A diversion about PageRank and its connection with eigenvalues.|
|Example of the pedagogical QR method. Finding the eigenvectors of a triangular matrix. Reduction to upper Hessenberg form.|
|The implicit QR method; the implicit Q theorem. Plane rotations. Chasing the bulge. Some useful theory. The QR algorithm for symmetric matrices. Using bisection to find eigenvalues.|
|Review of material covered in the course.|