What you need to know
Here follows a list of some of the things you need to know for the exam. If you feel comfortable with these topics, then you are well prepared for the exam.
- General knowledge
  - Know how to use an error estimate to find out how many [interpolation points, or quadrature points, or composite method intervals, or ...] are needed to guarantee that the error is less than a given tolerance ε.
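As a quick self-check on this kind of question, here is a worked sketch (my own example, not from the notes): given the composite trapezium error bound \(|E| \leq \frac{(b-a)h^2}{12}\max|f''|\), solve for the number of subintervals m that guarantees an error below ε.

```python
import math

# Worked example (own illustration): how many subintervals m does the
# composite trapezium rule need so that the error when integrating
# f(x) = e^x over [0, 1] is guaranteed below eps?
# Error bound: |E| <= (b - a) * h^2 / 12 * max|f''|,  with h = (b - a) / m.

a, b = 0.0, 1.0
eps = 1e-6
M2 = math.e          # max of |f''(x)| = e^x on [0, 1]

# Solve (b - a)^3 * M2 / (12 * m^2) <= eps for m and round up.
m = math.ceil(math.sqrt((b - a) ** 3 * M2 / (12 * eps)))
print(m)   # 476 subintervals suffice
```

The same pattern (solve the error bound for the free parameter, then round up) works for interpolation point counts and other composite rules.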
- Linear algebra
  - Know how to compute the LU and QR factorizations of a matrix. Know how to solve least squares problems by QR factorization.
  - Know how to compute the matrix norm with respect to the 1-, 2- and ∞-norms. Know how to compute the condition number of a matrix A, and what it says about the stability of solving the system Ax = b.
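As a small illustration of the norm and condition number computations above (my own example): the ∞-norm of a matrix is its maximum absolute row sum, and \(\kappa_\infty(A) = \|A\|_\infty \|A^{-1}\|_\infty\).

```python
# Hedged sketch (own example): condition number of a 2x2 matrix in the
# infinity-norm, kappa(A) = ||A||_inf * ||A_inv||_inf, where the matrix
# infinity-norm is the maximum absolute row sum.

def inf_norm(M):
    return max(sum(abs(x) for x in row) for row in M)

A = [[1.0, 2.0],
     [3.0, 4.0]]

# Inverse of a 2x2 matrix via the explicit cofactor formula.
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
A_inv = [[ A[1][1] / det, -A[0][1] / det],
         [-A[1][0] / det,  A[0][0] / det]]

kappa = inf_norm(A) * inf_norm(A_inv)
print(kappa)   # ||A||_inf = 7, ||A_inv||_inf = 3, so kappa = 21
```

A large κ means small perturbations in A or b can produce large relative changes in the solution of Ax = b.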
- Solving nonlinear systems
  - Know how to set up a fixed point iteration. Know when this iteration converges (Banach's fixed point theorem), and how to check the conditions (g should be a map from a closed set \(D\subset\mathbb{R}^n\) into D, and should be a contraction). Know sufficient conditions for being a contraction, and the error estimate for the k-th iteration.
  - Know how to derive Newton's method using Taylor expansion. Know sufficient conditions for its convergence.
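To make the iteration above concrete, here is a minimal one-dimensional Newton sketch (my own example): solving \(f(x) = x^2 - 2 = 0\) via the update \(x_{k+1} = x_k - f(x_k)/f'(x_k)\).

```python
# Minimal Newton iteration in one dimension (own sketch): the iterates
# for f(x) = x^2 - 2 converge quadratically to sqrt(2).

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)    # Newton update: x <- x - f(x)/f'(x)
        x -= step
        if abs(step) < tol:
            break
    return x

root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
print(root)   # approximately 1.41421356...
```

Note how few iterations are needed: quadratic convergence roughly doubles the number of correct digits per step, provided the starting guess is good enough.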
- Polynomial interpolation and approximation
  - Know the Lagrange form of the interpolant over n+1 points, and the fundamental error estimate. Know that Runge's phenomenon might arise as n increases.
  - Understand what Hermite interpolation is, and how to find it "by hand" in simple cases.
  - Know what the minimax problem is, and some basic properties – existence and uniqueness of the minimax polynomial, and the oscillation property.
  - Know how to find the Chebyshev points of order n (i.e., the zeros of \(T_{n+1}\)), and why they are useful interpolation points (they minimize the ∞-norm of \(\pi_{n+1}\)).
  - Know how to find the closest polynomial approximation in the 2-norm using orthogonal polynomials. Know how to compute the orthogonal polynomials for a given weight function w.
  - Know what a linear spline is, how to express it in the "hat" basis, and the basic error estimate.
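The Lagrange form mentioned above can be sketched in a few lines (my own illustration): \(p(x) = \sum_k y_k L_k(x)\) with \(L_k(x) = \prod_{j \neq k} \frac{x - x_j}{x_k - x_j}\).

```python
# Evaluating the Lagrange form of the interpolant (own sketch).

def lagrange_eval(xs, ys, x):
    total = 0.0
    for k, (xk, yk) in enumerate(zip(xs, ys)):
        # Build the k-th cardinal polynomial L_k(x).
        Lk = 1.0
        for j, xj in enumerate(xs):
            if j != k:
                Lk *= (x - xj) / (xk - xj)
        total += yk * Lk
    return total

# Interpolate f(x) = x^2 at three points; since f is itself quadratic,
# the degree-2 interpolant reproduces it exactly.
xs = [0.0, 1.0, 2.0]
ys = [x * x for x in xs]
print(lagrange_eval(xs, ys, 1.5))   # 2.25
```

Trying the same construction with equally spaced points and a function like \(1/(1+25x^2)\) on [-1, 1] is a good way to see Runge's phenomenon in action.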
- Quadrature rules
  - Know how to derive a quadrature rule for a given set of quadrature points \(x_0,\dots,x_n\). Understand how to map a rule for an interval [a, b] to [0, 1], and back again. Know how to write down the composite method for a given quadrature rule.
  - Know what the order N of a quadrature rule is, know that n+1 ≤ N ≤ 2(n+1), and how to find the order of a given quadrature rule.
  - Know the fundamental error estimate for a quadrature rule of a given order N. Know the error estimate for the corresponding composite method.
  - Know how to derive the error estimate for the multi-dimensional composite method. Understand the curse of dimensionality, and the fact that the Monte Carlo method does not suffer from this.
  - Know how to derive the Gauss quadrature rule for a given weight function w. Understand what is special about Gauss quadrature (they are of maximal order). Understand that Gauss quadrature converges as \(n\to\infty\) for any continuous integrand.
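The composite error estimates above can be observed numerically. Here is a hedged sketch (my own example): the composite trapezium rule has error O(h²), so halving h should cut the error by roughly a factor of 4.

```python
# Composite trapezium rule (own sketch) and its O(h^2) convergence.
import math

def composite_trapezium(f, a, b, m):
    # m subintervals of width h; endpoints weighted by 1/2.
    h = (b - a) / m
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, m))
    return h * s

exact = math.e - 1.0    # integral of e^x over [0, 1]
err1 = abs(composite_trapezium(math.exp, 0.0, 1.0, 10) - exact)
err2 = abs(composite_trapezium(math.exp, 0.0, 1.0, 20) - exact)
print(err1 / err2)   # close to 4, confirming second-order convergence
```

Repeating this experiment with a higher-order rule (e.g. composite Simpson) shows the error ratio jump to roughly 16, matching the composite error estimate for that order.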
- Numerical methods for ODEs
  - Know how to derive the forward and backward Euler methods.
  - Know what consistency is, and how to check it for a given method.
  - Know what truncation error and global error are, how to estimate the truncation error of a given method, and understand the idea behind "Lady Windermere's fan" and the result \(|\tau_k| \leq Ch^{p+1} \ \Rightarrow\ |x_N-x(t_N)|\leq Ch^p\).
  - Know the definition of linear stability, the stability region, and the stability function.
  - Know how to check consistency of a Runge–Kutta method.
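A minimal forward Euler sketch ties several of these items together (my own example): applied to the linear test equation \(x' = \lambda x\), one step gives \(x_{k+1} = (1 + h\lambda)x_k\), so the stability function is \(R(z) = 1 + z\) and the method is linearly stable when \(|1 + h\lambda| \leq 1\).

```python
# Forward Euler for the linear test equation x' = lam * x (own sketch).
# Each step multiplies by R(h * lam) = 1 + h * lam, the stability function.
import math

def forward_euler(lam, x0, h, n_steps):
    x = x0
    for _ in range(n_steps):
        x += h * lam * x    # x_{k+1} = x_k + h * f(t_k, x_k)
    return x

lam, x0, T = -1.0, 1.0, 1.0
approx = forward_euler(lam, x0, h=0.001, n_steps=1000)
print(approx, math.exp(lam * T))   # both close to e^{-1}
```

Increasing h past 2/|λ| here makes |1 + hλ| > 1 and the iterates blow up, which is exactly what the stability region predicts.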
Overview of the curriculum
We will follow the book *An Introduction to Numerical Analysis* by Süli and Mayers.
- Chapter 1
  I will consider this as known and will not teach it.
- Chapter 2 (2 weeks)
  Solving linear systems using Gauss elimination and LU decomposition. Computational complexity. Condition numbers. The least squares method.
- Chapter 4 (1 week)
  Solving nonlinear systems of equations by iterative methods (fixed point iteration and Newton's method).
- Chapter 6 (2–3 weeks)
  Lagrange and Hermite interpolation, convergence of interpolating polynomials, Runge's phenomenon, differentiation.
- Chapter 7 (2–3 weeks)
  Numerical integration using Newton–Cotes methods. Error estimates and Runge's phenomenon. Composite methods. Only sections 7.1–7.5.
- Chapter 8 (2 weeks)
  Finding the best approximating polynomial of a given function, when measured in the ∞-norm. Chebyshev polynomials.
- Chapter 9 (1–2 weeks)
  Polynomial approximation in the 2-norm.
- Chapter 10 (1 week)
  Use the theory developed in Chapter 9 to find the "best" integration method. This leads to Gauss quadrature.
- Chapter 12 (if time permits)
  Numerical methods for approximating solutions to ordinary differential equations.