I have marked some problems in orange. These are the types of problems that you can expect in the exam. (That does not mean that "non-orange" problems are not relevant. All problems on this page will improve understanding of the syllabus.)
Chapter 1
- Week 34
- Problems 1.3, 1.9(a, b, c)
- Computers can represent many, but not all integers in floating point representation. For 32-bit floating point representation (cf. the IEEE-754 standard), compute the largest integers \(n_{\min}\) and \(n_{\max}\) such that all integers in the set \(\{-n_{\min},-n_{\min}+1, \dots, n_{\max}\}\) can be represented exactly. Do the same for 64-bit.
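A small numerical experiment can check a hand computation here. This is a sketch in Python, assuming NumPy is available and that its `float32`/`float64` types follow IEEE-754 (they do on all common platforms); it searches for the first positive integer that does not survive a round trip through each float type.

```python
import numpy as np

def first_unrepresentable(dtype):
    """Return the smallest integer of the form 2^k or 2^k + 1 that does
    not survive a round trip through the given floating point type."""
    k = 1
    while True:
        for n in (2**k, 2**k + 1):
            if int(dtype(n)) != n:
                return n
        k += 1

print(first_unrepresentable(np.float32))
print(first_unrepresentable(np.float64))
```

By symmetry of the sign bit, the same bound holds for negative integers.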
- Let \(A = \begin{pmatrix} 1 & 0 \\ 0 & \varepsilon \end{pmatrix}\), where \(\varepsilon >0\) is some very small number. Compute the condition number of \(A\). What does the condition number say about the sensitivity of the solution of the system \(Ax=b\) to perturbations in the data \(b\)?
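For this diagonal matrix the condition number can be checked numerically. A minimal sketch with NumPy (the value `eps = 1e-8` is an arbitrary choice for illustration; `np.linalg.cond` uses the 2-norm by default):

```python
import numpy as np

eps = 1e-8  # an arbitrary small positive number
A = np.array([[1.0, 0.0],
              [0.0, eps]])

# 2-norm condition number: ratio of largest to smallest singular value.
print(np.linalg.cond(A))
```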
- Let \(f\) be a continuous function. Show that there is a unique polynomial \(p\in\mathbb{P}_2\) such that
\(p(0)=f(0),\quad p'(1)=f'(1), \quad \int_0^1 p(x)\,dx = \int_0^1 f(x)\,dx.\)
Show that there is in general either no, or infinitely many, polynomials \(p\in\mathbb{P}_2\) such that
\(p(0)=f(0),\quad p'(\tfrac13)=f'(\tfrac13), \quad \int_0^1 p(x)\,dx = \int_0^1 f(x)\,dx.\)
(The moral is: we have to be careful when deciding how to determine the degrees of freedom of the interpolant.)
Hint: Write \(p(x)=a_0+a_1x+a_2x^2\) and determine \(a_0,a_1,a_2\).
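Following the hint, the three conditions form a 3×3 linear system for \((a_0,a_1,a_2)\), which has a unique solution exactly when the system matrix is nonsingular. A sketch of the determinant check, assuming NumPy:

```python
import numpy as np

# Write p(x) = a0 + a1*x + a2*x^2. The conditions p(0) = f(0), p'(t) = f'(t),
# int_0^1 p dx = int_0^1 f dx give a 3x3 linear system for (a0, a1, a2).
def condition_matrix(t):
    return np.array([
        [1.0, 0.0, 0.0],         # p(0)      = a0
        [0.0, 1.0, 2.0 * t],     # p'(t)     = a1 + 2*t*a2
        [1.0, 0.5, 1.0 / 3.0],   # int_0^1 p = a0 + a1/2 + a2/3
    ])

det_at_1 = np.linalg.det(condition_matrix(1.0))
det_at_third = np.linalg.det(condition_matrix(1.0 / 3.0))
print(det_at_1, det_at_third)  # nonsingular vs. singular
```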
- Week 35
- Exercises 3.3, 3.4, 3.5
- Let \(I\subset\mathbb{R}\) be an interval (either bounded or unbounded), let \(w:I\to[0,\infty)\) be a function such that \(\int_I w(x)\,dx<\infty\) and such that \(w(x)=0\) at only finitely many points \(x\in I\), and define \(\langle f,g \rangle_w = \int_I f(x)g(x)w(x)\,dx\) for \(f,g\in C(I)\) (the set of continuous functions from \(I\) to \(\mathbb{R}\)). Show that \(\langle\cdot,\cdot\rangle_w\) is an inner product on \(C(I)\).
- Prove that the Chebyshev polynomials \(T_n(x)=\cos(n\arccos(x)),\ n\in\mathbb{N}_0\), are indeed orthogonal polynomials of degree \(n\) with respect to the inner product \(\langle f, g\rangle = \int_{-1}^1 \frac{f(x)g(x)}{\sqrt{1-x^2}}\,dx\).
Hint: It suffices to prove that \(\langle T_n, T_m\rangle = 0\) for \(n\neq m\) (why?).
- Write a method interpolate(f, xi, x) in your favourite programming language which takes as input f (a function you wish to interpolate), xi (the interpolation points, a vector of length \(n+1\)), and x (a vector of arbitrary length where you want to evaluate the interpolating polynomial), and returns the values of the interpolating polynomial at the points in x. Test it out on uniform and non-uniform grids xi, and on analytic (e.g., trigonometric) and non-analytic (e.g., the Runge function) functions.
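One possible sketch of such a method, using the Lagrange form of the interpolating polynomial (a deliberate simple choice; the barycentric form is more stable for large numbers of nodes). NumPy is assumed:

```python
import numpy as np

def interpolate(f, xi, x):
    """Evaluate the polynomial interpolating f at the nodes xi
    at the points x, using the Lagrange form."""
    xi = np.asarray(xi, dtype=float)
    x = np.asarray(x, dtype=float)
    fi = np.array([f(node) for node in xi])
    p = np.zeros_like(x)
    for j, xj in enumerate(xi):
        # Lagrange basis polynomial L_j: product over all nodes except xi[j]
        others = np.delete(xi, j)
        L = np.prod([(x - xk) / (xj - xk) for xk in others], axis=0)
        p += fi[j] * L
    return p

# Quick look at the Runge function on a uniform grid:
runge = lambda t: 1.0 / (1.0 + 25.0 * t**2)
xi = np.linspace(-1.0, 1.0, 9)
x = np.linspace(-1.0, 1.0, 5)
print(interpolate(runge, xi, x))
```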
- Week 36
- Exercises 3.14 (d,e,f,g), 3.15, 3.16 (a,b,c), and 3.12.
- Week 37
- Read Section 5.6
- Problems 5.1, 5.6, 5.7(a)–(e)
- In your favourite programming language, implement the Newton–Cotes formula with quadrature points on a uniform mesh, \(x_i=a+\frac{i-1}{n-1}(b-a),\:i=1,\dots,n\). Test it out for increasing values of \(n\) for the Runge function and a trigonometric function. Do the approximations seem to converge as \(n\to\infty\)? (To obtain the exact answer, you can take for granted that the composite midpoint method converges for any continuous integrand.)
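One way to sketch the implementation, assuming NumPy: determine the weights by requiring the rule to be exact for the monomials \(1, x, \dots, x^{n-1}\), which gives a linear (Vandermonde-type) system. Closed-form weight tables exist for small \(n\), but this moment-equation approach works for any \(n\), up to conditioning issues.

```python
import numpy as np

def newton_cotes(f, a, b, n):
    """Approximate int_a^b f(x) dx by the n-point Newton-Cotes rule
    on the uniform nodes x_i = a + (i-1)/(n-1)*(b-a)."""
    x = np.linspace(a, b, n)
    # Moment equations: sum_i w_i x_i^k = int_a^b x^k dx, k = 0..n-1
    V = np.vander(x, n, increasing=True).T   # V[k, i] = x_i^k
    moments = np.array([(b**(k + 1) - a**(k + 1)) / (k + 1) for k in range(n)])
    w = np.linalg.solve(V, moments)
    return w @ np.array([f(t) for t in x])

# n = 3 on [0, 1] reproduces Simpson's rule, so x^2 is integrated exactly:
print(newton_cotes(lambda t: t**2, 0.0, 1.0, 3))
```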
- Week 38
- Oblig – no exercises!
- Week 39
- Do the two exercises in the Monte Carlo note.
- Find the quadrature points and quadrature weights using \(n=3\) points for both the Gauss–Lobatto and Gauss–Radau methods on the interval \([-1,1]\) and with the weight \(w \equiv 1\).
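A hand computation here can be verified by checking the degree of exactness of the resulting rule. The sketch below (assuming NumPy) does this for the standard tabulated 3-point Gauss–Lobatto and Gauss–Radau rules; if your own nodes and weights are correct, they should pass the same check.

```python
import numpy as np

def degree_of_exactness(nodes, weights, tol=1e-12):
    """Largest k such that the rule integrates 1, x, ..., x^k exactly on [-1, 1]."""
    nodes = np.asarray(nodes, dtype=float)
    weights = np.asarray(weights, dtype=float)
    k = 0
    while True:
        exact = (1.0 - (-1.0)**(k + 1)) / (k + 1)  # int_{-1}^{1} x^k dx
        if abs(weights @ nodes**k - exact) > tol:
            return k - 1
        k += 1

# Standard tabulated 3-point rules on [-1, 1] with weight w = 1:
lobatto_nodes = [-1.0, 0.0, 1.0]
lobatto_weights = [1.0 / 3.0, 4.0 / 3.0, 1.0 / 3.0]

sqrt6 = np.sqrt(6.0)
radau_nodes = [-1.0, (1.0 - sqrt6) / 5.0, (1.0 + sqrt6) / 5.0]
radau_weights = [2.0 / 9.0, (16.0 + sqrt6) / 18.0, (16.0 - sqrt6) / 18.0]

print(degree_of_exactness(lobatto_nodes, lobatto_weights))  # 2n-3 = 3
print(degree_of_exactness(radau_nodes, radau_weights))      # 2n-2 = 4
```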
- Consider the composite method using the midpoint rule for approximating the integral \(\int_0^1 f(x)\,dx\).
  - State the method.
  - Prove that the error can be bounded by some constant times \(N^{-2}\).
    Hint: Recall that the error in the midpoint rule over an interval \([a,b]\) can be written as \((b-a)^3f''(\xi)/24\).
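The claimed \(N^{-2}\) rate is easy to observe numerically: if the error is bounded by \(CN^{-2}\), doubling \(N\) should divide the error by roughly 4. A sketch assuming NumPy, with \(f(x)=e^x\) as an arbitrary smooth test integrand:

```python
import numpy as np

def composite_midpoint(f, N):
    """Composite midpoint rule for int_0^1 f(x) dx with N equal subintervals."""
    midpoints = (np.arange(N) + 0.5) / N
    return np.mean(f(midpoints))

f = np.exp
exact = np.e - 1.0
errors = [abs(composite_midpoint(f, N) - exact) for N in (10, 20, 40)]
ratios = [errors[i] / errors[i + 1] for i in range(2)]
print(ratios)  # both close to 4
```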
- Week 40
- Exercises 4.1, 4.5, 4.7
- Prove the integral version of the intermediate value theorem, which we used in the error estimate for Gauss quadrature: If \(g,h\in C([a,b])\) and \(h>0\) (except for at most finitely many points), then there is some \(\xi\in(a,b)\) such that \(\int_a^b g(x)h(x)\,dx = g(\xi)\int_a^b h(x)\,dx\).
Hint: Bound the first integral from below and above by \((\min_{x\in[a,b]}g(x))\int_a^b h(x)\,dx\) and \(\max...\), respectively. Then use the intermediate value theorem.
- Week 41
- Problems 1–5, 6, 7 in the optimization note.
- I claimed in lecture that Gaussian elimination with an upper (or lower) triangular matrix requires \(O(n^2)\) floating point operations (flops). Determine the constant \(c\) such that the number of flops is \(cn^2+O(n)\).
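A flop count can be sanity-checked empirically by instrumenting the algorithm. The sketch below counts operations for back substitution with an upper triangular matrix, under the usual convention (an assumption here) that each multiplication, addition/subtraction, and division is one flop:

```python
def back_substitution_flops(n):
    """Flops used to solve an n x n upper triangular system by back substitution."""
    flops = 0
    for i in range(n - 1, -1, -1):   # rows from bottom to top
        flops += 2 * (n - 1 - i)     # one multiply + one subtract per known unknown
        flops += 1                   # the division by the diagonal entry
    return flops

for n in (10, 100, 1000):
    print(n, back_substitution_flops(n))
```

Comparing the printed counts against \(cn^2\) for a few values of \(n\) reveals the constant.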
- LU decomposition and Gaussian elimination both require \(\frac{2}{3}n^3\) operations. Why do you think Matlab uses LU decomposition and not Gaussian elimination in its matrix inversion function mldivide?
- Week 42
- Oblig – no exercises!
- Week 43–44
- Define
  \(A = \begin{pmatrix}0 & 2 \\ 1 & 3\end{pmatrix}, \quad B=\begin{pmatrix}1 & 2 \\ 2 & 4 \end{pmatrix}, \quad C = \begin{pmatrix} 1 & 2 \\ 0 & 1 \\ 3 & 7 \end{pmatrix}.\)
  - QR factorize the matrices A, B, C using both the Gram–Schmidt algorithm and by Givens rotations.
    (Hint: When finding the Givens rotations, it is only necessary to find \(c=\cos\theta,\ s=\sin\theta\), and not \(\theta\) itself.)
  - Find a Householder matrix \(H_u\) (that is, a matrix of the form \(H_u=I-\frac{2}{\|u\|^2}uu^{\mathsf{T}}\)) such that \(H_u B\) is upper triangular. Explain how this yields a QR factorization of B.
  - Do the same for A.
  - Explain why the system \(Cx=b\), where C is the above matrix and \(b=\begin{pmatrix}2 & 0 & 1\end{pmatrix}^{\mathsf T}\), does not have a solution. Solve the least squares problem \(\min_{x\in\mathbb{R}^2}\|Cx-b\|\).
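A hand solution of the least squares problem (e.g. via the normal equations \(C^{\mathsf T}Cx = C^{\mathsf T}b\)) can be checked against NumPy's built-in solver; this is just a verification aid, not the intended method:

```python
import numpy as np

C = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, 7.0]])
b = np.array([2.0, 0.0, 1.0])

# np.linalg.lstsq minimizes ||Cx - b|| in the 2-norm.
x, residual, rank, sv = np.linalg.lstsq(C, b, rcond=None)
print(x)
# At the minimizer, the residual Cx - b is orthogonal to the columns of C:
print(C.T @ (C @ x - b))  # approximately (0, 0)
```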
- Let \(A = \begin{pmatrix} -4 & -6 \\ 3 & -8\end{pmatrix}\).
  - Compute the singular values by taking the square roots of the eigenvalues of both \(A^{\mathsf T}A\) and \(AA^{\mathsf T}\). Confirm that you get the same answer.
  - Compute the singular value decomposition of A.
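Both routes, and the result of a library SVD, can be compared numerically. A sketch assuming NumPy (`eigvalsh` applies because \(A^{\mathsf T}A\) and \(AA^{\mathsf T}\) are symmetric):

```python
import numpy as np

A = np.array([[-4.0, -6.0],
              [3.0, -8.0]])

s1 = np.sqrt(np.linalg.eigvalsh(A.T @ A))   # sqrt of eigenvalues of A^T A
s2 = np.sqrt(np.linalg.eigvalsh(A @ A.T))   # sqrt of eigenvalues of A A^T
s3 = np.linalg.svd(A, compute_uv=False)     # singular values directly
print(np.sort(s1), np.sort(s2), np.sort(s3))  # all three should agree
```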
- Let \(A\in\mathbb{R}^{3\times 3}\) be the matrix which, when multiplied by a vector x, first rotates x counterclockwise in the \(x_1\text{-}x_3\text{-}\)plane by \(\pi/4\), then scales the first component by 5, the second by 2 and the third by 0, and then reflects through the plane orthogonal to \(u = \begin{pmatrix}0&1&-1\end{pmatrix}^{\mathsf T}\).
  - Find A.
  - Explain what the above description of A has to do with the singular value decomposition of A.
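One can build A as the product of the three maps and inspect its singular values numerically. A sketch assuming NumPy; the sign convention for "counterclockwise" in the \(x_1\)-\(x_3\) plane is a choice here, but it does not affect the singular values:

```python
import numpy as np

c = s = np.sqrt(0.5)  # cos(pi/4) = sin(pi/4)
# Rotation in the x1-x3 plane by pi/4 (sign convention assumed):
R = np.array([[c, 0.0, -s],
              [0.0, 1.0, 0.0],
              [s, 0.0, c]])
S = np.diag([5.0, 2.0, 0.0])                      # scaling of the components
u = np.array([0.0, 1.0, -1.0])
H = np.eye(3) - 2.0 * np.outer(u, u) / (u @ u)    # reflection (Householder matrix)

A = H @ S @ R  # rotate first, then scale, then reflect
print(np.round(A, 6))
print(np.linalg.svd(A, compute_uv=False))
```

Since R and H are orthogonal, the singular values of A are exactly the diagonal entries of S, which is the connection to the SVD asked about above.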
- Week 45
- Implement the explicit and implicit Euler methods for the problem \(\dot{x}=\lambda x,\; x(0)=x_0\) when \(\lambda = -5,\ x_0=1\). Compute up to time \(T=2\), first using \(N=4\) and then using \(N=100\). How large must \(N\) be to guarantee linear stability of both methods?
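For the scalar test equation the Euler iterates have a closed form, so the implementation reduces to a single amplification factor per step. A sketch (plain Python, no libraries needed):

```python
def euler_final(lam, x0, T, N, implicit=False):
    """Final value x_N for explicit or implicit Euler applied to
    x' = lam*x, x(0) = x0, with step size h = T/N."""
    h = T / N
    # Explicit: x_{k+1} = (1 + lam*h) x_k;  implicit: x_{k+1} = x_k / (1 - lam*h)
    factor = 1.0 / (1.0 - lam * h) if implicit else 1.0 + lam * h
    return x0 * factor**N

for N in (4, 100):
    print(N,
          euler_final(-5.0, 1.0, 2.0, N),
          euler_final(-5.0, 1.0, 2.0, N, implicit=True))
```

Comparing the two N values shows the explicit method growing for the coarse step while the implicit method decays for both, which is the behaviour the stability question is after.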
- Consider the problem \(\dot{x}=Ax,\;x(0)=x_0,\;A=\begin{pmatrix}0 & -1 \\ 1 & 0\end{pmatrix}\).
  - What sort of matrix is A? What does the solution look like?
  - What are the eigenvalues of A? What will happen if you approximate the problem with the explicit Euler method?
  - Implement and test the explicit Euler method for the problem. Use e.g. \(x_0=\begin{pmatrix}1 \\ 0\end{pmatrix}\).
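A possible sketch of the experiment, assuming NumPy; tracking the norm of the iterates makes the qualitative behaviour easy to see without plotting:

```python
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0, 0.0]])

def explicit_euler(x0, T, N):
    """Explicit Euler for x' = A x; returns the approximation at time T."""
    h = T / N
    x = np.array(x0, dtype=float)
    for _ in range(N):
        x = x + h * (A @ x)
    return x

x0 = [1.0, 0.0]
# The exact solution stays on the unit circle, so any drift of the norm
# away from 1 is a numerical artefact of the method.
for N in (50, 200):
    print(N, np.linalg.norm(explicit_euler(x0, 2 * np.pi, N)))
```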