For May 3rd
For warm-up, you can do 9-01 and 9-03, although they will not be prioritized during the seminar.
- 9-07
- 9-12; here, show also that the solution is indeed optimal.
- 9-17
- The 2014 exam.
I suggest you simulate an exam situation to do this.
I also suggest you bring a copy of the solution note to the seminar.
Earlier weeks' seminar problems follow below. For reference, in case you want to catch up: so far (including the upcoming seminar) I have assigned the following exam and compendium problems.
Compendium problems:
- (1-01,) 1-03, 1-04, 1-05, 1-06, 1-07, 1-08, 1-12(a), 1-13, 1-14
- 2-01, 2-02, 2-03, 2-06, 2-07
- 3-03, 3-05, 3-07, 3-12, 3-13, 3-14, 3-21
- 4-01, 4-02, 4-05, 4-08, 4-09, 4-10
- 5-06, 5-14
- 6-10, 6-12, 6-15 (and generally, start from the beginning of section 6 to drill 2nd order diff.eq's)
- 7-01, 7-02, 7-04, 7-05
- 8-01, 8-06
- 9-02, 9-05
- 10-03, 10-04, 10-05
- 11-01, 11-04, 11-07
Exam problems:
- Exam 2008 problem 1
- Exam 2011 problems 1d and 3
For seminar #1, February 2nd:
- Let f(x) = g(||x||). What is the gradient of f, in matrix form?
- Do compendium problems 2-01, 2-02, 2-03 and 2-07:
- How much of these can you do without differentiating?
- In 2-01 (b) and (d): write down the associated matrix, and work with that directly.
- (Do the rest as well.)
- Use the determinant criteria to show that a Cobb-Douglas function of three variables is strictly concave if the sum of exponents is < 1.
- Exam 2011 problem 1 (d) (closing in on being able to pass an exam already, yay!)
- Prove from the definition of positive semidefiniteness that the odd powers \(\mathbf A^{2p+1}\) (\(p\) a natural number) of a positive semidefinite matrix \(\mathbf A\) are positive semidefinite.
- Hint: Put \(\mathbf y=\mathbf A\mathbf x\)
- If \(\mathbf A\) is also positive definite, what about the odd powers?
- From the determinant criteria: Could \(\mathbf A^3\) be positive definite if \(\mathbf A\) is merely positive semidefinite, but not positive definite?
- What about odd powers of negative definite matrices?
- What about even powers?
For seminar #2, February 9th:
- Exam 2008 problem 1 (c) (can be done without anything from (a) and (b)).
- Compendium problem 1-13
- 2-06
- 3-03
- 3-05
- 3-21 (a bit demanding?)
For seminar #3, February 16th:
- Exam 2011 problem 3 (the rest)
- Give a one-line argument why the admissible set of a concave program is convex. Does the same have to be true for a quasiconcave program?
- 3-07
- 3-12
- 3-13
- Consider the problem of minimizing |x|+|y| subject to \((x-a)^2+(y-b)^2\leq 1\), where a and b are nonnegative and \(a^2+b^2>1\). Use Kuhn-Tucker to decide for what a and b there is a solution at \((x,y)=(0,1)\).
- Possibly to be postponed: 3-14
- Not to be covered in this seminar - but maybe the next one: This problem.
(Those of you who have taken a course in the economics of information, may know it well already.)
For seminar #4, March 1st:
- Leftovers from last problem set (that would be 3-14 and the information economics problem?)
- 1-03
- 1-06
- 1-12 (a) (likely, the seminar will rather give priority to 1-06. If hard, try that one or 1-09 first)
- 1-07 ("mutually orthogonal" means that if you dot any pair, you get zero)
- 1-14
- Given a square matrix A: a non-null vector v is called an eigenvector for A if Av = λ v for some number λ.
Show that the matrix A - λ I cannot have full rank.
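Not part of the assignment, but a quick numerical sanity check of the definition (the matrix and eigenpair below are my own choices, not from the compendium):

```python
import numpy as np

# A small symmetric example of my own choosing:
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# v = (1, 1) is an eigenvector with eigenvalue lambda = 3,
# since A v = (3, 3) = 3 v.
v = np.array([1.0, 1.0])
lam = 3.0
assert np.allclose(A @ v, lam * v)

# A - lambda*I sends the non-null vector v to the zero vector,
# so it cannot have full rank; its determinant vanishes (up to rounding).
print(np.linalg.det(A - lam * np.eye(2)))
```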
For seminar #5, March 15th:
- Exam 2011 problem 1. (You have already done 1d, now use a different method!)
- 1-04
- 1-05
- 1-08 (might be hard - if so, try 1-01 first)
- Here is an argument that an nxn symmetric matrix has n linearly independent eigenvectors.
Addendum: You should not stress this problem too much - but you should remember the fact that a symmetric matrix has n linearly independent eigenvectors.
I give the setup, and then three questions remain to complete the proof. Indeed, something stronger will be shown: such a matrix has n orthogonal eigenvectors.
- From class, you saw that the problem \(\max/\min \ \mathbf x^\top \mathbf A\mathbf x \quad\text{subject to}\quad \mathbf x^\top\mathbf x=1\) is solved by eigenvectors, so there is at least one.
- Suppose for contradiction that there are only k<n linearly independent eigenvectors. Then there is a line through the origin, orthogonal to the span of these k eigenvectors.
- (For example, think of a budget constraint with initial endowment z: \(\mathbf p^\top(\mathbf x-\mathbf z)=0\) determines an \((n-1)\)-dimensional hyperplane, and the nth direction is p, which is orthogonal to all the possible net transactions x-z.)
- It is possible to write the span of the eigenvectors as the span of orthogonal vectors \(\mathbf y_1,\dots,\mathbf y_k\). (For example, consider a plane through the origin; it is spanned by two vectors, but you can always find two orthogonal vectors (ninety-degree angle) that span the same plane.)
- Use Lagrange's method to solve \(\max/\min \ \tfrac12\mathbf x^\top \mathbf A\mathbf x \quad\text{subject to}\quad \tfrac12\mathbf x^\top\mathbf x=\tfrac12\ \text{and }\mathbf x^\top\mathbf y_i=0\) for all these \(\mathbf y_1,\dots,\mathbf y_k\).
- Lagrangian: \(L(\mathbf x)= \tfrac12\mathbf x^\top \mathbf A\mathbf x - \tfrac{\lambda}{2}(\mathbf x^\top\mathbf x-1)-\mathbf x^\top\sum_{i=1}^k \mu_i \mathbf y_i\). Stationary if \(\mathbf x^\top \mathbf A = \lambda \mathbf x^\top +\sum_{i=1}^k \mu_i \mathbf y_i^\top\).
- Right-multiply with some \(\mathbf y_j\): \(\mathbf x^\top \mathbf A \mathbf y_j= \lambda \mathbf x^\top \mathbf y_j+\sum_{i=1}^k \mu_i \mathbf y_i^\top\mathbf y_j\).
- Question 1: Why is the left-hand side zero? (Hint: why is \(\mathbf A\mathbf y_j\) a linear combination of vectors orthogonal to \(\mathbf x\)?)
- Question 2: Why does this prove that \(\mu_j=0\)?
- Question 3: Why does this prove that \(\mathbf x\) is yet another eigenvector, which - contrary to assumption - cannot be written in terms of the k ones we started with?
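If you want to see the conclusion numerically (not a proof; the matrix is just an arbitrary symmetric example of mine): `numpy.linalg.eigh` returns a full set of orthonormal eigenvectors for a symmetric matrix.

```python
import numpy as np

# An arbitrary symmetric 3x3 matrix:
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

w, V = np.linalg.eigh(A)  # eigenvalues in w, eigenvectors as columns of V

# The n = 3 columns of V are orthonormal, hence linearly independent:
assert np.allclose(V.T @ V, np.eye(3))

# ... and each column is indeed an eigenvector:
for lam, v in zip(w, V.T):
    assert np.allclose(A @ v, lam * v)
```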
Update. More problems because you have another week, see Messages.
- The following problem consists in finding the eigenvectors of a 2x2 triangular but not diagonal matrix. It suffices to cover either upper or lower triangular matrices (the other one works the same way); by scaling the matrix, we can assume the off-diagonal element to be -1. Consider therefore the matrix \(\mathbf A = \begin{pmatrix}a&-1\\0&d\end{pmatrix}\)
- Find all the eigenvalues for all values of a and d.
- Is there any case which is "special" in any way compared to the problems you have seen before?
- 4-01 (maybe do 4-02 first?)
- 4-02
- 4-10
For seminar #6, March 31st:
- Compendium problem 4-05
- Compute the double integral of \(xe^y\) (the same integrand as in 4-05!) over the domain bounded by y = 0, x = 1 and y = x, in both orders.
- Let p>1>q>0 and let f be a nonnegative function defined on the bounded domain between the functions \(y = x^p\) and \(y = x^q\). Formulate in terms of iterated integrals - both the dx dy order and the dy dx order - the volume under the graph of f.
- 4-08 (trigonometric functions!)
- 4-09 (ditto!)
- 4-10
- Correction! Wednesday night, Yikai pointed out to me that I had reversed order here. Stupid mistake, now corrected. Let A be nxn, and let the columns of an nxn matrix V be eigenvectors of A.
- Show that AV = VD where D is diagonal and element ii is the eigenvalue associated with the ith column of V.
- Suppose in the following that the columns of V are linearly independent (that is possible if and only if A has n linearly independent eigenvectors), so that we can invert V and write \(A = VDV^{-1}\). (This is called diagonalization. The case where A has n linearly independent eigenvectors makes lots of things easier.)
Compute the matrix power \(A^{2016}\) of such an A.
- Compute the matrix exponential \(\exp A = I + A + A^2/2! + A^3/3! + \cdots\) and the matrix exponential \(\exp tA\) (where t is a real number).
- Compute the matrix product \(\exp(tA)\exp(-tA)\).
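A numerical illustration of why diagonalization makes powers easy (the 2x2 matrix is my own example, not one from the problems):

```python
import numpy as np

# My own 2x2 example with distinct eigenvalues, so V is invertible:
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

w, V = np.linalg.eig(A)           # columns of V are eigenvectors
D = np.diag(w)
assert np.allclose(A @ V, V @ D)  # AV = VD

# A^k = V D^k V^{-1}, and D^k just raises each diagonal entry to k:
k = 10
Ak = V @ np.diag(w ** k) @ np.linalg.inv(V)
assert np.allclose(Ak, np.linalg.matrix_power(A, k))
```

The same trick gives \(\exp A = V\,\mathrm{diag}(e^{\lambda_1},\dots,e^{\lambda_n})\,V^{-1}\), since the power series acts entrywise on the diagonal.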
- Matrix exponentials do not necessarily behave as nicely as you are used to from \(e^x\). For example, it is not universally true that exp(A+B) = exp(A) exp(B).
Give a criterion under which this formula holds. (It is way more general than when one of the matrices is a scaling of the other!)
- The eigenvectors in this course can be called right eigenvectors, as opposed to a left eigenvector \(\mathbf p'\), which is such that \(\mathbf p'\mathbf A = \lambda\mathbf p'\).
Explain how the tools of the course can be adapted to find left eigenvectors.
- One application where left eigenvectors are more common is Markov chains (earlier taught in ECON5160): Let \(X_t\) be a random process where the probabilities of tomorrow's state depend only on today's state (i.e. not on previous history). If we have n states, let \(p_{ij}\) = the probability of being in state j tomorrow, given that we are in state i today. The matrix \(P = (p_{ij})\) then has the following property: if the initial state \(X_0\) is drawn from a distribution \(\mathbf p'\) (that is, element i is the probability that \(X_0 = i\)), then the unconditional probability distribution of \(X_t\) is the vector \(\mathbf p' P^t\) (the superscript is power t).
- A stationary distribution is a \(\mathbf p'\) such that if \(X_0\) is distributed \(\mathbf p'\), then all \(X_t\) are unconditionally distributed \(\mathbf p'\) as well.
Show that \(\mathbf p'\) is a left eigenvector of P, and find the eigenvalue.
- Some processes have a unique limiting distribution \(\boldsymbol\pi\) such that as \(t\to+\infty\), \(\mathbf p' P^t\) will converge to \(\boldsymbol\pi\) no matter what \(\mathbf p'\) is.
Show that such a \(\boldsymbol\pi\) must be stationary as well.
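To see the left-eigenvector characterization in action (a two-state chain of my own invention, not from the course materials):

```python
import numpy as np

# A 2-state transition matrix; rows sum to one:
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Left eigenvectors of P are right eigenvectors of P transposed.
w, V = np.linalg.eig(P.T)
i = np.argmin(np.abs(w - 1.0))   # pick out the eigenvalue 1
p = np.real(V[:, i])
p = p / p.sum()                  # normalize into a probability vector

# p is stationary: p'P = p', i.e. a left eigenvector with eigenvalue 1.
assert np.allclose(p @ P, p)
print(p)                         # the stationary distribution
```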
For April 7th:
As announced, Tuesday will be a lecture. Thursday will also be a lecture, and then, if time permits, some problems will be covered.
- The problems to be given priority on Thursday, will be 6-15, 7-01 and possibly 6-10. (If you need "simple" 2nd order diff.eq. problems for a start, then start at 6-01, then 6-02 ...)
- Other problems you should do: 5-06 (at least the "derive" part), 5-14, and 6-12 (I have not yet covered equilibrium states: it is just a constant solution).
For April 12th:
- 7-02
- 7-04
- Exam 2008 problem 1. (You have already done part (c).)
- Classify the origin as equilibrium point for the system x' = 1 - exp(x-y), y' = -y.
(Classify only locally. This is an example from the book, under Olech's theorem for global asymptotic stability. That test is not curriculum, although it is not hard - try it!)
- 7-05
For April 19th
- It is a fact that a product of two odd integers is an odd integer. Formulate as a proof by induction the fact that the product of any (natural) number of odd integers is an odd integer.
- Show by induction that 1 + 2 + ... + n = (n+1)n/2.
- Show by induction that for every natural number n, the number \(9^n-1\) is divisible by 8. (That is, \((9^n-1)/8\) is an integer.)
Hint: Write \(9^{n+1}=9^n\cdot (8+1)\).
- 10-03
- 10-04
- 10-05
- Look at the difference equation given in problem 3 (a) in this exam in ECON5150. Here, the unknown function is \(v\), while \(p\in(0,1)\) and \(K\) are constants. (Do not be confused by the running index being called i rather than t - it is not supposed to be time in this application.)
- Is there any \(p\in(0,1)\) such that the equation is stable?
- Find the general solution.
- Find the particular solution which satisfies \(v_0=v_N=0.\)
(The Department has discontinued Mathematics 4. If you think this application looks interesting, then try STK2130 - be aware though, that 2*** courses may not be useful towards your degree even though the content is at a level we used to assign to PhD candidates.)
For April 26th
Dynamic programming:
- 11-01
- 11-04
- 11-07
- This infinite-horizon problem: http://www.uio.no/studier/emner/sv/oekonomi/ECON4140/h13/dynprog_infinitehorizon.pdf
Continuous time:
- 8-01
- 8-06
- 9-05
- 9-02