Plans for the week of March 10-14

Dear all, we hope you've had a pleasant weekend and an excellent start to the week.

This week, the first part of the lecture wraps up our discussion of RNNs and introduces LSTMs (long short-term memory networks).

After that, the second part introduces a new method, the so-called autoencoders. In this connection we also need to discuss a famous method for reducing the dimensionality of the problem at hand, principal component analysis, or just PCA. The reason is that one can show that a linear autoencoder performs the same dimensionality reduction as PCA.
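As a small teaser of this connection (a minimal NumPy sketch, not taken from the lecture material): if we train a linear autoencoder, i.e. an encoder and decoder with no activation functions, by plain gradient descent on the squared reconstruction error, its final reconstruction error essentially coincides with that of a rank-k PCA projection. This is the result of Baldi and Hornik (1989) in the reading list below. Data dimensions, learning rate, and iteration count here are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 200, 5, 2                      # samples, features, bottleneck size
X = rng.standard_normal((n, p))
X -= X.mean(axis=0)                      # PCA assumes centered data

# PCA baseline: project onto the top-k right singular vectors
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Vk = Vt[:k].T                            # (p, k) principal directions
pca_err = np.mean((X - X @ Vk @ Vk.T) ** 2)

# Linear autoencoder: encoder We (p, k), decoder Wd (k, p), no nonlinearity
We = 0.1 * rng.standard_normal((p, k))
Wd = 0.1 * rng.standard_normal((k, p))
lr = 0.1
for _ in range(5000):
    E = X @ We @ Wd - X                  # reconstruction residual
    gWd = 2.0 / n * We.T @ X.T @ E       # gradient of mean squared error w.r.t. Wd
    gWe = 2.0 / n * X.T @ E @ Wd.T       # gradient w.r.t. We
    We -= lr * gWe
    Wd -= lr * gWd

ae_err = np.mean((X - X @ We @ Wd) ** 2)
print(pca_err, ae_err)                   # the two errors nearly coincide
```

Note that the learned encoder need not equal Vk itself; gradient descent finds some basis of the same top-k principal subspace, which is enough to match PCA's reconstruction error.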

The slides this week are somewhat dense, and a good fraction of them will most likely be used next week as well. I may add corrections during the coming days.

Plans for the week of March 10-14

  1. RNNs and discussion of long short-term memory (LSTM) networks

  2. Example of applying RNNs to differential equations

  3. Start discussion of Autoencoders (AEs)

  4. Links between Principal Component Analysis (PCA) and AE

Reading recommendations: RNNs

  1. For RNNs, see Goodfellow et al., chapter 10.

  2. Reading suggestion for implementing RNNs in PyTorch: Raschka et al., chapter 15.

Reading recommendations: Autoencoders (AE)

  1. Goodfellow et al., chapter 14.

  2. Raschka et al., chapter 17 (a brief introduction only).

  3. Deep Learning Tutorial on AEs from Stanford University

  4. Building AEs in Keras

  5. Introduction to AEs in TensorFlow

  6. Grosse, University of Toronto, Lecture on AEs

  7. Bank et al. on AEs

  8. Baldi and Hornik, Neural networks and principal component analysis: Learning from examples without local minima, Neural Networks 2, 53 (1989).

Best wishes to you all,

Edvin and Morten

Published Mar. 10, 2025 12:07 PM - Last modified Mar. 10, 2025 12:07 PM