Interactive session, Thursday April 11
- First 20 minutes (approximately): a brief review of the history and foundations of deep learning, and a discussion of recent developments, including large language models such as ChatGPT and Mistral.
- The rest of the session: a guest lecture from Elliptic Labs, who will present and demonstrate their work on classifying sensor data with deep neural networks.
Weekly Lecture
Slides
We have rescheduled the semester since the videos were recorded, so you will see "lecture 10" in the videos. Don't be confused: it is still lecture 12.
Video Recordings
- The deep learning revolution
- Deep feed-forward neural networks
- Convolutional neural networks and image processing
- Recurrent neural networks and language processing
Recommended reading
There are no mandatory readings this week.
For the deep learning revolution, there are several popular introductions. The Great AI Awakening, from The New York Times, Dec 2016, is an interesting report from the time when the media started to recognize the revolution. For those who want to dig deeper into the revolution, we recommend Melanie Mitchell's book Artificial Intelligence: A Guide for Thinking Humans (2019).
If you are interested in knowing more about CNNs, this page from the Stanford course CS231n is highly recommended. If you are interested in knowing more about RNNs, check the blog post by Andrej Karpathy, "The Unreasonable Effectiveness of Recurrent Neural Networks". Transformers and large language models are explained in this chapter of Jurafsky and Martin's textbook Speech and Language Processing.
If you are ready for practical experience with deep learning, you may start with Paolo Perrotta, Programming Machine Learning (in the O'Reilly library), Ch. 16 ff., which uses TensorFlow and Keras. If you are ready to go one step further, you may proceed with Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow (2nd ed.) by Aurélien Géron.
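To give a first taste of what that practical work looks like, here is a minimal Keras sketch of a small feed-forward classifier trained on MNIST. The layer sizes and hyperparameters are illustrative choices, not taken from the readings:

```python
# A minimal feed-forward classifier in Keras, trained on MNIST digits.
# Layer sizes and hyperparameters are illustrative, not prescriptive.
import tensorflow as tf
from tensorflow import keras

# Load and normalize the MNIST digits (28x28 grayscale images).
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small fully connected network: flatten -> hidden layer -> softmax output.
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```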
For a good PyTorch introduction, look no further than PyTorch's own tutorials.
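For comparison, the same kind of small classifier sketched in PyTorch, where you write the training step yourself. Again, the architecture and the dummy batch are illustrative assumptions, not taken from the tutorials:

```python
# A small fully connected classifier in PyTorch for 28x28 inputs (e.g. MNIST).
# Architecture and hyperparameters are illustrative assumptions.
import torch
from torch import nn

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)

loss_fn = nn.CrossEntropyLoss()  # expects raw logits and integer labels
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a fake batch, just to show the basic loop.
x = torch.randn(32, 1, 28, 28)       # fake batch of 32 images
y = torch.randint(0, 10, (32,))      # fake labels
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```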