Plans for the week of May 5-9

Dear all, and welcome back after a break of almost two weeks (the last lecture day fell on a public holiday).

The aim this week is to discuss the mathematics and implementation of diffusion models. These are very popular generative models that can be thought of as stacked variational autoencoders (VAEs). The primary advantages of diffusion models over GANs (covered next week) and VAEs are that they are easy to train, with simple and efficient loss functions, and that they generate highly realistic images.
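To make the claim about simple loss functions concrete, the simplified training objective popularized by Ho et al. (2020) for denoising diffusion probabilistic models is just a mean-squared error on the predicted noise. The formula below, with \( \bar\alpha_t = \prod_{s=1}^{t} \alpha_s \) for a chosen variance schedule, is included as a reference point and is not necessarily the exact form we will follow in the lectures:

\[
\mathcal{L}_{\mathrm{simple}} = \mathbb{E}_{t,\,\mathbf{x}_0,\,\boldsymbol{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{I})}
\left[ \left\lVert \boldsymbol{\epsilon} - \boldsymbol{\epsilon}_\theta\!\left( \sqrt{\bar\alpha_t}\,\mathbf{x}_0 + \sqrt{1-\bar\alpha_t}\,\boldsymbol{\epsilon},\, t \right) \right\rVert^2 \right].
\]

In other words, the network \( \boldsymbol{\epsilon}_\theta \) is trained by plain regression to recover the noise that was added; there is no adversarial game as in GANs and no separate inference network as in a standard VAE.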

Diffusion models are prominent in generating high-quality images, video, sound, etc. They are named for their similarity to the natural diffusion process in physics, which describes how molecules move from high-concentration to low-concentration regions. In the context of machine learning, diffusion models generate new data by reversing a diffusion process, that is, the gradual loss of information caused by repeatedly injecting noise. The main idea is to add random noise to the data step by step and then learn to undo this corruption, so that samples from the original data distribution can be recovered from pure noise.
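As a concrete illustration of the add-noise-then-undo idea, here is a minimal sketch of the closed-form forward noising step and the corresponding noise-prediction training loop in PyTorch. The linear variance schedule, the toy two-dimensional data, and the small MLP noise predictor are placeholder assumptions for illustration only and are not taken from the course notebook.

import torch
import torch.nn as nn

T = 200                                    # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)      # assumed linear variance schedule
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)   # \bar{alpha}_t = prod_s alpha_s

def forward_noise(x0, t, eps):
    # Sample x_t ~ q(x_t | x_0) in closed form: sqrt(abar_t) x_0 + sqrt(1 - abar_t) eps
    a = alpha_bar[t].unsqueeze(-1)
    return torch.sqrt(a) * x0 + torch.sqrt(1.0 - a) * eps

class NoisePredictor(nn.Module):
    # A tiny MLP predicting the added noise from (x_t, t); a stand-in for the
    # U-Net typically used for images.
    def __init__(self, dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 64), nn.ReLU(),
                                 nn.Linear(64, 64), nn.ReLU(),
                                 nn.Linear(64, dim))
    def forward(self, x_t, t):
        t_scaled = t.float().unsqueeze(-1) / T
        return self.net(torch.cat([x_t, t_scaled], dim=-1))

model = NoisePredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(1000):
    x0 = torch.randn(128, 2) * 0.5 + 1.0   # toy "data" distribution (assumed)
    t = torch.randint(0, T, (128,))        # random timestep per sample
    eps = torch.randn_like(x0)             # the noise the model must recover
    x_t = forward_noise(x0, t, eps)
    loss = ((model(x_t, t) - eps) ** 2).mean()   # the simple MSE objective
    opt.zero_grad()
    loss.backward()
    opt.step()

Generation then runs the learned reverse process: start from pure Gaussian noise at step T and repeatedly apply the learned denoising update down to step 0; the derivation of that reverse step is part of this week's material.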

The famous DALL-E 2, Midjourney, and the open-source Stable Diffusion, which create realistic images from a user's text prompt, are all examples of diffusion models.

The plan for this week is thus as follows.

Deep generative models.

  1. Mathematics of diffusion models and selected examples with code

Readings

Recommended reading on diffusion models:

  1. A central paper is the one by Sohl-Dickstein et al., Deep Unsupervised Learning using Nonequilibrium Thermodynamics, https://arxiv.org/abs/1503.03585

  2. Calvin Luo, Understanding Diffusion Models: A Unified Perspective, https://arxiv.org/abs/2208.11970

  3. Diederik P. Kingma, Tim Salimans, Ben Poole, and Jonathan Ho, Variational Diffusion Models, https://arxiv.org/abs/2107.00630

The Jupyter notebook is at https://github.com/CompPhysics/AdvancedMachineLearning/blob/main/doc/pub/week15/ipynb/week15.ipynb

 

Best wishes to you all,

Edvin and Morten

P.S. Next week we will wrap up diffusion models and discuss GANs. If there is interest, in the last week, besides presenting a summary of the course, we may also sketch some of the basics of reinforcement learning.
