This claims to explain diffusion models from first principles, but the issue with explaining how they work is that we don't know how they work.
The explanation in the original paper turns out not to be true; you can get rid of most of their assumptions and it still works: https://arxiv.org/abs/2208.09392
> The explanation in the original paper turns out not to be true; you can get rid of most of their assumptions and it still works
I’ll admit it is amusing that some of the assumptions about why it works turned out to be incorrect. The core idea of a Markov chain[0] where each state change leads to higher likelihood is bound to work, even if the rest doesn’t.
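To make the "each state change leads to higher likelihood" idea concrete, here is a minimal sketch of the reverse Markov chain in DDPM-style sampling; `noise_model(x, t)` is a placeholder for the learned noise predictor, and the update rule is the standard DDPM one rather than anything specific to this post:

```julia
# Minimal sketch of the reverse Markov chain (standard DDPM update; `noise_model`
# is a placeholder for the learned noise predictor ε_θ).
function reverse_chain(noise_model, betas::Vector{Float64}, shape)
    alphas = 1 .- betas
    alphabars = cumprod(alphas)
    x = randn(shape...)                       # x_T: pure Gaussian noise
    for t in length(betas):-1:1
        eps = noise_model(x, t)               # predicted noise at step t
        # each transition moves x a little toward higher likelihood under the data
        x = (x .- betas[t] / sqrt(1 - alphabars[t]) .* eps) ./ sqrt(alphas[t])
        t > 1 && (x .+= sqrt(betas[t]) .* randn(shape...))   # noise except at the last step
    end
    return x
end
```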
In my mind, the Muse paper[1] gets closer to why it works: ultimately, the denoiser tries to match the latent space of an implicit encoder. The Muse system does this more directly and more effectively, by using a cross-entropy loss on latent tokens instead.
In a way, the whole problem is no different from a language translation task. The only difference is that the output needs to be decoded into pixels instead of BPE tokens.
Huh, that is quite a fascinating paper – we can learn to invert any image degradation and use it as a generative model? Hm. Is there any research on using a U-Net as the degradation function?
This is really cool, and I think possibly the best demonstration I've seen in a while of the power of Julia for tasks outside of science. Being able to use a fast, flexible, high-level language, instead of being forced to compromise on a slow high-level language that wraps fast but rigid libraries, lets you do some really cool stuff.
Just in case the author is reading: in the introduction, you use the term "virtual RAM" to describe the GPU VRAM needed for running Stable Diffusion, but VRAM actually stands for 'Video RAM'.
I get a "modules are not callable" error when trying to run MNIST(:train). When I search for the error I find a link to an issue in the repo raising exactly this, but the issue itself is missing. Was it deleted rather than closed? Weird.
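I believe that error appears when an older MLDatasets.jl is installed, where MNIST is still a submodule rather than a callable dataset type; if that's the case, something like this should work:

```julia
# Guess at the cause: in MLDatasets.jl versions before 0.7, MNIST is a submodule,
# so calling MNIST(:train) raises "modules are not callable".
using Pkg
Pkg.add(name="MLDatasets", version="0.7")   # or Pkg.update("MLDatasets")

using MLDatasets
trainset = MNIST(:train)                    # works with the newer dataset-type API
size(trainset.features), size(trainset.targets)
```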
I've been trying out developing my own diffusion models from scratch lately, to understand this approach better and to compare against similar trials I previously did with GANs. My impression from reading posts like these was that it would be relatively easy once you understand it, with the advantage that you get a nice, normal supervised MSE target to train against, instead of having to deal with the instabilities of GANs.
I have found in practice that they do not deliver on this front. The loss curve you get is often just a big, thick, noisy straight line, completely devoid of information about whether it's converging. And convergence seems to depend heavily on the model choices and the beta schedule you choose; it's not clear to me at all how to choose those things in a principled manner. Until you train for a long time, you basically just get noise, so it's hard to know when to restart an experiment or keep going. Do I need 10 steps, 100, 1000? I found that training longer and longer does make it better and better, very slowly, even though this is not reflected in the loss curve, and there seems to be no indication of when the model has "converged" in any meaningful sense. My understanding of why this is the case is that, due to the integrative nature of the sampling process, even tiny errors in approximating the noise add up to large divergences.
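For what it's worth, here is a sketch of the two beta schedules I keep seeing in the literature (the linear one from the original DDPM paper and the cosine one from Nichol & Dhariwal); the constants are the commonly quoted defaults, not anything taken from this post:

```julia
# Two common beta schedules; constants are the usual published defaults.
linear_betas(T; lo=1e-4, hi=0.02) = collect(range(lo, hi, length=T))

function cosine_betas(T; s=0.008)
    f(t) = cos((t / T + s) / (1 + s) * π / 2)^2           # unnormalised ᾱ(t)
    alphabar = [f(t) / f(0) for t in 0:T]                 # ᾱ_0 ... ᾱ_T
    betas = [1 - alphabar[t + 1] / alphabar[t] for t in 1:T]
    return clamp.(betas, 0.0, 0.999)
end

betas = cosine_betas(1000)
alphabars = cumprod(1 .- betas)   # fraction of signal kept at each timestep
```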
I've also tried making it conditional on vector quantization codes, and it seems to fail to use them nearly as well as VQGAN does. At least I haven't had much success doing it directly in the diffusion model. After reading more into it, I found that most diffusion-based models actually use a conditional GAN to develop a latent space and a decoder, and the diffusion model is used to generate samples in that latent space. It strikes me that the diffusion model can then never actually do better than the associated GAN's decoder, which surprised me to realize, since it's usually proposed as an alternative to GANs.
So, overall I'm failing to grasp the advantages this approach really has over just using a GAN. Obviously it works fantastically for these large scale generative projects, but I don't understand why it's better, to be honest, despite having read every article out there telling me again and again the same things about how it works. E.g. DALLE-1 used VQGAN, not diffusion, and people were pretty wowed by it. I'm not sure why DALLE-2's improvements can be attributed to their change to a diffusion process, if they are still using a GAN to decode the output.
Looking for some intuition, if anyone can offer some. I understand that the nature of how it iteratively improves the image allows it to deduce large-scale and small-scale features progressively, but it seems to me that the many upscaling layers of a large GAN can do the same thing.
Author here. I have noticed similar behaviour. As part of this exercise I tried to train a model to generate Pokemon based on This Pokemon Does Not Exist by HuggingFace. However my models only converged to noisy smudges after 50 iterations, so I excluded it from the posts (I do mention my experiments at the end of part 2).
My first assumption was that the model I was training was too small: 13 million parameters as opposed to the 1.3 billion in ruDALL-E (not sure how much of that is only the diffusion model). So that's 100x smaller. I want to experiment with scaling it up.
Reading this, I'm wondering if there's more I need to do. For example, training a conditioned model - "cheat" by giving it the index of the Pokemon during training but then sampling without an index - or making the model predict the standard deviation (beta tilde). Or, as you say, work with the loss functions.
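The "cheat" I have in mind is roughly the following; everything here (`unet`, `time_embed`, `class_embed`) is a hypothetical placeholder, not code from the posts:

```julia
# Rough sketch of the conditioning "cheat": pass the Pokemon index during training,
# randomly drop it so the unconditional case is also learned, then sample without it.
# `unet`, `time_embed`, and `class_embed` are hypothetical placeholders.
function predict_noise(unet, time_embed, class_embed, x, t, class_id; p_drop=0.1)
    emb = time_embed(t)
    if class_id !== nothing && rand() > p_drop
        emb = emb .+ class_embed(class_id)   # inject the label alongside the timestep
    end
    return unet(x, emb)                      # predicted noise, conditioned or not
end
```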
DALL-E 2 does not use any adversarial loss (so no GAN): it uses a text-to-image diffusion model and two diffusion-based upscalers. VQGAN is an autoencoder; alone it can't do much. DALL-E 1 works thanks to the autoregressive model (also no GAN). Stable Diffusion uses an autoencoder because running a diffusion model on a 1024/768/512 image is really inefficient, as the model has no bottleneck. The autoencoder has an adversarial loss, but upscaling a 64x64x4 latent to a 512x512x3 image is a much simpler job than generating the 64x64x4 from scratch; that's why you need a diffusion or an autoregressive model as a base.
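To put a number on that last point (just the raw element counts, nothing more):

```julia
# Raw element counts: the diffusion model works in a ~48x smaller space than pixels.
latent_elems = 64 * 64 * 4      # 16_384
pixel_elems  = 512 * 512 * 3    # 786_432
pixel_elems / latent_elems      # 48.0
```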
Yes, all the autoencoders you see used in practice have an adversarial loss + MSE + a perceptual loss. The VAE used with Stable Diffusion also uses KL regularization, while VQGAN uses all the other losses to make use of the codebook.
What datasets were you using, how large was the model and what was the noise schedule? I’ve been contemplating implementing my own from scratch as well. I’m surprised that training with conditional labels did not help as much.
Data is mel spectrograms. To be clear about the conditional labels, I was trying to get it to come up with a vector quantized code, so it's not conditionally labeled but rather I was using an embedding layer with a VQ layer to have it come up with its own codebook. This works well with VQGAN so I was surprised that for diffusion it just keeps setting all the codes to the same value and ignoring them, but maybe I'm doing something wrong. Still working on it.
I'm just expressing here that my expectation was that this method would be less finicky than GAN because it uses an MSE loss, but unfortunately it seems to have its own difficulties. No silver bullet, I guess. The integration sampling can be quite sensitive to imperfections and diverge easily, at least in early stages of training.
I decided to write this because it feels like the early days of GANs: there are lots of these "explain diffusion from scratch" type articles out there, but not yet a lot discussing common pitfalls and how to deal with them.
I'm doing my thesis right now in diffusion models (for audio) and have experienced a lot of the same things you mention. One paper which I found illuminating was this one: https://proceedings.mlr.press/v139/nichol21a.html
Particularly relevant to the noisy training you mentioned earlier is the alternative timestep sampling procedure they propose, which seems to reduce gradient noise significantly judging from their experiments. Would love to hear or discuss if you have found any other design changes which have improved training / sample quality :)
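In case it's useful to anyone reading along, my understanding of that procedure is importance sampling over timesteps: keep a short history of recent losses per timestep, sample t proportionally to the RMS of that history, and re-weight the loss by 1/p(t). A rough sketch of the bookkeeping (my paraphrase of the paper, not code from my thesis):

```julia
# Rough sketch of importance-sampled timesteps (Nichol & Dhariwal 2021):
# sample t ∝ sqrt(E[L_t²]) once every timestep has some history, else uniformly.
using Statistics, StatsBase

const T_STEPS = 1000
const HIST = 10
loss_history = [Float64[] for _ in 1:T_STEPS]

function sample_timestep()
    if any(h -> length(h) < HIST, loss_history)
        return rand(1:T_STEPS), 1.0 / T_STEPS          # warm-up: uniform sampling
    end
    w = [sqrt(mean(abs2, h)) for h in loss_history]
    p = w ./ sum(w)
    t = sample(1:T_STEPS, Weights(p))                  # t ∝ sqrt(E[L_t²])
    return t, p[t]
end

function record_loss!(t, loss)
    push!(loss_history[t], loss)
    length(loss_history[t]) > HIST && popfirst!(loss_history[t])
end

# in the training loop: t, p = sample_timestep(); weighted_loss = loss_t / (T_STEPS * p)
```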
Thanks for the feedback, glad to hear I'm not completely crazy ;). I think I saw that paper cited in my reading but haven't read it in full, will take a look thanks!
Some of the results I've had have been from trying to apply it using 1D unets (also audio). I am getting slightly better results now using larger (and more standard) 2D unets but it's really taking a long time to train, especially given that I'm still experimenting with a subset of my data.
I'm beginning to suspect that because it's learning to predict very small signal residuals, improvement in output quality is very incremental, in a way that is not directly correlated to the size or nature of the dataset. Like, even if I just train it on sinusoids it takes a really long time to improve (compared to a GAN approach). None of these conclusions are very formal, mind you; would love to hear them confirmed. The training dynamics just seem very different from what I am used to with either MSE or discriminative losses.
I see. What types of sampling methods are you using? IIRC they are different approaches to solving the diffusion ODE and creating a sample, but I’ve only played around with them during inference.