They are smaller models with fewer parameters. Their relatively small size compared to LLMs also lets people play around with them and tune them to run on less expensive hardware, provided the weights are available, i.e. open source like SD.
Originally SD was quite hard to run, with an 8GB high-end card only outputting 256x256 images. Then AMD and NVIDIA started releasing 16GB and 24GB consumer cards, and people started doing training on those GPUs and tuning their own models. Now we have plenty of cards and models that can do 512x512.
Stable Diffusion will run on any decent gaming GPU or a modern MacBook, whereas LLMs comparable to GPT-3/ChatGPT have had pretty insane memory requirements - e.g., <https://github.com/facebookresearch/metaseq/issues/146>
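For a sense of scale, here's a rough weights-only estimate (parameter counts are approximate public figures, and real usage adds activations and caches on top):

```python
# Back-of-the-envelope VRAM needed just to hold the weights in fp16,
# ignoring activations, KV caches, and optimizer state (which add more).
BYTES_PER_PARAM_FP16 = 2

models = {
    "Stable Diffusion v1 (UNet + text encoder + VAE)": 1.1e9,   # ~1B params
    "GPT-3 / OPT-175B class LLM": 175e9,                        # ~175B params
}

for name, params in models.items():
    gib = params * BYTES_PER_PARAM_FP16 / 2**30
    print(f"{name}: ~{gib:.0f} GiB of weights")
```

That's roughly 2 GiB for SD versus hundreds of GiB for a 175B-parameter LLM, which is why one fits on a gaming card and the other needs a rack of A100s.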
Worth noting that the M-series MacBooks use unified memory (UMA), so 100GB of "VRAM" is costly but easily accessible. Their GPU performance is nowhere near an 80GB A100, but for sheer VRAM they're a good choice.
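As a minimal sketch of what that looks like in practice, assuming PyTorch's MPS backend and the `diffusers` library (the model id here is just the common v1.5 checkpoint):

```python
import torch
from diffusers import StableDiffusionPipeline

# Use the Apple Silicon GPU if available; weights sit in unified memory
# shared with the CPU, so a 64GB/96GB Mac has plenty of headroom for SD.
device = "mps" if torch.backends.mps.is_available() else "cpu"

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to(device)

image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```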
Why wasn't this a problem for Stable Diffusion vs DALL-E?