I've been pretty underwhelmed by stable diffusion so far (admittedly even this much would have seemed like magic to me 10 years ago).
First thing I asked it for was a picture of a dragon. I've subsequently tried a few different models and all sorts of prompt engineering (but perhaps I still haven't found the right combination?)... I cannot get it to draw something anatomically coherent.
Are there some tricks I am missing? Do I need to run it through a pipeline of further steps to refine the mangled creature into something that makes sense?
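For concreteness, the kind of pipeline step I have in mind is an img2img pass over the first draft, something like this sketch with the Hugging Face diffusers library (the model id, strength value, and file names are just my guesses, not a known-good recipe):

    # Hypothetical img2img refinement pass with Hugging Face diffusers.
    # Model id, file names, and strength are placeholders I'd experiment with.
    import torch
    from diffusers import StableDiffusionImg2ImgPipeline
    from PIL import Image

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Start from the mangled first draft and partially re-noise it;
    # lower strength preserves more of the original composition.
    init = Image.open("dragon_draft.png").convert("RGB").resize((512, 512))
    out = pipe(
        prompt="a dragon with coherent anatomy, four legs, two wings",
        image=init,
        strength=0.5,
        guidance_scale=7.5,
    ).images[0]
    out.save("dragon_refined.png")

My understanding is that strength controls how much of the draft survives the second pass, so maybe there's a sweet spot that fixes the anatomy without throwing away the parts that already look good?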
I have done exactly that... the results were basically the same as what I get from the DiffusionBee app for Stable Diffusion
i.e. regions of the image are locally impressive and it has clearly understood the prompt, but the overall picture is incoherent... the head or one or more legs may be missing, or legs or wings sprout from odd places
like, it gets the 'texture' spot on but the 'structure' is off