I just used Stable Diffusion to do an image composite, saving a photo concept featuring my (at the time) pregnant wife that I had put a lot of time and effort into. Our daughter is nearly 18 months old and the photo has been shelved that entire time.
It took a lot of work, but significantly less than doing an image composite the old-fashioned way. Most people would have a hard time telling what's generated and what's not, apart from a few obvious details.
I would love to read a small write-up of how you put it together if/when you have the time. So far most of my experimentation with Stable Diffusion has been lackluster at best, though I haven't tried doing a composite yet.
In short, I used ControlNet and inpainting models in Img2Img to replace the armor and wings. The prompts were generally something like "a photo of a pregnant woman wearing (glowing:.5) gold intricate filigree armor, holding a glowing rapier in front of her, fantasy armor, metal armor..." The ControlNet 'modes' I used were depth, canny, and HED, depending on how much detail I wanted to keep.
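The `(glowing:.5)` in the prompt is the attention-weighting syntax from the AUTOMATIC1111 Stable Diffusion web UI, where a weight below 1.0 de-emphasizes a term. As a rough illustration (my own sketch, not the UI's actual parser, which also handles nesting and the `(word)` / `[word]` shorthand), a minimal parser for that syntax might look like:

```python
import re

def parse_weighted_prompt(prompt):
    """Split a prompt into (text, weight) pairs.

    Handles the simple ``(token:weight)`` emphasis syntax used by the
    AUTOMATIC1111 web UI; any unweighted text gets the default 1.0.
    Illustration only -- nesting and bare ()/[] emphasis are ignored.
    """
    parts = []
    pos = 0
    for m in re.finditer(r"\(([^():]+):([0-9]*\.?[0-9]+)\)", prompt):
        # Text before the weighted group keeps the default weight.
        text = prompt[pos:m.start()].strip()
        if text:
            parts.append((text, 1.0))
        parts.append((m.group(1).strip(), float(m.group(2))))
        pos = m.end()
    tail = prompt[pos:].strip()
    if tail:
        parts.append((tail, 1.0))
    return parts
```

So `parse_weighted_prompt("wearing (glowing:.5) gold armor")` yields `[("wearing", 1.0), ("glowing", 0.5), ("gold armor", 1.0)]` — the 0.5 then scales that token's cross-attention influence during generation.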
Generating at 512xWhatever and upscaling afterwards is usually still your best bet, though there were pieces I was able to do at high resolution from the start. That resolution limit is the biggest obstacle to making the workflow fast.
From there it's just usual Photoshop type compositing but far easier as it mostly lines up with perfect lighting.
https://i.imgur.com/BfckWCH.jpg