Worth noting: most "AI-generated" images shared (Sam's included here) are typically just a first pass. Most recent models also include some kind of inpainting method, where you mask off specific parts of the image and keep editing those areas until the whole thing is what you're looking for. That process makes it feel a lot more like a tool used by artists than a magic box that hands you a finished "art piece" and you're done.
As a tool, an artist could keep working on that image until it's exactly what they (or the commissioner) are after: masking off the water to actually add dolphins, masking off the ship to redraw it, retoning the sky for a more aesthetically pleasing sunset, adding other objects to specific locations in the scene, etc.
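For the curious, here's roughly what that mask-and-regenerate loop looks like in code. DALL-E's editor isn't publicly scriptable, so this is only a sketch using the open-source diffusers library as a stand-in; the model name, file names, and prompt are illustrative:

    # Mask-and-regenerate loop, sketched with diffusers (a stand-in for
    # DALL-E's inpainting; model and file names here are illustrative).
    import torch
    from PIL import Image
    from diffusers import StableDiffusionInpaintPipeline

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting",
        torch_dtype=torch.float16,
    ).to("cuda")

    image = Image.open("first_pass.png").convert("RGB").resize((512, 512))
    # White pixels mark the region to repaint (the water); black is kept.
    mask = Image.open("water_mask.png").convert("RGB").resize((512, 512))

    result = pipe(
        prompt="dolphins jumping out of the water at sunset",
        image=image,
        mask_image=mask,
    ).images[0]
    result.save("second_pass.png")

You'd repeat that with a new mask and prompt for each region (the ship, the sky, ...) until the whole image reads the way you want.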
I'm not sure how the embeddings ("descriptions") work in DALL-E yet, but in a lot of models they're fixed-length, so there's a natural limit on how many concepts you can pack into the first pass before it just starts leaving them out.
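For a concrete sense of that limit: if DALL-E conditions on a CLIP-style text encoder (the DALL-E 2 paper describes conditioning on CLIP embeddings, though I'm guessing at the details), the prompt is hard-capped at 77 tokens and anything past that is truncated. A quick check with the open CLIP tokenizer, using Sam's prompt as the example:

    # How many tokens a prompt costs against CLIP's fixed 77-token
    # context window (assumes DALL-E uses a CLIP-style encoder).
    from transformers import CLIPTokenizer

    tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
    print(tokenizer.model_max_length)  # 77 for CLIP's text encoder

    prompt = ("a solar powered ship with a propeller sailing under the "
              "golden gate bridge during sunset with dolphins jumping around")
    tokens = tokenizer(prompt)["input_ids"]
    print(len(tokens))  # well under the cap here, but long comma-chained
                        # prompts can blow past it, and the tail gets dropped

And even under the hard cap, everything still has to squeeze into one fixed-size vector, so minor trailing concepts (like the dolphins) can get compressed out.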
The solar-powered ship with a propeller sailing under the Golden Gate Bridge during sunset with dolphins jumping around was pretty impressive. https://twitter.com/sama/status/1511731259319349251
I think it's only missing the dolphins.