Check out MidJourney, which is in closed beta but expanding rapidly. I think it uses a different approach (different models), but the results are also stunning.
The project's logo is not good, and I suggested a better one to them in the forum; they plan to use it in the future. For the avocado armchair I created, I did not simply state "A logo of an avocado armchair" as if I were talking to a human being. I expressed it as declaratively as possible, like a computer program, and the machine understood much more accurately what command to execute.
As I have stated in the past, I consider natural-language inputs to a machine a very poor substitute for programming inputs, and I think that experiment has already failed.
Is there a Dall-E Ultra Max in the works?