The common current example? A text prompt of "A horse riding an astronaut" without prompt engineering. Though I don't think successfully producing this image would demonstrate intelligence/causal understanding either (but it is a good counterexample).
Causal understanding is going to be a bit difficult to prove tbh.
> The common current example? A text prompt of "A horse riding an astronaut" without prompt engineering. Though I don't think successfully producing this image would demonstrate intelligence/causal understanding either (but it is a good counterexample).
I'm not sure why you think this falsifies intelligence. There are plenty of puzzles and illusions that trick humans. The mere presence of a conceptual error is no disproof of intelligence, any more than the fact that most humans get the Monty Hall problem wrong is a disproof of ours.
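(For anyone who doubts that most humans get this wrong: here's a minimal Python simulation of the Monty Hall game. The setup and trial counts are just illustrative, but it shows switching wins about 2/3 of the time, against nearly everyone's intuition of 1/2.)

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """One round of Monty Hall: returns True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Host opens a door that is neither the player's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

n = 100_000
stay = sum(monty_hall_trial(switch=False) for _ in range(n)) / n
swap = sum(monty_hall_trial(switch=True) for _ in range(n)) / n
print(f"stay: {stay:.3f}, switch: {swap:.3f}")  # ~0.333 vs ~0.667
```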
Your argument is that there are adversarial cases? Sure... but that's not what I'm arguing here. There's more nuance to the problem than you understand. I'd suggest diving deep into the research rather than arrogantly making comments like this. If you have questions, that's a different thing. But this is inappropriate and demonstrates a lack of intimate understanding of the field.
I didn't make an argument that there are adversarial cases; you did. You brought up an adversarial example and said that its existence proves these algorithms are not generally intelligent. If that reasoning holds, then the existence of adversarial examples for humans proves the same thing about us.
And in general, if you're going to be condescending, you should actually make the counterargument. You might make fewer reasoning errors that way.
Counterargument: DALL-E is smart enough to understand that an astronaut riding a horse makes more sense than a horse riding an astronaut, and therefore assumes you meant "a horse-riding astronaut" unless you go out of your way to specify that you definitely do, in fact, want to see a horse riding an astronaut.
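(The experiment is easy to run yourself, minus DALL-E itself, which isn't publicly runnable. Here's a sketch using Stable Diffusion via Hugging Face diffusers as a stand-in; the engineered prompt wording is just one illustrative way to spell out the unusual composition.)

```python
import torch
from diffusers import StableDiffusionPipeline

# Open text-to-image model standing in for DALL-E (requires a CUDA GPU).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The contested prompt, with no prompt engineering.
plain = pipe("a horse riding an astronaut").images[0]
plain.save("plain.png")

# An engineered prompt that spells out the unusual composition explicitly.
engineered = pipe(
    "a photo of a horse sitting on top of an astronaut's shoulders, "
    "the astronaut is on all fours carrying the horse"
).images[0]
engineered.save("engineered.png")
```

Comparing the two outputs shows whether the model defaulted to the "sensible" reading of the plain prompt or actually composed the scene as literally described.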
Because intelligence is more than frequentism. Being able to predict that a die lands on a given side with probability 1/6 is not a demonstration of intelligence. It feels a bit naive to even suggest this, and I suspect you aren't an ML researcher (and if I'm right, maybe show a bit less arrogance, since you don't have as much domain knowledge).
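(To make the point concrete: the frequentist estimate really is mechanical. A minimal Python sketch, with an arbitrary trial count, recovers the 1/6 with nothing but counting; no model of the die's symmetry or physics is involved.)

```python
import random

# Frequentist estimate of P(roll == 6): count occurrences, divide by trials.
# Pure bookkeeping; no "understanding" of the die is required.
n = 600_000
estimate = sum(random.randint(1, 6) == 6 for _ in range(n)) / n
print(f"estimated P(6) = {estimate:.4f}")  # ~0.1667, i.e. 1/6
```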