That doesn't seem like a fair comparison to what the blogger did. The blogger gave their AI the raw data - and a different prompt from the one you used. If you gave it a raster image, that's "cheating" - these models were trained to recognize things in images.
> When a png is directly uploaded, the model is better able to notice that some strange pattern is present in the data. However, it still does not recognize the pattern as a gorilla.
I wonder if the conversation context unfairly weighted the new impression toward the previous interpretation.
I'm curious if there are any good resources on this, but in my experience, including the conversation context makes the responses drastically worse. It's gotten to a point whe