That's an impressive result, considering how simple the algorithm really is. (The learning algorithm isn't obvious, but it's not a lot of code.)
Can this algorithm be run in reverse, to generate an image from the network? That's been done with one of the other deep neural net classifiers. "School bus" came out as a yellow blob with horizontal black lines.
An interesting question is whether there's a bias in the data set because humans composed the pictures, and humans like to take pictures of certain things. (Cats are probably over-represented.) Images taken by humans tend to have a primary subject, and that subject is usually roughly centered in the images. It might be useful to test against a data set taken from Google StreetView images, which lack such a composition bias.
Do you have a link for that? Sounds interesting (nothing turned up in a quick Google search).
edit: I found something similar to what you were talking about; is this[1] what you meant?
That's the paper he's referring to. They used another NN to generate candidate images, then kept the ones the first NN classified as "school bus" with the highest confidence.
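The core loop is simple: propose an image, score it with the classifier, keep it if the score improves. Here's a minimal hill-climbing sketch of that generate-and-select idea. The `score` function below is a toy stand-in (it just rewards yellow-ish pixels), not the actual classifier from the paper; a real setup would plug in the network's "school bus" confidence instead.

```python
import numpy as np

rng = np.random.default_rng(0)

def score(img):
    # Toy stand-in for the classifier's "school bus" confidence.
    # Rewards images close to a rough RGB yellow; illustration only.
    target = np.array([1.0, 0.8, 0.1])
    return -np.mean((img - target) ** 2)

def evolve_image(steps=200, shape=(8, 8, 3), sigma=0.05):
    """Hill-climb an image toward a higher classifier score."""
    best = rng.random(shape)          # start from random noise
    best_score = score(best)
    for _ in range(steps):
        # Propose a small random perturbation, clipped to valid pixel range.
        candidate = np.clip(best + rng.normal(0.0, sigma, shape), 0.0, 1.0)
        s = score(candidate)
        if s > best_score:            # keep only improvements
            best, best_score = candidate, s
    return best, best_score
```

With a toy score like this the result converges toward a flat yellow patch, which matches the "yellow blob" description: the classifier's score, not human-recognizable structure, is all that's being optimized. (The other common approach is gradient ascent on the class score through the network itself.)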