
Wait - they didn't use knowledge of the neural network internal state to calculate these patterns? Does that mean they could create equivalent images for human beings? What would those look like!



No, but we did make use of (1) a large number of input -> network -> output iterations, along with (2) precisely measured output values to decide which input to try next. It may not be so easy to experiment in the same way on natural organisms (ethically or otherwise).
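That loop can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual method: `classify` stands in for the black-box model (faked here with a fixed linear probe so the sketch runs), and the attack is simple random hill-climbing that keeps a pixel perturbation only when the measured output score improves.

```python
import random

def classify(image):
    # Hypothetical stand-in for the black-box model: returns a
    # "panda" score for a flat list of pixel values in [0, 1].
    # A fixed linear probe substitutes for a real network here.
    weights = [((i * 37) % 11 - 5) / 5 for i in range(len(image))]
    return sum(w * p for w, p in zip(weights, image))

def hill_climb(image, steps=2000, eps=0.05):
    """Query-only attack: try a small one-pixel perturbation, then
    keep it only if the precisely measured output score went up."""
    img = list(image)
    best = classify(img)
    for _ in range(steps):
        i = random.randrange(len(img))
        old = img[i]
        img[i] = min(1.0, max(0.0, old + random.uniform(-eps, eps)))
        score = classify(img)
        if score > best:
            best = score      # keep the helpful perturbation
        else:
            img[i] = old      # revert a harmful one
    return img, best

start = [0.5] * 64
adv, score = hill_climb(start)
```

Since only improving changes are kept, the score is monotone in the number of queries; no access to the network's internals (gradients, weights) is needed.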

Of course, if you're as clever as Tinbergen, you might be able to come up with patterns that fool organisms even without (1) or (2):

https://imgur.com/a/ibMUn


Perhaps a single experiment on millions of different people? A web experiment of some kind? "Which image looks more like a panda?" and flash two images on the screen.


That's a good idea, though note that there's a difference between asking "Which of these two images looks more like a panda?" and "Which of these two images looks more like a panda than a dog or cat?". The latter is the supervised learning setting used in the paper, and generally could lead to examples that look very different than pandas, as long as they look slightly more like pandas than dogs or cats. The former method is more like unsupervised density learning and could more plausibly produce increasingly panda-esque images over time.

A sort of related idea was explored with this site, where millions (ok, thousands) of users evolve shapes that look like whatever they want, but likely with a strong bias toward shapes recognizable to humans. Over time, many common motifs arise:

http://endlessforms.com/


Problem is, even if you succeed and end up with a fabricated picture that fools human neural nets into believing it's a picture of a panda, how would you tell it's not really a picture of a panda?

You'd need another classifier to tell you "nope it's actually just random noise and shapes" ... hm.


Probably not hard - just engage the higher cognitive functions. "Does it look like noise? Yeah." Or just close one eye and look again.


I think you missed my somewhat deeper philosophical point :)

Who gets to decide what is really a picture of a panda?

If we managed to craft a picture that could, with very high certainty, trick human neural nets (for the sake of argument, including those higher cognitive functions) into believing something is a picture of a panda, "except it actually really isn't", what does that even mean?

Human insists it's a picture of a panda, computer classifier maintains it's noise and shapes.

Who is right? :)


Interesting, sure. But I started out wondering if some obviously-noise picture could be found that fooled humans, at least at first glance. "Hey a panda! Wait, what was I thinking, that's just noise!" It would be weird and cool, on the order of the dress meme etc. but much more so.

Kind of like the memes in Snow Crash, ancient forgotten symbols that make up the kernel of human thought.


Like a picture of a blue and black dress?


They would look like optical illusions!


Can they create human equivalents?

That would require that human vision works in the fashion of neural networks/svms/etc. There actually isn't any evidence for this.


What?! Neural networks are explicitly designed to work in the fashion of - you guessed it - neural networks. Like the retina and optic nerve.


"In modern software implementations of artificial neural networks, the approach inspired by biology has been largely abandoned for a more practical approach based on statistics and signal processing." [1]

[1] http://en.wikipedia.org/wiki/Artificial_neural_network


The embodiment has changed, of course. But the process of successive ranks of weighted accumulators has not, and that is the neural model. Of course it's different. But there's still the question: could failure modes of the mathematical model be present in the biological one? It's a question of modeling, not wetware vs. hardware.
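Those "successive ranks of weighted accumulators" can be sketched concretely. This is the abstract mathematical model under discussion, not a claim about biological neurons; the weights are made up for illustration.

```python
import math

def layer(inputs, weights, biases):
    """One rank of weighted accumulators: each unit sums its
    weighted inputs, adds a bias, and applies a threshold-like
    squashing function (here a logistic sigmoid)."""
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

# Two successive ranks: 3 inputs -> 2 hidden units -> 1 output.
x = [0.2, 0.7, 0.1]
h = layer(x, [[0.5, -1.0, 0.3], [1.2, 0.4, -0.7]], [0.1, -0.2])
y = layer(h, [[2.0, -1.5]], [0.0])
```

Whether this chain of weighted sums and squashing functions shares failure modes with biological vision is exactly the open question raised above.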


But that model is not the way the overall activity of the brain is currently understood - neurons are seen as far more complex than simple threshold machines. They may involve threshold effects, but the claim that they work overall like any version of artificial neural networks is no longer supported by anyone.



