Hacker News

You say that these images are highly optimized to produce this effect and would not occur by chance, but have you looked at the images in the "fooling" paper?

http://www.evolvingai.org/fooling

Some of them are very simple, and DO occur a lot in the world. For example, the alternating yellow and black line pattern would be encountered by a driverless car, and the car's classifier would think it was seeing a school bus.




>Some of them are very simple, and DO occur a lot in the world. For example, the alternating yellow and black line pattern would be encountered by a driverless car, and it would think it is seeing a school bus.

While the image shows a yellow and black line pattern to us, are you sure that is also what the CNN "sees"? Couldn't this image work the same way as the adversarial images, i.e. the network responds to many small pixel-level features rather than to the overall stripe pattern?

If it's possible to make the CNN predict an ostrich for an image of a car, then the same can be done for an image of an alternating yellow and black line pattern, no?
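The point above is that a gradient-based perturbation can push essentially any input toward any target class. A minimal sketch of that idea, using a toy linear softmax classifier and a targeted FGSM-style step (this is an illustrative toy, not the actual CNN or attack from the papers under discussion):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class linear model: logits = x @ W + b (weights fixed, arbitrary).
W = rng.normal(size=(10, 2))
b = np.zeros(2)

def predict(x):
    return int(np.argmax(x @ W + b))

x = rng.normal(size=10)            # any input, e.g. "stripes"
target = 1 - predict(x)            # the class the model does NOT predict

# For a linear model the gradient of the target-vs-other logit margin
# w.r.t. the input is just a difference of weight columns.
grad = W[:, target] - W[:, 1 - target]

# Step in the sign of the gradient, with a step size chosen just large
# enough to flip the margin (FGSM uses a fixed small epsilon instead).
eps = 1.1 * abs(x @ grad) / np.sum(np.abs(grad))
x_adv = x + eps * np.sign(grad)

print(predict(x), predict(x_adv))  # original class vs. target class
```

The per-pixel change `eps * sign(grad)` can be tiny while still flipping the prediction, which is why an image that looks like plain stripes to us can still drive the network to whatever class the perturbation targets.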




