> ...in case 2 I can compute the gradient numerically, it just takes a bit longer.

Yep, true, might just take a while. On the other hand, even a very noisy estimate of the gradient might suffice, which could be faster to obtain. Perhaps someone will do that experiment soon. Maybe you could convince one of those students of yours to do this for extra credit?? ;).
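For anyone curious what that would look like concretely, here's a rough sketch (mine, not from the paper) of the slow per-pixel finite-difference gradient next to a cheap-but-noisy SPSA-style estimate. score_fn is a hypothetical black box mapping an image (numpy array) to a scalar class score:

  import numpy as np

  def numerical_gradient(score_fn, x, eps=1e-4):
      # Central differences, two forward passes per pixel: accurate but slow.
      x = np.array(x, dtype=float)
      grad = np.zeros_like(x)
      it = np.nditer(x, flags=["multi_index"])
      while not it.finished:
          idx = it.multi_index
          orig = x[idx]
          x[idx] = orig + eps; plus = score_fn(x)
          x[idx] = orig - eps; minus = score_fn(x)
          x[idx] = orig
          grad[idx] = (plus - minus) / (2 * eps)
          it.iternext()
      return grad

  def noisy_gradient(score_fn, x, eps=1e-4, n_samples=50, rng=None):
      # SPSA-style: average a handful of random-direction probes.
      # Far fewer forward passes, but only a noisy estimate.
      rng = np.random.default_rng() if rng is None else rng
      grad = np.zeros_like(x, dtype=float)
      for _ in range(n_samples):
          delta = rng.choice([-1.0, 1.0], size=x.shape)  # Rademacher directions
          plus = score_fn(x + eps * delta)
          minus = score_fn(x - eps * delta)
          grad += (plus - minus) / (2 * eps) * delta
      return grad / n_samples

The noisy estimate needs only 2*n_samples forward passes instead of two per pixel, which is the sense in which a rough gradient could be much faster to obtain.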

> Likewise, I was not very surprised that you can produce fooling images, but it is surprising and concerning that they generalize across models.

Ditto x2.

> It seems that there are entire, huge fooling subspaces of the input space, not just fooling images as points. And that these subspaces overlap a lot from one net to another, likely since they share similar training data (?) unclear.

Yeah. I wonder if the subspaces found via non-gradient-based exploration end up being larger, or overlapping more between networks, than those found (more easily) with the gradient. That would be another interesting follow-up experiment.
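Rough sketch of what I mean by non-gradient-based exploration, in case it's useful (score_a and score_b are hypothetical black boxes giving the target-class confidence on two different nets; not from the paper):

  import numpy as np

  def random_hill_climb(score, x0, step=0.05, iters=2000, rng=None):
      # Gradient-free search: keep a random perturbation only if it raises
      # the target-class score. Crude, but needs nothing beyond forward passes.
      rng = np.random.default_rng() if rng is None else rng
      x, best = x0.copy(), score(x0)
      for _ in range(iters):
          cand = np.clip(x + step * rng.standard_normal(x.shape), 0.0, 1.0)
          s = score(cand)
          if s > best:
              x, best = cand, s
      return x, best

  # fooled, conf_a = random_hill_climb(score_a, np.random.rand(224, 224, 3))
  # print("net A:", conf_a, "net B:", score_b(fooled))

If images found this way stay high-confidence on the second net too, that would at least hint that the subspaces reached without gradients overlap across networks as well.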



