
There is no known way to verify that a non-trivial neural network's output won't diverge drastically under small changes in input (e.g., one-pixel attacks on image classifiers). This is true for all current models I know of.

Almost all neural network implementations have continuous outputs (i.e., the output-layer nodes produce values between 0 and 1). That doesn't change the above issue at all: the outputs vary smoothly, but the predicted class is whichever output is highest, so a tiny input change can still flip the decision.
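A toy sketch of this point (all weights and inputs here are made up for illustration, not from any real model): a linear classifier with a softmax output produces continuous probabilities, yet a small change to a single input feature flips which class wins.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax: continuous outputs in (0, 1) summing to 1.
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical weights for a 2-class classifier over a 4-"pixel" input.
W = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.01]])

x = np.array([1.0, 0.5, 0.5, 0.99])  # original input
p = softmax(W @ x)
print(np.argmax(p))                  # predicts class 0

x_adv = x.copy()
x_adv[3] += 0.02                     # perturb a single "pixel" slightly
p_adv = softmax(W @ x_adv)
print(np.argmax(p_adv))              # predicts class 1: the decision flips
```

The probabilities move only slightly, but because the decision is an argmax over them, the discrete prediction changes, which is all an attacker needs.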

This is much less of an issue with traditional methods.



