There is no known way to verify that a non-trivial neural network won't drastically change its output in response to small changes in input (e.g., one-pixel attacks on image classifiers). This holds for every current model I'm aware of.
Almost all neural network implementations have continuous outputs (i.e., the nodes in the output layer produce values between 0 and 1). That doesn't change the above issue at all: the final decision is usually the argmax over those outputs, so a small input perturbation can still push a different class on top.
This is much less of an issue with traditional methods.
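To make the fragility concrete, here's a minimal sketch, assuming PyTorch; the model and input are toy stand-ins rather than a real classifier, and it uses a gradient-based (FGSM-style) perturbation instead of the search-based one-pixel attack, but the point is the same: the outputs vary continuously, yet the discrete argmax decision can flip under a tiny change to every pixel.

```python
# A minimal FGSM-style sketch (PyTorch assumed installed). The model and the
# random "image" are toy stand-ins, not any real classifier; with an
# untrained model the flip isn't guaranteed, but it typically happens.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy "image classifier": a single linear layer over a flattened 8x8 image.
model = nn.Sequential(nn.Flatten(), nn.Linear(64, 2))
model.eval()

x = torch.rand(1, 1, 8, 8, requires_grad=True)   # random input "image"
orig_pred = model(x).argmax(dim=1)

# Step every pixel a tiny amount in the direction that increases the loss
# for the current prediction (fast gradient sign method).
loss = F.cross_entropy(model(x), orig_pred)
loss.backward()
x_adv = (x + 0.1 * x.grad.sign()).clamp(0.0, 1.0)

adv_pred = model(x_adv).argmax(dim=1)
print("max per-pixel change:", (x_adv - x).abs().max().item())
print("prediction flipped:", orig_pred.item() != adv_pred.item())
```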