
Wouldn't it still be likely to settle in a local minimum when the factors that create one are limited to those contributing to the likelihood of a single output category, rather than to whether all the loss functions are curving up?

An example I can think of would be an absurdly large million-input neural network, where one of the inputs has a pronounced effect on only one of the outputs. It seems possible for the path from that input to its output to be dragged downhill in the context of all outputs, but uphill in the context of the single output it affects.
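
A toy sketch of that scenario (the two loss functions and all numbers are made up for illustration, not taken from any real network): gradient descent follows the gradient of the total loss, so a step can lower the sum while raising the term belonging to a single output.

  # Two outputs share one weight w; gradient descent on the total
  # loss can increase loss_b while decreasing loss_a + loss_b.

  def loss_a(w):                # loss for output A; pulls w strongly toward 2
      return (w - 2.0) ** 2

  def loss_b(w):                # loss for output B; pulls w weakly toward -1
      return 0.1 * (w + 1.0) ** 2

  def grad(f, w, eps=1e-6):     # central-difference numerical gradient
      return (f(w + eps) - f(w - eps)) / (2 * eps)

  w, lr = 0.0, 0.1
  for step in range(5):
      g = grad(loss_a, w) + grad(loss_b, w)   # gradient of the total loss
      w_next = w - lr * g
      print(f"step {step}: total {loss_a(w) + loss_b(w):.3f} -> "
            f"{loss_a(w_next) + loss_b(w_next):.3f}, "
            f"loss_b {loss_b(w):.3f} -> {loss_b(w_next):.3f}")
      w = w_next

Running it shows the total loss falling on every step while loss_b rises, which is exactly the "downhill overall, uphill for one output" behavior described above.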

Is what I've described not likely, or am I just completely off base?




An interesting question. A million-input neural network isn't necessarily absurd: images are very high-dimensional, and one could easily imagine a one-megapixel input to a neural net. But in natural images no single pixel is indicative of any single image characteristic by itself. I don't think this is a coincidence; it's a common characteristic of "natural" high-dimensional data, on which neural nets tend to work well. So yes, I'd say what you've described is unlikely for a large category of "natural" high-dimensional data, which probably includes most of the data we care about in the real world.
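
To make that concrete, here's a purely synthetic sketch (the data model is invented for illustration and claims nothing about real image statistics): when the class signal is spread thinly across many inputs, no single input predicts the class well on its own, but pooling all of them does.

  import numpy as np

  # "Images" here are Gaussian noise plus a class pattern spread
  # thinly over all 1000 pixels.
  rng = np.random.default_rng(0)
  n, d = 5000, 1000
  pattern = rng.normal(size=d) / np.sqrt(d)    # signal spread across pixels
  y = rng.integers(0, 2, size=n) * 2 - 1       # labels in {-1, +1}
  X = y[:, None] * pattern[None, :] + rng.normal(size=(n, d))

  one_pixel = np.mean(np.sign(X[:, 0] * pattern[0]) == y)   # guess from pixel 0
  all_pixels = np.mean(np.sign(X @ pattern) == y)           # pool every pixel
  print(f"one-pixel accuracy ~ {one_pixel:.2f}, all-pixel accuracy ~ {all_pixels:.2f}")

The one-pixel guess lands near chance while the pooled guess does far better, which is the sense in which no single input dominates a single output in "natural" high-dimensional data.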



