
Give me an ML system, and I can give you a problem it cannot solve. I am guaranteed success thanks to the No Free Lunch theorem: https://en.wikipedia.org/wiki/No_free_lunch_theorem.

In the case of deep learning, I can point to the task of determining which points have a value above 0.5 on an infinite 2D Perlin-noise-derived space fed by a Mersenne Twister with seed 0, with an infinite number of octaves. Deep learning does not deal well with infinite input spaces to begin with, and the pseudo-random generator cannot easily be encoded with common neural-network nonlinearities.

On the other hand, while we cannot compute an infinite number of octaves, and while points extremely far from the origin or extremely fine details will run into IEEE 754 limitations, we can get a good approximation by writing the program that computes the texture.

And that is with just two numbers as input and one number as output.
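
To make that concrete, here is a minimal sketch of "writing the program that computes the texture": gradient (Perlin-style) noise over an unbounded 2D domain, with the permutation table drawn from Python's Mersenne Twister seeded with 0, summed over a finite number of octaves and thresholded at 0.5. The specific choices (permutation size, octave weights, rescaling) are illustrative assumptions, not a canonical construction.

    import math
    import random

    _rng = random.Random(0)            # CPython's random module is a Mersenne Twister
    _PERM = list(range(256))
    _rng.shuffle(_PERM)
    _PERM = _PERM * 2                  # doubled so the lookups below never overflow

    def _fade(t):
        # Perlin's interpolant 6t^5 - 15t^4 + 10t^3
        return t * t * t * (t * (t * 6 - 15) + 10)

    def _grad(h, x, y):
        # Pick one of eight gradient directions from the hashed corner value
        return [x + y, -x + y, x - y, -x - y, x, -x, y, -y][h & 7]

    def perlin(x, y):
        xi, yi = math.floor(x) & 255, math.floor(y) & 255
        xf, yf = x - math.floor(x), y - math.floor(y)
        u, v = _fade(xf), _fade(yf)
        aa = _PERM[_PERM[xi] + yi]
        ab = _PERM[_PERM[xi] + yi + 1]
        ba = _PERM[_PERM[xi + 1] + yi]
        bb = _PERM[_PERM[xi + 1] + yi + 1]
        x1 = (1 - u) * _grad(aa, xf, yf) + u * _grad(ba, xf - 1, yf)
        x2 = (1 - u) * _grad(ab, xf, yf - 1) + u * _grad(bb, xf - 1, yf - 1)
        return (1 - v) * x1 + v * x2   # roughly in [-1, 1]

    def octave_value(x, y, octaves=8):
        total, amp, freq, norm = 0.0, 1.0, 1.0, 0.0
        for _ in range(octaves):
            total += amp * perlin(x * freq, y * freq)
            norm += amp
            amp *= 0.5
            freq *= 2.0
        return 0.5 + 0.5 * total / norm   # rescale to roughly [0, 1]

    def above_half(x, y):
        return octave_value(x, y) > 0.5   # the "value above 0.5" predicate

The program is exact up to floating point and the octave truncation, which is the approximation mentioned above.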




Sure, I choose... "Exhaustive search in the space of programs". (Maybe with some genetic-algorithm heuristics to shave off a couple billion years on each query.)

It's an ML system that can solve any decidable problem, and even some semi-decidable ones, which (if the Church-Turing thesis holds) is everything that can be understood by humans or anything else.

You might not be able to wait around long enough to see it give you a result though, but hey at least _it_ got an answer.
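
For flavour, a toy sketch of that search: enumerate compositions over a tiny, hypothetical expression language in order of length and return the first program consistent with the given input/output examples. A real system would add a step budget and the genetic-algorithm-style heuristics mentioned above; this is only an illustration.

    from itertools import product

    OPS = ["x", "x+1", "x-1", "x*2", "x*x"]      # hypothetical atomic programs

    def compose(parts):
        # Build "f(g(...(x)))" as a Python expression string
        expr = "x"
        for p in parts:
            expr = p.replace("x", "(" + expr + ")")
        return expr

    def search(examples, max_depth=6):
        # Return the shortest composition consistent with every (input, output) pair
        for depth in range(1, max_depth + 1):
            for parts in product(OPS, repeat=depth):
                expr = compose(parts)
                if all(eval(expr, {"x": xin}) == yout for xin, yout in examples):
                    return expr
        return None

    # "Learn" 2x + 1 from a few examples
    print(search([(1, 3), (2, 5), (10, 21)]))    # -> "((x)*2)+1"

It halts on anything expressible in the language, just not necessarily before the heat death of the universe.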


If you allow impractical ML systems, you might as well pick a die. Sure, the answer is inaccurate, but there's a non-zero probability that it is correct!

But, realistically, the ML system you devise cannot learn about features that require knowledge outside of the observable universe.


How does a static die model computational processes?

What does the universe have to do with the set of computable functions?


Humans are neither immune to No Free Lunch, nor able to predict Perlin noise.


They're definitely not immune to No Free Lunch, but they are able to predict Perlin noise.

(Sure, you could try to analyse humans as if they were spherical objects floating in a void, but in practice humans have computers.)

Let me give you an example. The 2011 Nobel Prize in Chemistry was awarded for the discovery and analysis of quasicrystals. Those also cannot be modeled by deep learning, because its building blocks, linear separators, cannot capture infinitely generated aperiodic structures with a finite representation (unless the network essentially encodes a completely different ML system within itself). Yet humans can model them.
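
To illustrate the kind of object involved, take a 1D analogue of a quasicrystal, the Fibonacci word, generated by the substitution A -> AB, B -> A (my choice of toy example, not part of the Nobel work). A few lines of exact code capture the whole aperiodic structure, while a finite stack of linear separators can only approximate it over a bounded window.

    def fibonacci_word(n_iterations):
        # Fibonacci substitution: A -> AB, B -> A, iterated from "A"
        word = "A"
        for _ in range(n_iterations):
            word = "".join("AB" if c == "A" else "A" for c in word)
        return word

    print(fibonacci_word(6))   # -> "ABAABABAABAABABAABABA": aperiodic, yet fully determined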

I could go on all day about this, as there is an infinity of problems for which deep learning is inadequate: proving the four-color theorem, routing, computing multiplications, …

Don't get me wrong: deep learning is outstanding for a set of menial tasks that I love to see being handed off to machines. But it is not the be-all, end-all that is sometimes claimed.


In principle, it's not impossible.




