Hacker News

> Machine learning can observe and learn patterns that are more complex than humans can grasp.

That is a common misconception. ML cannot do anything beyond our modeling ability, because it is built from that very ability. Deep learning is simply a method of approximating a function with a nonlinear formula. Something that cannot be easily approximated this way may require too much memory and compute to be practical.

It is fundamentally similar to how JPEG is not a good fit for storing text: glyphs are hard to approximate with the discrete cosine transform.

The edge that ML has against humans is not in the learning part, it is in the machine part. Human memory is volatile, while we have grown exceedingly good at making machines retain memory.




That is not a valid argument. You need to show either that every pattern machine learning grasps is also graspable by humans, or that humans grasp something machine learning never will. Multilayer neural networks can capture very interesting (from a human perspective) patterns and concepts, but also many others that seem like garbage to us (perhaps because we don't grasp their significance).


Give me a ML system, and I can give you a problem it cannot solve. I am guaranteed success thanks to the No Free Lunch theorem: https://en.wikipedia.org/wiki/No_free_lunch_theorem.

In the case of deep learning, I can point to the task of deciding whether values exceed 0.5 on an infinite 2D field derived from Perlin noise, fed by a Mersenne Twister with seed 0 and an infinite number of octaves. Deep learning does not deal well with infinite spaces to begin with, and the pseudo-random generator cannot be easily encoded with common neural-network nonlinearities.

On the other hand, while we cannot compute an infinite number of octaves, and while points extremely far from the origin or extremely fine details will run into IEEE 754 limitations, we get a good approximation simply by writing the program that computes the texture.

And that is just with only two numbers as input and one number as output.
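To make "writing the program that computes the texture" concrete, here is a minimal sketch of the kind of field described above: seeded value noise (a simpler stand-in for true Perlin gradient noise), whose lattice values come from Python's Mersenne Twister, with a truncated octave sum and a 0.5 threshold. The function names and the exact noise construction are illustrative assumptions, not anything from the thread.

```python
import random
import math

def lattice_value(ix, iy, seed=0):
    """Deterministic pseudo-random value in [0, 1) at integer lattice
    point (ix, iy), drawn from a Mersenne Twister keyed by seed and
    coordinates via a classic spatial-hash mix."""
    key = (seed * 73856093) ^ (ix * 19349663) ^ (iy * 83492791)
    rng = random.Random(key)  # Python's random.Random is a Mersenne Twister
    return rng.random()

def smooth_noise(x, y, seed=0):
    """Bilinearly interpolated value noise at real-valued (x, y)."""
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    v00 = lattice_value(ix, iy, seed)
    v10 = lattice_value(ix + 1, iy, seed)
    v01 = lattice_value(ix, iy + 1, seed)
    v11 = lattice_value(ix + 1, iy + 1, seed)
    top = v00 + fx * (v10 - v00)
    bottom = v01 + fx * (v11 - v01)
    return top + fy * (bottom - top)

def fractal_noise(x, y, octaves=8, seed=0):
    """Sum of octaves of value noise. A program must truncate the
    'infinite' octave sum, but each extra octave contributes
    exponentially less, so a few octaves give a good approximation."""
    total, amplitude, frequency, norm = 0.0, 1.0, 1.0, 0.0
    for _ in range(octaves):
        total += amplitude * smooth_noise(x * frequency, y * frequency, seed)
        norm += amplitude
        amplitude *= 0.5
        frequency *= 2.0
    return total / norm  # normalised back into [0, 1)

def above_half(x, y):
    """The classification task from the comment: is the value above 0.5?"""
    return fractal_noise(x, y) > 0.5
```

The whole "infinite" decision surface is pinned down by a few dozen lines and a seed, which is exactly the sense in which a human-written program captures it while a function approximator struggles.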


Sure, I choose... "Exhaustive Search in the space of programs". (maybe with some genetic-algorithm heuristics to shave off a couple of billion years on each query)

It's an ML system that can solve any decidable problem, and even some semi-decidable ones. Which is (if the Church-Turing thesis holds) everything that can be understood by humans or anything else.

You might not be able to wait around long enough to see it give you a result, but hey, at least _it_ got an answer.
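A toy sketch of what "exhaustive search in the space of programs" means in practice: enumerate candidate programs in order of size until one is consistent with every given input/output example. The tiny expression grammar, the atom set, and the function names below are all illustrative assumptions; real program synthesis uses far richer languages (and, as noted, far more time).

```python
# Enumerate arithmetic expressions over x, smallest first, until one
# matches all the (input, output) examples. Purely a toy illustration.
ATOMS = ["x", "1", "2", "3"]
OPS = ["+", "-", "*"]

def expressions(depth):
    """All expressions built from the atoms with at most `depth`
    levels of binary operators."""
    if depth == 0:
        return list(ATOMS)
    smaller = expressions(depth - 1)
    exprs = list(smaller)
    for op in OPS:
        for left in smaller:
            for right in smaller:
                exprs.append(f"({left}{op}{right})")
    return exprs

def synthesize(examples, max_depth=2):
    """Return the first expression consistent with every example,
    or None if the bounded search space contains no such program."""
    for depth in range(max_depth + 1):
        for expr in expressions(depth):
            if all(eval(expr, {"x": x}) == y for x, y in examples):
                return expr
    return None
```

Even this toy version blows up combinatorially with depth, which is the "wait around a couple of billion years" part of the joke.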


If you allow impractical ML systems, you might as well roll a die. Sure, the answer is inaccurate, but there's a non-zero probability that it is correct!

But, realistically, the ML system you devise cannot learn about features that require knowledge outside of the observable universe.


How does a static die model computational processes?

What does the universe have to do with the set of computable functions?


Humans are neither immune to No Free Lunch, nor able to predict Perlin noise.


They're definitely not immune to No Free Lunch, but they are able to predict Perlin noise.

(Sure, you could try to analyse humans as if they were spherical objects floating in void, but in practice humans have computers.)

Let me give you an example. The 2011 Nobel Prize in Chemistry was awarded for the discovery and analysis of quasicrystals. Those also cannot be modeled by deep learning, as its building blocks, linear separators, cannot capture infinitely generated structures with finitely many pieces (unless the network essentially encodes a completely different ML system within itself). Yet humans can model them.
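To make the "yet humans can model them" point concrete: a standard one-dimensional analogue of a quasicrystal, the Fibonacci word, is generated exactly by a finite substitution rule (a → ab, b → a), even though the resulting sequence never becomes periodic. A minimal sketch (the function name is mine, not from the thread):

```python
def fibonacci_word(iterations):
    """Apply the substitution a -> ab, b -> a repeatedly, starting
    from "a". The result is aperiodic yet fully specified by this
    two-line rule -- a finite human model of quasiperiodic order."""
    word = "a"
    for _ in range(iterations):
        word = "".join("ab" if c == "a" else "a" for c in word)
    return word
```

The successive lengths are Fibonacci numbers (1, 2, 3, 5, 8, ...), and the sequence never contains "bb"; a few symbols of rule encode infinite aperiodic structure.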

I could go on all day about this, as there is an infinity of problems where deep learning is inadequate: proving the four-color theorem, routing, computing multiplications, …

Don't get me wrong: deep learning is outstanding for a set of menial tasks that I love to see being handed off to machines. But it is not the be-all, end-all that is sometimes claimed.


In principle it's not impossible.


> Deep learning is simply a method to approximate a function with a nonlinear formula.

I think that describes single-hidden-layer neural networks. The universal approximation theorem already holds for a single hidden layer, which is sufficient to approximate any continuous function on a bounded domain. Deep learning improves how the approximated function behaves on values that were not in the training set, beyond what a single hidden layer can manage.

Depth doesn't improve the ability to approximate a function's outputs; it improves the ability to approximate the function's implementation, which gives better results on data the network was not trained on.
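A concrete instance of the single-hidden-layer claim, as an illustrative sketch of my own (not an example from the thread): with ReLU units, |x| = relu(x) + relu(-x), so one hidden layer of just two units represents the function exactly, no depth required.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# A single hidden layer with two ReLU units computing |x| exactly:
# |x| = relu(x) + relu(-x)
W1 = np.array([[1.0], [-1.0]])  # input -> hidden weights (2 units)
W2 = np.array([[1.0, 1.0]])     # hidden -> output weights

def net(x):
    """Forward pass of the two-unit, single-hidden-layer network."""
    h = relu(W1 @ np.atleast_2d(x))  # hidden activations, shape (2, n)
    return (W2 @ h).ravel()          # outputs, shape (n,)
```

Piecewise-linear functions like this are representable exactly; the universal approximation theorem says that with enough hidden units the same one-layer shape can get arbitrarily close to any continuous function on a bounded domain.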



