
No it isn't; in fact, this line of work already has a tradition, see here:

https://en.wikipedia.org/wiki/Genetic_programming#Meta-genet...

That work uses genetic programming instead of neural networks, but the two approaches are equivalent in power; the difference is only that neural networks are more amenable to parallelization and come with a very good baseline optimization algorithm (gradient descent and friends).
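To make the contrast concrete, here is a toy sketch (mine, not from the linked article): the same objective minimized once by gradient descent and once by a simple mutate-and-select loop. Real genetic programming evolves programs rather than parameter vectors, so this only illustrates the gradient-based vs. population-based distinction.

    import numpy as np

    rng = np.random.default_rng(0)

    def loss(w):
        # Toy quadratic objective standing in for a training loss.
        return np.sum((w - 3.0) ** 2)

    def grad(w):
        return 2.0 * (w - 3.0)

    # Gradient descent: cheap per step, but needs a differentiable objective.
    w = rng.normal(size=5)
    for _ in range(200):
        w -= 0.05 * grad(w)
    print("gradient descent loss:", loss(w))

    # Evolutionary search: gradient-free, embarrassingly parallel over the population.
    population = rng.normal(size=(50, 5))
    for _ in range(200):
        fitness = np.array([loss(p) for p in population])
        parents = population[np.argsort(fitness)[:10]]                 # select the 10 best
        children = np.repeat(parents, 5, axis=0)                       # clone them
        population = children + 0.1 * rng.normal(size=children.shape)  # mutate
    best = min(population, key=loss)
    print("evolutionary search loss:", loss(best))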

The problem is that coming up with an optimizer/architecture designer is hard, so the benefit of running a meta-optimizer on a problem is difficult to realize compared with a handmade neural network. The problem needs to be so large that the network would have to re-engineer itself to solve it -- but consider how many resources have already been spent engineering neural networks (by humans, no less, who are quite powerful optimizers!). Unless the problem is truly titanic relative to current problems, the yields might be small.

What you can do in a similar vein is gather a large set of problems and train a 'neural architect', or something like it, that can then be applied many times to new problems. This spreads the cost over many new networks. I think it could make sense for governments and large organizations to get together and create this sort of thing: if you know the cost of training one large neural network, imagine the cost of training hundreds of millions of them just to train some kind of neural architect (NA).
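As a very rough, hypothetical sketch of what I mean (the search space, the reward, and evaluate_architecture are all made-up placeholders; the real evaluation step is the expensive part -- actually training a network per task -- and a real neural architect would condition on a description of the task rather than learn one global prior):

    import numpy as np

    rng = np.random.default_rng(0)

    DEPTHS = [2, 4, 8, 16]      # toy architecture search space: depth x width
    WIDTHS = [64, 128, 256]

    # Controller parameters: one logit per discrete choice, shared across all tasks.
    depth_logits = np.zeros(len(DEPTHS))
    width_logits = np.zeros(len(WIDTHS))

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def evaluate_architecture(depth, width, task):
        # Placeholder reward: in reality this means training a network with this
        # architecture on `task` and measuring validation performance.
        target_depth, target_width = task
        return -abs(depth - target_depth) / 16 - abs(width - target_width) / 256

    tasks = [(rng.choice(DEPTHS), rng.choice(WIDTHS)) for _ in range(500)]

    lr, baseline = 0.1, 0.0
    for task in tasks:          # amortize the controller's training over many tasks
        p_d, p_w = softmax(depth_logits), softmax(width_logits)
        i = rng.choice(len(DEPTHS), p=p_d)
        j = rng.choice(len(WIDTHS), p=p_w)
        reward = evaluate_architecture(DEPTHS[i], WIDTHS[j], task)
        advantage = reward - baseline
        baseline = 0.9 * baseline + 0.1 * reward
        # REINFORCE-style update: push probability toward choices that scored well.
        depth_logits += lr * advantage * (np.eye(len(DEPTHS))[i] - p_d)
        width_logits += lr * advantage * (np.eye(len(WIDTHS))[j] - p_w)

    print("depth distribution:", np.round(softmax(depth_logits), 2))
    print("width distribution:", np.round(softmax(width_logits), 2))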

There are milder versions of this, where the architecture search itself isn't trained:

https://en.wikipedia.org/wiki/Neural_architecture_search

https://en.wikipedia.org/wiki/Automated_machine_learning
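At its simplest, that kind of untrained search is just random sampling from a space of architectures and keeping the best one. A minimal sketch using scikit-learn (the search space here is invented for illustration; real NAS/AutoML systems search far richer spaces with smarter strategies):

    import random
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    random.seed(0)
    X, y = load_digits(return_X_y=True)
    X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

    SEARCH_SPACE = {
        "hidden_layer_sizes": [(32,), (64,), (128,), (64, 64), (128, 64)],
        "activation": ["relu", "tanh"],
        "alpha": [1e-4, 1e-3, 1e-2],
    }

    best_score, best_config = -1.0, None
    for _ in range(10):                     # plain random search over the space
        config = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
        model = MLPClassifier(max_iter=300, random_state=0, **config)
        model.fit(X_train, y_train)
        score = model.score(X_val, y_val)
        if score > best_score:
            best_score, best_config = score, config

    print("best architecture:", best_config, "val accuracy:", round(best_score, 3))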

Of course, even this approach has limitations (even if you theoretically allow it to create essentially arbitrary architectures), because it will still have difficulty "thinking outside the box" the way we do with huge amounts of mathematical expertise and intuition (see EfficientNet -- the insight in that paper would be unlikely to arise from a NA network). It's not the thing that will solve all of our problems forever, but it would be pretty significant (perhaps towards building large AI-based systems with multiple components, say the self-driving cars and robots of the future).




Wow, thank you for the answer and the links. I had tried to search for something like this but I didn't have the right terminology.

I really hope to see something happen in this area before I die, just for the sake of seeing it happen.

I often wonder about whether neural networks might need to meet at a crossroads with other techniques.

Inductive logic programming, answer set programming, or constraint programming seem like they could be a good match for this field. From my admittedly limited understanding, you have a more "concrete" representation of a model/problem in the form of symbolic logic or constraints, and an entirely abstract "black box" solver in the form of neural networks. I have no real clue, but it seems like they could be synergistic?
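Something like this toy sketch is roughly what I have in mind (entirely made up, and unrelated to the repo linked below): a network supplies soft beliefs, and a hard symbolic constraint rules out joint readings that can't be right.

    import numpy as np

    # Soft beliefs over digits 0-9 for two images, standing in for the outputs
    # of a trained classifier.
    p_first = np.array([0.05, 0.10, 0.05, 0.55, 0.05, 0.05, 0.05, 0.05, 0.03, 0.02])
    p_second = np.array([0.02, 0.03, 0.05, 0.05, 0.05, 0.60, 0.05, 0.05, 0.05, 0.05])

    KNOWN_SUM = 9  # symbolic knowledge: the two digits are known to sum to 9

    # Score every joint assignment, but keep only those satisfying the constraint.
    # The unconstrained argmax (3, 5) sums to 8, so the constraint rules it out.
    best = max(
        ((a, b, p_first[a] * p_second[b])
         for a in range(10) for b in range(10)
         if a + b == KNOWN_SUM),
        key=lambda t: t[2],
    )
    print("most probable consistent reading:", best[:2], "score:", round(best[2], 3))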

There's a really oddball repo I found that took this approach:

https://github.com/921kiyo/symbolic-rl

"Symbolic Reinforcement Learning using Inductive Logic Programming"



