Hacker News

1) I am aware of deep learning, and the improvements made circa 2012 or so, but it still ultimately lacks the ability to make advanced correlations across differing training sets, a strong memory, and a meaningful distinction between high-level abstractions and low-level inputs (although all of these things are being addressed in one way or another with incremental improvements). It also has no effective way to share learned data among separate entities or to reteach it.

2) These things are all very human-like, but they are still sub-problems IMHO :)




This reminded me that there are other NN variants that might help provide some clues in that direction.

Hopfield nets for example provide associative memory.

It may not all be groundbreakingly efficient, but it's very worthwhile.
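To make the associative-memory point concrete, here is a minimal sketch of a classic Hopfield net with Hebbian storage and synchronous recall. All names, sizes, and the stored pattern are illustrative choices, not anything from the thread:

```python
import numpy as np

def train(patterns):
    """Hebbian storage: W accumulates outer products of the +/-1 patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / patterns.shape[0]

def recall(W, state, steps=10):
    """Synchronous updates; the state settles into a stored attractor."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1  # break ties deterministically
    return state

stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])
W = train(stored)
noisy = stored[0].copy()
noisy[0] *= -1           # corrupt one bit
print(recall(W, noisy))  # recovers the stored pattern
```

The "associative" part is that recall starts from a corrupted cue rather than an address: the network completes the pattern it was trained on.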


Yep - also we have a strong tendency to use feedforward NNs right now, but I have a sneaking suspicion that the future lies in something closer to recurrent NNs. Or probably something more complex, with automata-like properties (IIRC there is also some NN variant that uses Turing-machine-like devices).
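The feedforward/recurrent distinction above comes down to a hidden state carried across time steps. A minimal Elman-style recurrent cell, sketched with random placeholder weights (sizes and seed are arbitrary, not a trained model):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 4
W_xh = rng.normal(size=(n_hidden, n_in)) * 0.1   # input -> hidden
W_hh = rng.normal(size=(n_hidden, n_hidden)) * 0.1  # hidden -> hidden (the recurrence)
b = np.zeros(n_hidden)

def rnn_step(h, x):
    # Unlike a feedforward pass, h feeds back into the next step,
    # giving the network a form of memory over the sequence.
    return np.tanh(W_xh @ x + W_hh @ h + b)

h = np.zeros(n_hidden)
for x in rng.normal(size=(5, n_in)):  # a 5-step input sequence
    h = rnn_step(h, x)
print(h.shape)  # (4,)
```

The W_hh term is the whole difference: remove it and each step is an independent feedforward computation.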


Nah. If we want properly humanlike AI software, we're going to have to find a way to make inference in probabilistic programs a lot faster.
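To see why inference speed is the bottleneck, here is the simplest possible inference strategy for a toy probabilistic program: rejection sampling. The model and observation are invented for illustration; real probabilistic programming systems exist precisely because this brute-force approach scales terribly:

```python
import random

def model():
    """Toy generative program: two independent fair coin flips."""
    a = random.random() < 0.5
    b = random.random() < 0.5
    return a, b

# Infer P(first is heads | at least one is heads) by rejection sampling:
# run the program, throw away runs inconsistent with the observation.
random.seed(0)
accepted = []
for _ in range(100_000):
    a, b = model()
    if a or b:              # condition on the observation
        accepted.append(a)

estimate = sum(accepted) / len(accepted)
print(estimate)  # close to the exact answer, 2/3
```

Rejection sampling discards roughly a quarter of the runs even in this trivial model; with realistic models and observations the acceptance rate collapses, which is why faster inference (MCMC, variational methods, amortized inference) is the active research problem.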



