1) I am aware of deep learning, and the improvements made circa 2012 or so, but it still ultimately fails to make advanced correlations between different training sets, to maintain a strong memory, and to draw a meaningful distinction between high-level abstractions and low-level inputs (although all of these are being addressed in one way or another with incremental improvements). It also fails to effectively share learned data among separate entities, or to reteach it.
2) These things are all very human-like, but they are still sub-problems IMHO :)
Yep - also we tend to rely heavily on feedforward NNs right now, but I have a sneaking suspicion that the future lies in something closer to recurrent NNs. Or probably something more complex, with automata-ish properties (IIRC there is also some NN that uses Turing-like devices).
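To make the feedforward-vs-recurrent distinction concrete, here's a minimal sketch (assuming a plain Elman-style recurrent cell and made-up random weights, just for illustration): the feedforward net maps each input independently, while the recurrent one carries a hidden state across a sequence, which is the "memory" property mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Feedforward: each input is processed independently, no state carried over.
def feedforward(x, W):
    return np.tanh(W @ x)

# Recurrent (Elman-style): a hidden state h persists across time steps,
# giving the network a simple form of memory.
def recurrent_step(x, h, Wx, Wh):
    return np.tanh(Wx @ x + Wh @ h)

W  = rng.standard_normal((4, 3))   # hypothetical weights: 3-dim input, 4-dim output
Wx = rng.standard_normal((4, 3))   # input-to-hidden weights
Wh = rng.standard_normal((4, 4))   # hidden-to-hidden (the recurrent part)

sequence = [rng.standard_normal(3) for _ in range(5)]

# Feedforward output depends only on the current input...
ff_outputs = [feedforward(x, W) for x in sequence]

# ...while the recurrent state depends on everything seen so far.
h = np.zeros(4)
for x in sequence:
    h = recurrent_step(x, h, Wx, Wh)

print(ff_outputs[-1].shape, h.shape)  # both (4,)
```

The Turing-like devices go one step further: on top of a recurrent controller they bolt an external memory with differentiable read/write heads, which is roughly what "automata-ish" would buy you.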