I think neural networks of the current design will struggle with abstract symbolic logic because they lack structures to support that kind of reasoning.
That said, they are great at recognition/prediction, and you can likely solve some simple symbolic logic problems by recognition/prediction alone, but only up to a point.
Neural networks today are very much like the sensory processing areas of the brain, and sometimes their outputs are mapped directly to actions to control video games.
But symbolic logic as humans explicitly do it, and what we consider thinking (planning, imagining, evaluating, deciding), is not done just in the sensory areas of the brain; it involves mediation by executive control centers and self-stimulation of the sensory areas in a kind of loop.
I am not sure that tackling abstract reasoning with only the current sensory-style NN designs will be very effective. But I suppose measuring abstract reasoning will at least expose the current limitations and push us toward better structures that enable it.
(Although even if NNs lack structures to support symbolic reasoning the way humans do it, I suppose DeepMind will just write custom code around the NN to enable it to help with symbolic logic? Sort of how they combined NNs with other search structures to create AlphaGo?
Personally I think it would be easier to combine NNs with existing symbolic reasoning tools than to stick with NNs alone: use NNs to recognize and evaluate patterns, then feed the results to the symbolic logic tools to derive reasoning solutions. Much more efficient and tractable, I would think. And for extra credit, run NNs on top of the symbolic reasoning tools to see if you can sometimes make "intuitive" jumps instead of pure step-by-step reasoning.)
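To make the hybrid idea concrete, here is a minimal toy sketch of that pipeline. The "perception" stage stands in for a trained NN (it just returns hard-coded symbol confidences), and the reasoning stage is a tiny forward-chaining rule engine. All the names here (`perceive`, `RULES`, `forward_chain`) are illustrative inventions, not any real library's API.

```python
# Toy neuro-symbolic pipeline: a perception stage emits symbols with
# confidences, and a forward-chaining rule engine reasons over them.

def perceive(image_id):
    """Stand-in for an NN classifier: returns (symbol, confidence) pairs."""
    fake_outputs = {
        "img1": [("red", 0.97), ("circle", 0.91), ("small", 0.40)],
    }
    return fake_outputs[image_id]

# Horn-style rules: if all premises are known facts, conclude the head.
RULES = [
    ({"red", "circle"}, "stop_sign_like"),
    ({"stop_sign_like"}, "should_halt"),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, head in rules:
            if premises <= facts and head not in facts:
                facts.add(head)
                changed = True
    return facts

# Keep only confident detections, then hand them to the symbolic layer.
symbols = {s for s, p in perceive("img1") if p >= 0.5}
derived = forward_chain(symbols, RULES)
print(sorted(derived))  # -> ['circle', 'red', 'should_halt', 'stop_sign_like']
```

The point of the split is that each layer does what it is good at: the NN handles fuzzy pattern recognition and the rule engine handles exact multi-step inference, rather than asking one network to do both.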