The article is a thought-provoking piece. A few personal observations from me:
a) An acyclic computational graph is equivalent to a program without loops, and is thus not Turing-complete.
b) A cyclic computational graph is equivalent to a program with recursion. The closest programming analogy is a program without loops but with recursive calls. This makes it Turing-complete, since loops and recursion are fully interchangeable (just different facets of the same thing).
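A toy sketch of the interchangeability claimed in (b): the same computation written once with a loop and once with recursive calls only (the function names here are just for illustration).

```python
def sum_loop(n):
    # Iterative version: uses an explicit loop construct.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_rec(n):
    # Recursive version: no loop construct at all, same result.
    return 0 if n == 0 else n + sum_rec(n - 1)
```

Either form can mechanically be rewritten as the other, which is the sense in which a cyclic graph regains the expressive power an acyclic one lacks.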
As easy as 2+2=4. This means that a neural network is essentially a program. The only difference is the way the program is written: neural networks write it themselves by "learning".
Those simple observations explain everything. The brain is essentially a biological Turing-complete processing unit with full plasticity, since you can build any network (program) by changing the "weights" of the neurons. This also means that full AI is possible; it is just a matter of available computational power. Why? Because Turing completeness guarantees the absence of boundaries for achieving progressively more sophisticated system abilities.
Most neural networks these days aren't recursive, however.
GPT-3 and friends certainly aren't, unless you count the inference loop as recursion, which is bounded by both vocabulary size and maximum token length (hardcoded for now).
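The "inference loop" being described can be sketched roughly like this (a simplified stand-in, not the actual GPT-3 decoding code; `model_step`, `max_tokens`, and `eos` are hypothetical names):

```python
def generate(model_step, prompt, max_tokens, eos=0):
    # model_step stands in for one forward pass: token list -> next token id.
    tokens = list(prompt)
    for _ in range(max_tokens):   # the bounded "inference loop"
        nxt = model_step(tokens)
        tokens.append(nxt)
        if nxt == eos:            # early stop on an end-of-sequence token
            break
    return tokens
```

The loop is a fixed-bound iteration over the network's own output, which is why it's debatable whether it counts as recursion in the Turing-completeness sense.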
This only means that there is so much more space for AI to grow and improve in the future.
As far as I know, recursion in neural networks poses a peculiar dilemma: on one hand, recursive networks are more capable; on the other, they are harder to train and can have stability issues caused by occasional positive feedback loops.
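The instability worry can be seen even in the simplest possible recurrent system, a scalar state multiplied by one weight at each step (a toy model, not any real architecture):

```python
def unroll(w, x0, steps):
    # Minimal recurrent "network": x_{t+1} = w * x_t.
    # With |w| > 1 the feedback is positive and the state explodes;
    # with |w| < 1 it decays toward zero (the vanishing case).
    x = x0
    for _ in range(steps):
        x = w * x
    return x
```

The same mechanism, amplified through many layers and nonlinearities, is behind the exploding/vanishing-gradient problems that make recurrent networks hard to train.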
well, be careful with "more capable": the universal approximation theorem shows that an NN with one hidden layer can represent any continuous function (in the limit of how wide that layer is, at least). So there's nothing fundamental about "more capable" or "less capable". It's only about which representations are easy to train and which aren't.
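A tiny concrete instance of the single-hidden-layer point: a nonlinear function like |x| is representable exactly by one hidden layer of two ReLU units, since |x| = relu(x) + relu(-x). The weights here are handcrafted for illustration, not learned.

```python
def relu(z):
    # Standard rectified linear unit.
    return max(z, 0.0)

def one_layer_net(x):
    # One hidden layer, two units: input weights (+1, -1), output weights (1, 1).
    # Computes relu(x) + relu(-x), which equals |x| for every real x.
    return relu(1.0 * x) + relu(-1.0 * x)
```

Of course the theorem is about approximation of arbitrary continuous functions, not exact representation; this just shows the flavor of how width substitutes for depth.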
> Brain is essentially a biological Turing-complete processing unit
How do you know that?
Some would answer, "well, everything in the physical universe can be simulated by a Turing machine, so also the brain" but that would be begging the question.
I just presume that a biological brain is a network of neurons with connections between them, including feedback connections; at least, this is what we see under the microscope. In mathematical terms, this structure corresponds to a cyclic directed computational graph, and that fact alone makes it Turing-complete.
Currently it is just a highly plausible conjecture, not a formal proof. However, I think I can prove it. But should I?
We don't know how the brain works. Neuron activity is one aspect, and clearly an important one, but it's not everything.
And even if the connectome can be modeled as a graph, the dynamics on the graph could be totally alien to us. It might be computable by a Turing machine. But it also might involve some not computable process we can't imagine. Perhaps it involves fundamentally unknown physics. (I consider the last option to be quite likely. I think rather few actually entertain the idea that physics as we understand it today could give rise to consciousness. But here we are.)