Most neural networks these days aren't recursive, though.
GPT-3 and friends certainly aren't, unless you count the inference loop as recursion, which is bounded by both the vocabulary size and the maximum token length (hardcoded for now).
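To make the point concrete, here's a toy sketch of what "the inference loop as recursion" means: the output sequence is fed back in at every step, and the loop terminates either at an end-of-sequence token or at a hard maximum length. The `next_token` function here is a made-up stand-in, not anything from GPT-3.

```python
def next_token(tokens):
    # Stand-in for the model: deterministically emits (sum of tokens) mod 5.
    # Purely illustrative; a real model would run a forward pass here.
    return sum(tokens) % 5

def generate(prompt, max_len=10, eos=0):
    # Autoregressive loop: each new token depends on everything so far,
    # and the loop is capped by max_len (the "hardcoded" bound).
    tokens = list(prompt)
    while len(tokens) < max_len:
        t = next_token(tokens)
        tokens.append(t)
        if t == eos:
            break
    return tokens

print(generate([1, 2, 3]))
```

The vocabulary bounds what each step can emit; the maximum length bounds how many steps the loop can take.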
That just means there is much more room for AI to grow and improve in the future.
As far as I know, recursion in neural networks poses a real dilemma: on the one hand, recursive networks are more capable; on the other, they are harder to train and can suffer stability problems caused by occasional positive feedback loops.
Well, be careful with "more capable": the universal approximation theorem shows that a NN with a single hidden layer can approximate any continuous function to arbitrary precision (in the limit of how wide that layer is, at least). So there's nothing fundamental about "more capable" or "less capable"; it's only about which representations are easy to train and which aren't.
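A tiny concrete instance of that claim: a single hidden layer of two ReLU units already represents some functions exactly, e.g. |x| = relu(x) + relu(-x). This is just an illustrative sketch of the representational point, not a proof of the theorem.

```python
def relu(z):
    # Standard rectified linear unit.
    return max(0.0, z)

def abs_net(x):
    # One hidden layer, two units: input weights (+1, -1),
    # output weights (1, 1), no biases. Computes |x| exactly.
    return 1.0 * relu(1.0 * x) + 1.0 * relu(-1.0 * x)

for x in (-3.0, 0.0, 2.5):
    print(x, abs_net(x))
```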