1. We don't know all the mechanisms that a brain employs to achieve intelligence. We see billions of interconnected neurons and we assume, "Yeah, this might be generating intelligence."
2. We don't know if we are already at some fundamental limit of intelligence. For example, you can see many instances in nature where a pattern emerged that maximizes some sort of efficiency (like the honeycomb pattern).
So the end result of this will be that, even if we transfer the process by which our intelligence works to a machine, it will have the same performance as an average human brain...
Yes, but we're not built on silicon, nor were we guided by curated data sets, nor did we have a deadline, nor does another parallel universe rely on our decisions for potentially life-and-death outcomes.
I don't like the idea of this simple equality, and I think it misses the point. We might not get to human-level intelligence with this tech; the model might not be anywhere near close enough.
Wait, how is that actually different from what our brains do?
From what I know, our cognitive system is built in quite a similar fashion: probabilistic pattern matching with backpropagation, coupled with some "ad hoc" heuristic subsystems.
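
To make that concrete, here's a toy sketch of what I mean by "probabilistic pattern matching with backpropagation" (all the data, layer sizes, and the learning rate here are made up for illustration, not a claim about any real model):

```python
# Toy two-layer net trained by backpropagation on fake data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))            # 32 made-up input patterns
y = (X.sum(axis=1, keepdims=True) > 0)  # made-up binary labels

W1 = rng.normal(scale=0.1, size=(4, 8))
W2 = rng.normal(scale=0.1, size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(500):
    # Forward pass: produce a probability for each pattern.
    h = np.tanh(X @ W1)
    p = sigmoid(h @ W2)

    # Backward pass: the error signal is propagated backwards
    # through the same weights used in the forward pass.
    dp = (p - y) / len(X)        # cross-entropy gradient at the output
    dW2 = h.T @ dp
    dh = dp @ W2.T * (1 - h**2)  # tanh derivative
    dW1 = X.T @ dh

    W1 -= 0.5 * dW1              # gradient descent update
    W2 -= 0.5 * dW2
```

The point is just the shape of the loop: the forward pass produces probabilities over outcomes, a global error signal flows backwards, and the weights update to match patterns better next time.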
Eh, no. Our brains, and how the human mind works, are actually very poorly understood. To claim we have a good idea of how our cognitive systems work under the hood is simply incorrect.
There is nothing like backpropagation in the brain, nor a probabilistic pattern matcher. There is evidence that a connectionist model is applicable, but learning is not deciphered, and there are aspects of it, like neuronal excitability, local dendritic spiking, oscillations, up and down states, etc., which do not translate at all to DL systems. That said, the increasing success of connectionist architectures does point to the conclusion that the brain is also a connectionist machine.
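
To illustrate the backpropagation point: the backprop update for a hidden synapse depends on error signals computed from downstream weights (the "weight transport" problem), whereas a Hebbian-style rule, one commonly cited candidate for local learning, uses only activity available at the synapse itself. A toy contrast, assuming nothing about real neurons:

```python
# Contrast: a local Hebbian-style update vs. the non-local backprop update.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=4)        # presynaptic activity
W = rng.normal(size=(4, 8))   # synapses onto a hidden layer
h = np.tanh(x @ W)            # postsynaptic activity

# Hebbian update: each synapse sees only its own pre- and post-activity.
lr = 0.01
W += lr * np.outer(x, h)

# The backprop update for the same synapses would instead need dL/dh,
# an error signal computed from weights deeper in the network:
#   dW = np.outer(x, dL_dh * (1 - h**2))
# That non-local dependence is the part with no known biological analogue.
```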