Should we ever master replicating that, we will necessarily have gained a much better understanding of both AI and human intelligence along the way.
Our current computers are certainly not able to simulate a human brain. No computer in the world currently has the computing power to run decent, biochemistry-based models for each neuron in a human brain. This basically calls for specialized hardware, which may or may not extend the limits of computable problems beyond Turing completeness.
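To put rough numbers on that (a back-of-envelope sketch; the per-neuron cost below is my assumption, not a measured figure):

    # Back-of-envelope sketch. The cost of a biochemistry-level neuron model
    # is an assumed placeholder, not a measured figure.
    neurons = 8.6e10                 # ~86 billion neurons in a human brain
    flops_per_neuron = 1e9           # assumed FLOP/s per detailed neuron model
    required = neurons * flops_per_neuron     # ~8.6e19 FLOP/s for real time
    exascale_peak = 1.1e18           # rough peak of a current exascale machine
    print(f"shortfall vs real time: ~{required / exascale_peak:.0f}x")  # ~78x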
If we manage to get to that point, we will have to face more pressing ethical challenges anyway, such as: how do we deal with tools that allow us to assess with near certainty what a person is thinking and planning? At that point, the key argument of the paper might actually fall apart, because there is a chance that we end up understanding the inner workings of the alien AI.
But by that point we will have heaped up so much speculation that decent science fiction authors would get jealous.
>This basically calls for specialized hardware, which may or may not extend the limits of computable problems beyond Turing completeness.
In... what way do you think human brains are hypercomputers? Keep in mind that quantum computers are "merely" Turing-equivalent; they cannot decide the undecidable either. (If you think that human brains can solve the halting problem, then I would like to ask you to produce Busy Beaver(100).)
To the contrary, I believe human brains are what they look like: a glob of regular, boring old proteins, interacting with each other using normal atoms. No exotic physics, closed timelike curves, integrated information theory phi factor, Penrose quantum spookiness, Sam Hughes infolectricity, nothing. Humans aren't special, we're just atoms, capable of being simulated by a sufficiently gigantic Turing machine.
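To spell out why no amount of hardware helps with the halting problem, here is the classic diagonalization argument as a Python sketch; the halts function is a hypothetical oracle, which is exactly the point:

    # Sketch of the classic diagonalization argument. `halts` is a
    # hypothetical oracle; no real program can implement it.
    def halts(program_source: str, input_data: str) -> bool:
        """Pretend this returns True iff program(input) eventually halts."""
        raise NotImplementedError("no Turing machine can implement this")

    def contrarian(program_source: str) -> None:
        # Do the opposite of whatever `halts` predicts for a program fed itself.
        if halts(program_source, program_source):
            while True:      # predicted to halt -> loop forever
                pass
        # predicted to loop forever -> halt immediately

    # Running contrarian on its own source contradicts any answer `halts`
    # gives, which is why BB(100) is out of reach for Turing machines,
    # quantum computers, and (I claim) brains alike.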
By equating brains with quantum computers, you are again jumping to conclusions.
If I had to guess right now, I'd say that it is more likely that the human brain has clever ways to exploit signal timings and randomness (e.g. Brownian motion). There is no room for large-scale and/or long-duration quantum processes in biochemistry.
>If I had to guess right now, I'd say that it is more likely that the human brain has clever ways to exploit signal timings and randomness
Then why do you think it's a hypercomputer? These are all classical physical effects. What math problems do you think the human brain can solve that a Turing machine with sufficient time can't?
A machine that is capable of truly random behavior is no longer a Turing machine. The human brain does not merely have access to that kind of randomness; it is fundamentally subject to it. Biological organisms need to actively protect themselves against it to succeed.
> The human brain does not merely have access to that kind of randomness; it is fundamentally subject to it.
[citation needed]. How can you confidently make an assertion like that? How do you design an experiment that distinguishes randomness in the human brain from sufficiently-advanced pseudorandomness?
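To make the difficulty concrete, a minimal sketch (I picked a chi-square frequency test, but any fixed statistical test runs into the same wall): a seeded Mersenne Twister and the OS entropy pool are indistinguishable to it.

    import os, random
    from collections import Counter

    def chi_square_bytes(data: bytes) -> float:
        # Chi-square statistic against a uniform distribution over 256 byte values.
        expected = len(data) / 256
        counts = Counter(data)
        return sum((counts.get(b, 0) - expected) ** 2 / expected for b in range(256))

    prng = random.Random(42)                       # fully deterministic PRNG
    pseudo = bytes(prng.getrandbits(8) for _ in range(1 << 16))
    entropy = os.urandom(1 << 16)                  # OS entropy source

    # Both statistics land in the same range (~255 degrees of freedom);
    # the test cannot tell determinism from "true" randomness.
    print(chi_square_bytes(pseudo), chi_square_bytes(entropy))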
On the contrary, at the surface level, we seem to be terrible at doing anything that resembles true randomness [1].
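A tiny illustration of the kind of bias the paper reports (the sequence below is hypothetical, just for demonstration): people tend to avoid immediate repeats, so the repeat rate falls well below the 1/2 a fair coin produces.

    # Hypothetical human-typed coin flips (illustrative data, not from the paper).
    human = "HTHTHHTHTHTTHTHHTHTHTHHTTHTHTH"
    repeats = sum(a == b for a, b in zip(human, human[1:]))
    print(repeats / (len(human) - 1))   # ~0.17 here; a fair coin expects 0.5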
The paper only discusses the fact that human-generated random sequences are not perfectly random, even though they do contain genuine randomness. Reasoning about a complex system at that level is extremely hard, and the paper fails to find a good theory as to why that is, although it lists a few candidates. No surprise there.
Besides, random processes within cells are a fact of nature. Research has uncovered various ways in which organisms protect themselves against them. I will not point you to references on this, because it goes down a rabbit hole of different aspects.
Would you mind giving me a brief overview of what those aspects are?
To put it less obliquely, how do you distinguish "true randomness" from perfectly deterministic physical phenomena that are just determined by a "rabbit hole of different aspects"? Is a die roll truly random, or is it just a theoretically predictable product of factors like air circulation patterns and friction?
The keyword you are looking for is Laplace's demon[1]. Also, note that there are physical phenomena with no deterministic outcome: in certain quantum mechanical systems, the observed end state is drawn from a set of possibilities following a probability distribution, and there is no way to predict the outcome of any individual run of the experiment. Chaotic systems are deterministic in principle, but their sensitivity to initial conditions makes them unpredictable in practice.
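For the chaos part, a minimal sketch (the logistic map at r = 4, my pick of a standard example): the rule is fully deterministic, yet two starting points differing by 1e-12 end up on completely unrelated trajectories within a few dozen steps.

    # The logistic map x' = r*x*(1-x) at r = 4: deterministic but chaotic.
    def logistic(x: float, r: float = 4.0) -> float:
        return r * x * (1.0 - x)

    a, b = 0.3, 0.3 + 1e-12      # two nearly identical initial conditions
    for step in range(1, 61):
        a, b = logistic(a), logistic(b)
        if step % 10 == 0:
            print(f"step {step}: |a - b| = {abs(a - b):.3e}")
    # The gap roughly doubles each step, so by step ~40 the trajectories are
    # unrelated: Laplace's demon needs infinitely precise data, not just the law.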
To the first part of your question: I thought long about how to put it succinctly, and I cannot. The best thing I can come up with is to point you to the safeguards that are in place for gene expression within cells. A bit of Google-fu brought me to an entire volume dedicated to explaining how that particular process can be so reliable[2]. (I already knew this stuff was complex, but this takes the cake!) Now, these complex mechanisms take tons of resources to maintain. Natural selection generally gives processes with lower resource usage an edge, which has in some cases led to amazingly efficient solutions, e.g. in some single-cell organisms. But gene expression has stayed this complex. It stands to reason that every bit of this mechanism is required to keep cells reasonably alive.
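As a toy analogy for that reliability-versus-resources trade-off (my numbers, not actual biology): triplicating a copy operation and taking a majority vote cuts the error rate by orders of magnitude, at three times the cost.

    # Toy redundancy model: copy one symbol with error rate p, versus three
    # independent copies plus a majority vote (3x the resources).
    p = 0.01
    single_error = p
    triple_error = 3 * p**2 * (1 - p) + p**3   # majority wrong iff >= 2 copies wrong
    print(single_error, triple_error)          # 0.01 vs ~0.000298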