You're right, and I'm appealing to a stronger, Wolfram-esque version of Rice's theorem, together with the fact that humans readily throw non-essential goals under the bus in pursuit of speed. My argument is as follows.
* Building AI is a race against time, and in such races, victory is most easily achieved by those who can cut the most corners while still successfully producing the product.
* As a route to general AI, a neural architecture seems plausible. (Not at the current state-of-the-art, of course.)
* Neural networks (as they currently stand) are famously extremely hard to analyse: certainly we have no good reason to believe they're more easily analysed than a random arbitrary program.
* A team which is racing to make a neural-architecture AI has little incentive to even try to make their AI easy to analyse. Either it does the job or it doesn't. (Witness the current attempts to produce self-driving cars through deep learning.) Any further effort spent making the AI easily analysable is time that a rival team will spend just building the damn thing.
* Therefore, absent a heroic effort to the contrary, the first AI will be a program which is as hard as a random arbitrary program to analyse. And, as much as I hate to appeal to Wolfram, he has abundantly shown that random arbitrary programs, even very simply-specified ones, tend to be hard to analyse in practice (see the Rule 30 sketch below this list).
(My argument doesn't actually require a neural architecture of the AI; it's just a proxy for a general unanalyseable thing.)
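To make the Wolfram point concrete, here is a minimal sketch (plain Python, nothing from Wolfram's own tooling) of Rule 30: a one-line update rule whose centre column looks statistically random, and for which nobody knows a shortcut that beats simply running the program.

```python
# A minimal sketch of the Wolfram claim: Rule 30 is about as simply specified as a
# program can be, yet the only known general way to answer questions about its
# behaviour (e.g. the colour of the centre cell at step n) is to run it.
def rule30_step(cells):
    """One step of the Rule 30 cellular automaton (cells is a list of 0/1, periodic boundary)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

# Start from a single black cell and record the centre column.
width, steps = 101, 50
cells = [0] * width
cells[width // 2] = 1
centre_column = []
for _ in range(steps):
    centre_column.append(cells[width // 2])
    cells = rule30_step(cells)

print("".join(map(str, centre_column)))  # looks statistically random; no known closed form
```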
1. I'm not sure that I agree; not all research is a race against time. But perhaps you're right, so I'll accept this.
2. Certainly the most plausible thing we have now. I'm not sure that makes it plausible in absolute terms, but it's better than anything else, so okay.
3. This depends on what you mean. Neural networks are actually significantly easier to analyze than arbitrary programs: when you essentially restrict yourself to two operations (matrix multiplication and sigmoid or ReLU), things get a lot easier to analyze. Here are some questions we can answer about a neural network that we can't answer about an arbitrary program: "Will this halt for this input?", "Will this halt for all inputs?", "What effect will a mild perturbation of this input have on the output?" (see the sketch after this list). These follow from finiteness and differentiability, which are not attributes a normal program has. (Caveat: this gets more difficult with things like RNNs and NTMs, but afaik it is still true.) The questions we find difficult to answer for a neural network are very different from those for a normal program: namely "How did this network arrive at these weights as opposed to these other ones?" and, relatedly, "What does this weight or set of weights represent?". But I don't think there's any indication that those questions are impossible to answer, and often we can answer them, as with facial-recognition networks where we can clearly see that successive layers detect gradients, curves, facial features, and eventually entire faces.
4. Agreed. There's no real reason to know why it works if it works.
5. I think you can tell, but I don't think this holds.
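To make the perturbation question from point 3 concrete, here is a minimal sketch in plain NumPy (made-up weights, no particular framework) of the kind of analysis differentiability buys you: the exact input gradient of a tiny sigmoid network, from which the first-order effect of any small perturbation follows by a dot product. Nothing analogous exists for an arbitrary program.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# A tiny fixed network: 3 inputs -> 4 hidden units -> 1 output (weights are arbitrary).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)

def forward(x):
    h = sigmoid(W1 @ x + b1)
    y = sigmoid(W2 @ h + b2)
    return y, h

def input_gradient(x):
    """d(output)/d(input), computed analytically via the chain rule."""
    y, h = forward(x)
    dy_dz2 = y * (1 - y)       # sigmoid'(z2), shape (1,)
    dh_dz1 = h * (1 - h)       # sigmoid'(z1), shape (4,)
    return (dy_dz2 * W2 * dh_dz1) @ W1   # shape (1, 3)

x = np.array([0.5, -1.0, 2.0])
y, _ = forward(x)
g = input_gradient(x)
# First-order answer to "what does a mild perturbation dx do to the output?":
dx = np.array([0.01, 0.0, 0.0])
print("output:", y, "predicted change:", g @ dx)
```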
Those arguments are plausible, and thanks for the clarification.
I just hate to see Rice's theorem interpreted as "nobody can ever know if a program is correct or not". People have been making a ton of progress on knowing if (some) programs are correct, and Rice's theorem never said they can't.
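As a toy illustration of that last point: Rice's theorem forbids a single procedure that decides a non-trivial semantic property for *all* programs, but it says nothing about restricted classes. The hypothetical checker below (names made up) accepts a Python function only if it contains no `while` loops and no calls other than `range(<integer literal>)`; everything it accepts provably halts, and it simply says "don't know" about everything else.

```python
import ast
import inspect

def _is_literal_range(node):
    return (isinstance(node, ast.Call)
            and isinstance(node.func, ast.Name)
            and node.func.id == "range"
            and all(isinstance(a, ast.Constant) for a in node.args))

def provably_halts(fn):
    """Conservative termination certificate for a deliberately restricted class of functions."""
    tree = ast.parse(inspect.getsource(fn))
    for node in ast.walk(tree):
        if isinstance(node, ast.While):
            return False          # unbounded loop: give up
        if isinstance(node, ast.Call) and not _is_literal_range(node):
            return False          # arbitrary call (possibly recursive): give up
        if isinstance(node, ast.For) and not _is_literal_range(node.iter):
            return False          # only `for ... in range(<literal>)` loops allowed
    return True

def sum_of_squares():
    total = 0
    for i in range(100):
        total += i * i
    return total

print(provably_halts(sum_of_squares))   # True: only bounded loops, so it halts
```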