My theory is that that review, and Shalizi's, are as popular as they are because nothing is more convenient than learning that one doesn't have to do the actual work of considering a new set of ideas and perspectives.
Well, I wrote my thesis about cellular automata, and I promise I've considered Wolfram's set of ideas and perspectives. So while I can't prove anything about the popularity of Aaronson's review in general, your hypothesis is demonstrably false in at least n=1 instance.
That being said, I took a look at your website and GitHub (not to mention your employer, Wolfram Research), and it's obvious you know what you're talking about - I don't think you should be downvoted as heavily in this thread as you are. Plus you made a Futurama reference, so you're obviously a cool dude.
My point is, let's all avoid implying that people who disagree with us aren't qualified to discuss this subject. I think both of us are, and Wolfram is a pretty polarizing subject - one that all sides need to stay civil about.
And you did Gazehawk! Pretty cool stuff, I was telling Stephen about that a few weeks ago.
But, let me say, you're right. I do think it's part of their popularity, though - along with the fact that they're both very entertaining and erudite writers. And sure, there are plenty of valid criticisms, and they make them.
I love self-sealing arguments that accuse others of making self-sealing arguments. They're simultaneously contradictory and consistent. The essence of paradox and self-parody.
If their arguments are invalid, then perhaps they should be rebutted, or perhaps someone could link to another resource where they have been rebutted.
My theory is they are popular just because rants are more entertaining than reason. People just want to high-five the person who expresses their hate most eloquently.
It seems wrong to frame the problem as "mathematics vs. cellular automata". The old kind of science (OKS?) is not as much about mathematics as it is about the scientific method of verification by experimentation.
If you want to set up a new kind of science, you have to devise a new verification method that is not experimentation-based. If you think you can go without experimentation altogether, then it's not science at all (cf. pre-Copernican philosophical ramblings, à la Descartes).
It seems what Wolfram does with "NKS" is build new things; he builds those things by "finding them in the computational universe" (a universe that his automata explore).
Creating new ringtones is fine, funny and (moderately) interesting, but it's NOT science — not because ringtones are trivial, but because there is no concept of right or wrong in the ringtone universe. You cannot invalidate a melody.
Laws of nature expressed in mathematical formulas are verified every day; if you want to demonstrate that nature is in fact made of cellular automata gone wild, you have to actually recreate nature (or parts of it) and show that it's indistinguishable from the real thing.
Although I don't know much about anything and haven't read NKS, I don't see any claims of this sort coming from Mr. Wolfram; I certainly didn't see it mentioned in the article.
The value of someone doing something different, from a different perspective, is huge. They don't have to be right or successful to have a lasting impact on society.
Whatever the scale, challenging the establishment is difficult and uncommon.
In the article, Dr. Wolfram claims that computer programs, not mathematics, are the best way to model the universe. However, hasn't Lambda Calculus (and therefore functional programming) shown that the two are the same?
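(To make the lambda-calculus connection concrete, here's a minimal sketch in Haskell - my own illustration, nothing from the article - of Church numerals, where arithmetic is encoded as nothing but functions:)

    {-# LANGUAGE RankNTypes #-}

    -- A Church numeral applies a function n times: numbers as pure functions.
    type Church = forall a. (a -> a) -> a -> a

    zero :: Church
    zero _ x = x

    suc :: Church -> Church
    suc n f x = f (n f x)

    add :: Church -> Church -> Church
    add m n f x = m f (n f x)

    -- Back to an ordinary Int: toInt (add (suc zero) (suc zero)) == 2.
    toInt :: Church -> Int
    toInt n = n (+ 1) 0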
Well, Wolfram's thesis is that analytical representation is not the correct frame for modeling the world, and algorithmic/procedural representation (e.g. simulation) is the correct method.
This isn't particularly novel, but it is a key insight of computer science. I just don't think that Wolfram deserves credit for it.
Edit for clarification:
The key insight being that you can model things algorithmically. Whether it's "correct" or not is a different matter. But this type of modeling is different from analytical modeling.
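(A toy contrast in Haskell, assuming simple exponential decay as the model - my own illustration of the distinction, not anything from NKS:)

    -- Analytical modeling: a closed-form formula gives the state at any time t.
    -- For dx/dt = -k*x, the solution is x(t) = x0 * exp(-k*t).
    analytic :: Double -> Double -> Double -> Double
    analytic x0 k t = x0 * exp (negate k * t)

    -- Algorithmic modeling: no formula, just step the system forward
    -- procedurally (crude Euler integration; only the update rule is given).
    simulate :: Double -> Double -> Double -> Int -> Double
    simulate x0 k dt n = iterate stepOnce x0 !! n
      where stepOnce x = x - k * x * dt

    -- Sanity check: analytic 1 0.5 1.0 is close to simulate 1 0.5 0.001 1000.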
No, this is a mischaracterization of what Wolfram is saying. Wolfram says: why consider computer programs to be valuable as mere approximations to the underlying analytic models, when you can consider computer programs themselves to be an entirely new and unexplored land of complex and interesting models.
Traditional computer science considers computer programs to be merely a means to an end.
A quite delicious irony is given by the example of the Navier-Stokes equations. They are themselves an idealization of the flow of fluids composed of discrete particles. Because of our obsession with continuous models, we mostly resort to laboriously solving them numerically -- but it turns out that simple lattice gas cellular automaton models are actually 1) fairly accurate, 2) much more computationally efficient, and 3) more suggestive of the underlying microscale physics.
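(For the curious, here's a minimal Haskell sketch of the HPP lattice gas, the simplest model of this family - my own toy illustration, not code from the book. Each cell holds up to four particles, one per direction; one update is a local collision followed by streaming:)

    import Data.Bits (bit, testBit, (.|.))

    -- Cell encoding: bit 0 = east-mover, 1 = north, 2 = west, 3 = south.
    type Grid = [[Int]]

    -- Collision: two particles meeting head-on scatter at right angles;
    -- every other configuration passes through unchanged.
    collide :: Int -> Int
    collide 5  = 10  -- east+west pair (bits 0,2) becomes north+south (bits 1,3)
    collide 10 = 5   -- north+south pair becomes east+west
    collide c  = c

    -- Streaming: every particle moves one site in its direction (periodic edges).
    stream :: Grid -> Grid
    stream g = [ [ cellAt x y | x <- [0 .. w - 1] ] | y <- [0 .. h - 1] ]
      where
        h = length g
        w = length (head g)
        at x y = g !! (y `mod` h) !! (x `mod` w)
        from b x y = if testBit (at x y) b then bit b else 0
        cellAt x y = from 0 (x - 1) y    -- east-mover arrives from the west
                 .|. from 1 x (y + 1)    -- north-mover arrives from below
                 .|. from 2 (x + 1) y    -- west-mover arrives from the east
                 .|. from 3 x (y - 1)    -- south-mover arrives from above

    -- One time step of the automaton.
    step :: Grid -> Grid
    step = stream . map (map collide)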
To step back a bit, a helpful analogy would be that simple computations are the 21st century equivalent of differential equations, which were studied rigorously in their own right starting in the 18th century, often prior to their application to concrete problems.
Whether this intuition will turn out to be prescient, or whether it will fizzle out, is another question.
That doesn't make sense in the context of my comment, and besides, I don't think that's true.
If you take the Turing Test to its logical conclusion, a program that is indistinguishable from a human intelligence is a human intelligence. There's no means to some other end. The program is the intelligence.
This is also something that Turing came up with in the 40s & 50s. Not exactly novel.
Uh... well, I'm getting Wolfram's take right. So you don't think my analogy is true? It's just an analogy.
But it's clear that it has methodological implications for how one does science:
For example, if you think computers are just a way to simulate continuous systems, it would not occur to you to sample random programs and see what they do. It would not occur to you to enumerate simple programs. And you wouldn't think it very interesting that such and such a simple program can do such and such a computation.
If you did, it would. And if you were ambitious enough, you would actually try hunting for the program that computes the universe, as Wolfram has been doing on a cluster in his basement (I love this tidbit) for some years now: http://www.ted.com/talks/stephen_wolfram_computing_a_theory_...
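(To make that concrete, here's a tiny Haskell sketch - my own, not Wolfram's code - of the NKS-style experiment: enumerate some of the 256 elementary cellular automata and just look at what each one does:)

    import Data.Bits (testBit)

    -- One step of elementary CA number `rule` (0-255) on a cyclic row: each new
    -- cell is the rule bit indexed by its (left, self, right) neighborhood.
    step :: Int -> [Bool] -> [Bool]
    step rule cells = zipWith3 apply (rot (-1) cells) cells (rot 1 cells)
      where
        rot n xs    = take (length xs) (drop (n `mod` length xs) (cycle xs))
        apply l c r = testBit rule (fromEnum l * 4 + fromEnum c * 2 + fromEnum r)

    -- Print the first 20 generations of a few rules, from a single live cell.
    main :: IO ()
    main = mapM_ showRule [30, 90, 110]  -- rule 110 is the provably universal one
      where
        row0 = [i == 40 | i <- [0 .. 79 :: Int]]
        draw = map (\b -> if b then '#' else '.')
        showRule r = do
          putStrLn ("Rule " ++ show r)
          mapM_ (putStrLn . draw) (take 20 (iterate (step r) row0))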
No, you're making two assertions: first, that everyone positively asserts that the universe is continuous, and second, that computer scientists see algorithmic systems as being simulations of analytical systems.
I am agnostic on the first (although I do like the Feynman quote at the top of http://arxiv.org/pdf/quant-ph/0206089v2 which was linked to above), and the second is most certainly false, as I have indicated above.
The evidence in NKS to support Wolfram's assertion that the universe is a simple program is circumstantial at best, and so his windmill-tilting quest for the program that is the universe seems quixotic at best, and arrogantly foolhardy at worst.
Analytical modeling, algorithmic modeling, or whatever other representation someone wants to use for reality: they are all just models until you can prove they are fundamentally connected with the manner in which reality actually functions.
Re: NKS theory of physics. You're right, it's far from convincing. But it is intriguing speculation, and I think he adequately hedges it as such. A fascinating partial result is that the natural restriction he introduces for graph automata to be deterministic is enough to induce special and general relativity. That's pretty eerie!
Re models: now we're getting into epistemology. I don't think the aim you ascribe to scientists to "prove them (models) to actually be fundamentally connected with the manner in which reality functions" has much meaning when one is talking about, say, quantum field theory. How do you connect the mathematics of QFT with what is "really going on"? You can't. It just is.
> "Some fascinating partial results are that the natural restriction he introduces for graph automata to be deterministic are enough to induce special and general relativity. That's pretty eery!"
That sounds remarkable indeed; can you point to an online reference with more info on this? Thanks.
Turing proved the existence of non-computable reals, so the two are not the same. Also Chaitin's constant is a non-computable real number with a precise mathematical definition.
See: http://en.wikipedia.org/wiki/Chaitins_constant
The Curry-Howard isomorphism states (and this has been proved) that for each formal proof in constructive logic there is a corresponding typed computer program that halts, and vice versa.
This means that it is possible to express the proofs of your statements in code.
On the other hand, non-halting programs are AFAIK beyond the reach of formal logic, and as a consequence, the set of logical proofs (or more accurately, its isomorphic twin) is a subset of the set of all computer programs.
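(A minimal illustration of the correspondence in Haskell - squinting past Haskell's non-totality - where propositions are types and a proof is a program inhabiting the type:)

    -- The proposition A -> (B -> A); its proof is the K combinator.
    proofK :: a -> b -> a
    proofK x _ = x

    -- Modus ponens (from A -> B and A, conclude B) is function application.
    modusPonens :: (a -> b) -> a -> b
    modusPonens f x = f x

    -- Conjunction A /\ B is the pair type; proving (A /\ B) -> A is projection.
    andElim :: (a, b) -> a
    andElim = fst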
I've thought about this as well and IMO computer science makes sense as mathematics with a concept of side effects. In that way the universe is more a computer program than a mathematical equation.
There seems to be a strong inverse correlation between claims of having read the book and actually having read the book. I haven't read the whole thing, but I've read enough to know that this particular Zuse-head is bullshitting.
I'll list just a couple of errors that would be impossible to make if one had read even most of the book. Wolfram often pedantically reiterates the same points, so keep in mind that these things are hard to miss:
1) "The Principle of Computational Equivalence" does not state that all-is-computation. It states that whatever 'objective' means we use to quantify computational complexity, we will discover that all computations are either trivial or of equal complexity. I.e. computational complexity (where this is crucially left undefined) "saturates" very quickly in the world of natural computations, no matter how you decide to measure complexity.
2) SW's discovery of universality among the simplest CAs is not a triviality, because unlike what this guy says, the dovetailer is not a simple program -- it is explicitly set up to be universal (in a manner); see the sketch after this list. Its Turing machine rule number is probably in the trillions or higher. Whereas the surprise is that even amongst the very simplest programs, universality is easy to find.
To use an analogy, string theorists would cry with joy if it turned out that there was some small number of "simplest natural string theories" and one of them gave us all the known particles of the Standard Model.
3) Asymptotically optimal program search, in practice, isn't the way you would hunt for universes, and it is relatively easy to see why (TL;DR for now). Schmidhuber's academic work is of no practical relevance to the chapter on physics, although it's cool from a math geek perspective. Same with maximally rational agents.
And the main idea here is just Occam's razor, not some arcane formulation of maximum predictive accuracy under a strange universal prior of symbol sequences, as cool as that sounds.
4) Wolfram doesn't propose the universe is a discrete CA, although everyone seems to think this. He makes all the obvious points about why it is unlikely to be so, and goes on to propose a graph automata model as being a suitable generalization of space and time.
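(As promised in point 2, a toy Haskell sketch of what dovetailing means - my own illustration. Treat each program as an infinite stream of execution states and interleave them diagonally, so no non-halting program can starve the others and every halting computation is eventually reached. Even this skeleton presupposes an enumeration of all programs and a universal interpreter to produce the traces, which is exactly why the dovetailer is not a "simple" program:)

    -- Visit step (n - p) of program p at diagonal stage n, so step s of
    -- program p is reached at stage p + s.
    dovetail :: [[a]] -> [a]
    dovetail traces =
      [ traces !! p !! (n - p)
      | n <- [0 ..]     -- diagonal stages
      , p <- [0 .. n] ] -- programs examined so far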
So yeah, don't trust every well written review you read on Amazon.
As for not referencing people enough, I have sympathy with this criticism. On the other hand, as the book delves into a million and one different domains, the inquisitive reader would get extremely bogged down if he were to descend into the jargon of each individual field. And you would need to descend into jargon to say anything other than light summarizations of what has come before.
But these light summarizations do exist, in the extensive notes. In fact they're often not so light -- for example there is quite an interesting discussion of why Presburger arithmetic and the theory of intermediate degrees don't contradict the principle of computational equivalence.
Many times when one first thinks that Wolfram is being simplistic or naive, it turns out that he's gone into a lot more depth in the notes (I assume to avoid getting bogged down in the main text).
He really does know his shit.
Disclosure: I work on Wolfram|Alpha. But I have a brain, and I can think for myse.... ALL HAIL THE HYPNOTOAD
I assume you've read GEB. That wanders all over the map, and is fascinating for it.
"Complexity" by Roger Lewin is a sort of journalistic take on the early history of the slightly vague field of complexity science. But its fairly interesting.
"The Computational Beauty of Nature" by Microsoft R&D dude Rob Flake might also be a good candidate.
"The Jaguar and the Quark" by Gell-Mann, complexity theorist and Feynman nemesis, is enjoyable too.
A complexity theorist friend of mine also recommended Rudy Rucker's "The Lifebox, The Seashell, and the Soul" to me, but I haven't read it.
"Darwin Among the Machines" by Freeman Dyson's son (!) is frigging great, but that's now getting off topic.
Neither 1) the claim that the universe is a computer, nor 2) the subclaim that it is a cellular automaton, is a primary claim of the book. 2) is even explicitly argued against.
"We live in a period when technology looks very organised. But that’s a fluke, a feature of the history of engineering that reflects what we’ve learned to build. When we start just going out into the computational universe and finding stuff that works, it’s all going to look a lot more bizarrely random."
This parallels my understanding of how humans will perceive the post-singularity world.
I like the ideas he talks about in NKS. But much like the problem I have with the US software patent system, most/all of what's in it that's good is not novel, and what's novel is not necessarily good. I too, and I'm sure many others, independently had notions that nature works more like a process of steps, with patterns, than like a set of static equations, and that computer programs may therefore be better at modeling it. And that organic lifeforms, in particular, may in a sense just be computer programs, except ones running on naturally made parts rather than man-made ones.
Wolfram just has an arrogant position that he somehow brought them down from the mountain top for us mere mortals to consume. He didn't. He isn't. Smart man? Yes. Good ideas? Yes. Novel? No.
I would recommend reading that to anyone who is also looking at Wolfram's work.