Dennis Ritchie and John McCarthy (economist.com)
96 points by DanielRibeiro on Nov 5, 2011 | 12 comments



> Mr McCarthy has had less direct impact. That is partly because he believed, wrongly, that minicomputers were a passing fad

Actually, on a long enough timescale, he might very well have been right. Of course users will still have many devices with processing in them, but you could reasonably make the case that they won't be "computers" as the word is currently used.

In fact, they allude to this later in the same paragraph:

> Mr McCarthy always argued that the future lay in simple terminals hooked up remotely to a powerful mainframe which would both store and process data: a notion vindicated only recently, as cloud computing has spread

So why was he wrong?


Even today's smartphones aren't "simple" terminals (and they're probably much more powerful than the mainframes he envisioned). The closest things to that model are SSH and OnLive gaming. The trend is for webapps to do as much processing on the client as possible and to use the backend mainly for data. So I think he was wrong (and the article is also wrong in saying he was vindicated). People have constantly predicted a mainframe model, but cheaper, simpler and more convenient processing devices keep seeping computing power towards the user. That is, processing power migrates to the edge of the network. But I don't think incorrect predictions of the far future should in any way be held against McCarthy.

AI tangent: my thinking on a "hivemind" vs. "individuals" (mainframe vs. distributed) is similar to capitalism vs. a planned economy: individuals can collect data from their own viewpoint, form their own hypotheses, and act on them locally. Of course, theoretically, a hivemind with the same processing power could do the same with waldos; however, I think it would find it difficult to resist consolidating hypotheses and concentrating on the best ones (just as large companies kill off small projects that later disrupt them). Individuals, by contrast, will do things their own way, and some of them will come up with something better (like ants searching for sugar, or startups seeking disruption). Most of them end up dead, of course. There are also practical concerns favouring distribution: network latency/bandwidth, power consumption, and redundancy (not all eggs in one basket).


I think they mean in the context of his lifetime. My brain appended a suffix of "so far" to that first sentence.


And it could be the future of videogames[1].

[1]http://blog.onlive.com/2010/11/17/introducing-the-onlive-gam...


>As a teenager Mr McCarthy taught himself calculus from textbooks found at the California Institute of Technology in balmy Pasadena, where his family had moved to from Boston because of his delicate health.

It's funny that the guy with 'delicate health' lived to 84.


It's fairly common these days for sickly children to grow into normally healthy adults. As a child I caught bronchitis so often that it became the default diagnosis for a while, and now I get four colds a year.


> Mr Ritchie had more luck. “It’s not the actual programming that’s interesting,” he once remarked. “It’s what you can accomplish with the end results.” Amen to that, Mr McCarthy would have said.

Uh, why would McCarthy have said that? That doesn't seem in character for him at all, not to mention of course the use of the word "Amen".


> An intelligent computer, he quipped, would require “1.8 Einsteins and one-tenth of the resources of the Manhattan Project” to construct.

His over-the-top quip has also turned out to be breathtakingly hubristic. AI may well be the hardest problem ever proposed.


I'm not sure I agree. If you argue (a bit simplistically) that intellectual prowess is normally distributed and that Einstein was an extraordinary outlier, chances are we will never see AI.
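
To put rough numbers on that (a back-of-the-envelope sketch - the IQ-style normal model, placing Einstein at ~4 SD above the mean, and reading "1.8 Einsteins" as 1.8x his deviation are all assumptions of mine, not anything from the article):

    from scipy.stats import norm

    # Toy model (assumed): "intellectual prowess" as an IQ-like trait,
    # normally distributed with mean 100 and standard deviation 15.
    MEAN, SD = 100.0, 15.0

    # Assumption: Einstein sits ~4 SD above the mean (IQ ~160).
    einstein_z = (160.0 - MEAN) / SD   # 4.0

    # Read "1.8 Einsteins" as 1.8x Einstein's deviation above the mean.
    target_z = 1.8 * einstein_z        # 7.2 SD

    p = norm.sf(target_z)              # upper-tail probability, ~3e-13
    print(f"P(someone >= {target_z:.1f} SD) ~ {p:.1e}")
    # Among the ~100 billion humans who have ever lived:
    print(f"expected count: {p * 100e9:.2f}")   # ~0.03

Under those (admittedly crude) assumptions, a "1.8 Einstein" is roughly a 7-sigma event: you'd expect well under one such person among everyone who has ever lived.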


TL;DR My view is that if Einstein hadn't proposed his theory of relativity, someone else eventually would have (especially as the evidence for it accumulated), though it probably would have taken several decades. That is, "1.8 Einsteins" is an accelerator, not a possibility-maker - in the same way that 1/10 of a Manhattan Project is an accelerator (and we've spent many Manhattan Projects' worth on AI already).

---

Einstein wasn't the best mathematician (he was good enough to do the maths his physics required, but he needed help, and other mathematicians of the time, such as Hilbert, were better). What Einstein had was an insight, and an iconoclastic attitude that allowed him to overturn established dogma - far from being a barrier he had to force himself to overcome, it seemed to energize him.

I think of Einstein as an explorer. To be an explorer, you need the abilities of an explorer and the boldness to try new things. There also need to be things that haven't been discovered yet, but whose existence is hinted at (e.g. Earth didn't seem to be traveling through an ether).

- Without puzzles to connect you to reality, it's very hard to come up with breakthroughs that matter.

- Without an individualistic attitude, you won't be willing to pursue lines of inquiry that everyone else says are stupid and a waste of time.

- And without the necessary ability, you can't do it: you can't work out the details to show that your idea is consistent, and you can't make predictions that could be falsified.

In Einstein's later life, he didn't come up with further breakthroughs. Was this because his intellect lost the sharpness of youth (so that Einstein himself was now < 1.0 Einstein)? Was it because he became attached to his own dogma (e.g. "God does not play dice with the universe")? His biographer, Walter Isaacson, thought so. Or was it because there weren't any amazing discoveries within reach of the experimental evidence of the day? (A lot of work was done on quantum mechanics, an area Einstein didn't want to pursue; later theories like string theory do not explain experimental evidence in a simple way.)

I'm not just writing an essay here on Einstein - I'm talking about the basis of scientific progress. It doesn't depend on "genius" (e.g. "1.8 Einsteins") but on those three things: sufficient ability, the right attitude, and unknown discoveries being within reach. There's also persistence - Einstein took about 10 years to generalize his "special relativity". A better mathematician could have done it much more quickly, but none thought it was worthwhile. He did.

I also interpreted "1.8 Einsteins" as an accelerator, because of the context of 1/10 of the Manhattan Project.

And if you include everything that contributes to better computation (CPUs, memory, bandwidth, languages) as well as explicit AI research (noting that once computers can do something, there's a tendency to stop calling it "AI" - chess playing, for example - so explicit AI research covers more than it might seem at first), then many Manhattan Projects have been spent on AI, with some progress, but without the slightest hint that we are anywhere within reach.

---

As a final point: even if the statistical unlikelihood of a 1.8 Einstein meant that AI was probably impossible for us, why does it follow that you disagree? It may well be that it is possible to understand intelligence well enough to create it - but that we aren't intelligent enough to do it.

Most of our intellectual progress proceeds by standing on the shoulders of giants, which usually involves a level of abstraction that makes things simple again (i.e. the complexity required to understand a level isn't in itself cumulative; we can understand one aspect at a time). I personally experience this all the time - the only way I make progress is by forming a theory of a confusing thing that separates out a level, or an aspect, and makes it simple enough to bring within my intuitive grasp.

Is it possible that there are theories so interdependently complex that we cannot separate them out in this way - that there is no component we can understand in isolation that would help us understand the whole? That is, irreducible complexity. Note that all our scientific progress is based on things that we can understand. There could be many true scientific theories that we could not even formulate - and how would we know?

Note: I personally think AI is possible, that we will be able to understand it. But I recognize this is an opinion, not a fact.


> if Einstein hadn't proposed his theory of relativity, someone else eventually would have (especially as the evidence for it accumulated) though it probably would have taken several decades

Perhaps someone else would have worked out Special Relativity within a decade, e.g. Minkowski working off Maxwell's equations.

> Einstein took about 10 years to generalize his "special relativity". A better mathematician could have done this much more quickly

General relativity, however, is such a huge leap over special relativity - relying on both the "happiest thought in Einstein's life" (i.e. the equivalence of gravity and acceleration) and complex math I don't understand - that it alone might have taken decades to "generalize" from Special Relativity.


"Machine whisperers"... love that



