
Nice article.

>Whether and how an LLM actually "thinks" is a separate discussion.

The "whether" is hardly a discussion at all. Or, at least one that was settled long ago.

"The question of whether a computer can think is no more interesting than the question of whether a submarine can swim."

--Edsger Dijkstra



The document that quote comes from is hardly a definitive discussion of the topic.

“[…] it tends to divert the research effort into directions in which science can not—and hence should not try to—contribute.” is a pretty myopic take.

--http://www.cs.utexas.edu/users/EWD/ewd08xx/EWD898.PDF


Dijkstra is clearly approaching the subject from the more practical point of view of an engineer/scientist. His focus is on applying the technology to solve problems, and from that point of view, whether AI fits the definition of "human thinking" is indeed uninteresting.


Dijkstra myopic. Got it.


This must be that 7th-grade reading level for which America is famous.


Hardy har har; serves me right for poking fun. Good day, sir.


It's interesting if you're asking the computer to think, which we are.

It's not interesting if you're asking it to count to a billion.


That doesn't really settle it; it just dismisses the question. The submarine analogy could be interpreted to support either conclusion.


Wasn’t the point that the process does not matter if we can’t distinguish the end results?


You might be conflating the epistemological point with Turing's test, et cetera. I could not agree more that indistinguishability is a key metric. These days, it is quite possible (at least for me) to distinguish LLM outputs from those of a thinking human, but in the future that could change. Whether LLMs "think" is not an interesting question because these are algorithms, people. Algorithms do not think.


Yes, but the OP remarked that the question "was settled long ago"; however, the quote presented doesn't settle the question, it simply dismisses it as not worth considering. For those who do believe it is worth considering, the question is arguably still open.


I doubt Dijkstra was unable to distinguish between a submarine and a swimmer.


The end result here is moving through the water. Both the swimmer and the submarine can do that. Whether the submarine can swim like a human is irrelevant.


It's relevant if the claim is stronger than "the submarine moves through water." If instead one were to say the submarine mimics human swimming, that would be false, which is what we often see with claims regarding AGI.

In that regard, it's a bit of a false analogy, because submarines were never meant to mimic human swimming. But AI development often has that motivation. We could just say we're developing powerful intelligence-amplification tools for use by humans, but for whatever reason, everyone prefers the sci-fi version. Augmented Intelligence is the forgotten meaning of AI.

Submarines never replaced human swimming (we're not whales); they enabled human movement under water in a way that wasn't possible before.


I do not view it as dismissive at all; rather, it accurately characterizes the question as a silly one. "Swim" is a verb applicable to humans, as is "think". Whether submarines can swim is a silly question. The same goes for whether machines can think.


"A witty saying proves nothing" -- Voltaire, Le dîner du comte de Boulainvilliers (1767): Deuxième Entretien



