
>No it's not pointless, language is important.

Not that important, and not for this purpose. Things still work the same, even in languages with widely different semantics and ways of referring to them (I don't mean the trivial case where a house is called talo in Finnish etc., but languages where the semantics and terms genuinely differ).

Using language-specific (e.g. English-specific or German-specific) word definitions and etymology to prove some property of the thing referred to is an old, cheap philosophical trick that sounds more profound than it actually is.

Even more so: we might not say it for a car, but if we built a human-looking robot with legs, we'd very much call it a "better runner" if it started surpassing humans at running. Hell, we used to call employees doing manual calculations "calculators". Later, when machines doing that became available, we used the same term for them.

So the idea that, just as "a human is a runner but a car is not a runner", "a human is a thinker but a machine is not a thinker", and that this marks some profound difference, doesn't make sense anyway. Human running is associated with legs, a certain way of moving, etc. Thinking is more abstract and doesn't have such constraints.

>Cars are not runners.

That's just an accident of English having a dedicated word "runner" that doesn't also apply to a car going fast. The term "running", though, is used for both a human running and a car going fast ("That car was running at 100mph").

>"For all intents and purposes" is a cop out here.

"For all intents and purposes" means "in practice". Any lexicographical or conceptual arguments don't matter if what happens in practice remains the same (e.g. whether we decide an AGI is a "thinker" or a "processor" or whatever, it will still be used for tasks that we do via thinking, and it will still come up with the kinds of ideas and solutions that we come up with via thinking; effectively it will quack, look, and walk like a duck). The rest would be semantic games.

>We're talking about LLMs, you know large language models.

Which is irrelevant.

LLMs being large language models doesn't mean the language used to describe them (e.g. as "thinkers" or not) will change their effectiveness, what they're used for, or their ability to assist or harm us. It will just change how we refer to them.

Besides, AI in general can go way beyond LLMs and word predictors, eventually fully modelling human neural activity patterns and so on. So any argument that applies only to LLMs doesn't cover AI in general, or "the danger that AI destroys us" as per TFA.



