Hacker News

An LLM is not an AI supercomputer. It will regularly give you information that is false, and if you do not know the subject well, you probably won't notice how wrong it is. We are still a very long way from Star Trek's computer being able to help you out.





I treat LLMs per the advice given when Wikipedia launched: use it as a jumping-off point, but check its sources.

It's hard to dive if you don't know how to swim. LLM-generated information requires you to understand what the model is producing in order to judge it. For that, you need an education... which it can't be trusted to provide.

You can't expect someone without a physics background to understand quantum entanglement, and you can't expect someone without programming knowledge to comprehend memory management.

And if you do have the background... you're going to do a much better job than AI "slop", a nickname that has become popular for a very good reason.


A calculator will give you bad information too if you don't know how to use the tool.

A calculator won't give you bad information if you use it correctly, though. AI will. The same prompt can generate vastly different answers.

Even the codegen examples published by the AI companies themselves have flaws in them. Critical flaws, such as Claude's testing rig that doesn't test what it claims to. The system is inherently unreliable for most of the purposes it is currently being used for.
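To make the non-determinism point concrete, here's a minimal toy sketch (not a real LLM, and not any vendor's actual implementation): LLMs typically pick the next token by sampling from a softmax over logits, and at a nonzero temperature the same prompt, i.e. the same logits, can produce different continuations across runs. The vocabulary and logit values below are made up for illustration.

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Sample one token index from softmax(logits / temperature)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling: walk the cumulative distribution.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Toy vocabulary and made-up next-token logits for some fixed prompt.
vocab = ["Paris", "London", "Rome", "Berlin"]
logits = [2.0, 1.5, 1.0, 0.5]

# Greedy decoding (the temperature -> 0 limit) is deterministic.
greedy = vocab[max(range(len(logits)), key=lambda i: logits[i])]

# Sampled decoding at temperature 1.0 varies from run to run:
# distinct seeds stand in for distinct API calls.
samples = {vocab[sample_token(logits, 1.0, random.Random(seed))]
           for seed in range(20)}
print(greedy, samples)
```

Running this shows the greedy pick is always the same token, while the sampled runs land on several different tokens, which is the sense in which "the same prompt can generate vastly different answers" (a calculator, by contrast, is a pure function of its input).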


> A calculator won't give you bad information if you use it correctly, though. AI will. The same prompt can generate vastly different answers.

I think this is exactly it, except that part of "knowing how to use it" is also largely about knowing its limitations… "trust" but verify. My 11-year-old has been using ChatGPT/Claude since it came out, and I have nothing but awesome experiences with how she is using it.


You sorta just made the point I'm trying to make here.

How the heck does your 11-year-old verify? Does she turn to you, who already has the necessary background? AI-generated information cannot be verified by someone who cannot already do what it does.


How do you verify? Think about how you would do that if, say, you were forced to use AI for everything you do (I have a friend who works in a place like this…).

Of course, my kid is 11, so she is learning about algebra and electric motors and logic and the Roman Empire… "AI" is one step in that learning, but an insanely patient teacher. A while back, she did not understand cube roots. She asked, was given an answer, still didn't get it, was given an alternate explanation which still didn't click (worse than the original), asked again ("I still do not understand, do you have another way to try to explain it to me…"), and so on. Again, it is a guide, a very knowledgeable and patient guide… it is part of the journey, not the destination. Anyone using it as the destination (judging by MANY HN posts, that's a vast majority of people) is going to be in a world of hurt.


Could you easily verify the info from the library books back in the day?

Back in the day, publishers faced fraud cases if they didn't verify their authors' writings, so yes, verification was part of the chain.

Different brands also traded on the trustworthiness of their publications, and would issue yearly errata.

Neither of those is analogous to glue on pizza.



