> without needing to wait for an answer from a human (that could also be wrong).

The difference is that you have some reassurance that the human is not wrong: their expertise and experience.

The problem with LLMs, as demonstrated by the top-level comment here, is that they constantly make stuff up. While you may think you're learning things quickly, how do you know you're learning them "correctly", for lack of a better word?

Until LLMs can say "I don't know", I really don't think people should be relying on them as a first-class method of learning.
