
I don't know how you could think this if you'd ever used GPT-4 for anything serious.



I don't think that's necessarily the case.

I get the strong impression that even people with extensive experience using LLMs have astoundingly little insight into what they are actually witnessing. This is often paired with equally little insight into what's actually going on in their own cognitive processes.

Somehow, it's still not clear to most people that LLMs and even vector databases create knowledge that wasn't in the original data.

In fact, that's most of what they do! Isolated, non-novel direct quotation is the exception, not the rule.
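
To make that concrete, here is a minimal sketch (assuming the sentence-transformers package; the model name and the toy corpus are illustrative, not from the thread) of how embedding similarity in a vector store links fragments that never co-occur in the data:

    from sentence_transformers import SentenceTransformer, util

    # Toy corpus: no single document mentions both aspirin and heart attacks.
    corpus = [
        "Aspirin inhibits platelet aggregation.",
        "Reduced platelet aggregation lowers the risk of arterial clots.",
        "Arterial clots are a common cause of heart attacks.",
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")  # example model
    corpus_emb = model.encode(corpus, convert_to_tensor=True)
    query_emb = model.encode("Does aspirin affect heart attack risk?",
                             convert_to_tensor=True)

    # Nearest-neighbour search ranks all three fragments against the query,
    # surfacing a chain of facts that no individual document states.
    for hit in util.semantic_search(query_emb, corpus_emb, top_k=3)[0]:
        print(round(hit["score"], 3), corpus[hit["corpus_id"]])

The retrieval result is a relationship the embedding space implies, not a quotation from any one record.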


> create knowledge that wasn't in the original data.

The word is "hallucinate" or "confabulate". The way these models "create" pretend-knowledge is totally useless.


I'm not referring to hallucinations.

I'm referring to novel relationships drawn between data points in the corpus as a result of training and inference.

This is apparent in something as simple as summarization, as in the sketch below.
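
A minimal sketch (assuming the Hugging Face transformers package; the model name and input text are only examples) of how an abstractive summary can draw a relationship the source never states:

    from transformers import pipeline

    # Example model; any abstractive summarizer illustrates the point.
    summarizer = pipeline("summarization",
                          model="sshleifer/distilbart-cnn-12-6")

    text = ("The company's revenue fell 20% in the third quarter. "
            "During the same period, its two largest competitors "
            "cut prices sharply.")

    # The summary may link the two facts ("revenue fell as competitors cut
    # prices") even though the source never asserts that relationship:
    # that synthesis is the knowledge that wasn't in the original data.
    print(summarizer(text, max_length=40, min_length=10)[0]["summary_text"])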



