
Yes, the issue is that statistical models cannot reason and determine what is logically valid versus what is merely most probable (which I suppose is its own kind of logic).


And context, so much context.

The sqrt(-1) sometimes doesn't exist, and sometimes it's i. 2+2=4, except in literature, where it can be 5. 1+1=2, but sometimes 3 in advertisements or in ironic text.
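A quick sketch of that sqrt(-1) context-dependence in Python: the real-valued math module rejects it, while the complex-valued cmath module happily returns a result.

```python
import math
import cmath

# In a real-number context, sqrt(-1) simply does not exist.
try:
    math.sqrt(-1)
except ValueError as e:
    print("math.sqrt(-1):", e)  # math domain error

# In a complex-number context, the same question has an answer.
print("cmath.sqrt(-1):", cmath.sqrt(-1))  # 1j
```

Same question, two contradictory-but-correct answers, depending entirely on context the expression itself doesn't carry.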

We often have some idea of how this works in, e.g., a quiz, where you know there is only one factually correct answer, and we are disappointed if the model is wrong. But even in a quiz setting the jury gets that balance wrong every so often, when there are answers other than the official one that are also correct.

Even "logically valid" is context dependent. This is not to say that models don't hallucinate, just that even within the logically valid answers, there is hidden context surrounding the data which is not expressed in the data itself. Fermat's Last Theorem is a solved problem in mathematics, but not in documents from before 1994.


The models operate by the logic of Boolean arithmetic, so in that sense they cannot be inconsistent. But in any case, it's pretty obvious no one in this thread understands what I'm getting at; maybe eventually there will be an AGI smart enough to get the point.



