
LLMs are statistical text generators whose output depends on the model and the context it is given. They have gotten so good largely because the context they can effectively operate over keeps growing. Run the same model over a small context and you will get very uninteresting results.
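
To make "statistical text generator conditioned on context" concrete, here is a toy sketch (a bigram model I made up for illustration; nothing like a real transformer, but the shape is the same): everything the sampler can produce is a function of the learned statistics plus the context it is handed.

    import random

    # Toy "statistical text generator": a bigram table. The output is a
    # function of (learned counts, current context) and nothing else.

    def train_bigrams(text):
        words = text.split()
        table = {}
        for a, b in zip(words, words[1:]):
            table.setdefault(a, []).append(b)
        return table

    def generate(table, context, n=20):
        out = context.split()
        for _ in range(n):
            choices = table.get(out[-1])
            if not choices:      # nothing left to condition on -> stop
                break
            out.append(random.choice(choices))
        return " ".join(out)

    corpus = "hex is base 16 and oct is base 8 and binary is base 2"
    print(generate(train_bigrams(corpus), "hex is"))

Feed it a richer context and the continuations get more interesting; feed it almost nothing and it stalls, which is the point being made above.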

The only reason it seems to be reasoning is that it is probably stuffing a lot of reasoning-like text into its context and regurgitating it in ways that are statistically weighted against the other things in the context about whatever is being reasoned about.

Frankly, even most commenters on HN don't understand how LLMs operate. They think the model itself is what knows about different bases like hex and oct, when really it looked up a bunch of material on different bases and stuffed it into the context before the model was ever invoked.
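
A rough sketch of that retrieve-then-prompt flow, under the assumption described above (search_docs, build_prompt, and call_model are placeholder names I made up, not any product's actual internals): the "knowledge" about bases arrives in the prompt before the model runs.

    # Retrieval happens first; the model only ever sees the assembled prompt.

    DOCS = [
        "Hexadecimal (hex) is base 16, using digits 0-9 and A-F.",
        "Octal (oct) is base 8, using digits 0-7.",
        "Binary is base 2, using digits 0 and 1.",
    ]

    def search_docs(query, top_k=2):
        # Naive keyword match standing in for a real search/retrieval step.
        scored = sorted(DOCS, key=lambda d: -sum(w in d.lower() for w in query.lower().split()))
        return scored[:top_k]

    def build_prompt(question):
        context = "\n".join(search_docs(question))
        return context + "\n\nQuestion: " + question + "\nAnswer:"

    def call_model(prompt):
        # Placeholder for the actual LLM call; here we just show the prompt
        # the model would receive, retrieved context and all.
        return prompt

    print(call_model(build_prompt("what base is hex")))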
