astrange | 11 months ago | on: Every model learned by gradient descent is approxi...
It can do that. Verifying an answer is just another algorithm it can learn.
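To make that first point concrete, here is a minimal Python sketch of verification framed as a supervised learning target. The toy addition task and every name in it are illustrative assumptions, not anything from the thread; the point is only that a checker is a function from a claimed answer to correct/incorrect, so it is in principle one more function a model can be trained to approximate.

    import random

    def make_verification_example():
        # A verifier is just a function from a claimed answer to a label;
        # here the "algorithm" to be learned is checking whether a + b = c.
        a, b = random.randint(0, 999), random.randint(0, 999)
        claimed = a + b if random.random() < 0.5 else a + b + random.randint(1, 9)
        text = f"{a} + {b} = {claimed}"
        label = int(claimed == a + b)
        return text, label

    # Any sequence classifier could be trained on labeled pairs like these.
    for text, label in (make_verification_example() for _ in range(5)):
        print(text, "->", "correct" if label else "incorrect")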
LLMs mostly can't do math, but that, like most of their other flaws, is because of the tokenizer.
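And a quick illustration of the tokenizer point, assuming the tiktoken package and OpenAI's cl100k_base vocabulary (other tokenizers split digits differently):

    # pip install tiktoken
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # a BPE vocabulary used by some OpenAI models

    for text in ["12345 + 67890 = 80235", "3.14159", "999999999"]:
        pieces = [enc.decode([t]) for t in enc.encode(text)]
        # Digits get chunked by learned byte-pair merges rather than by
        # place value, so the model sees numbers in an inconsistent format.
        print(repr(text), "->", pieces)

With this vocabulary a long number is split into chunks of up to a few digits, so the alignment that column-wise arithmetic relies on is hidden from the model.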