
Is this the type of reasoning that could be used to do basic math without errors (without using a dedicated math engine)? I wonder why that type of reasoning seems to struggle.

I think ChatGPT is interesting in its ability to highlight the subtle variations within a concept we tend to treat as a single attribute (e.g. reasoning), because humans tend to perform certain types of reasoning tasks at the same relative proficiency as others. No human can write complex code without being able to add large numbers, so the two are often lumped into the same skill category.




I think there just isn't enough mathematical data. English language data is orders of magnitude more plentiful than mathematical data, so there is a correlation here.

The more data there is, the better the LLM can form a realistic model. With less data, the resulting output is more of a statistical guess.

There is an argument to be made that more data just gives ChatGPT more things to copy and regurgitate, but given how vast the solution space is, I think the data at best covers less than 1 percent of it.

Basically, I think that if your data covers say 2 percent of the solution space, you can generate a better model than if the data covered 1 percent.
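To give a rough sense of scale for the coverage argument above, here is a back-of-the-envelope sketch. Both quantities are hypothetical assumptions chosen for illustration (the problem space is restricted to two-operand addition with up to 10-digit operands, and the training-example count is a guess, not a measurement):

```python
# Back-of-the-envelope sketch of the solution-space coverage argument.
# All numbers are hypothetical assumptions, not measurements.

# Distinct addition problems with two operands of up to 10 digits each:
problem_space = (10 ** 10) ** 2  # ~1e20 possible (a, b) pairs

# Generous guess at how many worked arithmetic examples a training
# corpus might contain (hypothetical):
training_examples = 10 ** 9

coverage = training_examples / problem_space
print(f"coverage: {coverage:.2e}")  # a vanishingly small fraction (1e-11)
```

Even with deliberately generous assumptions, the covered fraction is nowhere near 1 percent, which is consistent with the claim that most arithmetic outputs cannot simply be memorized.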





