
I taught some Computational Geometry courses, and efficiently computing the intersections of N line segments is not as straightforward as you might initially think. Since somewhere some computation must be done to recognize this, and LLMs are not specifically trained for this task, it's not surprising they struggle.

In general, basic geometry seems under-explored in machine learning.
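
For context, even the textbook version of the task is fiddly: a minimal sketch (my own illustration, not from the parent) of counting proper pairwise intersections with orientation tests, which is O(n^2) and deliberately ignores the collinear and shared-endpoint corner cases that make the full problem hard:

```python
from itertools import combinations

def ccw(a, b, c):
    # Sign of the cross product (b - a) x (c - a):
    # > 0 counter-clockwise, < 0 clockwise, 0 collinear.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, p3, p4):
    # Proper intersection only: each segment's endpoints lie strictly
    # on opposite sides of the other segment's supporting line.
    d1, d2 = ccw(p3, p4, p1), ccw(p3, p4, p2)
    d3, d4 = ccw(p1, p2, p3), ccw(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0

def count_intersections(segments):
    # Naive O(n^2) pairwise check; the Bentley-Ottmann sweep line
    # brings this down to O((n + k) log n) for k intersections.
    return sum(segments_intersect(*s, *t)
               for s, t in combinations(segments, 2))
```

The sweep-line version is where it stops being straightforward: handling degeneracies (collinear overlaps, three segments through one point, vertical segments) is most of the work.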



Yes, but so is telling if a photo contains a dog or understanding sentiment in a paragraph of text. Complexity isn't quite the issue; I think there is a distinction between the type of reasoning these models have learnt and the kind necessary for concrete mathematical reasoning.


The models do not reason. They have learned associations, because those associations appeared in their training sets.


They also generalise and categorise and perhaps even form abstractions based on those associations. Those are the beginnings of reasoning.

I expect that as the models grow more complicated so will their reasoning ability.


> Since somewhere some computation must be done to recognize this

Humans don't have a "compute intersections" ability (other than a few who have learned it laboriously through algebra), we have a "see things and count them" mechanism. We aren't visually taking lines in a planar space and determining where they cross. We know what an intersection looks like, we see one, increment a counter, and find the next one. If it's less than around five, we do this all at once. Otherwise we literally count, sometimes in small groups, sometimes one at a time.




