Hacker News

What I mean is that their implementation (thinking only on the first response) yields no real benefit, because the model never sees the code itself. They run multiple function calls to analyze your codebase incrementally. If they ran the thinking model on the output of those function calls, performance would be great, but so far that's not what they're doing (yet). It would also dramatically increase the cost of running the same operation.
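The distinction being discussed can be sketched as a toy agent loop. This is purely illustrative: `call_model` and the message shapes are hypothetical stand-ins for a chat-completion API, and `thinking` is an assumed per-request flag for extended reasoning. The only point is where that flag is set: on the first request only, or also after each function-call result comes back.

```python
# Hypothetical agent loop. `call_model(messages, thinking)` stands in for
# a chat API; `tools` maps tool names to plain Python functions.
def run_agent(task, tools, call_model, think_on_tool_results):
    messages = [{"role": "user", "content": task}]
    # First request: thinking is enabled under either strategy.
    reply = call_model(messages, thinking=True)
    while reply.get("tool_call"):
        # Execute the requested tool (e.g. reading part of the codebase)
        # and feed the result back to the model.
        call = reply["tool_call"]
        result = tools[call["name"]](**call["args"])
        messages.append({"role": "tool", "content": result})
        # The disputed lever: whether the model also "thinks" here,
        # on the responses that actually contain the code.
        reply = call_model(messages, thinking=think_on_tool_results)
    return reply["content"]
```

With `think_on_tool_results=False`, reasoning happens before any code has been read; with `True`, every follow-up request pays the (much higher) thinking cost, which is the cost pressure mentioned above.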


But the way those models work is to rerun everything once the function-call results come in. Are you saying Cursor is not using the model you selected on function-call responses?


This sounds like a Cursor issue, not something that affects reasoning models in general.

edit: Ah, I see what you mean now.


That's my point. By offering effectively unlimited requests (500 fast requests plus unlimited slow requests) to people paying a fixed $20/mo, Cursor has put itself into a ruthless marginal-cost optimization game, where one of its biggest levers is reducing context sizes and discouraging thinking after every function call.

Tools like Claude Code and Cline don't face those constraints, since the cost burden falls on the user.




