
> Cursor has been trying to do things to reduce the costs of inference, especially through context pruning.

You can also use Cline with gemini-2.0-flash, which supports a huge context window. Cline will send it the full context rather than pruning via RAG, which helps.
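For anyone curious, here's roughly what "send the full context" amounts to: a minimal sketch against the Gemini API using the google-generativeai Python SDK. This is just an illustration of stuffing whole files into the prompt instead of retrieving snippets, not Cline's actual implementation; the file paths and environment variable are placeholders.

    import os
    import pathlib
    import google.generativeai as genai

    # Placeholder key; gemini-2.0-flash accepts a very large context,
    # so whole files can go straight into the prompt instead of RAG snippets.
    genai.configure(api_key=os.environ["GEMINI_API_KEY"])
    model = genai.GenerativeModel("gemini-2.0-flash")

    files = ["src/app.py", "src/utils.py"]  # hypothetical project files
    context = "\n\n".join(
        f"# {p}\n{pathlib.Path(p).read_text()}" for p in files
    )

    response = model.generate_content(
        context + "\n\nRefactor the duplicated parsing logic into one helper."
    )
    print(response.text)

The tradeoff is cost: full-context requests burn far more input tokens than pruned ones, which is part of why a cheap model like Flash makes this approach viable.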



I've just tried gemini-2.0-flash, and it's an incredible model that's great for making edits. I haven't tried any heavy lifting with it yet, but it's replaced Claude for a lot of my edits. It's also great at agentic stuff!


I love Cline, but I've never tried the Gemini models with it. I'll give it a shot tonight. Thanks for the tip!


You can also use the Gemini Code Assist extension for VS Code, which is basically free, but the code it wrote almost never worked for me. So far I use only Claude 3.7 or Grok in chat mode. Almost no model, as of today, is good at coding.


Did Grok 3 finally get an API?



