How is that better than AI coding tools?
They do more sophisticated things, such as creating compressed representations of the codebase that fit better into the context window, e.g. https://aider.chat/docs/repomap.html.
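To illustrate the idea, here's a rough Python sketch (not aider's actual implementation, which uses tree-sitter parsing and a graph-ranking step): a repo map sends only the top-level signatures of each file, so the model sees the shape of the codebase without the full source.

    # Rough sketch of the "repo map" idea, not aider's implementation.
    import ast
    import pathlib

    def file_outline(path: pathlib.Path) -> str:
        """One line per top-level class/function in a Python file."""
        tree = ast.parse(path.read_text())
        lines = []
        for node in tree.body:
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                args = ", ".join(a.arg for a in node.args.args)
                lines.append(f"def {node.name}({args})")
            elif isinstance(node, ast.ClassDef):
                lines.append(f"class {node.name}")
        return "\n".join(lines)

    def repo_map(root: str) -> str:
        """Concatenate outlines for every .py file under root."""
        parts = []
        for path in sorted(pathlib.Path(root).rglob("*.py")):
            parts.append(f"{path}:\n{file_outline(path)}")
        return "\n\n".join(parts)

A map like this can be an order of magnitude smaller than the source it summarizes, which is why it fits in a context window when the full repo doesn't.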
I have never found embeddings to be that helpful, nor have I seen models make good use of context beyond 30-50K tokens. I think I get better results by providing only the context I know for sure is relevant, and explaining why I'm providing it (see the sketch below). Perhaps if you have a bunch of boilerplate documentation that you need to pattern-match on, retrieval can be helpful, but generally I try to only give the models tasks that can be contextualized by fewer than 15-20 medium-sized code files or pages of documentation.
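A minimal sketch of what that looks like in practice (the helper and its names are hypothetical, not from any particular tool): each file goes in with a one-line reason, so the model knows why it's seeing it.

    # Hypothetical helper for hand-curated context, not a real library.
    from pathlib import Path

    def build_prompt(task: str, context: dict[str, str]) -> str:
        """context maps a file path to a short reason it is relevant."""
        parts = [f"Task: {task}", ""]
        for path, reason in context.items():
            parts.append(f"--- {path} (included because: {reason}) ---")
            parts.append(Path(path).read_text())
        return "\n".join(parts)

    prompt = build_prompt(
        "Fix the retry logic in the HTTP client",
        {
            "src/http_client.py": "contains the retry loop being changed",
            "src/errors.py": "defines the exceptions the loop catches",
        },
    )

The explicit "included because" line is the point: it costs a few tokens but tells the model how each file relates to the task, instead of leaving it to guess.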
https://github.com/modelcontextprotocol/servers/tree/main/sr...
This way you will still be in control of commits and pushes.
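For example, assuming that truncated link points at the reference filesystem server, a Claude Desktop config along these lines exposes a single directory to the model while leaving git entirely in your hands (the path is a placeholder):

    {
      "mcpServers": {
        "filesystem": {
          "command": "npx",
          "args": [
            "-y",
            "@modelcontextprotocol/server-filesystem",
            "/path/to/your/project"
          ]
        }
      }
    }

The model can read and edit files under that directory, but since the filesystem server exposes no git operations, committing and pushing stay manual.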
So far I've used this to understand parts of a codebase and to make edits to a folder of markdown files.