I have never found embeddings that helpful, nor have I seen the models use context beyond 30-50K tokens well. I get better results by providing only the context I know for sure is relevant, and explaining why I'm providing it. Perhaps embeddings help if you have a bunch of boilerplate documentation you need to pattern-match against, but generally I try to only give the models tasks that can be contextualized by fewer than 15-20 medium code files or pages of documentation.
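As a rough sketch of what I mean by curated context (the file paths, the budget number, and the tokens-per-character heuristic are all made up for illustration):

```python
# A minimal sketch of hand-picked context: each file comes with a stated
# reason for inclusion, and a hard token budget replaces embedding-based
# retrieval. Paths and the 4-chars-per-token heuristic are assumptions.

def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English/code.
    return len(text) // 4

def build_prompt(task: str, context: list[tuple[str, str, str]],
                 budget_tokens: int = 30_000) -> str:
    """context is a list of (path, reason_for_inclusion, contents)."""
    parts = [f"Task: {task}\n"]
    used = estimate_tokens(parts[0])
    for path, reason, contents in context:
        cost = estimate_tokens(contents)
        if used + cost > budget_tokens:
            # Prefer trimming context over stuffing the window.
            raise ValueError(f"{path} would blow the {budget_tokens}-token budget")
        parts.append(f"--- {path} (included because: {reason}) ---\n{contents}\n")
        used += cost
    return "\n".join(parts)

prompt = build_prompt(
    "Fix the off-by-one error in pagination",
    [
        ("api/pagination.py", "contains the suspect offset math",
         "def page(items, n, size): ..."),
        ("tests/test_pagination.py", "shows the expected behavior",
         "def test_last_page(): ..."),
    ],
)
```

The point of the `reason` field is the "explaining why" part: stating why each file is present seems to keep the model focused on it instead of treating it as filler.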