I’ve also had success writing documentation ahead of time (kept in a separate repo as docs), then referencing it at various stages. The doc contains quasi-code examples of each feature, so I can have the models stubbed in one pass, failing tests written in the next, and so on.
But there’s a guiding light that both the LLM and I can reference.
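For illustration, one of those doc entries might pair a quasi-code usage example with the code that a later pass fills in (the `Invoice`/`total` names here are hypothetical, just a sketch of the shape):

```python
# docs/invoices.md might sketch the feature like this:
#
#   inv = Invoice(lines=[("widget", 2, 5.00)])
#   inv.total()   # -> 10.00
#
# Pass 1 stubs the model so the doc example imports cleanly;
# pass 2 turns the doc example into a failing test;
# pass 3 makes it pass:
from dataclasses import dataclass, field

@dataclass
class Invoice:
    # each line is (name, qty, unit_price)
    lines: list = field(default_factory=list)

    def total(self) -> float:
        return sum(qty * price for _, qty, price in self.lines)
```

Because the doc example doubles as the acceptance test, the LLM and I are checking each pass against the same artifact.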
Sometimes I wonder if pseudocode could be better for prompting than expressive human language, because it follows a structure and is expressive but constrained -- have you seen research on this, and whether it's an effective technique?
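To make that concrete, a pseudocode prompt might look like this (a made-up example, not from any paper):

```
function deactivate_user(user_id):
    user = db.find(user_id)
    if user is none: return NotFound
    user.active = false
    emit AuditEvent("user_deactivated", user_id)
    save(user)
```

The structure pins down the control flow and side effects, while leaving naming, error types, and persistence details open for the model to fill in idiomatically.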
I also like asking for a plan of action first -- what it intends to do before it actually makes any edits or touches files.