- start by "chatting" with the model and asking for "how you'd implement x y z feature, without code".
- what's a good architecture for x y z
- what are some good patterns for this
- what are some things to consider when dealing with x y z
- what are the best practices ... (etc)
- correct / edit out some of the responses
- say "ok, now implement that"
The whole trick is using the LLM itself to load the relevant material into its own context. A model is only going to attend to what's in its context, not to "whatever unstated connections the user hopes it will make on its own". Or, at least in practice, it's much better at dealing with things that are actually present in its context.
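To make that concrete, here's a rough sketch of the flow against a chat API. It assumes the openai Python client; the model name, the example feature, and the correction text are all placeholders I made up, not anything from the thread.

```python
# Sketch of "chat about the design first, then say 'now implement that'".
# The point is that the planning answer stays in the message history, so
# the implementation request attends to it instead of guessing.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder; any chat model works

messages = [
    # Step 1: ask for the approach, explicitly without code.
    {"role": "user", "content":
        "How would you implement rate limiting for a public API? "
        "No code yet; just the architecture, trade-offs, and best practices."}
]

plan = client.chat.completions.create(model=MODEL, messages=messages)
plan_text = plan.choices[0].message.content

# Step 2: correct / edit the plan. Here the correction is just another
# user turn; in an interactive chat you'd edit the response text directly.
messages.append({"role": "assistant", "content": plan_text})
messages.append({"role": "user", "content":
    "Drop the distributed-cache option, we only run a single node. "
    "Ok, now implement that plan as a Python module."})

# Step 3: the model now writes code against the plan sitting in context.
impl = client.chat.completions.create(model=MODEL, messages=messages)
print(impl.choices[0].message.content)
```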
Another aspect of prompting that's often misunderstood is "where did the model see this before in its training data". How many books or other authoritative, quality sources have you seen where each problem is laid out as simple bullet points? Versus how many "tutorials" of questionable quality and provenance do that? Of course it's the tutorials. Which are often just rtfm or an example transcribed poorly into a piece of code, published to make cents from advertising.
If instead you ask the model for things like "architecture" or "planning", you'll elicit answers shaped by the quality sources: manuals, books, authoritative pieces of content. It will gladly write on those themes, and then gladly attend to them and produce much better code in a follow-up question.
I hadn't seen this before. Why is asking for planning better than asking it to think step by step?