
What is your opinion on the "right level of detail" we should use when creating technical documents that an LLM will use to implement features?

When I started leaning heavily into LLMs, I was writing really detailed documentation. Not "20 minutes of voice recordings", but my specification documents would easily hit hundreds of lines even for simple features.

The result was decent, but extremely frustrating, because it would often deliver 80% to 90%, but the final 10% to 20% it could never get right.

So, what I naturally started doing was caring less about the details of the implementation and focusing on the behavior I want. This led me to simpler prompts, to the point that I don't feel the need to create a specification document anymore. I just use the plan mode in Claude Code, and it is good enough for me.

One way I started to think about this is that really specific documentation was almost 'over-fitting' my solution over other technically viable solutions the model could come up with. One example: if I want to sort an array, I could ask for "sort the array" or "merge sort the array", and by forcing a merge sort I may end up with a worse solution. Admittedly, sort is a simple and unlikely example, but this could happen with any topic. You may ask the model to use a hash set when a better solution would be a Bloom filter.
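To make the hash-set vs. Bloom filter trade-off concrete, here is a minimal sketch (a hypothetical illustration, not from the thread): an exact set answers membership queries perfectly but grows with the data, while a Bloom filter uses fixed memory and may return false positives, though never false negatives. The `BloomFilter` class and its parameters are made up for the example.

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: fixed-size bit array, k hash positions per item."""

    def __init__(self, size=1024, hashes=3):
        self.size = size
        self.hashes = hashes
        self.bits = [False] * size

    def _positions(self, item):
        # Derive k pseudo-independent positions by salting one hash function.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        # False means definitely absent; True means probably present.
        return all(self.bits[pos] for pos in self._positions(item))

seen_exact = set()           # exact answers, memory grows with the data
seen_approx = BloomFilter()  # fixed memory, small false-positive rate

for user_id in ["alice", "bob"]:
    seen_exact.add(user_id)
    seen_approx.add(user_id)
```

If you prompt for "a hash set", the model will produce something like `seen_exact`; a vaguer "track which IDs we've already seen, memory is tight" leaves room for the `seen_approx` design. Neither is universally better, which is exactly the over-fitting point.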

Given all that, do you think investing so much time in your prompts provides a good ROI compared with the alternative of not min-maxing every single prompt?



I 100% agree with the over-fitting part.

I tend to provide detailed PRDs, because even if the first couple of iterations from the coding agent are not perfect, it tends to be easier to get there (as opposed to starting from a vague prompt and iterating from that).

What I do sometimes is an experimental run - especially when I am stuck. I express my high-level vision, and just have the LLM code it to see what happens. I do not do it often, but it has sometimes helped me get out of being mentally stuck with some part of the application.

Funnily enough, I am facing this problem right now, and your post might just have reminded me that sometimes a quick experiment can be better than two days of overthinking the problem...


This mirrors my experience with AI so far. I've arrived at mostly using the plan and implement modes in Claude Code, with complete but concise instructions about the behavior I want, plus maybe a few guardrails for the direction I'd like the implementation to take. Use cases and examples seem to work well.

I kind of assumed that Claude Code is doing most of the things described in this document under the hood (but I really have no idea).


"The result was decent, but extremely frustrating. Because it would often deliver 80% to 90% but the final 10% to 20% it could never get right."

This is everyone's experience if they don't have a vested interest in LLMs, or if their domain is low risk (e.g., not regulated).



