> Mind you, I achieved 80% of what I wanted after iterating and "telling" the chat that their answers were wrong, and going over the code to double-check if everything was okay
I very often read things like this, and I'm struck by how often the person estimates "around 80%" of the work was good. It maps almost too neatly onto the Pareto Principle.
The LLM does the easy 80% (which we usually say takes 20% of the time anyway). Then the human has to do the harder remaining 20%, only with a much smaller mental model of how the original 80% fits together.