As the article says, it helps to develop an intuition for what the models are good or bad at answering. I can often copy-paste some logs, tracebacks, and screenshots of the issue and demand a solution without writing a long manual prompt, but it takes time to learn when that will likely work and when it's doomed to fail.
This is likely the biggest disconnect between people who enjoy using them and those who don't. Recognizing when GPT-4 is about to output nonsense, and stopping it within the first few sentences before it wastes your time, is a skill that won't develop until you stop using these models as if they're meant to be infallible.
At least for now, you have to treat them like cheap metal detectors and not heat-seeking missiles.