I think it’s more likely, “engineers who didn’t try the tool their CEO specifically asked them to try, after warning them there would be consequences if they didn’t.”
Sometimes I do OP’s approach, sometimes yours, but in all cases, writing down what I need done in detailed English gets me to a better understanding of what the hell I’m even doing.
Even if I wrote the same prompts and specs and then typed everything myself, it would have already been an improvement.
4o-mini costs ~$0.26 per Mtok; running qwen-2.5-7b on a rented 4090 (you can probably get better numbers on a beefier GPU) will cost you about $0.80 per Mtok. But 3.5-turbo was $2 per Mtok in 2023, so IMO actual technical progress in LLMs drives prices down just as hard as venture capital does.
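The self-hosting figure is easy to sanity-check with back-of-the-envelope math. A minimal sketch, where the rental price and throughput are illustrative assumptions (not measurements from the comment):

```python
# Back-of-the-envelope cost per million tokens (Mtok) for self-hosting.
# All input numbers below are illustrative assumptions, not measurements.

def cost_per_mtok(rental_usd_per_hour: float, tokens_per_second: float) -> float:
    """Dollars per million generated tokens at full utilization."""
    tokens_per_hour = tokens_per_second * 3600
    return rental_usd_per_hour / (tokens_per_hour / 1_000_000)

# Assumed: a rented 4090 at ~$0.35/hr sustaining ~120 tok/s on a 7B model.
print(round(cost_per_mtok(0.35, 120), 2))  # → 0.81
```

Under those assumptions you land right around $0.80/Mtok; the result is dominated by sustained throughput, so batching requests to keep the GPU busy is what makes self-hosting competitive.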
When Uber did it in the 2010s, cars didn't get twice as fast and twice as cheap every year.
When I pay attention to o3 CoT, I notice it spends a few passes thinking about my system prompt. Hard to imagine this question is hard enough to spend 13 seconds on.
That’s a good thing. You want an LLM to know about the product or service you’re selling and promote it to its users. Getting into the training data is the new SEO.
Ironically, Cloudflare is also the reason OpenAI’s agent mode with web use isn’t very usable right now. Every second time I asked it to do a mundane task like checking me in for a flight, it couldn’t because of Cloudflare.
We're seeing many posts about site owners getting hit by millions of requests because of LLM scrapers. We can't blame Cloudflare for this because it's literally a necessary evil.
Not in my experience. Unless you explicitly prompt and bias the model toward that kind of deep answer (which you won't unless you're already experienced in the field), you're going to get some sycophantic, superficial drivel that's only slightly better than the Wikipedia page.