It's part of why they love agents and tools like Cursor: they turn a problem that could've been one prompt and a few hundred tokens into dozens of prompts and thousands of tokens ;)
It'd be nice if I could solve any problem by speccing it out in its entirety and then just implementing it. In reality, I have to iterate and course-correct, as do agentic flows. You're right that the AI labs love it, though; iterating like that is expensive.
The bigger picture goal here is to explore using prompts to generate new prompts
I see this as the same as a reasoning loop. It's the approach I use to quickly code up pseudo-reasoning loops on local projects. Someone asked in another thread, "how can I get the LLM to generate a whole book?" Well, just like this: if it can keep prompting itself with "what would chapter N be?" until it outputs "THE END", you get your book (see the sketch below).
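A minimal sketch of that loop in Python, assuming the official openai client; the model name, system prompt, chapter cap, and "THE END" stop phrase are placeholders, not a real setup:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    chapters, n = [], 1
    while True:
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder; any chat model works
            messages=[
                {"role": "system",
                 "content": "You are writing a novel, one chapter per reply. "
                            "After the final chapter, write THE END."},
                {"role": "user",
                 "content": "Story so far:\n"
                            + "\n\n".join(chapters)[-8000:]  # crude context budget
                            + f"\n\nWrite chapter {n}."},
            ],
        )
        chapter = resp.choices[0].message.content
        chapters.append(chapter)
        n += 1
        if "THE END" in chapter or n > 50:  # hard cap as a safety net
            break
    print("\n\n".join(chapters))

The only state carried between iterations is the model's own previous output, which is the "prompts generating prompts" idea in its simplest form.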
Crazy that OpenAI only launched o1 in September 2024. Some of these ideas have been swirling for a while but it feels like we're in a special moment where they're getting turned into products.
This is kind of like a self-generating agentic context... cool. I think regular agents, especially adversarial agents, are easier to keep focused on most types of problems though.
I often find that getting LLMs to do things like solve mathematical problems or produce accurate citations is much harder than simply writing software for the same task.
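For the math half at least, the contrast is easy to show. Here's a toy sketch: a few lines of deterministic Python that evaluate an arithmetic expression exactly, where prompting an LLM for the same answer costs tokens and can still be wrong (the expression and helper are hypothetical, just to make the point concrete):

    import ast, operator

    # Map AST operator nodes to the corresponding arithmetic functions.
    OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv,
           ast.Pow: operator.pow}

    def evaluate(expr: str) -> float:
        """Safely evaluate +, -, *, /, ** over numeric literals."""
        def walk(node):
            if isinstance(node, ast.BinOp):
                return OPS[type(node.op)](walk(node.left), walk(node.right))
            if isinstance(node, ast.Constant):
                return node.value
            raise ValueError("unsupported expression")
        return walk(ast.parse(expr, mode="eval").body)

    print(evaluate("3**7 + 1234 * 5.5"))  # 8974.0 -- exact, zero tokens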