I like what you're doing there. It does seem like we might need some new kind of language to interface with LLMs: a sort of language of prompt engineering, one that's more specific than raw English but also more powerful than pure templating systems.
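To gesture at what I mean, here's a hypothetical sketch of such a layer as a small Python DSL: structured pieces (a persona, hard rules, few-shot examples) that compile down to a prompt string. Everything here is made up for illustration; it's already more than a raw template, but still far short of a real language:

    # Hypothetical prompt DSL: all names invented for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class Prompt:
        role: str                                       # persona constraint
        rules: list[str] = field(default_factory=list)  # hard constraints
        examples: list[tuple[str, str]] = field(default_factory=list)  # few-shot pairs

        def render(self, query: str) -> str:
            parts = [f"You are {self.role}."]
            parts += [f"Rule: {r}" for r in self.rules]
            parts += [f"Q: {q}\nA: {a}" for q, a in self.examples]
            parts.append(f"Q: {query}\nA:")
            return "\n\n".join(parts)

    p = Prompt(role="a terse SQL tutor",
               rules=["answer with a single query", "no prose"],
               examples=[("count users", "SELECT COUNT(*) FROM users;")])
    print(p.render("emails of users created this week"))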
But we also have to admit that LLMs may become smart enough (maybe OpenAI's o1 already is) not only to write the code that solves some task, but to understand the task well enough to write better unit tests than humans ever could. Once AI writes unit tests (even internally) for everything it spits out, we can probably say humans are truly obsolete for writing apps. Even then, though, the LLM's output will still need to be computer code, rather than having LLMs "interpret" English all the time to "run" apps.
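To make that loop concrete, here's a minimal sketch of the "model writes the code and its own unit tests, then the tests gate the code" idea. It assumes the OpenAI Python SDK; the model name, prompt, and file handling are illustrative, and a real version would have to strip prose and markdown fences from the response before running it:

    # Sketch: ask the model for an implementation AND its tests, then run them.
    import subprocess
    from openai import OpenAI

    client = OpenAI()
    task = "Write a Python function slugify(title) plus pytest unit tests for it."

    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": task}],
    )
    code = resp.choices[0].message.content  # naive: assumes pure code comes back

    with open("generated.py", "w") as f:
        f.write(code)

    # The model's own tests gate the model's own code -- the "internal unit tests" idea.
    result = subprocess.run(["pytest", "generated.py"], capture_output=True, text=True)
    print("tests passed" if result.returncode == 0 else result.stdout)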
Ever heard of the halting problem [0]? Every time I hear these claims, it sounds like someone saying we can travel in time as soon as we invent a faster-than-light vessel, or better yet, Doctor Who's TARDIS. There's a whole family of theorems showing that a formal system (which computers are) can't be completely automated: there are classes of problems it provably can't solve. For anything an LLM does, you can write better-performing software by hand, except for the one task it's genuinely best suited for: translation between natural languages. And even that is only because it's a pain to write all the rules.
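For anyone who hasn't seen it, the contradiction is short enough to sketch in a few lines of Python. This is the standard diagonalization, not runnable code, since the whole point is that the assumed oracle can't exist:

    def halts(f):
        ...  # assume, for contradiction, a total oracle: True iff f() halts

    def paradox():
        if halts(paradox):  # if the oracle says we halt...
            while True:     # ...loop forever;
                pass
        # ...and if it says we loop forever, halt immediately.

    # halts(paradox) is wrong whichever answer it gives, so no such total
    # halts() can exist. No amount of model scale changes that.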
LLMs are doing genuine reasoning already (and no, I don't mean consciousness or qualia), and they have been since GPT-3.5.
They can already take a description of a task and write a computer program to do it, because they genuinely understand the task (again, no qualia implied).
I never said there are no limits to what LLMs can do, or no limits to what logic can prove, or even no limits to what humans can understand. Everything has limits.
EDIT: And before you accuse me of saying LLMs can understand all tasks, go back and re-read the post, so you don't make that mistake again.