I think I generally understand the transformer architecture. Now, "developing their own goals" maybe wouldn't make sense for LLMs alone, but "planning how to achieve [some goal]" seems to fall somewhere between "it could be done by adding on a small harness" (see the sketch below) and "don't they, in a sense, already do that?".
Like, if you ask ChatGPT to come up with a plan for how to accomplish some task, it can do this to some degree at least (I'm not saying it's great at it in general), and I don't see any clear limiting principle of the form "a transformer-based model that produces text cannot do [X]" as far as planning-in-text goes.
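To make the "small harness" point concrete, here's a minimal sketch of what such a loop might look like. `call_model` is a hypothetical placeholder standing in for whatever LLM API you'd actually use (it just returns canned text here so the example runs); the point is only that the planning loop itself is a few lines of glue around text-in, text-out.

```python
# Minimal sketch of a "planning harness" around a text-producing model.
# call_model() is a hypothetical stand-in for a real LLM API call; it returns
# canned text here so the loop is runnable end to end.

def call_model(prompt: str) -> str:
    """Placeholder for a real model call (e.g. an API request)."""
    if "Break the goal" in prompt:
        return "1. Research options\n2. Draft a plan\n3. Execute the first step"
    return f"(model output for: {prompt[:40]}...)"

def plan_and_execute(goal: str) -> list[str]:
    """Ask the model for a numbered plan, then feed each step back to it."""
    plan_text = call_model(f"Break the goal into numbered steps: {goal}")
    steps = [line.split(".", 1)[1].strip()
             for line in plan_text.splitlines() if "." in line]
    results = []
    for step in steps:
        results.append(
            call_model(f"Goal: {goal}\nCurrent step: {step}\nCarry it out.")
        )
    return results

if __name__ == "__main__":
    for result in plan_and_execute("organize a small conference"):
        print(result)
```

Nothing here requires any new capability from the model itself: the harness just asks for a plan as text, parses it, and prompts again per step.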