But ChatGPT didn't get good simply from reading docs.python.org. While I'll be the first to argue that it does things that resemble reasoning, its limited context window means it doesn't sit there and stew on things during inference. It got good at Python during training because of the reams and reams of Python code out there.
The reason I say this is that with something that has fewer examples in the wild but still has docs available online (specifically, AutoHotkey), ChatGPT is not nearly as good.
We'll get to where you're going, but we're not quite there just yet.
You're copying and pasting the docs for things it has already seen millions of lines of code for. It's not making inferences from the docs alone.
OP is saying that without this initial bootstrap corpus, this won't work well, and hence people accustomed to it won't adopt the language.