The way I’m thinking of it is as a junior engineer who can go do busy work for me. I’m not going to accept their “PRs” without a review. Even if it gets me 75% of the way there, that’s still a big time savings for me.
Hallucination is definitely a problem but can be somewhat mitigated by good prompting. GPT-4 seems less prone to hallucination. This will be better over time.
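To make "good prompting" concrete, here's a minimal sketch of my own (not the linked project's actual prompts): ground the model in the provided source and give it an explicit "I don't know" escape hatch, two common tactics for reducing hallucinated answers. The function name and template wording are hypothetical.

```python
# Hypothetical prompt template illustrating hallucination-mitigation tactics:
# 1) restrict the model to the supplied source code, and
# 2) give it an explicit refusal phrase instead of letting it guess.
def build_doc_qa_prompt(source_code: str, question: str) -> str:
    return (
        "You are documenting the code below. Answer ONLY from this code.\n"
        "If the answer is not in the code, reply exactly: I don't know.\n\n"
        f"--- CODE ---\n{source_code}\n--- END CODE ---\n\n"
        f"Question: {question}\n"
    )

prompt = build_doc_qa_prompt(
    "def add(a, b):\n    return a + b",
    "What does add() do?",
)
print(prompt)
```

The refusal instruction matters most: without a sanctioned way to say "I don't know," models tend to fill gaps with plausible-sounding fabrications.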
You can view the prompts used for generating docs here[1] and the prompts used for answering questions here[2].
Documentation of unknown quality is useless noise.
People don't grasp the unfathomable amount of garbage these models are going to generate. It doesn't matter how accurate they are; the failure to account for that remaining percentage of inaccuracy is going to create false confidence and cause errors to compound like mad.
It would be hell to lose trust in API docs because of those risks.