
How do you verify the accuracy of the docs? How do you deal with model hallucinations?

It would be hell to lose trust in API docs because of those risks.




The way I’m thinking of it is as a junior engineer who can go do busy work for me. I’m not going to accept their “PRs” without a review. Even if it gets me 75% of the way there, that’s still a big time savings for me.


Hallucination is definitely a problem, but it can be somewhat mitigated by good prompting. GPT-4 seems less prone to hallucination, and this will get better over time.

You can view the prompts used for generating docs here[1] and the prompts used for answering questions here[2].

[1] https://github.com/context-labs/autodoc/blob/master/src/cli/...

[2] https://github.com/context-labs/autodoc/blob/master/src/cli/...
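
For what it's worth, the basic guardrail is to pin the model to the source you hand it and tell it to admit uncertainty instead of guessing. Here is a minimal sketch of that idea (hypothetical, not the actual autodoc prompts linked above; the prompt text and function name are placeholders):

    // Hypothetical sketch: constrain GPT-4 to the file contents it is given
    // and ask it to admit uncertainty rather than guess. These are not the
    // real autodoc prompts -- see the linked files for those.
    const systemPrompt = [
      "You are a code documentation assistant.",
      "Only describe behavior that is present in the provided source.",
      "If something is not evident from the source, say you are unsure.",
    ].join(" ");

    async function documentFile(source: string): Promise<string> {
      // Plain call to the OpenAI chat completions endpoint (Node 18+ fetch).
      const res = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
        },
        body: JSON.stringify({
          model: "gpt-4",
          messages: [
            { role: "system", content: systemPrompt },
            { role: "user", content: `Document this file:\n\n${source}` },
          ],
        }),
      });
      const data = await res.json();
      return data.choices[0].message.content;
    }

It doesn't eliminate hallucination, but it narrows the space of things the model can confidently make up, and you still review the output before trusting it.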


Very good point, but easily solved - just tag the docs as being generated by GPT-4, and make sure whoever reads them knows it.


That doesn't solve the problem.

Documentation of unknown quality is useless noise.

People don't grasp the unfathomable amount of garbage that's going to be generated by all these models. It doesn't matter how accurate they are; not knowing which remaining percent is inaccurate will breed false confidence and make errors compound like mad.



