Hacker News

How accurate is it in its responses?

Have you had any instances of it writing a load of bullshit to a user? If so, how did you address the issue from a technical perspective?
Never bullshit, but like any application built with GPT it does occasionally hallucinate. We published info on how we solved that problem here: https://medium.com/p/f3bfcc10e4ec

We've also made some improvements to how we chunk and parse the content, to make sure the information it finds is useful, since we've noticed that hallucinations tend to happen when the context you give the model is irrelevant.
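To illustrate the idea (this is a minimal sketch with hypothetical names, not their actual pipeline): chunk the source text with some overlap so sentences aren't cut at a boundary, score each chunk against the user's question, and drop low-scoring chunks before they ever reach the model's context.

```python
# Hypothetical sketch: overlapping chunking plus a relevance filter,
# so the model is never prompted with irrelevant context.

def chunk_text(text, size=200, overlap=50):
    """Split text into overlapping character chunks; overlap keeps
    sentences that straddle a boundary intact in at least one chunk."""
    step = size - overlap
    return [text[start:start + size]
            for start in range(0, max(len(text) - overlap, 1), step)]

def relevance(chunk, query):
    """Crude keyword-overlap score in [0, 1]. A real system would
    likely use embedding similarity instead."""
    q = set(query.lower().split())
    c = set(chunk.lower().split())
    return len(q & c) / len(q) if q else 0.0

def select_context(text, query, threshold=0.3):
    """Keep only chunks that look relevant to the query, discarding
    the noise that tends to invite hallucination."""
    return [c for c in chunk_text(text) if relevance(c, query) >= threshold]
```

The keyword filter stands in for whatever retrieval scoring they actually use; the structural point is only that filtering happens before prompting, not after.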