It never bullshits, but like any application built on GPT it does occasionally hallucinate. We wrote up how we tackled that problem here: https://medium.com/p/f3bfcc10e4ec
We've also made some improvements to how we chunk and parse the content so the information it retrieves is actually relevant, since we've noticed hallucinations tend to happen when the context you give the model is irrelevant.
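To give a rough idea of what "chunking" means here, the sketch below splits text into overlapping, paragraph-aware chunks before retrieval. It's illustrative only, not our actual implementation; the function name and the `max_chars`/`overlap` parameters are hypothetical.

```python
# Illustrative sketch of overlap-based, paragraph-aware chunking.
# chunk_text, max_chars, and overlap are hypothetical names, not our real code.

def chunk_text(text: str, max_chars: int = 1200, overlap: int = 200) -> list[str]:
    """Split text into chunks, preferring paragraph boundaries and keeping a
    small overlap so no chunk starts mid-thought."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks: list[str] = []
    current = ""
    for para in paragraphs:
        if len(current) + len(para) + 2 <= max_chars:
            # Paragraph still fits in the current chunk.
            current = f"{current}\n\n{para}" if current else para
        else:
            if current:
                chunks.append(current)
            # Carry a tail of the previous chunk forward as overlap.
            tail = current[-overlap:] if current else ""
            current = f"{tail}\n\n{para}" if tail else para
    if current:
        chunks.append(current)
    return chunks


if __name__ == "__main__":
    sample = (
        "First paragraph about the product.\n\n"
        "Second paragraph with more details.\n\n"
        "Third paragraph wrapping things up."
    )
    for i, chunk in enumerate(chunk_text(sample, max_chars=60, overlap=20)):
        print(i, repr(chunk))
```

The point of splitting on paragraph boundaries and keeping an overlap is that the model is less likely to be handed an isolated fragment with no surrounding context, which is exactly the situation where we've seen it hallucinate.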