This has been my major concern with the currently available LLMs.
You can know the input, you can know the output, and you may even know what it was trained on, but none of the output is ever cited. Unless you are already familiar with the topic, you cannot confidently distinguish between actual fact and something that merely sounds reasonable and so gets accepted as fact.