I assume this is only possible if the training data contains a single "right answer". If the training data contains two contradicting answers A and B, then, from the AI's perspective, there is no correct answer.
I assume that for questions like "What year was Bill Gates born in?", it should never return a wrong answer if the answer was in the training data. If it was not, it should respond that it doesn't know.
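To make that intuition concrete, here is a toy sketch. It is plain frequency counting over a made-up corpus, not how an actual LLM is trained, and the questions/answers are placeholders; it just illustrates the three cases: consistent data, contradicting answers A and B, and missing data.

```python
from collections import Counter

# Hypothetical toy "training corpus" of (question, answer) pairs.
corpus = [
    ("What year was Bill Gates born in?", "1955"),
    ("What year was Bill Gates born in?", "1955"),  # consistent answer
    ("Some disputed question?", "A"),               # contradicting answer A
    ("Some disputed question?", "B"),               # contradicting answer B
]

def answer_distribution(question: str) -> dict[str, float]:
    """Estimate an answer distribution by counting completions in the corpus."""
    counts = Counter(a for q, a in corpus if q == question)
    total = sum(counts.values())
    return {a: c / total for a, c in counts.items()}

# Consistent data -> all probability mass on one answer.
print(answer_distribution("What year was Bill Gates born in?"))  # {'1955': 1.0}

# Contradicting data -> probability split, no single "right answer".
print(answer_distribution("Some disputed question?"))            # {'A': 0.5, 'B': 0.5}

# Missing data -> empty distribution; ideally the model would say "I don't know".
print(answer_distribution("What year was Ada Lovelace born in?"))  # {}
```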