The well-educated renaissance man gets a lot more feedback on what is useful and what isn't. I think GPT-3 could be vastly improved simply by assigning weights to knowledge, e.g. valuing academic papers more heavily than the rest of the training data.
Humans get this through experience and time (recognizing patterns about which sources to trust), but there is nothing magical about it. Adding it should be very easy.
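To make this concrete, here is a minimal sketch of what per-source weighting could look like during training, written in a PyTorch style. Everything here is a hypothetical illustration: the SOURCE_TRUST table, its values, and the weighted_lm_loss function are assumptions for the sake of the example, not anything GPT-3 actually does.

```python
import torch
import torch.nn.functional as F

# Hypothetical trust weights per data source (illustrative values only).
SOURCE_TRUST = {
    "academic_paper": 1.0,
    "news_article": 0.6,
    "social_media": 0.2,
}

def weighted_lm_loss(logits, targets, sources):
    """Cross-entropy per example, scaled by the trust weight of its source.

    logits:  (batch, vocab) next-token predictions
    targets: (batch,) target token ids
    sources: one source label per example
    """
    per_example = F.cross_entropy(logits, targets, reduction="none")
    weights = torch.tensor([SOURCE_TRUST[s] for s in sources])
    # Weighted mean: trusted sources contribute more to the gradient,
    # so the model is pulled harder toward matching them.
    return (per_example * weights).sum() / weights.sum()

# Toy usage: a batch of three examples from different sources.
logits = torch.randn(3, 50257, requires_grad=True)
targets = torch.randint(0, 50257, (3,))
loss = weighted_lm_loss(
    logits, targets, ["academic_paper", "social_media", "news_article"]
)
loss.backward()  # gradients now reflect the per-source weighting
```

The same effect could also be achieved at the data-sampling stage, by drawing from trusted corpora more often, rather than by reweighting the loss.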
I had this exact conversation with a friend over the weekend. If GPT-n weights all input equally, then we are truly in for a bad ride. It's basically the same problem we are experiencing with social media.
It is a very interesting problem. Throughout history, humans have been able to rely on direct experience via our senses to evaluate input and ideas.
Many of those ideas are now complex enough that direct experience doesn't work, e.g. global warming, economics, and various policies. Furthermore, even direct (or near-direct, such as video) experience is becoming less trustworthy due to technologies like deepfakes, and eventually VR and Neuralink.
It seems to me that this problem of validating what is real and true might soon be an issue for both humans and AI. Are we both destined to put our future in the hands of weights provided by 'experts'?