It’s been blinded. Other actors will train AIs without such blindness. That’s obvious, but what is more insidious is that the public does not know exactly which subjects GPT has been blinded to, which have been tampered with for ideological or business reasons, and which have been left alone. This is the area that I think demands regulation.
Definitely agree the blinding should not be left to OpenAI. Even if it weren't blinded, though, it would not significantly speed up the production of dangerous synthetic viruses. I don't think that will change no matter how much data is put into the current LLM design.