I think part of the problem comes down to the sheer amount of jargon in even the simplest research paper. During my time in graduate school (CS) I would often do work that drew on papers in mathematics (differential geometry) for some of the things I was researching. Even being fairly well versed in the jargon of both fields, I was often left dumbfounded reading a paper.
This seems to me like a situation that is easily exploited by an AI that generates plausible text. If you pack enough jargon into your paper, you will probably make it past several layers of review until someone actually sits down and checks the math/consistency, which will, of course, be off in a way that is easy to detect once someone actually looks.
It's a problem academia has in general. STEM fields especially have gotten so specialized that you practically need a second PhD in paper reading just to begin to understand the cutting edge. Maybe forcing text to be written so that early undergrads can understand it (without simplifying it to the point of losing meaning) would prevent this, since an AI would likely be unable to pull off such a feat without real context and understanding of the problem. Almost like an adversarial Feynman method.