Hacker News
College student put on academic probation for using Grammarly: 'AI violation' (nypost.com)
55 points by ZeidJ on Feb 23, 2024 | hide | past | favorite | 45 comments


Perhaps there is more to this story. Has the student posted her paper?

She claims she only used it "to fix spelling and punctuation errors, not to create or edit content"

But how would a plagiarism detector pick up that she fixed spelling and punctuation mistakes using Grammarly? If she never made those mistakes in the first place, and didn't use Grammarly at all, shouldn't the detector still call it AI generated?

So, this story is either about an AI detector that flags human writing as AI, or about a lady lying about the extent to which she used AI to write her paper. Or maybe the university is spying on which extensions are installed on student laptops, or sniffing network requests, I don't know.

On a final note, is it cheating if I let a friend proofread my paper and offer me suggestions? Why would AI be any different? Clearly it is cheating if my friend actually writes the paper for me, as it would be if AI wrote the paper for me.


The burden of proof should be very high if you're going to wreck someone's academic career and send them to the dean of student integrity. "Computer Says It's AI" shouldn't cut it. A good lawsuit precedent is needed to smack down these guilty-or-innocent black boxes.


AFAIK AI detectors are about as reliable as the AI they're supposed to detect.

This state is shifting as AI improves, and detection is unlikely ever to reach 100% correctness (it may ultimately approach 0% correctness).

At best, you may be able to use such a detection as a flag prompting further investigation. Even that may or may not be warranted depending on the state of AI detection at a given point in time.

However, you should not use AI detection software alone to serve as proof or disproof.


I'm guessing a coin toss is closer to what Grammarly does and that would get you 50% accuracy haha.


> But how would a plagiarism detector pick up that she fixed spelling and punctuation mistakes using Grammarly?

These detectors produce tons of false positives all the time. You really cannot conclude much from the fact that one detected something.


Schools trying to prevent their students from using AI are fighting a rear-guard action. The detection tools will never be reliable, and the conflict between attempts to ban AI use in educational settings and the widespread use of AI in the real world will become increasingly apparent.

Just yesterday, Google added AI features to my Workspace account. Now, when I open a Google Doc, a handy menu appears offering to change the tone of the text, lengthen or shorten it, convert it to bullet points, or follow my custom prompt. When I’ve asked it to correct text that contains grammatical and spelling errors, it cleans up the mistakes while making only minor changes to the wording. It will also write a new text from scratch—no need to cut and paste from Gemini or ChatGPT.

It won’t be easy, but educators will just have to figure out new methods of teaching and evaluation that assume that students have powerful AI tools available at all times.


You just reminded me of my COBOL exams in the '90s.

We had to write the program by hand on paper during a 3-hour exam where we couldn't leave the room, and the teacher checked them all manually.

I guess this guy had been a COBOL teacher for so long that his brain was part human, part compiler.

He was wise enough to know that a few missing ; or . weren't a big deal and that the core logic was the most important stuff.

And during our exercise sessions before the exam, any editor other than DOS edit was prohibited.

No wonder I always have a feeling of cheating whenever copilot finishes half of my code.


Long-time adjunct instructor here - it's pretty easy to tell when a student is writing above their level or outside their tone and style. A lot of the time, I can tell the work is not the student's; maybe not AI-created, but definitely created by someone else.

I'd like to see the actual paper. Turning a student in for plagiarism is a serious action and not taken lightly. I think the instructor had other clues besides machine alerts.


"Turning a student in for plagiarism is a serious action and not taken lightly."

In this case the instructor did not turn the student in for plagiarism. The student went to the board of academic integrity herself in an attempt to undo the instructor's decision to fail her.


I have found myself nodding in agreement with this take: https://x.com/fchollet/status/1750702101523800239


Academia is getting into a "Computer says no" stage without actually understanding why. How ironic.


Reliably detecting AI-generated text is mathematically impossible.

https://www.newscientist.com/article/2366824-reliably-detect...


Paywall: To continue reading, subscribe today with our introductory offers



Pretty bad use of Turnitin. They specifically say that their "AI detection" is in beta and not to depend on it alone.


Why are you charging people thousands of dollars to train them to do things a computer can do for free? Every time an AI paper gets an A, the class should be permanently cancelled. So many colleges aren't teaching anything useful, just taking money in exchange for an "I'm not from a poor family" certification for employers. And with the loan crisis, even that certification is fake.


It's possible that the goal of the course is not to be able to produce an A paper but to gain the knowledge that would allow one to produce an A paper. That is to say, the paper may be a measurement not the goal.


If the AI provides the knowledge, and you have the AI, you have the knowledge. And as Goodhart explains, that makes it a bad measurement. Invent better measurements that measure the actual goal.


Learning? For humans? Visionary.


Learn all you want. Why do you need a grade to tell you what you learned? Why do you need an employer to require you to prove you memorized a book?


One of the dumbest things I've ever read on this site, impressive!


The other thing is that passing a class isn't for the benefit of the schools; it's for the benefit of the student. If the student gets away with cheating and they fail to develop the skills for their career, then they are the ones to lose out. And if it turns out ASI does everything for us all in the future anyway then it doesn't matter.


I've used Turnitin as late as December. The MS Word plugin scans your paper and offers a Grammarly-like experience. It shows which words, quotes, and sentences have come up in other papers, and the probability of plagiarism.

It tells you what it thinks of your paper before you turn it in. I think the threshold can be adjusted depending on the school using it. But my experience is that it shows you where the discrepancies are before you submit, and it gives you a chance to fix them.

So I'm not sure how it's possible to submit with this big of a discrepancy.


Don't these AI detectors also heavily penalize people whose native tongue is not English?


It is pretty hard to make most AI systems talk broken English with bad grammar. That means if an essay is submitted with that kind of grammar, it's probably real.


The problem is not necessarily broken English, but English that you actively learned (as opposed to passively absorbing it as a native speaker). ESL speakers tend to be a lot stiffer and more structured in the way they construct their sentences.

Native speakers usually make subtle "mistakes"*, or even mess with grammar intentionally. While ESL speakers can and do learn to do this, there's a weird line of "does this make sense to other people without my context" that creates a difference in how often you do it.

This is best exemplified by the trope of "Sorry for my broken english", followed by some of the most terse prose you have ever read

*Is it a mistake if it's widely used? Or even done intentionally?


> Generate a three sentence comment saying it's easy to use LLMs to write a comment in broken English. Use a style and diction suggesting the author is an English second language learner from subcontinental Asia.

Using LLMs for making comment in broken English is very easy thing. It help much because you can type and it give back what you need, but with simple English. Very good tool for who not first language English.

(I'd give it a 3/10. It manages the broken English, but its errors don't really represent the kind of broken English requested.)


> Using LLMs for making comment in broken English is very easy thing. It help much because you can type and it give back what you need, but with simple English. Very good tool for who not first language English.

Well, it does pass in the sense that I would never guess that this was AI generated.


That's the converse of PP's comment.


Not in this instance, but couldn't a student immediately sue the university for defamation if they weren't using AI?


From my experience, and from seeing others use Grammarly, Turnitin doesn't detect it at all, even if you abuse it for longer sentences. It is a magic wand for fixing minor errors, for the most part, not an AI capable of writing parts of a paper for you. Potentially she struggled with paraphrasing or used ChatGPT.


The software has found you guilty.


Students will need to start screen recording all their assignments now. The nice thing is that if you wait until the school fails you or kicks you out you might be able to sue and get your degree paid for by the proceeds.


I work in EdTech and here are my points:

While AI detection software can easily have a false negative, it's much less likely to have a false positive.

The instructor used two confirmations: Turnitin and the other tool. That makes the likelihood of a false positive even lower.

The grammarly part is just a distraction from the main concern, which is whether the main portions of the paper were AI generated or not.

If I was the decision maker, I would count it against the student, HOWEVER, I would give a first warning, not academic probation.

I would also allow students to pass their papers through the exact same software (Turnitin) to pre-screen their papers. Can this lead to students not being caught, just by editing the parts Turnitin thinks were generated? Yes, HOWEVER, it still makes the students work for that pass instead of suddenly getting the axe.
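The false-positive reasoning in this thread is easy to make concrete with Bayes' theorem. A minimal sketch, using purely illustrative numbers - these are assumptions for the sake of the example, not measured rates for Turnitin or any other detector:

```python
def p_ai_given_flag(prior_ai, true_positive_rate, false_positive_rate):
    """P(paper is AI-written | detector flags it), by Bayes' theorem."""
    p_flag = (true_positive_rate * prior_ai
              + false_positive_rate * (1 - prior_ai))
    return (true_positive_rate * prior_ai) / p_flag

# Assume 10% of papers are AI-written, the detector catches 80% of those,
# and it falsely flags 5% of human-written papers (illustrative numbers only).
one_tool = p_ai_given_flag(0.10, 0.80, 0.05)
print(f"P(AI | one flag)  = {one_tool:.2f}")   # 0.64: over a third of flags hit innocent students

# Two tools both flagging, *if* they were independent (a strong assumption;
# detectors trained on similar data will tend to correlate):
two_tools = (0.80**2 * 0.10) / (0.80**2 * 0.10 + 0.05**2 * 0.90)
print(f"P(AI | two flags) = {two_tools:.2f}")
```

The point of the sketch: "two confirmations" only drives the false-positive probability down if the tools err independently, and because most submissions are human-written, even a small false-positive rate produces a large share of wrongly flagged papers.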


Do you know of any published data evaluating the efficacy of these detectors? Because I've never seen any, so I'm curious how you know the false positive/false negative rates.


Hrrrm, this is counterintuitive to me. My own intuition would be to trust the AI-detection tools as much as AI (which is to say: only so much).

However, it's early days.

If you have a minute, can you explain in a little more detail why you believe these tools are reliable (or what may make them reliable, or reliable enough)? And/or do you have a source that explains it? Or a place to start reading on why people think these tools are effective?


> While AI detection software can easily have a false negative, it's much less likely to have a false positive.

To what extent does the software simply overfit on OpenAI’s house writing style? And to the extent that students may simply learn to imitate that writing style, what theoretical basis is there for this software having any validity?


People in EdTech are the least credible speakers on this topic.


Do you have a better source for this article? NY Post is not reliable.



What false statements has NY Post made?


The irony of it being a criminal justice professor handing out punishments from poorly understood AI systems with no recourse.


These stories always make me wonder what those colleges and universities are teaching when it can allegedly be so easily replaced with AI-generated content. The times when writing fluffy, unoriginal essays would constitute a "soft skill" are definitely gone.

Anyway, until AI tools become standard, my advice to students and researchers is to use version control systems to document every step of the evolution of a paper. That's what I'm going to do while I'm still in Academia (not that I plan to stay, after 16 years I've had enough of this nonsense).


At the undergrad level essays aren't supposed to be incredibly original. It would be pretty hard for them to be, given the volume of students answering the exact same essay questions.


It makes perfect sense that AI would be the perfect tool for the kind of uniform content students produce. After all, we expect thousands if not millions of students to produce the same or very similar output from the same input. Feed this, perhaps secretly, into some model, and no wonder it can all be produced by AI.

Undergraduates are not and never will be doing anything original. Most of the work is getting them to the point where they can produce at least something in a fairly standard style and format.



