10 Ways to Detect AI Writing Without Technology
As more of my students have submitted AI-generated work, I’ve gotten better at recognizing it.
AI-generated papers have become regular but unwelcome guests in the undergraduate college courses I teach. I saw my first AI-generated paper last summer, and in the months since, I’ve come to expect several per assignment, at least in 100-level classes.
I’m far from the only teacher dealing with this. Turnitin recently announced that in the year since it debuted its AI detection tool, about 3 percent of papers it reviewed were at least 80 percent AI-generated.
Just as AI has improved and grown more sophisticated over the past nine months, so have teachers. AI writing often has a distinct style, with several tells that have become more and more apparent to me the more frequently I encounter it.
Before we get to these strategies, however, it’s important to remember that suspected AI use isn’t immediate grounds for disciplinary action. These cases should be used as conversation starters with students and even – forgive the cliché – as a teachable moment to explain the problems with using AI-generated work.
To that end, I’ve written previously about how I handle suspected AI cases, about the troubling limitations and discriminatory tendencies of existing AI detectors, and about what happens when educators incorrectly accuse students of using AI.
With those caveats firmly in place, here are the signs I look for to detect AI use from my students.
1. How to Detect AI Writing: The Submission Is Too Long
When an assignment asks students for one paragraph and a student turns in more than a page, my spidey sense goes off.
Almost every class does have one overachieving student who will do this without AI, but that student usually sends 14 emails the first week and submits every assignment early. Most importantly, while that student’s work runs long, it is usually genuinely well written. A student who suddenly overproduces raises a red flag.
2. The Answer Misses The Mark While Also Being Too Long
Length in and of itself isn’t enough to identify AI use, but overlong assignments often have additional strange features that make them suspicious.
For instance, the assignment might be four times the required length yet doesn’t include the required citations or cover page. Or it goes on and on about something related to the topic but doesn’t quite get at the specifics of the actual question asked.
3. AI Writing is Emotionless Even When Describing Emotions
If ChatGPT were a musician, it would be Kenny G or Muzak. As it stands now, AI writing is the equivalent of verbal smooth jazz or grey noise. ChatGPT, for instance, has a very peppy, positive vibe that somehow doesn’t convey actual emotion.
One assignment I have asks students to reflect on important memories or favorite hobbies. You immediately sense the hollowness of ChatGPT's response to this kind of prompt. For example, I just told ChatGPT I loved skateboarding as a kid and asked it for an essay describing that. Here’s how ChatGPT started:
As a kid, there was nothing more exhilarating than the feeling of cruising on my skateboard. The rhythmic sound of wheels against pavement, the wind rushing through my hair, and the freedom to explore the world on four wheels – skateboarding was not just a hobby; it was a source of unbridled joy.
You get the point. It’s like an extended elevator jazz sax solo but with words.
4. Cliché Overuse
Part of the reason AI writing is so emotionless is that its cliché use is, well, on steroids.
Take the skateboarding example in the previous entry. Even in the short sample, we see lines such as “the wind rushing through my hair, and the freedom to explore the world on four wheels.” Students, regardless of their writing abilities, always have more original thoughts and ways of seeing the world than that. If a student actually wrote something like that, we’d encourage them to be more authentic and truly descriptive.
Of course, with more prompt adjustments, ChatGPT and other AI tools can do better, but the students using AI for assignments rarely put in this extra time.
5. The Assignment Is Submitted Early
I don’t want to cast aspersions on those true overachievers who get their suitcases packed a week before vacation starts, finish winter holiday shopping in July, and have already started saving for retirement, but an early submission may be the first signal that I’m about to read some robot writing.
For example, several students this semester submitted an assignment the moment it became available. That is unusual, and in all of these cases, their writing also exhibited other tells consistent with AI use.
Warning: Use this tip with caution as it is also true that many of my best students have submitted assignments early over the years.
6. The Setting Is Out of Time
AI image generators frequently have little tells that signal the model that created an image doesn’t understand what the world actually looks like — think extra fingers on human hands or buildings that don’t really follow the laws of physics.
When AI is asked to write fiction or describe something from a student’s life, similar mistakes often occur. Recently, a short story assignment in one of my classes resulted in several stories that took place in a nebulous time frame that jumped between modern times and the past with no clear purpose.
If done intentionally, this could actually be pretty cool and give the stories a kind of magical realism vibe, but in these instances it was just wonky and out of left field, and felt kind of alien. Or, you know, like a robot had written it.
7. Excessive Use of Lists and Bullet Points
Here are some reasons that I suspect students are using AI if their papers have many lists or bullet points:
1. ChatGPT and other AI generators frequently present information in list form, even though human authors generally know that’s not an effective way to write an essay.
2. Most human writers don’t naturally write this way, especially new writers, who often struggle with organizing information.
3. While lists can be a good way to organize information, presenting more complex ideas in this manner can be …
4. … annoying.
5. Do you see what I mean?
6. (Yes, I know, it's ironic that I'm complaining about this here given that this story is also a list.)
8. It’s Mistake-Free
I’ve criticized ChatGPT’s writing here, yet in fairness, it does produce very clean prose that is, on average, more error-free than what many of my students submit. Even experienced writers miss commas, write long and awkward sentences, and make little mistakes – which is why we have editors. ChatGPT’s writing isn’t so much too “perfect” as it is too clean.
9. The Writing Doesn’t Match The Student’s Other Work
Writing instructors know this instinctively and have long been on the lookout for changes in voice that could indicate a student is plagiarizing work.
AI writing doesn't really change that. When a student submits new work that is wildly different from previous work, or when their discussion board comments are riddled with errors not found in their formal assignments, it's time to take a closer look.
10. Something Is Just . . . Off
The boundaries between these different AI writing tells blur together, and sometimes it’s a combination of a few things that makes me suspect a piece of writing. Other times it’s harder to tell what is off about the writing, and I just get the sense that a human didn’t do the work in front of me.
I’ve learned to trust these gut instincts to a point. When confronted with these more subtle cases, I will often ask a fellow instructor or my department chair to take a quick look (I remove identifying student information when necessary). Getting a second opinion helps ensure I’ve not gone down a paranoid “my students are all robots and nothing I read is real” rabbit hole. Once a colleague agrees something is likely up, I’m comfortable going forward with my AI hypothesis based on suspicion alone, in part because, as mentioned previously, I use suspected cases of AI as conversation starters rather than grounds for accusations.
Again, it is difficult to prove students are using AI and accusing them of doing so is problematic. Even ChatGPT knows that. When I asked it why it is bad to accuse students of using AI to write papers, the chatbot answered: “Accusing students of using AI without proper evidence or understanding can be problematic for several reasons.”
Then it launched into a list.
Erik Ofgang is a Tech & Learning contributor. A journalist, author, and educator, his work has appeared in The New York Times, the Washington Post, Smithsonian, The Atlantic, and the Associated Press. He currently teaches in Western Connecticut State University’s MFA program. While a staff writer at Connecticut Magazine, he won a Society of Professional Journalists award for his education reporting. He is interested in how humans learn and how technology can make that more effective.