11 Ways to Detect AI Writing Without Technology

Recent updates

This article was updated in September 2025.

AI is getting better and better at writing, but it still has frequent giveaways that tip me off as a teacher when a student might be using it to write their papers. These AI writing “tells” include a lack of creativity, overreliance on clichés, and a tone that often feels flat. That said, it is getting harder to spot AI writing.

In a previous version of this story, I wrote about how writing about current events was a major weakness for AI; that’s no longer the case, as most advanced AI tools can now incorporate info mined from recent news stories with relative ease. That’s why I’m updating this story so it better reflects current strategies that actually work against the latest slate of AI tools, including GPT-5 and more. I use these tools regularly in the classes I teach, and in conversations with colleagues, I have learned that many of them have developed similar techniques.

Before we get to these strategies, however, it’s important to remember that suspected AI use isn’t immediate grounds for disciplinary action. These cases should be used as conversation starters with students and even – forgive the cliché – as a teachable moment to explain the problems with using AI-generated work.

To that end, I’ve written previously about how I handled these suspected AI cases, the troubling limitations and discriminatory tendencies of existing AI detectors, and about what happens when educators incorrectly accuse students of using AI.

With those caveats firmly in place, here are the signs I look for to detect AI use from my students.

1. The Submission Is Too Long

When an assignment asks students for one paragraph and a student turns in more than a page, my spidey sense goes off.

Almost every class has one overachieving student who will do this without AI, but that student usually sends 14 emails the first week and submits every assignment early; most importantly, while too long, their work is genuinely well written. A student who suddenly overproduces raises a red flag.

2. The Answer Misses The Mark While Also Being Too Long

Length in and of itself isn’t enough to identify AI use, but overlong assignments often have additional strange features that make them suspicious.

For instance, the assignment might be four times the required length yet doesn’t include the required citations or cover page. Or it goes on and on about something related to the topic but doesn’t quite get at the specifics of the actual question asked.

3. AI Writing Is Emotionless Even When Describing Emotions

At the beginning of the year, I wrote about how, if AI writing were a musician, it would play Muzak. These days that is still true, though ChatGPT is improving. The chatbot’s writing is still somewhat insincere and flat, but not quite as flat as it once was. For instance, back in January, when I told ChatGPT I loved skateboarding as a kid and asked it to write an essay about that for me, it wrote:

Skateboarding was more than just a hobby when I was a kid—it was a way of life. The moment I first stepped onto a board, I discovered a new kind of freedom that didn’t exist anywhere else. It wasn’t just about rolling down the sidewalk; it was about learning balance, feeling the wind against my face, and carving out a world that belonged entirely to me.

This still doesn’t look like what a student would write and there’s a corniness to it, but the AI touch is a bit harder to spot now than it was a little while back. At the beginning of the year, this same prompt led ChatGPT to create the more obviously AI-generated piece of writing below:

As a kid, there was nothing more exhilarating than the feeling of cruising on my skateboard. The rhythmic sound of wheels against pavement, the wind rushing through my hair, and the freedom to explore the world on four wheels – skateboarding was not just a hobby; it was a source of unbridled joy.

4. Cliché Overuse

Part of the reason AI writing is so emotionless is that its cliché use is, well, on steroids.

Take the skateboarding example in the previous entry. Even in that short sample, we see lines such as “feeling the wind against my face, and carving out a world that belonged entirely to me.” Students, regardless of their writing abilities, always have more original thoughts and ways of seeing the world than that. If a student actually wrote something like that, we’d encourage them to be more authentic and truly descriptive.

Of course, with more prompt adjustments, ChatGPT and other AI tools can do better, but the students using AI for assignments rarely put in this extra time.

5. The Assignment Is Submitted Early

I don’t want to cast aspersions on those true overachievers who get their suitcases packed a week before vacation starts, finish winter holiday shopping in July, and have already started saving for retirement, but an early submission may be the first signal that I’m about to read some robot writing.

For example, several students this semester submitted an assignment the moment it became available. That is unusual, and in all of these cases, their writing also exhibited other stylistic points consistent with AI writing.

Warning: Use this tip with caution as it is also true that many of my best students have submitted assignments early over the years.

6. Excessive Use of Lists and Bullet Points  

Here are some reasons that I suspect students are using AI if their papers have many lists or bullet points:

1. ChatGPT and other AI generators frequently present information in list form even though human authors generally know that’s not an effective way to write an essay.

2. Most human writers will not inherently write this way, especially new writers who often struggle with organizing information.

3. While lists can be a good way to organize information, presenting more complex ideas in this manner can be . . .

4. . . . annoying.

5. Do you see what I mean?

6. (Yes, I know, it's ironic that I'm complaining about this here given that this story is also a list.)

7. It’s Mistake-Free 

I’ve criticized ChatGPT’s writing here, yet in fairness, it does produce very clean prose that is, on average, more error-free than what many of my students submit. Even experienced writers miss commas, write long and awkward sentences, and make little mistakes – which is why we have editors. ChatGPT’s writing isn’t so much too “perfect” as too clean.

8. The Writing Doesn’t Match The Student’s Other Work  

Writing instructors know this inherently and have long been on the lookout for changes in voice that could be an indicator that a student is plagiarizing work.

AI writing doesn't really change that. When a student submits new work that is wildly different from previous work, or when their discussion board comments are riddled with errors not found in their formal assignments, it's time to take a closer look.

9. Citations That Don’t Exist

This is probably one of my favorite tells, as it's less ambiguous than some others on this list.

As AI gets more advanced, I frequently find myself reading essays that at first glance look pretty good and have decent quotes and citations. However, upon closer examination, these citations can’t be found in the real world.

When you spot this type of case, it can be a great conversation starter around AI. You don’t have to be accusatory but can say something such as, “I tried to find this source you cited because it sounded interesting. Before I grade this, can you send me a copy or tell me more about where you found it?”

However, though AI still occasionally makes up citations and quotations, most models have improved in this regard in recent months. Therefore, this method is, sadly, not as effective as it once was.

10. Repeating Patterns

My oldest child is obsessed with a Sesame Street song that includes the chorus, “Patterns repeat, they go over and over . . . .”

This song has been seared into my soul, but it is also a good reminder for spotting student-generated AI writing. Sometimes one assignment will yield a number of papers that are eerily similar. While these are rarely identical, AI tends to fall into patterns when responding to similar prompts, so if you see a few students answering the same question in ways that sound really similar, AI might be to blame. This can also be a helpful way to tackle AI without accusing a student of using AI, because it can fall under the category of traditional plagiarism.

11. Something Is Just . . . Off 

The boundaries between these different AI writing tells blur together and sometimes it's a combination of a few things that gets me to suspect a piece of writing. Other times it’s harder to tell what is off about the writing, and I just get the sense that a human didn’t do the work in front of me.

I’ve learned to trust these gut instincts, to a point. When confronted with these more subtle cases, I will often ask a fellow instructor or my department chair to take a quick look (I remove identifying student information when necessary). Getting a second opinion helps ensure I’ve not gone down a paranoid “my students are all robots and nothing I read is real” rabbit hole. Once a colleague agrees something is likely up, I’m comfortable moving forward with my AI hypothesis, in part because, as mentioned previously, I use suspected cases of AI as conversation starters rather than grounds for accusations.

Again, it is difficult to prove students are using AI and accusing them of doing so is problematic. Even ChatGPT knows that. When I asked it why it is bad to accuse students of using AI to write papers, the chatbot answered: “Accusing students of using AI without proper evidence or understanding can be problematic for several reasons.”

Then it launched into a list.

Erik Ofgang

Erik Ofgang is a Tech & Learning contributor. A journalist, author and educator, his work has appeared in The New York Times, the Washington Post, the Smithsonian, The Atlantic, and Associated Press. He currently teaches at Western Connecticut State University’s MFA program. While a staff writer at Connecticut Magazine he won a Society of Professional Journalism Award for his education reporting. He is interested in how humans learn and how technology can make that more effective.