AI-Generated Essays on the Rise in College Applications: Educators Debate Impact and Ethics

(Image: a screenshot of a chat between the author and ChatGPT. Credit: Erik Ofgang)

Valerio Capraro, a psychology professor at the University of Milan, has recently seen an uptick in research statements from PhD applicants that appear to be at least partially AI-generated.

“I would say that almost all applications show traces of AI-generated content, and about half seem entirely AI-generated,” says Capraro, who studies social behavior and AI. “This trend is very concerning.”

Capraro recently posted on social media about one such submission he received. After the post unexpectedly went viral, he was surprised to learn that many educators were angry at him for assuming the piece had been AI-generated; some even accused him of discrimination, and one contacted his department to complain.

“[They] were claiming that I was discriminating against non-native English speakers,” he says, adding that the accusation doesn’t make sense in his context. “All our applicants are, in fact, non-native English speakers.”

In the end, Capraro rejected the application, not because he suspected it was AI-generated, but because it was not compelling, a common weakness of AI-generated text today.

Ultimately, the incident highlights both the rise of AI-generated application materials, from undergraduate admissions essays to graduate and PhD research statements, and the challenges facing those who must evaluate them. Existing AI detection tools are flawed, and there is as yet no consensus on how best to respond to AI-generated content.

Using AI For College Admissions 

As AI becomes more widely available, some educators are questioning whether students' use of AI-generated application materials should be treated as a problem at all.

Jeffrey Hancock, a communications professor at Stanford University, recently told the publication CalMatters that students might produce stronger applications by using AI-generated essays. Hancock suggests custom-training a tool such as ChatGPT on a mix of good and bad college essays, then instructing it to emulate the good ones and avoid the patterns in the bad ones. This strategy might be particularly appealing because many colleges have been slow to adopt specific policies on AI use in application materials. Even so, getting caught using AI is still likely to hurt an applicant's chances at most institutions.
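Neither Hancock nor CalMatters published code, but one plausible reading of the suggestion is few-shot prompting: showing a model examples of strong and weak essays before asking it to revise a draft. A minimal sketch, assuming the OpenAI Python SDK, with placeholder essays, prompt wording, and model name:

```python
# Hedged sketch of few-shot prompting with example essays, not Hancock's
# actual method. Essay texts, prompt wording, and model name are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

GOOD_ESSAYS = ["<text of a strong essay>", "<another strong essay>"]
BAD_ESSAYS = ["<text of a weak essay>"]

def build_messages(draft: str) -> list[dict]:
    """Assemble a chat prompt showing the model what to emulate and avoid."""
    examples = "\n\n".join(
        [f"STRONG EXAMPLE:\n{e}" for e in GOOD_ESSAYS]
        + [f"WEAK EXAMPLE (avoid these patterns):\n{e}" for e in BAD_ESSAYS]
    )
    return [
        {"role": "system",
         "content": "You are an admissions-essay coach. Emulate the strong "
                    "examples and avoid the patterns in the weak ones."},
        {"role": "user", "content": f"{examples}\n\nRevise this draft:\n{draft}"},
    ]

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any chat-capable model would work
    messages=build_messages("<the applicant's draft essay>"),
)
print(response.choices[0].message.content)
```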

The Common App, used by more than 1 million students annually, has a policy against AI use, Jackson Sternberg, a PR specialist with the company, said via email. Students using the platform must agree to its terms of service, which prohibit the transmission of fraudulent information, and then sign an affirmation that what they are submitting is their own work. The app's fraud policy explicitly prohibits submitting "substantive" AI-generated content.

“We investigate all fraud allegations, and if they are substantiated, we take appropriate disciplinary steps," Sternberg said.

Even so, much of the vetting process is left to the higher ed institutions themselves. “Individual member colleges process and review applicant data based on their own policies and procedures,” Sternberg added.

What Educators and Students Can Do  

Capraro believes the best course of action is to evaluate AI-generated submission materials the same way he would any other material. “It is the content that counts,” he says. “AI-generated text tends to be just average. It might look good for a high-school essay, but if you are an evaluator for an advanced position, like a PhD fellowship, then AI-generated text tends to get very low grades, not because it is generated by AI, but because it is superficial and often incorrect. As an evaluator, I focus on the content rather than the form.”

Students should be encouraged to use AI to support, not replace, their writing, Capraro says. A non-native English speaker himself, he says AI has helped his English writing improve immensely over the past year, and for that reason he doesn't think applicants should be prohibited from using it. “They should be discouraged from over-relying on it as a shortcut for avoiding work,” he says. “Applicants need to understand that there are no real shortcuts in professional settings. Especially for competitive positions, the personal input is fundamental.”

Capraro adds that ultimately AI won't be good enough for the most ambitious applicants, and that this could be the most effective message for students. “I think that people who will over-rely on AI in the coming years will just regress to the average and get the average jobs,” he says.

Erik Ofgang

Erik Ofgang is a Tech & Learning contributor. A journalist, author, and educator, his work has appeared in The New York Times, The Washington Post, Smithsonian, The Atlantic, and the Associated Press. He currently teaches in Western Connecticut State University's MFA program. While a staff writer at Connecticut Magazine, he won a Society of Professional Journalists award for his education reporting. He is interested in how humans learn and how technology can make that learning more effective.