College Admissions: Should AI Apply?

Universities are wrestling with the implications of applicants leaning on chatbots for essay help


Willie Jones covers transportation for IEEE Spectrum, and the history of technology for The Institute.

An illustration of a person and a robot carrying an oversized keyboard together.
Dan Page

A new school year is dawning for the Northern Hemisphere, which means a new crop of high school seniors are staring down the dreaded college admissions process. To secure their place at the institution of higher learning of their choice, many of these students will need to write personal essays that reveal their perspective on the world and on themselves while showing how proficient they are at composing a cohesive, elegant narrative.

If the U.S. college admissions scandal of a few years ago revealed one thing, it’s that it’s difficult to fake being an A student or the starting left tackle on the football team and maintain that ruse for long. But the essay-writing portion of the competition for spots in colleges’ freshman classes is increasingly being infiltrated by AI, says Christopher Hathaway, a former member of the admissions committee at Yale University who now runs Advantage Ivy Tutoring, a service that coaches high school students to make the most of their academic abilities and extracurricular interests. In doing so, says Hathaway, “you inherently end up with good candidates” for schools like Yale that have ultralow acceptance rates. Hathaway operates near the front line of a growing controversy as schools attempt to adjust, articulate, and enforce their respective stances on how the use of AI for crafting college admission essays dovetails with their existing honor codes, which spell out penalties for, say, cheating on exams and plagiarism.

At first blush, you might assume that applicants to the most competitive schools—think Yale, Harvard, and Princeton, which admit less than 5 percent of candidates—would have the most incentive to take advantage of AI for even the slightest edge. But, as it turns out, the opposite is true. The level of writing skill these schools’ admissions committees want to see is advanced enough that generative AI as it exists today is incapable of producing text with the requisite level of sophistication. The upshot, says Hathaway, is that “AI use for essay writing becomes more prevalent as you get to schools that are less selective—you know, those that are accepting maybe 50 percent of their applicants. The quality of writing obviously is a lower standard. It’s in that milieu that we have seen AI become a presence.”

“We asked one of the chatbots to use an extended metaphor in an essay, and eight of 16 times, it used some kind of an orchestra metaphor.”
—Christopher Hathaway, Advantage Ivy Tutoring

The first challenge for these schools’ admissions officers is answering a basic question: What are some telltale signs of an AI-generated essay? “My team and I recently finished a pretty extensive study with the four mainstream chatbots—ChatGPT, Bard, Bing, and GPT-4,” says Hathaway. “We’re talking 65-plus hours of trials. And what we came up with, in terms of the signature signs of AI, is first, a lack of creativity. In one of the examples, a sample student was interested in art and had gotten really interested in it because of his interest in comic books featuring underwater creatures. In the first draft, the chatbots just said that, effectively. In the next draft, we asked the bots to provide additional detail. Two of these chatbots provided descriptions of the underwater creatures. And the interesting thing was that they came up with the exact same animals in the exact same order.” Chatbots’ “if one is good, more of the same is better” approach is an issue students will run into whenever they ask a one-size-fits-all artificial intelligence to help them deliver a narrative that’s supposed to be deeply personal.

“These bots are just systematically looking for the next best word that fits into a sequence statistically,” says Hathaway. “Add to that the fact that they’re effectively pulling from the same material repeatedly, and so that really impacts authenticity and originality.”
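Hathaway’s description of bots “looking for the next best word that fits into a sequence statistically” can be illustrated with a deliberately simplified sketch. This toy bigram model (a hypothetical stand-in, nothing like the transformer networks behind GPT-4, which condition on far longer contexts) shows why always picking the statistically most likely continuation from shared source material yields the same phrasing again and again:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for shared training material (hypothetical text).
corpus = ("the orchestra played as the orchestra swelled and "
          "the conductor led the orchestra through the finale").split()

# Count which word follows each word -- the simplest version of
# "pick the statistically best next word" that Hathaway describes.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    """Return the single most likely word to follow `prev` in the corpus."""
    return following[prev].most_common(1)[0][0]

print(next_word("the"))  # -> "orchestra": the top choice, every single time
```

Because the model greedily returns the same top-ranked continuation on every call, every “essay” drawn from the same material converges on the same wording, which is one plausible reading of why an orchestra metaphor kept resurfacing in Hathaway’s trials.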

Another issue commonly encountered with AI-generated essays is stilted language that’s corporate in tone and syntactically uninteresting. “We asked one of the bots to use an extended metaphor in an essay, and eight of 16 times, it used some kind of an orchestra metaphor,” Hathaway recalls. “And this was GPT-4, which was, by far, the most competent of the bots.” Imagine being an admissions officer at a large state school who must review 4,000 of, say, 20,000 applications that came in before the deadline. If half of the essays present the same metaphor, reading them back-to-back would become stultifying, and hardly any of the candidates would stand out.

Just as problematic, Hathaway notes, is that when the bots were asked to switch things up, they defaulted to exaggerated, often sensationalist verbiage. “They definitely went overboard,” says Hathaway, remembering that the chatbots found no middle ground between short, declarative sentences that did a lot of telling, not showing, and incredibly flowery language that became recognizable as one of the bots’ hackneyed hallmarks.

Still, a big challenge for admissions officers—besides boredom—is being certain about whether any particular essay has been cowritten or completely ghostwritten by AI. One bugaboo for universities has been false positives. The use of software-based detectors as a countermeasure has left U.S. schools open to accusations of bias against non-native English speakers. “Non-native speakers who are submitting applications are having a little bit more difficulty with their admissions essays being flagged,” says Hathaway.

“You’ve got places that are already restructuring their curricula in response. George Washington University and Rutgers University are phasing out take-home, open-book assignments because they just assume that people are going to cut these corners.”
—Christopher Hathaway, Advantage Ivy Tutoring

But not every school is interested in turning out the next generation of great (or even above average) writers. The Georgia Institute of Technology in Atlanta is an, er, textbook example of a school whose academic departments place a much lower premium on writing skill and proudly hold more of a protechnology bias than do those at the aforementioned liberal arts colleges. It’s therefore no shock that Georgia Tech has given its applicants the green light to use AI to respond to the essay prompt on the school’s application.

Asked whether schools taking this pro-AI stance should be concerned about students feeling they have license to use the technology to complete assignments given by their professors during the academic year, Hathaway says, “Professors are going to need to adjust the way they’re presenting and assigning work. You’ve got places that are already restructuring their curricula in response. George Washington University and Rutgers University are phasing out take-home, open-book assignments because they just assume that people are going to cut these corners. And so, [Georgia Tech and other schools] are going to need to figure out different ways of assessing students’ skills—whether that’s via in-class assignments and handwritten papers and such, or oral exams.” And since AI doesn’t appear to be going anywhere anytime soon, these adjustments and many others might soon be endemic to academia across the board.

It’s clear that Georgia Tech has given thoughtful consideration to these issues—and to concerns that AI use will widen rather than bridge the digital divide. The primary evidence: the technical school’s liberal arts college is offering a new course this fall called “AI Ethics and Policy.” According to the course description, the class will “prepare students to think critically about AI’s impact on humanity and contribute to AI governance and policy.” Still, it’s hard to see how school officials reached the conclusion that an applicant leaning on a chatbot for essay help doesn’t defeat the essay’s stated purpose, which the school says is “to assess your writing ability, and, more importantly, to learn more about you as an individual.”
