
Is ChatGPT the New Steroid for Students?
AI tools like ChatGPT are transforming American education—but not always for the better—as students and institutions grapple with an explosion in academic dishonesty and fading trust in traditional assessments.
At a Glance
Nearly 90% of college students used ChatGPT for assignments within two months of its release.
Educators report growing difficulty distinguishing real work from AI-generated submissions.
AI detection tools often produce false positives, risking penalties for innocent students.
Some universities are returning to handwritten exams to counter AI misuse.
Experts warn that unchecked AI use could erode critical thinking and literacy skills.
From Help to Hacking: AI’s Role in Student Work
At Columbia University, computer science major Chungin “Roy” Lee openly admitted, “I’d just dump the prompt into ChatGPT and hand in whatever it spat out,” contributing only “20 percent of my humanity” to complete assignments. His confession—published in New York Magazine—echoes a broader trend shaking U.S. higher education.
A Pew survey found that AI use among teens for homework doubled from 13% in 2023 to 26% in 2024, with nearly 9 in 10 college students trying ChatGPT within weeks of its launch. The scale of this adoption is creating what some educators describe as an academic integrity crisis.
Detection Gaps and Unintended Consequences
Educators are increasingly turning to AI detection tools like Turnitin, but those systems are far from foolproof. A report by Education Week found that 11% of assignments were flagged as at least 20% AI-generated, and 3% as more than 80% AI-generated. Yet these systems can mislabel human writing as artificial, particularly for ESL students or those with atypical writing styles, raising due process concerns.
Dr. Mike Perkins warns that current detection tools may disproportionately harm vulnerable students while letting more sophisticated users slip through. “You end up catching the students most at risk of their academic careers being damaged anyway,” he explained, underscoring the ethical complexity of enforcement.
AI Doping and the Erosion of Skill
A growing chorus of voices is framing AI misuse as a form of “academic doping,” akin to steroids in sports. One Reddit user called ChatGPT “steroids for your skills,” while professors across disciplines worry students are outsourcing critical thinking. A journalism lecturer dubbed the tech “Google on steroids,” suggesting it reduces student inquiry to button-pushing.
Surveys by the American Association of Colleges & Universities show that 66% of administrators believe AI is harming attention spans, while 59% report increased campus cheating. Lee Rainie, a researcher on digital disruption, put it bluntly: “It’s an undeniable and unavoidable disruption. You can’t avert your eyes.”
Searching for Solutions in a Hybrid Future
Institutions are responding in divergent ways. Some are banning AI altogether, while others are adopting AI-permissive policies that encourage “responsible” usage. A growing number are also reviving analog assessments like handwritten “blue book” exams to close the digital loophole. As Denise Pope noted, academic dishonesty predates AI—60% to 70% of students admitted to some form of cheating even before ChatGPT.
Still, AI acts as a dangerous accelerant. A student named Emma, who used AI to earn top marks, later confessed, “I received a first but it felt tainted and undeserved.” Her comment captures a growing unease: AI may be changing not just how students learn, but whether they are learning at all.
With more than half of colleges admitting they’re unprepared for AI’s impact, this moment could mark a pivotal reckoning for American education. The challenge ahead isn’t just stopping the misuse—it’s redesigning a system that equips students for a world where AI is both a tool and a test.