The Anti-AI Backlash Has a Paper Trail
More than half of college students now take at least one online course, yet universities are attacking their AI cheating problem with a decidedly analog tool: blue book exams. The handwritten test booklets, a relic of pre-digital academia, are making a comeback as educators scramble to catch ChatGPT-generated essays. The problem? Critics say forcing students to write single-draft responses under timed exam conditions doesn't prepare them for workplaces where AI fluency is becoming table stakes.
"AI writing just sounds off," says Steven Krause, a professor at Eastern Michigan University who reads roughly 1,500 pages of student writing each semester. He argues that experienced professors should be able to spot AI-generated work, especially if they know their students. Dan Melzer, a professor at UC Davis, is less optimistic: educators won't ever fully "outsmart ChatGPT," he says, because students will find workarounds. The cat-and-mouse game has spawned a cottage industry of influencers and startups building tools to "humanize" AI writing, making detection harder for anyone without a trained eye.
The Accommodation Gap
The blue book revival creates serious equity problems. Multilingual writers and students with disabilities who need accommodations are at a steep disadvantage in timed, handwritten scenarios. And because writing is meant to be a process of revision, forcing students to produce a rushed single draft means professors are evaluating panic, not skill. "Deciphering students' poor handwriting is a headache," Krause adds. The approach also doesn't scale: some classes top 200 students, and AI wearables like Meta's smart glasses make in-person monitoring increasingly futile.
Meanwhile, a handful of schools are taking the opposite approach. Columbia University now offers an "AI writing" course where freshman Maximilian Milovidov is learning to use AI tools critically rather than fear them. His argument: instead of banning AI, why not teach students to interrogate its outputs and use it as a thinking partner? That vision aligns more closely with how employers expect new graduates to work. As prediction market researcher Robin Hanson noted in response to AI benchmark debates, "What is the point of learning AI scores on an exam if you don't tell us how humans do on it?" The question cuts to the core of the education debate: are we testing what students can do, or what they can do without AI?
What to Watch
The schism between the ban-it and teach-it camps will likely intensify as AI tools grow more sophisticated. Universities that double down on blue books risk producing graduates who can write essays by hand but struggle in workplaces where AI collaboration is standard practice. Schools experimenting with AI literacy courses may attract both students seeking practical skills and the employers who want to hire them. The real test isn't whether students can avoid AI; it's whether they can use it better than their peers.
