I recently chatted with a friend who teaches science at an underserved middle school. She described her overcrowded classes, the lack of essential resources, and kids with behavioral issues.
“They gave me nothing when I started there,” she said. “No lesson plans, no worksheets, nothing.” I could hear the frustration in her voice.
“Do you use AI in your classes?” I asked.
“Of course! I have so many worksheets to grade! Did I tell you I have more than a hundred students?”
She accepted that her students used AI, too. It didn’t matter where they got their answers as long as they learned the material somewhere. Besides, she preferred to design her curriculum around hands-on activities, like labs.
I can imagine some readers shaking their heads at this. Recently, there’s been a lot of Substack discourse about AI’s role in education. The consensus is that bots don’t belong in the classroom: AI tools rob students of their education, preventing them from reading, writing, and thinking for themselves. Silicon Valley’s cult of efficiency is taking over classrooms, and it must be stopped!
At first, I was sympathetic to these arguments. Educators today must walk a fine line. On the one hand, they must teach students how to use AI, so they’ll be technologically literate. On the other hand, they must also teach critical thinking, knowing that students might use AI to cheat. Bots and blackboards are a tricky mix.
After I read a handful of these articles, though, something about them started rubbing me the wrong way. Most of the teachers writing them made questionable assumptions about AI, their students, and our education system. I started to wonder — were chatbots really the problem here?
Assumption 1: Students are lazy.
“Learning is hard, and current AI tutors are way, way too nice about it,” Dan Meyer said in his recent Substack post. He ran an experiment asking multiple chatbots for help with a calculus problem, pretending to be a struggling student. Whenever the chatbots tried to help him, he responded with “IDK.” Eventually, the chatbots gave him the answer outright.
Meyer found his results concerning. If a student was struggling, how did it help to give them the answer? Wouldn’t that encourage them to give up and not try? A good teacher would walk students through the problem, making them work for the answer. In their quest to be helpful, chatbots might hurt students’ learning.
I won’t dispute Meyer’s results. He’s right — if students don’t want to learn, AI will give them a shortcut. But what about students who do want to learn? And why would we assume they don’t?
Maybe I’m too idealistic, but I take issue with this assumption. I used to work in a public library. Every day, patrons asked me for help teaching themselves things. I talked to entrepreneurs who wanted to start businesses, travelers who wanted to speak foreign languages, and even a mom who wanted to brush up on algebra to tutor her kid.
Notice the word “wanted” in there. These learners were self-motivated by real-world goals. They didn’t look for shortcuts because taking the easy way out would only hurt them in the long run.
Some teachers seem to believe that they need to force students to learn. They believe students don’t want to learn anything new because they are lazy, disengaged, and uninterested.
Who knows? I’m not a teacher, and I’m sure some students are lazy, but I doubt they all are (and I’ll drop a link to Laziness Does Not Exist here).
AI doesn’t threaten students who are engaged, hardworking, and invested in their subjects. For those students, AI can be an invaluable learning tool. An engaged student wouldn’t just spam “IDK” over and over. They would use the tools when they got stuck or needed feedback, just like you’d want them to.
I’m picking on Meyer a bit here, but that’s because his article is emblematic of so many anti-AI arguments. We’re so concerned that some students will use AI to cheat that we don’t consider the value it affords honest kids.
Assumption 2: Perfect classrooms exist.
I remember walking into my first literature course during my first semester of college. I’d always loved English class in high school, but I got frustrated by my classmates who SparkNoted the books. I couldn’t wait to talk about books with people who cared!
The members of my discussion group introduced themselves, and then we started discussing our assignment, The Epic of Gilgamesh.
“Oh, I didn’t read it,” one boy said with a grin.
“Me neither,” said another.
“Thank God! I thought I would be the only one,” the girl beside me said.
It turned out that I was the only person in that group of six students who’d done the reading assignment. My heart sank. It was just like high school.
I wish I could say this was an isolated incident, but every literature class I took until my junior year had a SparkNotes problem. No one read books in college.
I completed my undergrad in 2016, long before ChatGPT erupted onto the scene. I attended UNC, an R1 university and a top-30 school. Despite what The Atlantic might tell you, college students haven’t been reading books for a long time.
There’s a lot of hand-wringing about how AI cheapens the learning experience. Critiques mainly focus on reading and writing, essential skills that students offload to chatbots. These skills are critical for a well-designed curriculum and a well-rounded life, but let’s not kid ourselves. Students haven’t been reading and writing at school for a long time.
When I was in high school, my teachers rarely gave us writing assignments. My school was overcrowded, and it was common to see more than thirty students crammed into a single classroom. My exhausted and underpaid teachers looked for shortcuts, giving us worksheets and multiple-choice tests instead of essays that would devour hours of grading time. The few times I did get an essay assignment, my teachers gave me a simple letter grade with no comments.
This is a far cry from the personalized feedback that AI critics romanticize. Take this recent essay by Marc Watkins. He argues that a machine can never give students the specific, individualized feedback they need to grow as writers. I agree that this kind of feedback is valuable, but I question how often students get it. If it’s a choice between chatbot feedback and no feedback at all, I’d pick the bots every time.
I don’t want to dismiss genuine concerns about AI's impact on students’ learning. But let’s not pretend that classrooms were perfect before November 2022. Maybe a few lucky students at private institutions got this kind of one-on-one attention, but they were the exception, not the norm. My high school was ranked #2 in the state, and it still had problems teaching basic reading, writing, and scientific literacy. I can’t imagine the conditions at lower-ranked schools like the one where my friend teaches.
I’m suspicious of any critique of AI that compares a chatbot to a perfect, attentive teacher with adequate resources, time, and funding. This comparison is dishonest about AI’s capabilities and educators' real-life struggles.
To return to Assumption 1, I also don’t want to give the impression that most students are lazy — quite the opposite. Every student I knew in high school and college worked as hard as possible to scrape by in an increasingly competitive world. With good jobs growing scarcer and home prices climbing, I can’t blame my classmates for prioritizing their STEM homework over their gen. ed. literature assignments.
Most teachers are trying their best, too. In high schools, they’re working themselves to the bone, teaching a state-mandated curriculum that emphasizes standardized testing. In college, they’re struggling to achieve tenure, publishing or perishing, with scarcely any time left for their students. These are larger, systemic issues that AI didn’t cause and will not fix.
Assumption 3: AI can’t do that!
There’s a lot of AI hype out there. Just because someone says AI can do something doesn’t mean you should believe them. The opposite is also true; plenty of people claim AI can’t do things it can!
This problem is rampant in discussions about AI and education. I’ve read articles claiming chatbots can’t give specific, personalized feedback on writing. I know from my own experience that this isn’t true. Personalized feedback is one of AI’s strengths and among the most valuable forms of support it can offer learners.
Another article by Hollis Robbins recommended shifting curricula away from skills chatbots excel at and toward things they still can’t do. Not a bad idea, but what does this mean in practice?
Robbins suggested students focus on research methods because AI doesn’t understand the scientific method, cannot write literature reviews, and can’t read landmark papers. The problem with her idea is that AI can absolutely do all of these things.
What’s going on here? Why are so many AI critics saying things that aren’t true?
I don’t believe these writers are deliberately trying to misrepresent AI. I think they don’t realize how capable today’s chatbots are.
If you hate AI, you probably don’t use it often enough to be good at it. These tools are still new, and there’s a learning curve to using them well. Prompt engineering is a skill that many people struggle with. When they fail to get good results, they blame the chatbots instead of their own user error.
If you copy an essay into ChatGPT and ask for feedback with no context, it probably won’t do a great job. What happens if you instead explain the assignment, upload a copy of the rubric, and provide an example of an “A” paper? Or what if you ask the bot to pretend it’s your AP Lit teacher? Both of these strategies will result in higher-quality feedback. Like anything else, you get out of chatbots what you put in.
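To make that concrete, here’s roughly what the context-rich approach looks like if you script it rather than paste into a chat window. This is a sketch, not a prescription: I’m assuming the OpenAI Python client, and the model name, file names, and rubric are placeholders you’d swap for your own.

```python
# A rough sketch of context-rich feedback prompting, using the OpenAI
# Python client. Model name, file names, and rubric are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Gather the context a human grader would have.
rubric = open("rubric.txt").read()             # the assignment rubric
exemplar = open("a_grade_example.txt").read()  # a sample "A" paper
draft = open("my_essay.txt").read()            # the student's draft

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any capable chat model works
    messages=[
        # The role and grading context go in the system prompt.
        {"role": "system", "content": (
            "You are an AP Literature teacher giving written feedback. "
            f"Grade against this rubric:\n{rubric}\n\n"
            f"Here is an example of an 'A' paper:\n{exemplar}"
        )},
        # A bare essay would get generic feedback; with the rubric and
        # exemplar above, the critique gets far more specific.
        {"role": "user", "content": f"Give feedback on this draft:\n{draft}"},
    ],
)
print(response.choices[0].message.content)
```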
Now, let’s look at Robbins’s concerns. She’s right when she says chatbots probably haven’t read all the research papers in a given field. Many papers languish behind paywalls, excluded from a chatbot’s training data.
Her argument falls apart when students log into their library system and download those papers. Nothing can stop students from feeding those papers to a chatbot and asking for summaries. Bots are great at summarizing academic papers and have been for a long time. Back in 2022, David Shapiro created a GPT-3-powered literature review generator. If GPT-3 could do this three years ago, today’s chatbots definitely can.
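For the skeptical, here’s a minimal sketch of that workflow: pull the text out of the downloaded PDFs and ask a chat model for summaries. Again, the specifics are assumptions on my part: I’m using the OpenAI Python client and the pypdf library, and the folder name, model name, and truncation limit are all stand-ins.

```python
# A minimal sketch of summarizing library-downloaded papers with a chat
# model, assuming the OpenAI Python client and the pypdf library.
from pathlib import Path

from openai import OpenAI
from pypdf import PdfReader

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

for pdf in Path("downloaded_papers").glob("*.pdf"):
    # Extract raw text from a paper the student downloaded via the library.
    text = "\n".join(page.extract_text() or "" for page in PdfReader(pdf).pages)

    # Ask the model for a structured summary. Long papers may exceed the
    # context window, so this sketch naively truncates the text.
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{
            "role": "user",
            "content": (
                "Summarize this paper's research question, methods, and "
                f"findings in one paragraph:\n\n{text[:50000]}"
            ),
        }],
    )
    print(f"{pdf.name}: {response.choices[0].message.content}\n")
```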
Again, I don’t mean to pick on these writers. I think they’re sincere and make some excellent points. However, they are wrong about what chatbots can and can’t do.
Right now, people in every industry are scrambling to understand the impact AI will have. Big names share predictions and concerns, and people believe them because they are The Experts. That doesn’t mean they’re right, though. Unless someone has put in the time and effort to understand AI, their domain-specific expertise might not count for much. This problem is so rampant that I wrote an entire series about how to identify true AI expertise.
A teacher can tell you how AI impacts their school and what they see in their classrooms every day. They might have brilliant insights and are worth listening to. Be skeptical once they start making claims about the technology itself, however. That’s not their area of expertise, and they might not be correct.
The biggest assumption of all: AI is the problem.
“Are the major selling points of enhanced ‘efficiency’ and ‘productivity’ offered by AI assistance really the values we want to instill in our students? I think not,” Josh Brake wrote in an article with the provocative title “We’re the Lab Rats Now.”
Brake misses the fact that our education system was designed in the industrial era to train children to work in factories. Schools have always instilled “productivity” and “efficiency,” and they do so by design. It’s good to be critical of this ethos and want to change it, but let’s not pretend there’s a Silicon Valley conspiracy to turn children into lab animals.
Most of the problems that AI-critical teachers bring up aren’t new. Concerns about lazy students, poorly designed curricula, and a lack of individual attention have been around for a long time. Our education system is still rooted in the nineteenth century. Despite the massive societal, economic, and technological changes of the past 150 years, we still treat students like future factory workers.
AI isn’t the reason students are checked out and taking shortcuts. It’s not why kids aren’t learning to read, write, or think critically. It’s not destroying perfect classrooms that probably never existed. AI might worsen existing problems, but banning chatbots won’t fix anything. Taking away the bots that help my friend grade worksheets won’t magically transform her underserved school.
I, for one, am tired of teachers scapegoating AI as an excuse to avoid discussing the problems with our schools. All AI does is draw attention to issues that already exist. Chatbots put pressure on a broken system and show us where the cracks are.
Yes, we should take AI seriously. But no, it’s not the real problem.
I enjoyed the article and I appreciate the author taking the time to detail this counter to conventional wisdom.
However, I think conventional wisdom is right on this topic. The highest form of human cognition, the part that moves the needle for human civilization and is responsible for most (if not all) of our advancements, is extremely metabolically expensive. For that reason, humans tend to skirt around it whenever possible. The old adage that water follows the path of least resistance describes this phenomenon. You saw it in your Gilgamesh reading group.
So it seems that training the ability to exert cognitive force is important. It’s something we (on average) try to skip if we can, but civilization knows it’s important, so we codify and mandate it in our formal education structures.
AI is the first real tool that can be used by almost anyone to completely bypass this metabolically expensive yet critical activity. Cheating on homework and tests was situational and a stopgap at best. But generative AI can completely relieve a human mind of its most potent cognitive burden.
I believe this is the root of the concern of AI in the classroom. Whether or not it’s a valid concern is something people can debate. But I think intuitively people sense a trade is being made with the devil.
Absolutely bang on! I have been in K-12 public education for nearly 15 years as a teacher and administrator, and I have come to exactly the same conclusions since getting serious about exploring AI. This is a post I wish I had written!