How AI Can Undermine Teaching and Learning
Don’t allow chatbots to do what students must do for themselves
It’s all but impossible to get away from conversations about AI today—how it’s changing the workplace, all the amazing things it will accomplish for science, how it sometimes gets answers wrong, and (for the dystopia-inclined) how it could go sideways and end the world. I’ll leave all of that for others.
I want to discuss something else. I’m concerned that we aren’t taking seriously enough just how profoundly AI could ultimately undermine secondary and postsecondary schooling. It is already being used in ways that compromise important aspects of teaching and learning. And (this is, after all, a column about governing) unless we appreciate its likely costs, it could do lasting damage to self-government by robbing citizens and leaders of the knowledge, skills, and habits they need.
We’ve Curtailed Other Technologies
Before explaining my concerns with students’ use of AI, I have to address something straightaway lest some of you stop reading now.
There seems to be a widespread perception that AI is here to stay and therefore the only thing for us to do is learn how to deal with it, including its downsides. Many seem to believe that it is naive or Luddite to advise pumping the brakes; that view can take the form of, “Technological advancements are inevitable! Worriers always tell us to slow down or stop! The world must accept and make use of these developments if we want to progress!”
But that is not an accurate description of how technological advancements are adopted or not adopted by societies.
Societies have repeatedly put a hold on ostensible progress once they have come to understand the moral implications or real-world consequences of a technological innovation. I’m going to list just a handful of examples. They differ in important ways, and each deserves more attention in its own right. But each also makes my general point: Societies didn’t simply accept the “advancement” in full or in perpetuity. Instead, they thought about it and realized that, even if it had some benefit, it had to be curtailed in some way.
Chemical weapons were once used widely. Nations agreed to stop.
Twice an atomic weapon was used, and nations rushed to acquire that technology. Soon, however, the international community agreed to curb its proliferation.
After accidents like Three Mile Island and Chernobyl, many nations began phasing out nuclear power plants.
Eugenics was once thought to be cutting-edge science. It is now repudiated.
Both cloning and recombinant DNA were also once thought to be cutting-edge science. Both have been limited.
Drugs and other chemicals are often thought to be miraculous until we realize the downsides: DDT, CFCs, nicotine, cocaine, LSD, OxyContin, thalidomide.
Before and after Covid, there have been limits on “gain of function” and “dual-use” research.
Often, science tells us that we can do something remarkable. In time, however, moral reasoning and experience tell us that we should not.
This is simply a restatement of the idea behind “Amistics,” Neal Stephenson’s term (inspired by the Amish) for societies’ deliberate discussions and decisions about which technologies to embrace and which to reject.
How We Might’ve Handled Social Media
We should all be especially wary of “advancements” that affect young people. For instance, we absolutely should have paused and carefully considered the potential long-term adverse effects of social media and handheld devices on adolescents. It will take years of research to understand how this “progress” affected mental health, friendships, attention spans, social development, reading, and much else. But anyone who sees groups of teens looking at their phones instead of each other—not to mention anyone aware of the five hours per day teens spend on social media or the 8.5 hours per day they spend on screens—knows that something isn’t right.
Imagine if we’d had the wherewithal 10 or 15 years ago to have an Amistics-style national conversation about our kids and social media, asking, “To what extent are we going to allow this technology into our adolescents’ lives?”
We might not have banned it outright. But we probably would’ve recognized its costs and set some limits.
Indeed, recognizing the costs of a scientific advancement doesn’t mean we get rid of it entirely. But it can mean we halt certain uses of it, curtail its prevalence, or carefully study its effects before disseminating it widely.
AI and Schooling
More than half of teens have used generative AI. As of a year ago, a quarter of 11th and 12th graders who’d heard of ChatGPT had used it for schoolwork; include other AI tools (like DALL-E 2), and that share grows to half. The more a student had heard about AI, the more likely he or she was to have used it (and since talk of it keeps growing, we should expect its use to grow, too). Likewise, the more a student has heard about it, the more likely he or she is to think it is OK to use it to write essays and answer math problems. The second biggest reason students who know about it don’t use it is that they are concerned it might make mistakes—so as AI improves, we should expect students to use it more. And a majority of teachers report being more distrustful of student work because of AI tools.
The results are more concerning among college students. One survey found that nearly 40 percent are using ChatGPT, almost all of them for schoolwork. Nearly 70 percent use it for writing assignments. Of those, three-quarters use it to generate ideas, and 64 percent use it to rewrite sentences. Maybe most alarming of all: half ask the chatbot to write a section of an essay; 29 percent ask it to write the whole thing. Nearly 90 percent of students said their use of these tools has gone undetected by professors. A survey by the University of Michigan’s student paper found that 56 percent of students in Literature, Science, and the Arts had used it for an assignment; the figure was 74 percent in Public Health and 80 percent in Kinesiology.
Today, students can use generative AI instead of reading. It can swiftly summarize just about anything for you—a book, article, essay, speech, lecture. Students now use it to brainstorm ideas, outline reports, draft essays, and even write full papers (it is used most often in language arts classes).
A search engine can help retrieve information, but a student still has to read the essay or analyze the data. Google, in that sense, enables—it does not replace—the hard, essential student effort that produces learning. AI, by contrast, can do much of the thinking for the student. That jeopardizes learning and schooling.
The Slow, Deliberate Work of Learning
There is no shortcut to understanding and internalizing the different worldviews of Antigone and Creon or the lessons of King Lear. You must do the work yourself. A summary of Democracy in America, The Federalist Papers, Beloved, or A Modest Proposal will not do the job. You must read and think. You can’t appreciate the “hero’s journey” and what it teaches us about psychology and sociology without reading and discussing the great works that exemplify it.
So much of learning takes place in the effort. In order to decide what aspect of The Divine Comedy or Emily Dickinson’s poetry to write about, you have to read and then read again and then think. Then you jot down ideas. And scratch them out. And then outline something. And then reorder things. And then look for evidence. And then rework your thesis. By the time you start typing the paper itself, you’ve learned an enormous amount. All of that is lost when a chatbot does the reading, summarizing, brainstorming, outlining, and drafting for you.
Substituting AI for student effort reveals an impoverished understanding of teaching and learning. The goal of schooling is not to help students get an answer as simply and quickly as possible. They have to learn skills, dispositions, beliefs, and habits. They have to learn content and make sense of it; they have to make connections between different works. Schooling doesn’t just fill the mind; it shapes, trains, activates, and directs it.
Algebra, geometry, and calculus are not just about memorizing formulas and axioms or integrating functions; after hours and hours of practice, they teach you different ways of seeing phenomena, and they structure your thinking. Vocabulary isn’t just about memorizing obscure words for a test; it’s about slowly growing the words you know so you can notice, understand, and explain more of the world. Writing papers isn’t busy work; it’s about closely reading a text, constructing an argument, marshaling evidence, and defending your idea.
The Kinds of Citizens, Colleagues, and Leaders We Need
All of this is at the heart of the age-old question: “What does it mean to be educated?” And that answer shapes what we expect from our schools.
Education must enable each generation to become the kind of citizens our society needs, the kind of colleagues our workplaces need, and the kind of leaders our institutions need. No matter how much we hope to automate or standardize the world, it will always require human understanding and human decision-making. Algorithms can only get us so far. We need to carefully, thoughtfully shape students. We need adults capable of understanding why wars occur, why power corrupts, and why humans today have the same vices as humans thousands of years ago. We need adults capable of moral reasoning about civil disobedience, abortion, victimless crimes, executions, immigration, family, and faith. We need adults capable of acquiring, weighing, and synthesizing information and then prudently pursuing the common good. We will not produce those kinds of adults by outsourcing essential learning tasks to a chatbot.
Often, a society’s technological and scientific mistakes can be seen clearly only with the 20/20 vision of hindsight. People in the moment generally can’t predict all of the things that might go terribly wrong.
We can’t say the same thing about substituting AI for the real stuff of teaching and learning. We are being willfully blind if we fail to recognize the easily predictable damage done by allowing students to delegate to a computer the reading, note-taking, thinking, dot-connecting, brainstorming, outlining, drafting, writing, and editing central to secondary and postsecondary schooling.
Great societies sometimes decide—wisely—that a newfangled technology needs to be limited because one or more of its uses will do more harm than good. That’s the case with AI as it relates to most aspects of schooling. Students should learn what it is and how it can do things humans cannot. But it absolutely cannot be allowed to do for students what they must do for themselves.