Should We Stop Using AI in School?

Generative artificial intelligence has been with us for a little over three years now. When ChatGPT launched, I called it “astonishing” and “revolutionary.” That’s high praise from me. It’s one of the very rare disrupters: it has the potential to change both the why and the what of school. Just as ubiquitous Internet access and smartphones changed the purpose of school 15 years ago, AI should force us to rethink what we’re teaching, how we’re teaching, and where our priorities should be as educators.

Since 2022, AI has taken over the education conversation. Every meeting, professional development session, conference, and group I’ve been a part of has been obsessed with AI. It has dominated discussions of curriculum (what we should be teaching), pedagogy (how we should be teaching), differentiation, ethics and academic integrity, assessment, information literacy, safety and privacy, and policy. And because the technology is changing so rapidly, most of those conversations rest on fundamental assumptions that shift every few months. So the decisions, conclusions, and consensus we build have to be constantly revisited, because the ground keeps moving beneath our feet.

So here we are, at this very specific moment in time, early in 2026. And there’s some cause for concern. Last week, OpenAI released a new research tool for scientists called Prism. This free tool integrates GPT with a LaTeX-based editor to help researchers draft papers, manage bibliographies, and collaborate. It’s supposed to help researchers collect and organize their work so that it’s easier to publish. It’s totally not meant for creating research papers or generating bibliographies by itself. It’s not designed for making plausible-sounding content that’s difficult and time-consuming to validate. It’s not a shortcut to publication for researchers under pressure to publish results from their work, even before the data can support their sensationalist conclusions.

Image (ironically) generated by Gemini.

Publishers are worried that the deluge of AI slop will overwhelm the peer review process, as a sharp increase in plausible-sounding papers makes it impossible to separate real work from the flood of AI-generated content lacking depth or accuracy. And even when it’s used with real research projects, the tendency to confabulate sources undermines otherwise credible research. The danger is that the use of artificial intelligence will erode the trust that the scientific community has fostered for generations. If you can’t trust Nature or the New England Journal of Medicine anymore, then there are no sources of reliable truth. There are no facts. Everything becomes someone’s opinion. And my AI-generated opinions are just as valid as yours.

In a mid-January briefing, Google told advertisers that they can now buy advertising placement directly in Gemini-powered search results. This isn’t really a surprise. Anyone who’s been paying attention over the last 20 years knows that Google, at its heart, is an advertising company. Nearly every product they have is funded through advertising. They’ve been pouring tremendous resources into their AI efforts, and it’s no surprise that they’re going to start including advertising in their AI results. But what are the implications of that? When I ask Gemini to help me pick new tires for my car, is it really comparing the options based on the criteria I give it? Is there something hidden in that algorithm that gives just a little extra weight to Michelin, because they’ve purchased ad space in Gemini? When I ask it to recommend a movie to go see this weekend, is it going to lean toward the one that’s paying promotional fees to Google? And even if this isn’t happening, or even if “sponsored” recommendations are clearly identified, how would we ever know? The algorithms are proprietary. We don’t know what they’re doing. By introducing this potential conflict of interest, Google is eroding our trust in Gemini.
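To make the worry concrete, here’s a deliberately simplified sketch of how a hidden sponsorship weight could flip a recommendation without the user ever seeing it. This is not Google’s actual code; every function name, criterion, and number below is invented for illustration. The point is how small an invisible thumb on the scale needs to be.

```python
# Hypothetical illustration only: a tiny "ranking" function where a hidden
# sponsorship boost can reorder results the user thinks are based purely
# on their own criteria.

def rank_options(options, criteria_weights, sponsor_boost):
    """Score each option on the user's stated criteria, then quietly add a
    boost for sponsored items. The user only ever sees the final ordering."""
    scored = []
    for name, features, is_sponsored in options:
        score = sum(criteria_weights[c] * features[c] for c in criteria_weights)
        if is_sponsored:
            score += sponsor_boost  # the invisible thumb on the scale
        scored.append((score, name))
    return [name for _, name in sorted(scored, reverse=True)]

# Two made-up tire options, scored on the criteria I actually asked about.
tires = [
    ("Brand A", {"wet_grip": 8, "tread_life": 9, "price_value": 7}, False),
    ("Brand B", {"wet_grip": 8, "tread_life": 8, "price_value": 8}, True),
]
weights = {"wet_grip": 0.5, "tread_life": 0.3, "price_value": 0.2}

print(rank_options(tires, weights, sponsor_boost=0.0))  # ['Brand A', 'Brand B']
print(rank_options(tires, weights, sponsor_boost=1.0))  # ['Brand B', 'Brand A']
```

With the boost set to zero, Brand A wins on my criteria; with a modest boost, the sponsored brand comes out on top, and the output looks exactly the same to me either way. That invisibility is the trust problem.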

They’re not the only ones doing it, either. ChatGPT came under fire last month for recommending a Peloton app in an unrelated conversation. And Perplexity was experimenting with advertising embedded in follow-up questions in its own generative AI product last year before pausing that program due to lower-than-expected revenue. When the AI tools are being paid to promote products and services, we can no longer be sure that they have our best interests in mind.

Last week, Anthropic introduced Claude’s new constitution. This is a foundational document that outlines Anthropic’s vision for Claude’s character, serving as the ultimate authority for shaping its values and behavior. They’re promoting it as a way to emphasize safety and ethics while establishing hard constraints. They use terms like virtue, wisdom, feelings, and wellbeing to describe the AI tool, introducing a level of personification that should worry all of us. Their claim is that explaining the “why” behind Claude’s behaviors, rather than imposing rigid algorithmic rules, helps the models generalize better in novel situations. While it’s possible that Anthropic actually believes their tool has these characteristics, it’s far more likely that they’re introducing a level of strategic ambiguity to offload liability. If AI is thinking for itself, how can Anthropic be responsible for what it says and does? And, of course, Claude’s highest-priority stakeholder is Anthropic, followed by the developers, and then the end user. If there’s ever any doubt, the AI tool should act in the best interest of the company that created it, not the person using it. That’s baked into the product as its highest priority. Anthropic justifies this by pointing out that their commercial success is critical because it provides the funding for safety research. They alone can protect us from the potential fallout of the AI tool that’s no longer under their control. I’m pretty sure this is the basic plot of The Incredibles.

We should be banning AI tools in schools.

We should not be banning AI tools. We should not be banning anything. Protecting students from dangerous things does not prepare them to handle those dangerous things. But we should definitely be asking better questions. And maybe not pushing so hard to integrate AI into everything we do.

The Center for Universal Education at the Brookings Institution warns against several undesirable outcomes stemming from student AI use. Researchers there find that generative AI can negatively affect students’ cognitive development, because students miss the cycle of trying, making mistakes, engaging with the content, and correcting those mistakes. It’s having a detrimental effect on the development of critical thinking skills, a core competency of every school’s portrait of a graduate. But beyond the academic issues, it’s also changing how students relate to each other. AI systems are extremely sycophantic. That is, they always try to make you feel good. They provide positive feedback and affirmation, even when your perspective could use some widening. They don’t challenge you to see things differently or from other points of view. In short, they double down on confidence and validation. When teens are hearing that from their AI bots, they start expecting their friends to exhibit the same characteristics. So the people who disagree or challenge assumptions or play devil’s advocate start to seem like they’re not really friends. The yes-bot creates an emotional dependency. Where can I find someone who really understands me? In my AI bot.

NPR describes a “doom loop of AI dependence,” in which students offload their thinking onto the AI tools. This leads to the kind of cognitive atrophy traditionally associated with aging brains. They start using AI to do their thinking for them, and slowly lose the ability to think for themselves.

So what’s a school to do? We’re being inundated with AI tools. Every technology product, every curricular resource, every communications tool, and every device has AI built in. Our parents and community members want students prepared for their future with proficiency in cutting-edge technology. Our students are going to use these tools whether we want them to or not. It’s here. That genie is not going back in the bottle.

But we can be skeptical. We can double down on critical thinking and information literacy and guide students through some of these ethical concerns. We can look for ways to leverage AI to do the things that we can’t do without it, while keeping the productive struggle of learning. We can adopt AI tools that incorporate more friction in the process, challenging students with questions, dialogue, and process rather than just spitting out complete solutions. We can recognize that artificial intelligence, like the dozens of shiny tech things that came before it, is not going to solve all of our problems. We can acknowledge the fact that AI, like most technology, creates as many challenges as it alleviates. We can encourage our students to be curious about it, suspicious of it, and optimistic about the ways it can help them. And maybe we don’t have to be in such a big hurry.

What do you think?