What are you doing about AI?
It’s a question I’ve come to dread. Most of the time, when I get a question from a parent or a teacher or a school leader, I can figure out what they want me to say. What are you doing about school safety? They want to hear that we’re taking reasonable, practical measures to keep students safe. What are you doing about bullying? They want to see some concrete actions to reduce bullying in our schools. What are you doing about fourth grade student growth measures? They want some strategies that will help us ensure that every student is growing, not just that we’re making sure they meet some minimum standard. Those aren’t easy questions to answer, and we don’t always have answers that satisfy the person asking. But at least I know what they want to hear.
That’s not true with the AI question. What are you doing about AI? Artificial intelligence is a disruptive technology. It’s changing the way we interact with the world. We’re relying on it to answer questions, organize information, and solve problems. There’s lots of hype around AI replacing jobs. Pundits are predicting that most of the things we’ve been teaching in school will be irrelevant in 10 years. How are we making sure that our students have the knowledge and skills they’ll need to thrive in an AI-centered world? Where are the AI literacy classes? How are we addressing AI ethics? When are we going to add a prompt engineering class? This is important stuff that we need to make sure every student is prepared to handle. We’re already behind.
Or maybe not. What are you doing about AI? Artificial intelligence is creating a plastic universe that’s difficult to distinguish from reality. The generated images, videos, and text aren’t just fun memes anymore. They’re being weaponized in ways that make us question the meaning of truth. At the same time, the companies developing AI tools are using them to collect enormous amounts of data about us, and they’re using that data to sell us to advertisers. The tools themselves are negatively affecting our critical thinking and problem solving skills, because those tasks are being done for us. And with the ability to generate any kind of content with a few sentences in a prompt, we’re outsourcing creativity too. AI is making us completely helpless and dependent on it. And, it’s destroying the environment in the process. How are we managing student use of AI? Why aren’t we blocking it? Which tools are safe to use? How are we addressing the use of AI to cheat? What are teachers and school administrators doing with AI, and how do we know when it’s being used?

If I develop a proactive plan to address the first question, we’re teaching students to embrace AI in ways that make the second question even worse. But if we focus on the second question, are we ignoring our students’ needs and failing to prepare them for their future? When I’m asked the question, the person asking is usually at one extreme or the other, and it’s impossible to tell which end they’re on. The one thing I am pretty sure of, though, is that they don’t want me to talk for half an hour about how there’s no quick, easy answer.
That’s not really new. We’ve seen this with technology for decades now. We spend half our time marveling at the shiny new technologies, and the other half trying to keep kids from using them. We give them devices that have all of the information, and then try to restrict their ability to access that information. We give them tools to connect with each other and with people all over the world, and then we look for ways to disable those tools. We need access to streaming video platforms because there’s a lot of valuable instructional content on them, but we don’t want students watching YouTube videos all day. We want them to play games to practice multiplication tables and spelling words and build vocabulary, but we don’t want them playing the games that we don’t think are valuable. We want them to have ubiquitous access to reliable technology, but we also want to restrict screen time.
The answer — of course — is balance and intention. We adopt new technologies because they help us do things we can’t do without them. We teach responsible, ethical use of technology. That includes situations where we close the laptops and put away the iPads. We focus on cultivating curiosity. There’s no way for us to teach an AI class in 8th grade that will give students the skills they need for college. The industry is changing far too quickly for that. But we CAN teach students how to learn new things on their own, how to ask relevant questions, and how to approach the new shiny things in an enthusiastic, but cautious way. It’s not a matter of giving them the stuff they need to know. That stuff doesn’t exist yet. We have to give them the resources they need to find the stuff on their own.
So we walk boldly, and cautiously, into the future. We’re not in a hurry to jump on the latest bandwagon. We’re not particularly worried that our students will be left behind. But we’re also not afraid of everything that’s new, and we’re not starting from a stance of prohibiting anything that wasn’t here when we were the students. We move slowly. We ask questions. We refine our approaches. We look for value in the tools and strategies we adopt. It’s not enough for technology to help us do things better. We have to make sure it’s helping us do better things. That’s not an exciting answer. But it’s the best one we have.