The Fundamental Realities of AI

There are not many transformative technologies. In my role, I’ve become desensitized to the hype around new technologies. Yes, I know. This new shiny thing is going to revolutionize the way we live our lives. It’s going to redefine public education. It will fundamentally change the way humans interact with each other. Sure. But next year, there will be a new shiny thing and we’ll forget all about this one. Very few technological innovations have actually had a substantial impact on the way we live our lives. The World Wide Web was one. Smartphones were another. But there aren’t a lot of them.

I think artificial intelligence has the potential to make the list. AI has been around for a long time. The Turing Test, which measures a machine’s ability to exhibit behavior indistinguishable from a human’s, dates back to 1950. As technology has advanced, the expectations for AI have risen with it. For generations, it’s been a hurdle we’ve never quite been able to clear. It was always the “almost-shiny” thing.

But a lot has changed in the last year. ChatGPT introduced generative AI to the masses. The tool uses natural language processing to hold humanlike conversations. It can answer questions, help compose text, and perform analyses and comparisons. It can translate text and compare passages across languages. And it’s not just ChatGPT. There are a dozen or more similar tools, each with its own focus. Claude is designed to be harmless and honest, using Constitutional AI techniques to improve safety. Google’s Bard aims to provide high-quality responses that draw on recent information, the lack of which is one of ChatGPT’s drawbacks. DeepMind’s Sparrow is an experimental, research-focused AI. Chatsonic is tailored for copywriting and marketing content.

[DALL·E image of “a Monet painting of a kayaker in the water lily pond.”]

Beyond the natural language processors, AI tools can also be used to generate images and video, compose music, and visualize data. Google is integrating these tools into Gmail, Docs, Slides, Sheets, and Chat. “It looks like you’re writing a letter. Would you like me to just go ahead and take care of that for you?” It’s like Clippy from the old Microsoft Word days, but it’s actually useful.

This summer, I spent a fair bit of time trying to drink from the AI firehose. Every day, there are new tools, applications, revelations, developments, and insights about artificial intelligence. I created a presentation to share with my admin team last month, and found that I had to revise it a couple of times each week because things were changing so rapidly. Through all of this, I managed to distill a few fundamental realities about artificial intelligence and its effect on schools. These are things that are. There’s nothing to be done about them. They’re out of our control. We have to develop strategies to accommodate these new truths.

We can’t block artificial intelligence

In schools, the first reaction to new disruptive technologies is to ban them. There’s a long list of things that schools banned when they were introduced: calculators, spell check, Cliffs Notes, voice recorders, cell phones, Palm Pilots, Wikipedia, YouTube, student email, social media. The list goes on. Some of these things are disruptive to school. Some of them help students focus on higher-order thinking. Some change the roles of teacher and student. Many change the focus of instruction and the means of assessment.

But AI isn’t a thing we can ban. It’s built into the things we use every day. It’s part of the search engine. It’s embedded in the spreadsheet. It’s an integral part of our apps and tools. It’s not as simple as taking a device away or blocking an app or website in the school’s filter. There isn’t really a way to block AI without taking most of the technology away.

We can’t reliably detect AI use

AI text looks like it was written by humans. That’s the point. There are tools that claim to detect AI-generated content, but none of them are foolproof, and all of them claim higher accuracy than independent testing bears out. If you’re going to accuse a student of using AI to write an essay, how sure do you need to be? Is 65% sure enough? What about 85%? Even a detector that’s 95% accurate, checking submissions from a class of 25 students, has only about a 28% chance of assessing all of them correctly. That might be okay for formative work. But if we’re basing grades and GPAs and class rank on whether Turnitin or Originality.ai thinks text is AI-generated, we should probably be more sure.
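To make that math concrete, here’s a quick back-of-the-envelope sketch in Python (assuming, generously, that every essay is assessed independently at the detector’s claimed accuracy):

```python
# Back-of-the-envelope math for AI detectors, assuming each essay is
# assessed independently at the detector's claimed accuracy.

accuracy = 0.95    # claimed per-essay accuracy
class_size = 25    # essays submitted

# Probability the detector gets every single assessment right
p_all_correct = accuracy ** class_size
print(f"P(all {class_size} correct) = {p_all_correct:.0%}")  # ~28%

# Expected number of misjudged essays in one class
print(f"Expected errors: {(1 - accuracy) * class_size:.2f}")  # 1.25
```

In other words, even a 95%-accurate detector will misjudge more than one essay, on average, in every class of 25.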

The other issue is that we’re in the middle of an arms race. The tools available today are very different from those we were using at the beginning of the summer, and those are worlds away from the version of ChatGPT that made such a splash last December. The idea that third-party tools can keep up with detecting the work of the latest AI engines is absurd.

Students are going to use AI

Try to block it if you want. Try to detect it if you can. Our students are going to use it. The only way to stop them is to put them in an environment where they don’t have access to technology. That means writing, on paper, in class. No iPads. No Chromebooks. No phones.

That’s inconvenient. And we should probably try to minimize the situations where we have to be 100% sure students aren’t using AI. But this is the only truly reliable way to do it.

There are ethical, privacy, and security considerations around AI

There’s a whole book in this one. But here are a few basic observations that just scratch the surface of this very complex topic:

  • Generative AI adopts the biases of the data used to train it. Amazon tried to build an AI tool to help screen job candidates, but the AI based its judgments on the résumés of previously successful hires. One of its conclusions was that men make better hires than women, because in the past, Amazon had hired more men than women. The initiative was quickly scrapped, but it illustrates how source data bakes biases into AI tools, and how hard those biases are to detect (the sketch after this list shows the mechanism in miniature).
  • AI can do dangerous things. A supermarket in New Zealand experimented with an AI tool that generated meal plans to creatively use up leftovers, only to find it recommending some alarming dishes: recipes that would produce deadly chlorine gas, poison bread sandwiches, and mosquito-repellent roast potatoes. Every human involved assumed the tool wouldn’t recommend food that could kill people, but nobody told the AI that.
  • AI tools are hungry for training data. That’s part of the reason apps like Twitter are locking down access. If you’ve written something and put it on the Internet, chances are someone is using it to train an AI bot. It’s not my intention with this blog that my words be used to train an AI tool that will then be sold back to me. Worse, there’s a lot of really awful stuff on the Internet. Training an AI to think that hate speech is normal, or that “firmly held beliefs” deserve the same weight as “scientific facts,” is dangerous.
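The Amazon story is easy to reproduce in miniature. Here’s a toy sketch in Python, with entirely made-up numbers (not Amazon’s actual system or data), of how a model trained on skewed hiring history simply reproduces the skew:

```python
# Toy illustration of training-data bias -- made-up numbers, not any
# real system. The "model" simply learns historical hire rates.
from collections import Counter

# Hypothetical hiring history: the company hired far more men than women
history = ([("man", "hired")] * 80 + [("woman", "hired")] * 20
           + [("man", "rejected")] * 120 + [("woman", "rejected")] * 180)

hired = Counter(group for group, outcome in history if outcome == "hired")
seen = Counter(group for group, _ in history)

# The "learned" score is just each group's historical hire rate
for group in seen:
    print(f"P(hired | {group}) = {hired[group] / seen[group]:.0%}")
# man: 40%, woman: 10% -- the model "concludes" men are better candidates,
# purely because of who was hired in the past.
```

A production model is vastly more complicated, but the failure mode is the same: whatever patterns exist in the historical data, fair or not, become the model’s definition of “quality.”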

This AI stuff is amazing. And it’s not going away. But we’re going to have to spend some time thinking and debating and reflecting to figure out how it’s going to affect us. We have to be purposeful in our approach to it. Because this might not just be the latest shiny thing.
