Most school districts in the United States employ some sort of web filtering technology. To be eligible for the e-rate program, schools must comply with the Children’s Internet Protection Act of 2000, including the use of “technology protection measures.” According to the Universal Service Administrative Company, which oversees the e-rate program, “A technology protection measure is a specific technology that blocks or filters Internet access. It must protect against access by adults and minors to visual depictions that are obscene, child pornography, or — with respect to use of computers with Internet access by minors — harmful to minors…. For schools, the policy must also include monitoring the online activities of minors.” (source)
So, while the federal government can’t mandate that we use web filtering, it can deny us access to e-rate funds if we don’t. In our district, that amounts to tens of thousands of dollars a year, which makes this a pretty easy decision.
In reality, we were filtering web content before CIPA was enacted. When schools started connecting to the Internet, there was a fundamental shift in how information was accessed in schools. Prior to the web, materials were acquired through the efforts of media specialists and curriculum directors, who were careful to make judicious use of their limited funds. All of the materials were age-appropriate, and most were academic. It’s understandable that connecting a school to the Internet raised concerns, because the traditional safeguards disappeared. Suddenly, students could access just about anything.
So schools developed policies for acceptable use of the Internet, and they installed filters to protect students from inadvertently accessing inappropriate content on school computers. Many schools still force students (and staff, in some cases) to sign off on these policies, and promise to behave themselves online and not to hold the school responsible if they find things online that are, umm, a bit too educational for school.
The filtering worked reasonably well for about a decade. Sure, there were lots of things blocked by the Internet filter that shouldn’t have been. There were also some things that weren’t blocked that students probably shouldn’t have been accessing. The web is a constantly changing medium, and trying to keep up with lists of sites that are appropriate or not appropriate is an impossible task. But we’ve lived with it.
Over the last few years, though, this has become more of a problem. Traditionally, the filters have blocked sites that host user-generated content. A discussion board, for example, would be blocked, because anyone can post content there, and it’s impossible to tell, day-to-day or minute-to-minute, whether the site contains objectionable content. The “free” web site services (Geocities, Angelfire, etc.) were blocked for the same reason. The pages can change too frequently for the filters to keep up with, so they all get blocked by default.
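The default-deny behavior described above can be sketched as a toy rule set. Everything here — the domains, the category names, the manual allow list — is illustrative only, not any filtering vendor’s actual configuration; it just shows why hand-maintained exceptions produce the arbitrary-looking allow/block decisions discussed below.

```python
# Toy sketch of category-based web filtering: sites in certain
# categories are denied by default, with hand-maintained allow
# overrides. All domains and categories are hypothetical examples.

BLOCKED_CATEGORIES = {"user-generated", "webmail", "social-networking"}

DOMAIN_CATEGORIES = {
    "discussionboard.example": "user-generated",
    "freepages.example": "user-generated",
    "news.example": "news",  # has comments, but categorized as news
}

# Manual exceptions are where the inconsistencies creep in: one site
# in a category gets allowed while a near-identical one stays blocked.
ALLOW_OVERRIDES = {"wiki.example"}

def is_blocked(domain: str) -> bool:
    """Return True if the filter would block this domain."""
    if domain in ALLOW_OVERRIDES:
        return False
    category = DOMAIN_CATEGORIES.get(domain, "uncategorized")
    return category in BLOCKED_CATEGORIES

print(is_blocked("discussionboard.example"))  # True: category is denied
print(is_blocked("news.example"))             # False: category not on the list
print(is_blocked("wiki.example"))             # False: manual exception
```

The point of the sketch is that the filter never inspects the actual content: two sites carrying identical user-contributed material can get opposite treatment depending only on how they were categorized and whether someone added an override.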
In that same period, the web has become much more interactive. We’re not just talking about blogs or social networking, either. Go to CNN.com and you can comment on any news story. The same is true for USA Today, the New York Times, and the Cleveland Plain Dealer. A news story on a reputable site could carry comments containing offensive language or hate speech. Do we want our third graders reading this stuff?
The filtering companies have taken a reasonable approach and don’t (at least by default) block every site that allows user comments. But they do block a lot of sites that let people upload their own content. In our district, the blocked sites currently include YouTube, Facebook, Plurk, and all web-based email systems.
Interestingly, the list of sites that aren’t blocked includes Wikipedia, Wikispaces, Ning, Flickr, Twitter, Ustream, and Google Docs. The rules can be arbitrary and inconsistent. For example, you can’t access YouTube, because the site contains user-contributed videos that might be unsuitable for minors. But you can access Google Video, which could contain the same content. The same is true for microblogging platforms: Twitter is allowed, but Plurk is not. You can use Skype from school to audio- and video-conference with others, but you can’t access the Skype web site to download the software and sign up for an account.
The problem, I think, comes from trying to adapt to the changing nature of the Internet, while still maintaining the perception that students are protected from inappropriate content. So you can’t access your social network from school, but you can go to Ning and create a new social network, and that one’s okay.
But as teachers increasingly try to move instruction beyond the walls of their classrooms, they’re hitting these roadblocks. There are lots of teachers and students in schools all over the world who would like to do collaborative projects, interacting and working with students and teachers in our schools. This type of interaction promotes serious collaborative (as opposed to cooperative) effort, improves students’ understanding of diverse world cultures, and fosters a global attitude of tolerance and acceptance that we sorely need. At the same time, it shows our students how they can leverage the enormous power of the communication and collaboration tools that are freely available to them. But we can’t access these tools because some Internet vandal may have posted a dirty word or a picture that’s too, umm, anatomically detailed for polite company.
The ironic part of all of this is that the students are fairly adept at circumventing the system. To make the system usable in any practical sense, there have to be holes. There are ways in which a sufficiently motivated student can bypass the censors and get unrestricted access to the Internet. We regularly catch them. But depending on the methods used, it can be pretty difficult and time consuming to figure out what they’re doing and devise a way to prevent it without breaking some legitimate use of the network.
And when they circumvent the filters, what are they doing? Since I get an email every time a student bypasses the filter, and I can remotely watch what they’re doing in real time, I’ve taken some time over the last few weeks to watch. They’re going to Facebook. They’re going to MySpace. They’re going to YouTube. For the most part, that’s it. Once in a while, a student tries to access drug-related information. How much should I be paying for a quarter ounce? How would one go about growing, err, herbs and vegetables in one’s basement? But for the most part, it’s the social networks and interactive web tools.
So what if we didn’t block these things? What would happen? I guess it’s possible that students could inadvertently stumble upon inappropriate content. I’ve certainly seen cases where innocent Google searches have yielded not-so-innocent results. Try doing a Google Image search for any woman’s name, for example, with SafeSearch turned off. But for the most part, things would be the same. We’re not talking about unblocking sexually explicit content, or hate speech, or this generation’s equivalent of The Anarchist Cookbook. We’re talking about YouTube videos and Facebook updates, the vast majority of which contain no objectionable content whatsoever.
We would, almost certainly, see an increase in student use of technology for non-academic purposes. But this isn’t a technology problem as much as it’s a supervision one. There are some places in the district where I can walk into a lab at any time, and see a dozen or more students playing games. Having access to social networking tools isn’t going to make them any more or less productive. If they’re permitted to do non-academic work, they’re going to do it. If we want to stop them from doing non-academic stuff on the computer, we’re not going to do it by blocking access to every time-wasting technology on the Internet.
It’s also possible that we’ll see an increase in students attempting to purposely access inappropriate content. Again, this is a supervision issue. By policy, students are not permitted to use computers in unsupervised settings. If they’re surfing for porn in the computer lab, we have a problem with supervision and student self-responsibility.
That self-responsibility is the piece that we’ve been missing for a long time. I often hear the argument, from both students and staff, that “inappropriate” and “appropriate” content for school is defined by what is blocked or not blocked by the filter. If it’s not blocked, it must be okay. But what happens when the student goes home, to a computer without filtered Internet access? Is everything suddenly okay, just because they’re not at school? The teaching of self-responsibility is an important component that we’ve been overlooking.
I’d like to see us try it. At the very least, we should open the door to social networking tools and web-based email. Let the students actually use email to communicate with one another and with their teachers. Let them access their networks. I have a feeling that the benefits will far outweigh the potential for abuse.
What do you think? Should schools change their web filtering policies to allow access to interactive web tools like YouTube, social networking sites like Facebook, and web-based email? Now’s your chance to weigh in.