Here’s how education works: You determine what the students need to know. Part of that is identifying what they already know. You find the gap between what they already know and what they need to know. You provide instruction to fill that gap. Then, you use assessment tools to see whether the student has acquired the knowledge and skills you’ve identified. If necessary, remediation is provided to get the students up to speed.
So, for example, we teach math. One of the things we’ve decided that students need to know is to “Identify and represent factors and multiples of whole numbers through 100, and classify numbers as prime or composite.” In Ohio, this is a fourth grade number sense indicator. Our teachers identify whether the students can already do this. If they can’t, instruction is provided. At the end of the instruction, there’s an assessment (usually a test) in which the student demonstrates that he or she can find factors and knows what a prime number is. Student progress is reported on the student’s report card.
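The fourth grade skill above is concrete enough to sketch in a few lines of code. This is just an illustration of the math being assessed, not anything from the standard itself:

```python
def factors(n):
    """Return all whole-number factors of n."""
    return [d for d in range(1, n + 1) if n % d == 0]

def classify(n):
    """Classify n as 'prime' or 'composite' (numbers below 2 are neither)."""
    if n < 2:
        return "neither"
    # A prime has exactly two factors: 1 and itself.
    return "prime" if len(factors(n)) == 2 else "composite"

print(factors(12))   # [1, 2, 3, 4, 6, 12]
print(classify(12))  # composite
print(classify(13))  # prime
```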
But how do we know if the students in one school are getting the same kind of quality learning experience as the students in another school? One teacher might have easy tests, or grading procedures that allow the students to make lots of mistakes without hurting their grades. Another teacher, in another school, might have more rigorous standards. Does that mean the students in the first class, who have the higher grades, can outperform those in the second class, with the higher standards? Probably not. Enter standardized testing.
If you give all of the students in a state the same test, you can measure them on the same scale. This allows students throughout the state to be compared. It also allows the schools to be evaluated based on the achievement of the students.
Traditionally, curriculum decisions were made at the local level. Each community decided for itself what was most important to teach. It’s part of the philosophy of local control for schools. Education is one of the few community-based efforts in our society. But what happens when we decide locally to teach American History in eleventh grade, and the students take a state-wide test on American History in tenth grade? The result is a set of test scores that makes our school look ineffective.

Each state has adopted state-level standards for each curricular area. The Ohio Department of Education, for example, has outlined how they think math should be taught. That includes the fourth grade factors standard mentioned above. Schools can choose to adopt those standards, or they can use their own. But the test, by which schools are measured and compared in the state, is based on those standards. So unless we follow them, our students are going to do poorly.
Take this a step further. How do we know how the quality of education for students in Ohio compares to the quality of education for students in Indiana? They have a different set of tests. If we really want to compare the two states, we have to use a common assessment. That’s where the National Assessment of Educational Progress comes in.
The NAEP is a national test that assesses students in fourth, eighth, and twelfth grades. In each subject, about 3,000 students are randomly selected in each state from approximately 100 schools. The selection is designed to mirror the demographic makeup of the state, so characteristics like race, gender, and socioeconomic status are used to build a representative sample. Results are aggregated at the state and national level, but individual student and school scores are not reported.
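Drawing students so the sample mirrors state demographics is, in essence, stratified random sampling. Here’s a minimal sketch of that idea; the student records, the single-field strata, and the function itself are invented for illustration, and NAEP’s actual sampling design is considerably more involved:

```python
import random
from collections import defaultdict

def stratified_sample(students, stratum_key, sample_size, seed=0):
    """Draw a sample whose strata match their share of the population."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for s in students:
        strata[stratum_key(s)].append(s)
    sample = []
    for group in strata.values():
        # Each stratum contributes in proportion to its population share.
        k = round(sample_size * len(group) / len(students))
        sample.extend(rng.sample(group, min(k, len(group))))
    return sample

# Hypothetical population: 3,000 students tagged with one demographic field.
population = [{"id": i, "region": "urban" if i % 3 else "rural"}
              for i in range(3000)]
sample = stratified_sample(population, lambda s: s["region"], 100)
```

Because two thirds of this made-up population is “urban,” roughly two thirds of the sample will be too, which is the whole point of stratifying.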
Let’s say that the NAEP determines that Ohio’s science test scores are not up to snuff (which isn’t true, by the way). What does that mean? It could mean that we need to do a better job of teaching science. But it could also mean that we’re not teaching the things that they’re measuring. If we really want to ensure we’ll do as well as possible on the test, we have to make sure that our content standards for science align with the national standards on which the test is based. Place enough emphasis on the national test, and the state standards will all conform to the national ones. As we’ve already seen, the local standards then conform to the state ones, and we’re teaching the same curriculum in every school across America.
So what? Aren’t there some basic things that students need to know? Can’t we all agree that knowing the factors of numbers up to 100, and classifying numbers as prime or composite, is important? Can’t we all live with teaching that in fourth grade? Sure, we can. We’ll even ignore the more controversial subjects like evolution and sex ed, because that’s a separate discussion. As we increase the pressure on schools to measure up by performing well on the tests, the focus of the schools changes. Anything that’s not on the test is no longer important. Anyone who has ever taught knows this. “Is this going to be on the test?” Translation: “You’re boring me, and I really want to tune out for a while. Is that okay?” If there’s no test, it doesn’t matter whether the schools teach it. And schools have enough to do without worrying about things that don’t matter.
That’s why the technology test is such a big deal. The National Assessment Governing Board, which oversees the NAEP, has contracted with WestEd to develop a national technological literacy assessment, which will debut in 2012. They haven’t figured out yet what they’re going to test, and they don’t know which students will take the test, but it’s coming. It’s likely that the test will largely be based on the National Educational Technology Standards for Students developed by ISTE.
Fundamentally, a technology test is a good thing. Schools aren’t going to get serious about addressing student technology skills until they’re measured on student achievement in them. Assuming that NAGB more or less follows the ISTE standards, most states will be in pretty good shape. They do, after all, have technology standards that are already largely based on the ISTE ones. So all we have to do is start teaching those standards. After the first couple of assessments at the national level, we’ll know how the states compare in addressing technology skills. Then, there will be more focus at the state level on teaching these skills, because nobody wants to be accused of not preparing our students for their future. The result will be a heightened emphasis on addressing these needs in schools across the country.
Here’s the bad news: The ISTE standards were developed in 1998 and revised in 2007. The NAEP for technology will debut in 2012, and is likely to be based on those standards. In 2014, states will see the results from the NAEP tech literacy test, and will start to emphasize technology skills. They will make plans to systematically address the items measured by the NAEP. By 2015, schools will be implementing this stuff at the classroom level, though this timeline may change depending on the amount of emphasis given to these tech skills by the states. By that time, the standards we’re teaching will be eight years old. Eight years ago, 128 MB of RAM was a lot. Eight years ago, we were using Adobe PageMill to create web pages. Blogging didn’t exist. Neither did podcasting. Online learning environments were brand new, and almost nobody was using them in K-12. Nobody had ever heard of RSS. Video conferencing was bulky and expensive. I couldn’t call my school from home without paying extra for “long distance.” It’s a very different world.
Hopefully, we’ll be able to come up with a way to keep the technology standards, and the technology assessment, relevant. But with the glacial pace of educational change, I’m not very optimistic.