Hardly a day goes by without someone complaining that we do too much testing in US schools.
While the federal requirements for testing only total about 17 days in students' entire K-12 career, the actual amount of testing students experience varies greatly from place to place.
In some schools, it's more than 17 days per year.
I'm convinced that one reason we over-test is that we don't know what information we really need, or even what we already have, so we err on the side of collecting too much data.
Do you really need PARCC, the ITBS, the MAP, RTI screeners, and end-of-course exams? Maybe, but probably not.
But we can't make good decisions about testing—and avoid over-testing—without broad and deep assessment literacy among our staff.
Understanding What Each Test Tells Us
First, I'm convinced that we need to get a clear understanding of what each of our assessments actually tells us. If we don't know, why are we wasting instructional time and money on it?
Does a test tell us:
- How a student compares to the general population of students in the same grade?
- How well a student has mastered the content of a particular course?
- How a teacher or school compares to others whose students take the same test?
- Whether a student is exhibiting a discrepancy between their cognitive ability and their academic performance?
Most tests are used for multiple purposes, some of which may be a poor fit for their design.
The better we understand the assessments we give, the more we can make appropriate use of the data they produce.
Eliminating Gaps and Redundancies
Second, once we understand the type of information each assessment provides, we can look for gaps and pointless overlap between assessments.
Are we getting too much information about students' cognitive skills, and not enough about their academic knowledge? Are we getting too much data about their math performance, but not enough about vocabulary?
Every array of tests is going to have some degree of overlap, and only by drastically over-testing could we eliminate every gap. The goal is a judicious balance, not perfect coverage.
The more we clearly understand our assessments and what they tell us, the better we can make judicious decisions about which tests to keep and which to scrap.
Turning Data Into Decisions
When scores come to us, they're not information. They're data, which must be interpreted and placed into context to become useful information (see my interview with Scott Genzer on Principal Center Radio for more on this topic).
I think one reason we're seeing a massive educator backlash against standardized testing in the US is that we've allowed tests to be used for making decisions that they aren't suited to.
Should math scores really be used to evaluate a PE teacher? Should we use cognitive ability tests to identify students in need of additional content-area instruction? Of course not, but we've seen it, and it makes us wary of just about any testing that falls outside of our control.
Bad testing policies don't stand much chance against well-informed, articulate educators who know their students and their assessments.
If we want to ensure that our students aren't over-tested, and that we get the information we need for instructional decision-making, professional growth, and all the other necessary purposes, we need to increase our assessment literacy.
You probably have great resources in your school and district to help with this, and if you have a recommended resource, I'd appreciate it if you could leave a comment below.
If you're interested in increasing assessment literacy at your school, I'd like to invite you to take a look at Successful Assessment 101, a professional development kit produced by our colleagues at Illinois ASCD.
Over the course of four sessions, this low-cost kit takes you through the process of examining all of the assessments you give, so everyone is clear on their purpose, design, and appropriate use.