Schools Should Not Succumb To Testing Panic
The headline mirrors the panic being spread in school districts across the country: “Report warns COVID learning loss could impact US GDP in future if students aren't helped.” But the lede for the story from WXYZ (Detroit) (“The unfinished learning students experienced due to COVID-19 is measurable and real”) is only half correct. Articles like this point to the danger of schools throwing their limited resources behind the wrong goal.
Call it unfinished learning, learning loss, the great behinding, or just “we missed a lot of instructional time last year”; there’s no question that students missed out on some education over the last 18 months. Multiple surveys show that teachers see it—students who have not reached the point they would have reached in an ordinary year. But getting a full, real measure of “education” is nearly impossible in the best of times. Attempting to measure that pandemic gap with the wrong tool will only lead to more bad information and missed educational opportunities.
Too much reporting has succumbed to the short and sweet “months behind” model of explanation, which inevitably leads back to one of many McKinsey reports that use some concoction of real and projected results on standardized math and reading tests.
It is crucial to remember that lost “days of learning” and “months of learning” are usually just shorthand for a drop in standardized test scores.
Education has suffered for decades now from using a bad measure of educational effectiveness, a national example of Campbell’s Law. But to use that bad measure now as a measure of how well we come back from the pandemic pause would be a dangerous, critical error.
McKinsey has raised the scary specter of students losing lifetime earnings (and thereby dragging down the GDP of entire nations). Claims like these invariably depend on the work of Raj Chetty or Eric Hanushek; that work linking student test scores to lifetime earnings has drawn considerable criticism, the most basic objection being that it shows correlation, not causation. The correlation is easily explained: test scores correlate with socio-economic status, and so do life outcomes. What’s still missing is any research showing that raising test scores will raise life outcomes. In short, test scores may drop because of the pandemic pause, but there’s little reason to believe that life outcomes like lifetime earnings (or national GDP) will drop with them.
The danger at the moment is that instead of trying to lift the state of student education, states, policymakers, and school districts will decide to put all their effort into raising test scores. This could lead to a repeat of the worst practices of the last few decades—for instance, dropping all non-tested subjects in order to “accelerate learning” in tested areas, or drilling, drilling, drilling testing methods instead of teaching the skills and material that are part of a broad, rich education.
High-stakes testing has always been corrosive to a high-quality education. To panic now, to focus all our resources and attention on test scores instead of the full spectrum of learning and the emotional needs of students—that would be disastrous. Schools will continue to get pressure to do just that from companies with a test-improvement product to sell. We can only hope that policymakers and school districts will hold out for providing a real, full education for every child.