Educational testing is inherently flawed. No written assessment can do justice to the breadth and depth of students’ intellectual, social, and emotional competencies. As educators clamor to condemn high-stakes tests and implement the changes that the Every Student Succeeds Act will enable, it is important that educators also impose the same scrutiny on international assessments.
Comparison tests such as the Program for International Student Assessment are gaining in status on the grounds that they measure higher-order cognitive skills. These assessments have triggered a global discourse around standards in education and have given visibility to some of the core failings of education systems that might otherwise have been lauded by educators and policymakers as successful. Yet the creators of such assessments do the education community few favors when they offer simplistic interpretations of the resulting test data.
Consider, for example, the way in which PISA results have been used to evaluate ed-tech implementations. Last fall, the Organization for Economic Cooperation and Development published its findings based on PISA data on the impact of computers on student learning. Of course, mainstream news outlets have little appetite for the subtleties of 200-page reports, and so the coverage focused instead on the sound bite offered by PISA: Computers “do not improve” pupil results, according to the BBC.
Sweeping generalizations rarely hold true in education, and PISA’s headline-grabbing claim, picked up by the BBC and other media outlets, is not without irony.
To accept international-test results without question is to fall victim to the same kinds of simplistic assumptions that have dogged technology implementations. Parents and teachers deserve nuance in the analysis of test scores. As educators, we should always demand such nuance; not even international assessments should be exempt.