Value-Added Measures as a “screening test” on both teachers and their observers

Economist Douglas Harris sees one clear priority for the Education Department: “Now that the election is over, the Obama Administration and policymakers nationally can return to governing.  Of all the education-related decisions that have to be made, the future of teacher evaluation has to be front and center.”

Within the larger debate over teacher evaluation, the question of value-added measures, which tie student test scores to teacher ratings, is perhaps the most contentious. Harris generally praises the Obama administration’s performance through Race to the Top, especially for addressing the lack of quality feedback for teachers and for employing multiple measures to evaluate teacher performance. However, Harris also points out what he believes was the administration’s key error: “They encouraged—or required, depending on your vantage point—states to lump value-added or other growth model estimates together with other measures.” Many debates have taken place since then about how heavily value-added measures should factor into that equation, with teachers and others complaining that value-added measures cannot be trusted.

Harris recognizes the possible benefits of combining the evaluation measures: the combination can increase reliability and validity. But he also acknowledges the limited trustworthiness of value-added statistics and, especially, the way they have eroded teachers’ trust in the system.

Harris would have the administration move away from a model that focuses on statistics for statistics’ sake and instead move toward a model that uses statistics to “identify and fix evaluation mistakes.” School leaders should look at their task of performing evaluations as a process of encouraging teachers to improve rather than simply one of pinpointing problem teachers: “School leaders, on the other hand, can and should be more concerned about whether the entire process leads to valid and reliable conclusions about teacher effectiveness.”

Employing the medical metaphor of a “screening test,” Harris suggests separating the value-added results from the other measures and moving them to the beginning of the evaluation process. Because these results are fairly cheap and easy to obtain, they are an excellent way to discern at the outset which teachers stand out as candidates who need significant help to improve. But because value-added results are not foolproof, a second layer of evaluation, namely observations by other teachers and principals, ensures that the value-added screen was not in error. Furthermore, Harris argues, because statistics show a strong correlation between value-added results and observation results (which Harris admits can at times reflect bias when observations occur after value-added results are known), the value-added results can actually act as a safeguard against unfair observations.
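The two-stage process Harris describes can be sketched as a simple decision procedure. This is purely an illustration of the screening logic; the thresholds, function names, and score scales are assumptions for the sketch, not anything proposed in the article.

```python
# Hypothetical sketch of the "screening test" idea: cheap value-added (VA)
# scores flag candidates first; costlier classroom observations then
# confirm or overturn the flag. All thresholds are illustrative assumptions.

SCREEN_THRESHOLD = 0.30   # VA percentile below which a teacher is flagged
CONFIRM_THRESHOLD = 0.40  # observation score below which the flag is confirmed

def screen(va_percentile):
    """Stage 1: flag teachers whose value-added percentile is low."""
    return va_percentile < SCREEN_THRESHOLD

def evaluate(va_percentile, observation_score):
    """Stage 2: observations confirm or overturn the cheap screen."""
    if not screen(va_percentile):
        return "no action"            # screen passed; no costly follow-up
    if observation_score < CONFIRM_THRESHOLD:
        return "targeted support"     # both measures agree: intervene
    return "screen overturned"        # observation contradicts the VA flag

# Illustrative runs
print(evaluate(0.70, 0.80))  # no action
print(evaluate(0.20, 0.25))  # targeted support
print(evaluate(0.20, 0.75))  # screen overturned
```

The design choice mirrors the medical metaphor: the cheap test is allowed to over-flag, because the expensive confirmatory step catches false positives before any consequence follows.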

In essence, Harris suggests using value-added results as a check and balance on both sides of the system: they will help quickly find, improve, and, if necessary, weed out those teachers who are indeed under-performing, but they will also protect teachers from biased observations.

Harris concedes, “The screening approach certainly wouldn’t solve all the problems with the new teacher evaluation systems. The choice of additional measures beyond value-added, and the implementation of these measures, are critical. So are the ways in which the evaluations are used in personnel decisions.”

For the full article from Douglas Harris, please visit: