Don’t Rate Teaching Schools Based on Student Test Scores, Study Warns

In recent years, public officials have sought to determine the quality of our nation’s teacher preparation programs, both traditional programs at colleges and universities and alternative programs like Teach for America. One proposed way to do this is to track graduates into the classrooms where they become teachers and see how well their students perform.

But research is showing that there is a major problem with this approach: it doesn’t work.

Reviewing studies that examined teacher preparation programs in six locations, University of Texas professor Paul T. von Hippel writes that rankings based on “value-added” models (complex statistical measures of growth in standardized test performance) essentially spit out random results.

In fact, von Hippel writes, there are so many confounding sources of variation in measuring teacher preparation programs (the strength of an individual cohort of future teachers, the schools to which graduates are later assigned, and the makeup of their own students, to name just a few) that it is nearly impossible to assess a program’s success or failure based on student test scores.

“The errors we make in estimating program differences are often larger than the differences we are trying to estimate,” the researchers write. “With rare exceptions, we cannot use student test scores to say whether a given program’s teachers are significantly better or worse than average.”
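
To see why, consider a minimal simulation, written here in Python purely for illustration: the model, the number of programs, and the parameter values are assumptions for this sketch, not figures from the study. It captures the quoted point directly: when the error in each program’s estimated “value added” exceeds the true spread between programs, two independent rounds of rankings barely agree on which programs are best.

    # Illustrative sketch only: when estimation error exceeds the true
    # differences between programs, "value-added" rankings become unstable.
    import random

    random.seed(1)

    NUM_PROGRAMS = 20
    TRUE_SPREAD = 0.03  # assumed true std. dev. of program effects (test-score SD units)
    NOISE = 0.06        # assumed std. dev. of estimation error, twice the signal

    # Each program's true effect on student test scores, drawn once.
    true_effects = [random.gauss(0, TRUE_SPREAD) for _ in range(NUM_PROGRAMS)]

    def noisy_ranking():
        """Rank programs by estimated value added (true effect plus noise)."""
        estimates = [t + random.gauss(0, NOISE) for t in true_effects]
        return sorted(range(NUM_PROGRAMS), key=lambda i: estimates[i], reverse=True)

    # Compare the "top five" programs across two independent estimation rounds.
    first, second = noisy_ranking(), noisy_ranking()
    overlap = set(first[:5]) & set(second[:5])
    print(f"Top-5 overlap between two noisy rankings: {len(overlap)} of 5")

With the noise set at twice the true spread, the two top-five lists typically share only one or two programs, barely better than chance; shrink NOISE well below TRUE_SPREAD and the rankings become far more stable.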

For more, see: https://www.the74million.org/dont-rate-teaching-schools-based-on-student-test-scores-study-warns/

and

http://educationnext.org/rating-teacher-preperation-programs-value-added-make-useful-distinctions/
