In a recent article in RealClear Education, Ashley LiBetti Mitchel and Chad Aldeman explore the difficulty of evaluating teacher preparation. Excerpts of “Our Failed Investments in Teacher Preparation” appear below:
Each year, new teachers collectively spend about $4.85 billion and 302 million hours on their preparation work. But there is no evidence that any of it really matters.
Most of this money and time is spent on state-required “inputs”: things like passing certification tests and taking a certain number of hours of certain courses. States put other restrictions on the programs that prepare teachers, like requiring a minimum GPA for teacher candidates.
In theory, these requirements on inputs ensure a baseline of quality for teachers and preparation programs. In reality, research shows they amount to little more than meaningless barriers to entry.
In other words, these inputs can’t guarantee that programs will produce effective teachers.
A handful of states and the federal government are moving away from using inputs to define quality. They continue to regulate inputs, but they’re shifting their focus toward a teacher’s performance after she leaves the preparation program. These states measure certain outcomes of teacher performance – like impact on student learning, job placement, retention, and evaluation rating – and link those outcomes back to the preparation program.
The idea is appealing: States loosen input requirements, give providers more freedom to design their programs as they see fit, and then make decisions about programs on the basis of the success of their teachers.
But recent studies from Missouri and Texas suggest that completer outcomes may not differentiate preparation programs as distinctly as hoped. In Missouri, researchers reviewed three years of classroom performance records for more than 1,300 teachers, all of whom had recently graduated from one of the state’s major preparation programs. In the Texas study, researchers looked at nearly 6,300 new math teachers and 5,000 reading teachers from 100 preparation programs of all types. In both studies, researchers reached the same conclusion: the differences between programs are very small and practically indistinguishable, and almost all of the variation in teacher effectiveness occurs within programs rather than between them. Studies of North Carolina and Washington State came to similar conclusions.
The implication can’t be overstated: if states can’t identify meaningful differences in teacher effectiveness between programs, the outcome data they collect is as good as no information at all.
We look forward to the time when this analysis is wrong, but at this point the sobering reality is that none of the current measures — whether inputs or outcomes — can guarantee a teacher will be ready on Day One.
For the full article, see http://www.realcleareducation.com/articles/2016/02/02/our_failed_investments_in_teacher_preparation.html