The Tennessee Department of Education found that instructors who got failing grades when measured by their students’ test scores tended to get much higher marks from principals who watched them in classrooms. State officials expected to see similar scores from both methods.
“Evaluators are telling teachers they exceed expectations in their observation feedback when in fact student outcomes paint a very different picture,” the report states. “This behavior skirts managerial responsibility.”
The data revealed:
- More than 75 percent of teachers received scores of 4 or 5 — the highest possible — from their principals, compared with 50 percent scoring 4 or 5 based on student learning gains measured on tests.
- Fewer than 2.5 percent scored a 1 or 2 when observed, while 16 percent scored a 1 or 2 when judged by learning gains.
- Teachers who received a learning gains score of 1 had an average observation score of 3.6.
In this first state review of the evaluations, the education department suggests that some principals will need to be retrained in how to observe teachers. It’s one of numerous recommendations in a 45-page report that captures thousands of teacher and administrator responses to the evaluation program.
A federal Race to the Top grant spurred Tennessee to create an evaluation system tied, in part, to student test scores. Every teacher is evaluated every year, receiving a score between 1 and 5. Teachers can be denied tenure, or lose it, if they score 1s or 2s for two consecutive years. Some educators criticized the system as unfair, time-consuming and rushed into place, and they unsuccessfully pushed for the first year’s results to be considered a trial run.
Half of each evaluation is based on observations. The other half comes from standardized tests and other measures of student performance.
But almost two-thirds of instructors don’t teach subjects covered by state standardized tests. For those teachers — including those in kindergarten through second grade and in subjects like art and foreign languages — a score is applied based on the entire school’s learning gains, which the state calls its “value-added score.”
The report recommends relying less on the school-wide scores, which many teachers fault for failing to capture their individual work. The state suggests bringing in other types of tests to measure these teachers.
The state is also seeking ways to ensure that districts evaluate teachers consistently, although the report doesn’t say exactly how to do this beyond increasing training for evaluators.
The report outlines numerous other changes, and anticipates what could be annual tweaks. The first year drew feedback that included conversations with every school district superintendent, 7,500 conversations with teachers and 17,000 teacher and administrator surveys.
Educators wanted ways to streamline the evaluation process. Principals found their time consumed by class visits, with some responsible for as many as 36 teachers, but they may get a break: high-scoring teachers may be allowed to undergo fewer observations and to choose to count their value-added scores as 100 percent of their overall score.