Researchers Peek into the Black Box of the Classroom

Writing for Ed Excellence, Michael Petrilli has summarized some of the biggest problems with educational research and has proposed three promising pathways forward. Excerpts from his piece appear below:

Whereas the world outside of our schools has been transformed by information technology, the data we collect on classroom practices is somewhere between nonexistent and laughably rudimentary. In other words, we know almost nothing about almost everything that matters.

To be sure, education research improved dramatically starting in the early 2000s with the creation of the Institute of Education Sciences, the federal mandate for annual tests in grades three through eight, and the concurrent development of longitudinal data systems in most states. Scholars suddenly had the money and the data to examine a variety of educational interventions and their impact on student achievement, significantly increasing our understanding of what’s working to boost student outcomes.

Yet the vast majority of such studies rely on state “administrative data”—information that is collected to enable our systems to keep humming along, but that can also be happily recycled as markers of various inputs or programs whose effectiveness might be studied. Lots of this is related to teacher characteristics—their years of experience, race, training, and credentials. Other data captures bits of the student experience—their attendance patterns, course-taking habits, family background—and that of their peers.

This is all well and good, but it's still very limited. We end up studying the shadow of educational practice rather than the real thing. What we don't see is what's actually going on in the classroom—the day-to-day work of teachers and their students: the curriculum, the assignments, the marks students receive, the quality of instruction itself. We simply don't know what kids do all day: the books they read, the tasks they're asked to perform, the textbooks teachers use (or whether those are used at all or sit unopened in the closet), and whether programs are implemented with fidelity, haphazardly, or not at all.

Examining practice has always been a difficult and expensive proposition. The most respected approach involves putting lots of trained observers—often graduate students—in the back of classrooms. There, they typically watch closely and code various aspects of teaching and learning, or collect video and spend innumerable hours coding it by hand. This is incredibly labor-intensive and costs gobs of money, so it is relatively rare.

Alternatives to observational studies are much less satisfying. The most common is to survey teachers about their classroom practices or curricula, as is done with the background questionnaires given to teachers as part of the National Assessment of Educational Progress (NAEP). Though useful, these types of surveys have big limitations, as they rely on teachers to be accurate reporters of their own practice—which is tough even with the best of intentions. It's also often unclear whom to survey to get certain kinds of information.

So that’s the challenge: We lack the systems to collect detailed information about classroom practice that might help us learn what’s working and what’s not, and inform changes in direction at all levels of governance.

Thankfully, there are potential solutions. I see three:

  1. Take advantage of data already being collected by online learning providers and services, such as Google Classroom, to gain insights into our schools;
  2. Systematically collect a sample of student assignments, complete with teacher feedback, to learn more about the “enacted curriculum,” its level of challenge, and its variation; and
  3. Use video or audio recording technology in a small sample of schools to better understand instructional practice in America today.

For more, see