Reflecting on NCLB: Are states playing by the same rules?

A new report from four researchers associated with Columbia University suggests that arcane rules, rather than any objective, standardized measure of AYP (adequate yearly progress), drive outcomes under NCLB.

Matt Di Carlo at the Shanker Blog posted recently about this important new report: “Fifty Ways to Leave a Child Behind: Idiosyncrasies and Discrepancies in States’ Implementation of NCLB”, which was written by Elizabeth Davidson, Randall Reback, Jonah Rockoff, and Heather L. Schwartz.

Di Carlo briefly describes the five key factors responsible for widely varying AYP results across states (in 2003, the first year of results, 32% of U.S. schools failed to make AYP, but the proportion ranged from 1% in Iowa to over 80% in Florida):

  1. Deviation from NCLB rules. During the early years of NCLB, a few states didn’t quite follow the law. (Note that this is the only one of the five factors that has been largely rectified.) In at least one case, the failure was due to simple human error: Iowa’s one percent AYP failure rate in 2003 appears to have been partly the result of a leave of absence taken by the staff member responsible for the data, who had suffered an injury. In other cases, states bent the guidelines set forth in the legislation. Texas, for instance, petitioned the U.S. Department of Education for flexibility on a rule that permitted a maximum of one percent of a school’s special education students to use alternative assessments. The petition was turned down, but the state went ahead with the plan anyway and, as a result, 22 percent of Texas schools that would otherwise have failed to make AYP in the first year actually made it.
  2. “Generosity” of confidence intervals. As is fairly well known, if just one of a school’s “accountable subgroups” (e.g., low-income students, students with disabilities, etc.) fails to meet proficiency targets (or “safe harbor”), the entire school does not make AYP. To account for the inevitable fact that, in some schools, these subgroups would consist of very few tested students, NCLB allowed states to apply “confidence intervals.” Basically, these adjustments meant that smaller subgroups (i.e., those consisting of fewer tested students in a given school) would effectively be required to meet lower targets. However, states were given flexibility in how much “leeway” they granted via these confidence intervals, and a few specified none at all. Florida, for example, did not use them, and thus a fairly large group of schools that would have made AYP had this rule been applied did not do so. (A simplified sketch of how confidence intervals and minimum subgroup sizes interact appears after this list.)
  3. Different targets across grade levels. States had the option of either setting the same proficiency targets for all grades or letting their targets vary by grade (and subject). Using the former system – the same targets for all grades – basically meant that schools serving particular grade configurations would have an advantage in making AYP (if their starting rates were higher) whereas others would have a disadvantage (if their starting rates were lower). For example, Pennsylvania set uniform targets, but their high schools’ starting rates were much lower, on average, than those of elementary schools. The end result was that 27 percent of the state’s high schools failed to make AYP in 2004, compared with just 7 percent of elementary schools. 
  4. Number of “accountable subgroups” and minimum sample size. As mentioned above, NCLB required schools to be held accountable for the performance of student subgroups. But states were given flexibility not only in how many subgroups they chose (and which ones), but also in setting minimum sample sizes for these subgroups to be “included” in AYP calculations. For example, schools with only a handful of students with disabilities in a given year could be exempted from having this subgroup count at all. As a rule, states that chose to include fewer subgroups in AYP, or set higher sample size requirements for their inclusion, tended to have lower failure rates, all else being equal. Once again, states varied in the choices they made, and this influenced their results. 
  5. Definition of “continuous enrollment.” Finally, states had to specify the rules by which mobile students (e.g., transfers) were or were not counted toward schools’ AYP calculations. Some states set more stringent enrollment requirements than others, which meant that they excluded more students from being counted in their testing results. For instance, Wisconsin’s rules excluded students who were not enrolled in late September of 2003 (the tests were administered in November 2003). Thus, fairly large proportions of students who took the test were not counted. To the degree excluded students’ performance was different from their “continuously enrolled” peers, these choices affected failure rates.
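
To make factors 2 and 4 concrete, here is a minimal sketch (in Python) of how a subgroup-based AYP check might work. The function names, the school data, the thresholds, and the confidence-interval formula below are illustrative assumptions for this post, not the actual rules any state used; the report itself is the source for how states really configured these parameters.

```python
import math

def subgroup_passes(n_proficient, n_tested, target, min_n, ci_multiplier):
    """Check one accountable subgroup against a proficiency target.

    min_n and ci_multiplier stand in for the state-chosen "knobs" described
    in factors 2 and 4 above; the values used here are purely illustrative.
    """
    if n_tested < min_n:
        return True  # subgroup too small to count toward AYP at all
    rate = n_proficient / n_tested
    # A confidence interval gives smaller subgroups more leeway:
    # the margin grows as n_tested shrinks.
    margin = ci_multiplier * math.sqrt(rate * (1 - rate) / n_tested)
    return rate + margin >= target

def school_makes_ayp(subgroups, target, min_n, ci_multiplier):
    # If any accountable subgroup misses its target, the whole school fails.
    return all(
        subgroup_passes(p, n, target, min_n, ci_multiplier)
        for p, n in subgroups
    )

# The same hypothetical school data under three hypothetical state configurations.
data = [(120, 200), (18, 35)]  # (proficient, tested) for two subgroups

print(school_makes_ayp(data, target=0.60, min_n=40, ci_multiplier=0))      # True: small subgroup excluded
print(school_makes_ayp(data, target=0.60, min_n=30, ci_multiplier=1.645))  # True: CI leeway covers the gap
print(school_makes_ayp(data, target=0.60, min_n=30, ci_multiplier=0))      # False: no CI, subgroup counts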

The three outputs (True, True, False) show that identical school data can pass or fail AYP depending solely on the configuration a state chose, which is exactly the kind of idiosyncrasy the report documents. It is also essential to remember that a state may have been stricter on one of these five factors while looser on another. In other words, each state combined the five factors differently, resulting in “many state-level NCLB configurations . . . being complex, sometimes inconsistent webs of rules that reflected varying incentives and priorities. Making things worse, the ESEA waivers that most states have submitted will only result in more heterogeneity.”

Given that more than a decade has passed since NCLB took effect, educators might expect the results of this major federal education reform to be clear, but even with this report’s detailed analysis, it remains very difficult to draw firm conclusions. If nothing else, the confusion surrounding the multiplicity of AYP measurement techniques, and the convoluted ways in which they interact, suggests that any reforms to teacher evaluations and Common Core testing must be carried out extremely carefully and implemented with meticulous attention to detail.

For more information, please visit the following website: http://shankerblog.org/?p=8191

Following is the link to the original paper: http://www.columbia.edu/~ekd2110/Fifty_Ways_4_5_2013.pdf
