
A Second Look at Reading First

Last week the Institute of Education Sciences released the first report from an ongoing national evaluation of Reading First. And, as a front page Washington Post story (and plenty of other newspaper articles across the country) reported, the news wasn’t good. Researchers found no evidence of statistically significant improvements in the reading comprehension of students in Reading First schools, compared to students in similar schools that did not receive Reading First funding. Since the point of Reading First is to improve students’ literacy skills, that’s a disappointing result.

Reporters, Reading First critics, and policymakers quickly concluded that Reading First is not working and needs to be either revamped or scrapped entirely. For instance, House Education and Labor Committee Chairman George Miller (D-Calif.), who has been highly critical of last year’s scandals involving the Bush administration’s management of the program, stated last week that this report “shows that we need to seriously re-examine this program and figure out how to make it work better for students.”

Certainly, the research raises serious questions about Reading First’s effectiveness, but it’s worth taking a closer look before writing the program off entirely. Several points are particularly noteworthy:

First, while the researchers found no evidence that students in Reading First schools had better achievement than those in non-Reading First schools, that finding was not consistent across the whole sample of schools studied. Researchers looked at Reading First schools (and similar non-Reading First schools) from two cohorts of Reading First awards: schools that first received funding between April and December 2003, and schools that received funding between January and August 2004. The study found no evidence that Reading First improved student achievement in the schools that received earlier awards—in fact, analysis suggests that Reading First had, if anything, a negative (though non-significant) impact on children’s reading achievement in these schools. In contrast, the study did find statistically significant improvements in the percentage of students reading at grade level in the second cohort of schools, those that received funding later.

Does this matter? It’s tough to say. The researchers suggest two possible reasons Reading First could have improved student achievement in the later school cohort but not the earlier one: First, schools in the later cohort received more Reading First funding per pupil than those in the earlier cohort. Second, schools in the later cohort had lower student achievement in reading to start with, so they had more room to improve. Unfortunately, the researchers didn’t have sufficient data to investigate whether either of these differences—or other potential factors—caused the differences in student achievement impacts between the two sets of schools, so we have no way of knowing whether the statistically significant improvements for the later set of schools matter.

Schools that received awards earlier didn’t just fail to show evidence of improved student achievement, though. Researchers also found no evidence that Reading First increased the time teachers in these schools devoted to the 5 essential components of effective reading instruction: phonemic awareness, phonics, vocabulary, fluency, and comprehension—these 5 components are the cornerstone of the Reading First program. In contrast, researchers found evidence of both increased instructional time devoted to the 5 components, and improved reading comprehension test scores in the later group of schools. That makes intuitive sense: Reading First is based on the idea that implementing the 5 components of effective reading instruction will improve student reading, so the program is unlikely to improve student achievement in schools where it doesn’t also cause teachers to increase time devoted to the 5 components.

That still doesn’t explain, though, why Reading First had a greater impact on teacher behavior in the schools that received awards later than it did in those that received earlier awards. Ultimately, we just have to hope that later reports will shed more light on the differences between earlier and later award schools.

That brings us to another important point—this report is only the first one from an ongoing evaluation, and may not capture the full picture. The researchers themselves caution:

“The evaluation design calls for three years of data collection. This report presents findings based on two years of data collection. While there is no prior research on the amount of time necessary for schools to have fully implemented the Reading First program, prior research on implementation of programs designed to improve student achievement through changing teachers’ instructional practices suggests that while changes in instruction may be evident sooner, changes in student achievement can take several years to appear. This holds particular salience for the Reading First program, which attempts to promote a comprehensive approach to reading instruction that persists from kindergarten through grade three. Some aspects of Reading First may be easy to implement quickly (i.e., purchase of new core reading programs and assessments, providing research-based professional development). Yet other aspects may require several years to implement effectively and consistently across the entire K-3 grade span (i.e. aligning curricula, instructional practices, and support services with the underlying principles of Reading First) to yield sustained improvement in student reading performance. Further, it will take four years of implementation before any students will have been able to experience Reading First funded activities as they progress from kindergarten through third grade.” (emphasis added)

In other words, it’s too early for this to be the last word on Reading First. Impatience with the speed at which educational improvement progresses is a common issue in education reform—and one that can cause real problems for reform efforts. When the federal government is investing a billion dollars annually in a program, it’s understandable—indeed, essential—to ask how that program is doing. But we also need to be cautious about evaluating programs too early, before they’ve been sufficiently implemented to show results. Policymakers should wait for the final report before making substantial changes to the Reading First program. Since NCLB reauthorization appears to be on hold until at least 2009, they have time.

Finally, we should ask whether the party that should really be declaring victory here is not Reading First’s critics, but E.D. Hirsch. This study focused on one indicator of children’s reading performance: student reading comprehension as assessed by the Stanford Achievement Test. The researchers did not assess children’s phonemic awareness, decoding ability, or fluency, for example. That makes sense because comprehension is, in the researchers’ words, “the essence of reading.” But it’s also problematic because, as Hirsch has argued passionately in recent years, reading comprehension is about much more than basic literacy skills. To comprehend, readers must also have a rich content knowledge that enables them to connect what they read to existing knowledge. (Hirsch is fond of citing an article describing a baseball game as an example here: Poor readers who know a lot about baseball will comprehend the article better than excellent readers who have never seen a baseball game.) Teachers observed in this study spent substantial time teaching children reading comprehension, but teaching comprehension strategies is not the same as equipping children with the content knowledge they need to understand what they read.

None of these points or questions changes the fact that a rigorous evaluation has, so far, found no clear evidence that Reading First improves students’ reading. Policymakers, parents, and the public should be aware of this fact and should be asking questions about it. But we should also consider the entire picture—including forthcoming reports—before writing the program off or making major changes to it.

Photo by flickr user Leo Reynolds, used under a Creative Commons License.


On improvements.

1. It is possible that teachers in other programs did other things to enhance early literacy scores.
2. The program, when initiated, required teachers to do something new. It is never easy to accomplish this, and it often requires a conceptual change on the part of teachers. So, it is possible that the second cohort benefited from all the efforts to train, encourage, etc., teachers to pay attention to the development of relevant literacy skills.
3. What evidence is there that the teachers actually delivered the requisite material? Further, is there any evidence that the children attended?