Reading Recovery Meta-Analysis

**Please note: A major update will be made to this article over the next few days, as we have added 8 new studies and are reviewing all calculations.**

I wanted to examine the efficacy of the popular Balanced Literacy program Reading Recovery. There were two previous meta-analyses on the topic, both by D’Agostino et al.: one in 2004, which did not calculate effect sizes, and one in 2016, which found a mean effect size (ES) of .59, which is moderately high. D’Agostino et al. also calculated the ES for several specific outcomes, which can be seen in the graph below.

The D’Agostino ES surprised me, as it seemed much higher than would be expected. Reading Recovery is a Balanced Literacy intervention, and meta-analyses have consistently shown Balanced Literacy interventions to have a low effect. For example, a meta-analysis by Graham et al. found an ES for Balanced Literacy of .36, which is lower than the mean I have found for phonics interventions (.45), the NRP found (.45), and John Hattie has found (.57). Similarly, my recent meta-analysis of Fountas and Pinnell’s LLI found an ES of .34. That being said, my natural instinct was to accept the .59 ES found by D’Agostino. In my experience, people are generally too suspicious of experimental data. People often interpret data through a lens of cognitive dissonance, i.e., if the data supports their preconceived notions they accept it, whereas if it goes against their preconceived notions, they assume it was doctored by shadowy figures in a smoke-filled room. That being said, I decided to redo the D’Agostino meta-analysis, not to check their data, but rather to see if I could find any additional insights. Unfortunately, I did find some very strange anomalies; moreover, my calculated ES was much lower.

This is not to say that D’Agostino intentionally doctored his data, or to cast doubt on his academic integrity. The reality is, his paper was peer reviewed, and this article is not! Moreover, the writers of the D’Agostino meta-analysis are far more qualified than myself. On the other hand, I noticed many questionable inclusion choices, some inappropriately calculated effect sizes, and some mysteriously missing papers that leave me with concerns.

The D’Agostino meta-analysis inclusion criteria required studies to be peer reviewed, to have a sufficient sample and a comparison group, and to include effect sizes or raw data. On its surface, this all sounds very reasonable. However, most of the research on Reading Recovery appears to have been done by the people selling Reading Recovery or by supportive organizations. Unfortunately, the majority of this research is behind Balanced-Literacy-specific journals that I do not have access to. Of the remaining sponsored studies, most of the website links have mysteriously disappeared. Indeed, several of the links found in the D’Agostino meta-analysis, or even on Google for Reading Recovery studies, have been deleted from the Reading Recovery website, where they were originally hosted.

Of the 16 studies listed in the D’Agostino meta-analysis, I was only able to get access to 5. However, I had to exclude 2 of these studies, Schwartz (2005) and Gardner (1998), because, while they had comparison groups, they did not have an experimental control group, and their effect sizes were calculated as pre-test/post-test gains rather than as true treatment-versus-control effect sizes. Luckily, I was able to supplement this research with additional studies I found, including Sirinides (2018) and Holliman (2013). The Holliman paper reported effect sizes more in line with the D’Agostino mean effect size, with a Cohen’s d of .49; however, after reading the paper, I realized the effect size was inappropriately calculated.
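To make the distinction concrete, here is a minimal sketch of why a pre-test/post-test "effect size" is not equivalent to a treatment-versus-control effect size. All numbers are hypothetical for illustration and are not taken from any of the studies discussed here.

```python
# A pre/post "effect size" compares a group to ITSELF before and after the
# intervention, so normal maturation and regression to the mean inflate it.
# A proper effect size compares the treatment group's post-test to a
# CONTROL group's post-test.

def cohens_d(mean_a: float, mean_b: float, sd_a: float, sd_b: float,
             n_a: int, n_b: int) -> float:
    """Cohen's d using a pooled standard deviation."""
    pooled_sd = (((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2)
                 / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# Hypothetical data:
# Treatment group: pre-test mean 20 -> post-test mean 28 (SD 10, n = 50)
# Control group:   post-test mean 24 (SD 10, n = 50)
prepost_d = cohens_d(28, 20, 10, 10, 50, 50)  # gain vs. own pre-test
proper_d  = cohens_d(28, 24, 10, 10, 50, 50)  # gain vs. control post-test

print(round(prepost_d, 2))  # 0.8 -- looks large
print(round(proper_d, 2))   # 0.4 -- half the size once a control is used
```

With these made-up numbers, the pre/post calculation doubles the apparent effect, which is why I treated such calculations as grounds for exclusion.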


Studies Included: 


In order to include a study in my meta-analysis, I made sure it had a control group, examined the efficacy of Reading Recovery, and either calculated its own effect sizes or included the raw data for me to calculate them. In total, I was able to find 7 studies:



  1. Sirinides et al. (2018) conducted an RCT that included 1,490 grade 1 students. Each student was given an average of 40 hours of instruction.

  2. Burroughs et al. (2008) included 291 grade 1 students, and the experiment was 1 year long. One criticism of this paper is that the control group received no intervention, which means we are essentially comparing giving students extra Balanced Literacy support to giving them no support. I am not of the impression that this is a fair comparison; even so, it still only found a low effect size.

  3. Holliman et al. (2013) had a sample size of 240 students. This paper was quasi-experimental.

  4. Pinnell et al. (1994) had a sample of 325 grade 1 students. The students received 75 hours of instruction.




The results of my meta-analysis suggest a small but positive impact for Reading Recovery. However, this benefit may not be practically significant. An ES of .20 is typically seen as the baseline for significance, as this is the average ES of a placebo intervention, and my found ES was only .37. This would suggest that Reading Recovery may be better for struggling readers than no intervention. However, the impact of Reading Recovery is lower than the average impact of an education intervention, of Leveled Literacy Intervention, of Orton-Gillingham approaches, of Balanced Literacy overall, and of phonics overall. This could be considered problematic from a cost-benefit perspective, as the program can cost up to $9,000 per student.


That being said, it may be unfair to compare Reading Recovery to average education interventions, or to phonics overall, as it is specifically an intervention for struggling readers, and as previously mentioned in other articles, the ES for struggling-reader interventions tends to be lower. On the other hand, the Reading Recovery program automatically excludes the weakest readers, which inflates its results. Another thing to consider is that Reading Recovery is a small-group instruction method, and given that the found ES is so small, we have to consider whether the found ES reflects the benefit of Reading Recovery or the benefit of small-group instruction. For example, according to the NRP paper, small-group instruction of phonics on average yielded an ES of .45.


My results should be interpreted with caution, as they are very dissimilar from previous meta-analyses done on the topic. If D’Agostino et al. interpreted the studies our analyses shared with the same calculations, they would have to have found a mean ES of .80 for the studies not included in my analysis, which would diverge significantly from the effect sizes found in other Balanced Literacy meta-analyses. That being said, my assumption is that the Reading Recovery studies done by the Reading Recovery organization (which I do not have access to) were likely more positive. Studies sponsored by a specific organization usually do produce higher results, as I have noted in multiple other articles. I am also assuming that D’Agostino et al. accepted some of the pre-test/post-test calculations that I excluded as inappropriate. These differences may make up the bulk of the difference in our statistical analyses.

Final Grade: C-: The program has been shown to have a low ES that may not be practically significant.

Qualitative Grade: 1/10

The program includes the following evidence based principles: comprehension instruction. 

Written by Nathaniel Hansford

Last Edited 2022-07-24


Colvin, R. Reading Recovery Revisited. The School Superintendent Association.


D’Agostino, J. V., & Harmey, S. J. (2016). An International Meta-Analysis of Reading Recovery. Journal of Education for Students Placed at Risk (JESPAR), 21(1), 29–46.


Gardner, J., Sutherland, A., & Meenan-Strain, C. (1998). Reading Recovery in Northern Ireland: The first two years. Belfast, Ireland: Blackstaff.


Schwartz, R. M. (2005). Literacy Learning of At-Risk First-Grade Students in the Reading Recovery Early Intervention. Journal of Educational Psychology, 97(2), 257–267. doi:10.1037/0022-0663.97.2.257


Sirinides, P., Gray, A., & May, H. (2018). The Impacts of Reading Recovery at Scale: Results From the 4-Year i3 External Evaluation. Educational Evaluation and Policy Analysis, 40(3), 316–335.


Burroughs-Lange, S. (2008). Comparison of literacy progress of young children in London Schools: A RR Follow-Up Study. London, UK: Institute of Education.


Hurry, J., & Sylva, K. (2007). Long-term outcomes of early reading intervention. Journal of Research in Reading, 30(3), 227–248.


Pinnell, G. S., Lyons, C. A., DeFord, D., Bryk, A., & Seltzer, M. (1994). Comparing Instructional Models for the Literacy Education of High-Risk First Graders. Reading Research Quarterly, 29(1). doi:10.2307/747736


Holliman, A. J., & Hurry, J. (2013). The effects of Reading Recovery on children's literacy progress and Special Educational Needs status: A three-year follow-up study. Educational Psychology, 33(6), 719–733.


Shanahan, T., & Barr, R. (1995). Reading Recovery: an independent evaluation of the effects of an early instructional intervention for at-risk learners. Reading Research Quarterly, 30, 958–996.


Lyons, C. A. (1988). Reading Recovery: Early intervention for at-risk first graders (Educational Research Service Monograph). Arlington, VA: Educational Research Service. (ERIC Document Reproduction Service No. ED303790)


DeFord, D., Pinnell, G. S., Lyons, C. A., & Young, P. (1987). Reading Recovery program: Report of the follow-up studies (Vol. VII). Columbus, OH: The Ohio State University.