Reading Recovery Meta-Analysis

I wanted to examine the efficacy of the popular Balanced Literacy program Reading Recovery. There existed two previous meta-analyses on the topic, both by D’Agostino, Et al. One in 2004, which did not calculate effect sizes and one in 2016, which found a mean effect size (ES) of .59, which is high moderate. D’ Agostino Et Al also calculated the ES for several specific outcomes, which can be seen in the graph below.

The D’Agostino ES surprised me, as it seemed much higher than would be expected. Reading Recovery is a Balanced Literacy intervention, and meta-analysis has consistently shown Balanced Literacy interventions to have a low effect. For example, a meta-analysis by Graham et al. found an ES for Balanced Literacy of .36, which is quite low compared to Hattie’s ES of .60 for phonics. Similarly, my recent meta-analysis of Fountas and Pinnell’s LLI found an ES of .33. That being said, my natural instinct was to accept the .59 ES found by D’Agostino. In my experience, people are generally too suspicious of experimental data. People often interpret data through a lens of cognitive dissonance, i.e., if the data supports their preconceived notions they accept it, whereas if it goes against their preconceived notions they assume it was doctored by shadowy figures in a smoke-filled room. That being said, I decided to redo the D’Agostino meta-analysis, not to check their data, but rather to see if I could find any additional insights. Unfortunately, I did find some very strange anomalies, and my calculated ES was much lower.

This is not to say that D’Agostino intentionally doctored his data, or to cast doubt on his academic integrity. The reality is that his paper was peer reviewed, and this article is not! Moreover, the authors of the D’Agostino meta-analysis are far more qualified than I am. On the other hand, I noticed many questionable inclusion choices, some inappropriately calculated effect sizes, and some mysteriously missing papers, all of which leave me with concerns.

 

The D’Agostino meta-analysis inclusion criteria required studies to be peer reviewed, to have a sufficient sample and a comparison group, and to include effect sizes or raw data. On its surface, this all sounds very reasonable. However, most of the research on Reading Recovery appears to have been done by the people selling Reading Recovery or by supportive organizations. Unfortunately, the majority of this research is locked behind Balanced Literacy specific journals that I do not have access to. Of the remaining sponsored studies, most of the website links have mysteriously disappeared. Indeed, several of the links cited in the D’Agostino meta-analysis, or found on Google for Reading Recovery studies, have been deleted from the Reading Recovery website where they were originally hosted.

 

Of the 16 studies listed in the D’Agostino meta-analysis, I was only able to get access to 5. However, I had to exclude 2 of these studies, Schwartz (2005) and Gardner (1998), because, while they had comparison groups, they did not have an experimental control group, and their effect sizes were calculated from pre-test/post-test gains rather than as true between-groups effect sizes.

Luckily, I was able to supplement this research with additional studies, including Sirinides (2018), Shanahan (1995), and Holliman (2013). The Holliman paper reported effect sizes more in line with the D’Agostino mean effect size, with a Cohen’s d of .49; however, after reading the paper, I realized the effect size was inappropriately calculated. The original authors calculated this effect size as a pre-test/post-test gain and not in comparison to their control group. I recalculated the effect size correctly (using a Hedges’ g formula, to match the other effect sizes in this article) and got a mean ES of .19. The Shanahan paper did not contain experimental data gathered by Shanahan himself, but it did include some of the raw data from the 1988 Pinnell paper and the 1988 DeFord paper, which were actually in the original meta-analysis. This allowed me to calculate several additional effect sizes.
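To make the distinction concrete, below is a minimal sketch of the between-groups Hedges’ g calculation I am describing. The means, standard deviations, and sample sizes in the example call are hypothetical placeholders, not the actual figures from the Holliman paper.

```python
import math

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Between-groups standardized mean difference (Hedges' g):
    treatment vs. control at post-test, with the small-sample correction."""
    # Pooled standard deviation across the two groups
    sd_pooled = math.sqrt(
        ((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2) / (n_t + n_c - 2)
    )
    d = (mean_t - mean_c) / sd_pooled   # Cohen's d, treatment vs. control
    j = 1 - 3 / (4 * (n_t + n_c) - 9)   # Hedges' small-sample correction factor
    return d * j

# Hypothetical post-test values, for illustration only (not Holliman's data):
print(round(hedges_g(mean_t=52.0, mean_c=47.0, sd_t=15.0, sd_c=16.0, n_t=100, n_c=100), 2))
```

The key point is that the numerator compares the treatment group to the control group at post-test; a pre-test/post-test gain in the numerator would instead capture ordinary growth plus the intervention, which is why it inflates the ES.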

 

Results:

Studies Included: 

 

In order to include a study in my meta-analysis, I made sure it had a control group, examined the efficacy of Reading Recovery, and either calculated its own effect sizes or included the raw data for me to calculate them. In total, I was able to find 7 studies:

 

Studies: 

  1. Sirinides et al. wrote their paper in 2018. The paper was a randomized controlled trial (RCT) and included 1490 grade 1 students. Each student was given an average of 40 hours of instruction.

  2. Burroughs et al. wrote their paper in 2008, and it included 291 grade 1 students. The experiment was 1 year long. One criticism of this paper is that the control group received no intervention, which means we are essentially comparing extra Balanced Literacy support to no support at all. I do not think this is a fair comparison; even so, the study still only found a low effect size.

  3. Hurry et al. wrote their paper in 2007, and it included 400 elementary-age students, who received 6.6 hours of instruction. Interestingly, this study compared both Reading Recovery and phonics to a control group. While the Reading Recovery group did outperform the control group, the phonics group outperformed it by a much larger margin.

  4. Holliman et al. wrote their paper in 2013 and had a sample size of 240 students. This paper was quasi-experimental.

  5. Pinnell et al. wrote a paper in 1990 and had a sample of 325 grade 1 students. The students received 75 hours of instruction.

  6. Pinnell et al. wrote this paper in 1990 and had a sample of 295 grades 1-3 students. The experiment lasted one year. As with the DeFord paper, I did not have primary access to the original paper and got the data from the 1995 Shanahan paper. My SD was therefore based on Shanahan's listed calculation and not the original authors'. That being said, the ES for both studies is within a similar range to all the other effect sizes found in this analysis, albeit substantially lower than the ones found in the D’Agostino paper.

  7. DeFord et al. wrote this paper in 1988. The sample included 490 grades 1-3 students. The experiment lasted one year.
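For transparency, here is a minimal sketch of how per-study effect sizes can be pooled into a single mean ES. It assumes a simple sample-size-weighted average; the g values and sample sizes in the example call are placeholders, not the actual results of the studies listed above, and a full meta-analysis would more commonly use inverse-variance weights.

```python
def pooled_mean_es(effect_sizes, sample_sizes):
    """Sample-size-weighted mean effect size across studies.
    (Inverse-variance weighting under a fixed- or random-effects
    model is the more common choice in formal meta-analysis.)"""
    total_n = sum(sample_sizes)
    return sum(g * n for g, n in zip(effect_sizes, sample_sizes)) / total_n

# Placeholder effect sizes and sample sizes, for illustration only:
print(round(pooled_mean_es([0.20, 0.35, 0.45], [250, 300, 400]), 2))
```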

 

 

Discussion:

The results of my meta-analysis suggest a small but positive impact for Reading Recovery. However, this benefit is of questionable significance. An ES of .20 is typically treated as the baseline for a meaningful effect, as this is roughly the average ES of a placebo intervention, and my found ES of .37 sits only modestly above that baseline. This would suggest that Reading Recovery may be better for struggling readers than no intervention. However, the impact of Reading Recovery is lower than the average impact of an education intervention, as well as that of Leveled Literacy Instruction, Orton-Gillingham approaches, Balanced Literacy overall, and phonics overall. This could be considered problematic from a cost-benefit perspective, as the program can cost up to $9,000 per student.

 

That being said, it may be unfair to compare Reading Recovery to average education interventions, or to phonics overall, as it is specifically an intervention for struggling readers, and as previously mentioned in other articles, the ES for struggling-reader interventions tends to be lower. On the other hand, the Reading Recovery program automatically excludes the weakest readers, which inflates its results. Another thing to consider is that Reading Recovery is a small group instruction method, and given that the found ES is so small, we have to ask whether the found ES reflects the benefit of Reading Recovery itself or simply the benefit of small group instruction. For example, according to the NRP paper, small group instruction in phonics on average yielded an ES of .43.

 

My results should be interpreted with caution, as they are very dissimilar from previous meta-analyses done on the topic. If D’Agostino et al. had calculated the studies our analyses share the same way I did, the studies not included in my analysis would have to average an ES of roughly .80 for their overall mean to hold, which would diverge significantly from the effect sizes found in other Balanced Literacy meta-analyses. That being said, my assumption is that the Reading Recovery studies done by the Reading Recovery organization (which I do not have access to) were likely more positive. Studies sponsored by proponents of a specific program usually do produce higher results, as I have noted in multiple other articles. I am also assuming that D’Agostino et al. accepted some of the pre-test/post-test calculations that I excluded as inappropriate. These differences may account for the bulk of the difference in our statistical analyses.
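As a rough illustration of that back-calculation, the sketch below assumes the overall mean is a simple unweighted average across studies. The overlap count in the example call is hypothetical, and the actual figure depends on exactly which studies overlap and how they were weighted, so this will not reproduce the .80 figure exactly.

```python
def implied_mean_es(overall_es, n_total_studies, shared_es, n_shared_studies):
    """Back out the mean ES the non-shared studies would need in order to
    produce the reported overall mean, assuming an unweighted average."""
    n_other = n_total_studies - n_shared_studies
    return (overall_es * n_total_studies - shared_es * n_shared_studies) / n_other

# Illustrative inputs only: a 16-study mean of .59 versus a .37 mean for a
# hypothetical 5 shared studies.
print(round(implied_mean_es(0.59, 16, 0.37, 5), 2))
```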
 

Final Grade: C-: The program principles have been shown to have a statistically insignificant ES.

Qualitative Grade: 1/10

The program includes the following evidence-based principles: comprehension instruction.

Written by Nathaniel Hansford

Last Edited 2022-04-10

References:

Colvin, R. Reading Recovery Revisited. The School Superintendent Association. Retrieved from https://aasa.org/SchoolAdministratorArticle.aspx?id=15712

 

D’Agostino, J. V., & Harmey, S. J. (2016). An International Meta-Analysis of Reading Recovery. Journal of Education for Students Placed at Risk (JESPAR), 21(1), 29–46. https://doi.org/10.1080/10824669.2015.1112746

 

Gardner, J., Sutherland, A., & Meenan-Strain, C. (1998). Reading Recovery in Northern Ireland: The first two years. Belfast, Ireland: Blackstaff.

 

Schwartz, R. (2005). Literacy learning of at-risk first-grade students in the Reading Recovery early intervention. Journal of Educational Psychology, 97(2), 257–267. https://doi.org/10.1037/0022-0663.97.2.257

 

Sirinides, P., Gray, A., & May, H. (2018). The Impacts of Reading Recovery at Scale: Results From the 4-Year i3 External Evaluation. Educational Evaluation and Policy Analysis, 40(3), 316–335. https://doi.org/10.3102/0162373718764828

 

Burroughs-Lange, S. (2008). Comparison of literacy progress of young children in London schools: A Reading Recovery follow-up study. London, UK: Institute of Education. Retrieved from https://www.ioe.ac.uk/Comparison_of_Literacy_Progress_of_Young_Children_in_London_Schools_-_A_Reading_Recovery_Follow_up_Study_.pdf

 

Hurry, J., & Sylva, K. (2007). Long-term outcomes of early reading intervention. Journal of Research in Reading, 30(3), 227–248. https://doi.org/10.1111/j.1467-9817.2007.00338.x

 

Pinnell, G. S., Lyons, C. A., DeFord, D., Bryk, A., & Seltzer, M. (1994). Comparing instructional models for the literacy education of high-risk first graders. Reading Research Quarterly, 29. https://doi.org/10.2307/747736

 

Holliman, A. J., & Hurry, J. (2013). The effects of Reading Recovery on children's literacy progress and Special Educational Needs status: A three-year follow-up study. Educational Psychology, 33(6), 719–733.

 

Shanahan, T., & Barr, R. (1995). Reading Recovery: An independent evaluation of the effects of an early instructional intervention for at-risk learners. Reading Research Quarterly, 30, 958–996. https://doi.org/10.2307/748206

 

Lyons, C. A. (1988). Reading Recovery: Early intervention for at-risk first graders (Educational Research Service Monograph). Arlington, VA: Educational Research Service. (ERIC Document Reproduction Service No. ED303790).

 

DeFord, D., Pinnell, G. S., Lyons, C. A., & Young, P. (1987). Reading Recovery program: Report of the follow-up studies (Vol. VII). Columbus, OH: The Ohio State University.