McGraw Hill Wonders Reading Program
Wonders is a basal reading program by McGraw Hill. The program has the components of a structured reading program, but places a heavy emphasis on authentic reading practice. According to the McGraw Hill website, the program includes explicit, scaffolded, and individualized instruction in phonemic awareness, phonics, fluency, spelling, vocabulary, comprehension, and handwriting. While the program has most of the components of an evidence-based program, it has recently been criticized for including too much content. Critics claim that its curriculum is too expansive for teachers to deliver properly within realistic time spans.
Ideally, a peer-reviewed meta-analysis is the best way to determine the efficacy of a pedagogical concept or program. However, to the best of my knowledge, no such meta-analysis exists for Wonders. I searched for studies on Wonders on the company website, Google, Education Source, and Sage Pub. I was able to identify eight studies on Wonders; for two of these, effect sizes had already been calculated. For the rest, I was unable to calculate effect sizes, as they lacked the required statistical data. One confounding factor in my research was that the word “wonders” appears in hundreds of articles on reading. It is therefore quite possible that I accidentally missed a study.
The first study was conducted by McGraw Hill, in 2016, within Champaign Community Schools. This study used a hypothetical control group, based on expected results rather than an actual comparison cohort. The study examined grades 1-5 and found a mean effect size of .05, which is negligible. The highest effect size was for Grade 5, at .14, which is also very small. A second study looked at California schools in 2016 that used the Wonders curriculum and compared them to the state average. For this study I calculated an effect size of .24, which is small. Overall, there appears to be little or no research evidence that the Wonders program improves education outcomes by a meaningful amount.
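For readers unfamiliar with the statistic, the effect sizes above are standardized mean differences (Cohen's d): the difference between the treatment and comparison group means, divided by their pooled standard deviation. The sketch below shows the standard calculation. The numbers in the usage line are purely illustrative assumptions, not the actual data from either study.

```python
from math import sqrt

def cohens_d(mean_t: float, sd_t: float, n_t: int,
             mean_c: float, sd_c: float, n_c: int) -> float:
    """Cohen's d: standardized mean difference between two groups.

    Uses the pooled standard deviation weighted by each group's
    degrees of freedom (n - 1).
    """
    pooled_sd = sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                     / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Illustrative example (hypothetical numbers, not study data):
# treatment mean 105 vs. comparison mean 100, both SD 15, n = 100 each.
d = cohens_d(105, 15, 100, 100, 15, 100)  # → 0.33, a "small" effect
```

By common convention, d around .2 is considered small, .5 medium, and .8 large, which is why the .05 and .24 values reported above read as negligible-to-small.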
As the research evidence for Wonders is weak, we cannot call it evidence-based. However, the program contains almost all of the most essential instructional focuses of an evidence-based program. Perhaps research outcomes are low simply because not enough research yet exists for the program to sufficiently prove its efficacy. Ultimately, it is much more difficult to determine from a research perspective why a program is or is not working than whether it is working. However, I wonder if the program’s research outcomes are so low because it places too much time emphasis on authentic reading practice and not enough on foundational instruction.
Final Grade: B
Most of the program's principles are well evidenced within the meta-analysis literature.
Qualitative Grade: 9/10
The program includes the following essential types of instruction: individualized, scaffolded, and direct instruction in phonemic awareness, phonics, spelling, vocabulary, fluency, and comprehension.
Disclaimer: Please note that this review is not peer-reviewed content. These reviews are independently conducted. Pedagogy Non Grata does not profit from conducting any program review found on this website.
Written by Nathaniel Hansford: teacher and lead writer for Pedagogy Non Grata
Last Edited 2022-07-24
McGraw Hill. (2016). Champaign Community Schools scores on NWEA MAP® reading assessment increased significantly from fall 2015 to spring 2016. Retrieved from <https://s3.amazonaws.com/ecommerce-prod.mheducation.com/unitas/school/explore/sites/wonders/efficacy-and-success-brochure.pdf>.
McGraw Hill. (2016). California ELA Test Score Analysis: Wonders Research Report. Retrieved from <https://s3.amazonaws.com/ecommerce-prod.mheducation.com/unitas/school/explore/sites/wonders/efficacy-and-success-brochure.pdf>.