A Review Of: The Williams 2022 Meta-Analysis 

In this article I review a new meta-analysis by Williams et al. (2022) on which study design factors have the biggest impact on math intervention results. Their meta-analysis looked at 191 randomized controlled trials conducted over the last 25 years. Using a random effects analysis, they found a mean effect size of 0.31 for curriculum interventions, 0.32 for pedagogical interventions, and 0.35 for supplemental time interventions. These results suggest that, on average, interventions relating to pedagogy, curriculum, and supplemental time all produce small but statistically similar beneficial results.


The authors also conducted a moderator analysis, which in my opinion produced far more interesting results, as can be seen in the chart below.

The above data showed the strongest results for supplemental instruction, frequent ongoing teacher training, and teacher-led instruction. These findings are important for schools to consider, because they show that while evidence-based changes to school pedagogy and curriculum are important, it is also important to:

- increase supplemental instruction time for struggling students

- provide frequent, ongoing training for teachers

- invest in teacher-based interventions over software-based interventions

Interestingly, this study also showed that the vast majority of math intervention studies found a positive effect, as shown in the following author quote: "[The probability that] a random mathematics intervention effect has a positive impact is 75%, and the probability that the intervention effect is at least 0.10 and 0.25 standard deviations is 68% and 55%, respectively" (Williams et al., 2022).

The fact that most education studies show a positive impact is something that has been highlighted by both John Hattie and Dylan Wiliam. On its face this phenomenon appears counterintuitive, as experiments often test opposing pedagogies, and one would assume that the ratio of positive to negative effects would be more neutral. However, researchers are often very passionate about their pedagogical beliefs, and sometimes that bias can unintentionally tilt research outcomes in a positive direction. Moreover, as Dylan Wiliam has noted, findings that are null or negative are sometimes not published by their authors, leading to a positivity bias within the research.

Overall, these findings highlight the need to consider the magnitude of experimental findings, not just whether they were positive or negative. In other words, it is not enough to know that research outcomes were positive; we need to know how positive they were to determine the likelihood that an intervention will yield above-average outcomes in actual practice.
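To see where probabilities like these come from: a random effects model treats the true intervention effects as a distribution rather than a single number, and the probability that a randomly drawn intervention exceeds some threshold follows from that distribution's cumulative density. The sketch below assumes the true effects are normally distributed with mean 0.31 (the pooled curriculum effect reported above); the between-study standard deviation tau = 0.46 is a hypothetical value, back-solved here so that P(effect > 0) is roughly 75%, and is not a figure taken from the paper.

```python
import math

def p_effect_above(threshold, mu, tau):
    """P(true effect > threshold), assuming true effects are
    normally distributed with mean mu and between-study SD tau."""
    z = (threshold - mu) / tau
    # Survival function of the normal distribution via erf
    return 0.5 * (1 - math.erf(z / math.sqrt(2)))

# mu = 0.31 is the pooled mean effect reported above;
# tau = 0.46 is a HYPOTHETICAL heterogeneity value chosen to
# illustrate how the authors' probabilities could arise.
mu, tau = 0.31, 0.46

print(round(p_effect_above(0.00, mu, tau), 2))  # ~0.75
print(round(p_effect_above(0.10, mu, tau), 2))  # ~0.68
print(round(p_effect_above(0.25, mu, tau), 2))  # ~0.55
```

Note how much the heterogeneity matters: even with a mean effect of 0.31, a wide spread of true effects means a meaningful share of interventions fall below 0.10 or even below zero, which is exactly why magnitude, not just sign, should drive decisions.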

Written by Nathaniel Hansford

Last Edited 2022-08-05



  1. Ryan Williams, Martyna Citkowicz, David I. Miller, Jim Lindsay & Kirk Walters (2022) Heterogeneity in Mathematics Intervention Effects: Evidence from a Meta-Analysis of 191 Randomized Experiments, Journal of Research on Educational Effectiveness, 15:3, 584-634, DOI: 10.1080/19345747.2021.2009072