Word problems are a basic math intervention that arguably belongs in almost every math class; however, we also have to consider what percentage of a program should be word-problem based, and whether that percentage should vary by grade or strand. I have seen some modern scholars advocate for mostly word problems and little to no number-based problems, whereas, when I was in school, I'm fairly certain more than 90% of my instruction was number-problem based. Number problems are ultimately an abstract exercise, as math problems in real life are rarely presented neatly in columns, with the correct numbers and operation signs already supplied. For this reason, word problems are crucial for helping students learn to apply the abstract math concepts and procedures they study to concrete situations.
That being said, number-based problems are quicker and easier for students to solve, which allows students to practice and develop their procedural knowledge and computational fluency more efficiently. Number-based problems also have no language component, which means that students' reading abilities do not affect their ability to solve them. With these factors in mind, I would have hypothesized that word problems would be more useful for older students who have already developed their language abilities and basic computational fluency.
Jennifer Kong conducted a meta-analysis of 19 studies on the topic in 2020, with seemingly stringent inclusion criteria.
The results of this meta-analysis were extremely impressive. However, when I looked at the results for each individual study included, I noticed a troubling problem. Nine of the 19 studies were conducted by the same research team, Fuchs et al., and the effect sizes for those studies were on average 11.81 times higher than those of studies not conducted by Fuchs. Indeed, several of the Fuchs effect sizes were above 3, which almost never happens unless there has been a calculation error or the effect size was based on a study with a very small sample. The mean ES overall was 1.02, which is very high but plausible; however, if we remove the studies by Fuchs et al., we get a mean ES of .18, which is generally considered a small effect.
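To make the arithmetic concrete, the gap can be illustrated with a quick leave-one-author-out check. Note that the effect sizes below are hypothetical placeholders, chosen only so that the group means match the figures quoted above (1.02 overall, .18 excluding Fuchs); they are not the actual values reported in the meta-analysis.

```python
# Illustrative leave-one-author-out sensitivity check for a meta-analysis.
# HYPOTHETICAL effect sizes: chosen so the means match the figures in the
# text (1.02 overall, 0.18 excluding Fuchs), not the real study values.

fuchs_es = [1.0, 1.2, 1.5, 1.8, 2.0, 2.2, 2.5, 3.0, 2.38]                # 9 studies
other_es = [0.10, 0.20, 0.15, 0.25, 0.10, 0.20, 0.30, 0.05, 0.25, 0.20]  # 10 studies

def mean(xs):
    return sum(xs) / len(xs)

overall = mean(fuchs_es + other_es)  # unweighted mean across all 19 studies
without_fuchs = mean(other_es)       # mean after dropping one author's studies
fuchs_only = mean(fuchs_es)          # mean for that author's studies alone

print(f"Overall mean ES:        {overall:.2f}")        # 1.02
print(f"Excluding Fuchs et al.: {without_fuchs:.2f}")  # 0.18
print(f"Fuchs et al. only:      {fuchs_only:.2f}")     # 1.95
```

This is the simplest possible version of the check (an unweighted mean); published meta-analyses typically weight each study by its precision, but the same remove-and-recompute logic applies.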
As studies rarely have effect sizes above 1.5 in the absence of a calculation error, I decided to examine the studies with effect sizes above 3. While, to the best of my knowledge, there were no calculation errors, all of these studies used non-standardized tests that awarded marks for multiple things other than the student's answer, including communication skills, the ability to identify the problem type, the ability to properly apply schema skills, and the use of diagrams. While one could argue that this type of assessment is instructionally appropriate, I do not think it is appropriate within this type of research, as ultimately what we are trying to determine is whether word-problem instruction improves students' ability to solve problems. Moreover, as the experimental groups received schema instruction and the control groups did not, it seems unfair to give the experimental groups additional marks for being able to apply schema knowledge. Additionally, not all of these studies specifically examined the efficacy of word-problem instruction; rather, some examined the use of schema instruction to improve word-problem outcomes. Indeed, all of the control groups also received word-problem instruction. For these reasons, I am unsure whether these three studies should have been included in the meta-analysis, given their obvious outlier status and their methodological problems.
I reached out to Dr. Peltier, who shares my interest in this topic, and he referred me to another meta-analysis, published in 2022 by Jonte Myers, which in my opinion was much better conducted. That meta-analysis examined 24 studies on the topic, and while it found a similar mean ES, it controlled for outliers and found much lower variability in its results. Moreover, it specifically moderated for studies done by Fuchs et al. and found a much more typical mean ES of .38 for those studies, giving me far greater confidence in its results.
This meta-analysis clearly does not support my bias, as we see the highest impacts of word problems for lower elementary students, perhaps because word problems help young students develop their concrete mathematical knowledge. We also see much larger results for small-group instruction than for large-group instruction (which I assume means whole-class instruction), possibly suggesting that word problems are best used in small groups rather than with the whole class at once. Interestingly, word problems show their highest results for number sense and for single-step problems, while multi-step problems and fraction word problems showed low yields. This surprised me, as multi-step word problems are often advocated for, even within the primary grades. Perhaps multi-step problems are best saved for later grades, when students have better developed their computational fluency and procedural knowledge, although multi-step problems are obviously necessary at some point for students to fully develop their concrete knowledge and application skills.
Written by Nathaniel Hansford
Last Edited, 2022-03-06
Kong, J. E., et al. (2021). Word-Problem-Solving Interventions for Elementary Students With Learning Disabilities: A Selective Meta-Analysis of the Literature. Learning Disability Quarterly, 44(4), 248–260. https://doi.org/10.1177/0731948721994843
Myers, J. (2022). A Meta-Analysis of Mathematics Word-Problem Solving Interventions for Elementary Students Who Evidence Mathematics Difficulties. Review of Educational Research. https://doi.org/10.3102/00346543211070049
Fuchs, L. S., Fuchs, D., & Prentice, K. (2004). Responsiveness to mathematical problem-solving instruction comparing students at risk of mathematics disability with and without risk of reading disability. Journal of Learning Disabilities, 37, 293–306. https://doi.org/10.1177/00222194040370040201
Fuchs, L. S., Fuchs, D., Prentice, K., Hamlett, C. L., Finelli, R., & Courey, S. J. (2004). Enhancing mathematical problem solving among third-grade students with schema-based instruction. Journal of Educational Psychology, 96(4), 635–647.
Fuchs, L. S., Seethaler, P. M., Powell, S. R., Fuchs, D., Hamlett, C. L., & Fletcher, J. M. (2008). Effects of preventative tutoring on the mathematical problem solving of third-grade students with math and reading difficulties. Exceptional Children, 74,