What Math Lesson Plan Format is Best Practice?

Over the last several months, I have attempted to read every meta-analysis conducted on the subject of math instruction. One topic that is surprisingly seldom discussed is lesson plan format. When I was in teachers' college, it was hammered home that the three-part math lesson was the most evidence-based format: the teacher demonstrates the skill, the class practices the skill together, and then students practice individually. However, not only did I not find any meta-analyses on three-part math lessons, I did not find any meta-analyses on lesson planning at all. That being said, I did find one type of math lesson continuously mentioned within the context of other meta-analyses: CRA.

 

CRA stands for concrete, representational, and abstract. It is also sometimes referred to as CPA (concrete, pictorial, abstract), CSA (concrete, semi-concrete, abstract), or graduated instruction. With this format, teachers teach lessons in three stages: first with manipulatives, then with diagrams, and finally with numerical problems. With this lesson/unit model, teachers are essentially using an iterative approach to teach both conceptually and procedurally, an approach which Dr. Bethany Rittle-Johnson showed to provide synergistic benefits. Within the CRA framework, manipulatives are ideally used to link conceptual and abstract knowledge together for students. If you would like to learn more about how manipulatives can best be used to do this, check out this great article: https://www.aft.org/ae/fall2017/willingham

 

I reached out to Dr. Corey Peltier, who has conducted research in this area. He explained to me that there are essentially two types of CRA: (a) the CRA sequence and (b) the CRA framework (i.e., CRA-integrated, or CRAI). While both involve the same three stages of instruction, those stages can be taught across a unit (the CRA sequence) or all within one lesson (CRAI). Both are iterative teaching methodologies, but CRAI is more iterative, as it teaches the procedural and conceptual knowledge within the same lesson rather than the same unit.

 

There are, to the best of my knowledge, currently no peer-reviewed meta-analyses of the CRA method. While there was a systematic review by Bouck et al., they did not conduct a meta-analysis or calculate effect sizes; however, they did find the methodology to be evidence-based. Perhaps the reason no meta-analysis has been conducted is that the vast majority of CRA studies are single-case studies. I was able to find two RCT studies on the topic. The first was conducted by Butler et al. in 2003. It compared the CRA method to teaching with just diagrams and abstract practice (RA instruction), looking at teaching fractions to grade 6-8 students diagnosed with learning disabilities. The original authors used a Cohen's d effect size calculation to measure results; however, they used pre-test/post-test differences rather than differences between the two groups. I re-calculated the scores using the Hedges' g formula, based on the differences between the treatment group and the control group. This led to the below results.
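
For readers who want to reproduce this kind of re-calculation, here is a minimal sketch of a between-group Hedges' g computation: the post-test difference between groups, divided by the pooled standard deviation, with the standard small-sample correction. The numbers in the final line are illustrative placeholders, not the actual Butler et al. (2003) data.

```python
import math

def hedges_g(m_treat, m_ctrl, sd_treat, sd_ctrl, n_treat, n_ctrl):
    """Between-group Hedges' g: standardized mean difference on the
    post-test, multiplied by the small-sample correction factor J."""
    # Pooled standard deviation across the two groups
    sd_pooled = math.sqrt(((n_treat - 1) * sd_treat**2 +
                           (n_ctrl - 1) * sd_ctrl**2) /
                          (n_treat + n_ctrl - 2))
    d = (m_treat - m_ctrl) / sd_pooled          # Cohen's d
    j = 1 - 3 / (4 * (n_treat + n_ctrl) - 9)    # small-sample correction
    return d * j

# Illustrative placeholder values only -- not the Butler et al. data
print(round(hedges_g(m_treat=78, m_ctrl=72, sd_treat=10,
                     sd_ctrl=11, n_treat=26, n_ctrl=24), 2))
```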

The Butler study results were, on average, moderately low. However, given how similarly the treatment and control conditions were designed, I think that is to be expected, as the control group was essentially receiving identical instruction, minus the manipulatives. Realistically, it could be argued that this paper is not really measuring the impact of CRA, but rather the impact of manipulatives.

 

The second RCT study I found was by Morano et al. in 2020. This study compared CRA to CRAI with 28 grade 6 students with learning disabilities, looking specifically at fractions instruction. Instruction was given in groups of 4-6, 3-4 times a week, for 40 minutes per session. The results between groups were not statistically significantly different, which suggests (within the context of this study) that there are no additional benefits for CRAI over CRA. However, one study is never enough to scientifically prove something, so we will need further research in this area specifically. Unfortunately, this study cannot truly help us evaluate the efficacy of CRA overall: the difference between groups was small, but that is to be expected, as the treatment and control conditions were so similar.
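
To make "not statistically different" concrete, the sketch below runs an independent-samples t-test on two hypothetical sets of post-test scores. The data are invented for illustration, and the original study's analysis may well have differed; this just shows the kind of comparison such a claim rests on.

```python
from scipy import stats

# Hypothetical post-test scores for two small groups of 14 students
# each -- illustrative placeholders, not the Morano et al. (2020) data.
cra_scores  = [62, 70, 58, 75, 66, 71, 64, 69, 73, 60, 68, 65, 72, 67]
crai_scores = [64, 68, 61, 74, 69, 70, 63, 72, 71, 62, 66, 67, 75, 65]

# Independent-samples t-test: a p-value above the usual .05 threshold
# would be reported as "no statistically significant difference."
t_stat, p_value = stats.ttest_ind(cra_scores, crai_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```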

 

While the Morano et al. paper did not compare results to a non-CRA control group, it did include pre-test/post-test effect sizes. Normally, I tend to exclude such results from my analysis, as they tend to be very inflated. However, these effect sizes were extremely high even for pre-test/post-test effect sizes: 5.44 for finding equivalent fractions and 2.43 for placing fractions on a number line. That being said, the authors used their own assessment, which also tends to inflate effect sizes. While the Morano paper is the only RCT study looking at CRAI, there was also a single-case study of this approach, by Strickland and Maccini in 2013, that found positive benefits.
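
To see why pre-test/post-test effect sizes inflate: they divide the raw gain by a standard deviation, so every source of growth (ordinary maturation, practice effects, teaching to the test) lands in the numerator, with no control group to subtract it out. A minimal sketch, with invented numbers:

```python
def prepost_d(m_pre, m_post, sd_pre):
    """Pre/post effect size: gain divided by the pre-test SD.
    Inflated relative to a between-group g because all growth
    counts, whatever its cause."""
    return (m_post - m_pre) / sd_pre

# Illustrative placeholders, not the actual Morano et al. values:
# a 1.5-SD raw gain yields d = 1.5 with no control group at all.
print(prepost_d(m_pre=20, m_post=50, sd_pre=20))
```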

 

Again, while there is no peer-reviewed meta-analysis on the topic of CRA, there was a peer-reviewed meta-analysis by Dr. Peltier on the topic of concrete manipulatives for students at risk or identified with a disability, and most of the studies within that meta-analysis were actually on CRA. Upon discussing the issue with me, Dr. Peltier generously offered to share his data and suggested using the BC-SMD (between-case standardized mean difference) data, as he felt it does a better job of estimating math learning gains than the Tau effect sizes. The studies included in his meta-analysis were specifically single-case studies, which lowers the experimental power of these results. However, I believe the combined evidence from this meta-analysis and the 2003 Butler RCT presents a compelling picture of the level of evidence for the CRA model.

 

Likely because this research was single-case, many of the effect sizes were very high; indeed, there were multiple effect sizes above 4. To correct for this, I used an IQR outlier formula to identify and remove positive outlier data, which led to removing all effect sizes above 2.58. Of course, removing outlier data automatically deflates results and risks inappropriately removing scores that were justifiably high. With this in mind, I have included the results both with and without outlier data.
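
For reference, this is roughly what that IQR screen looks like in code: a sketch of the standard Tukey-fence approach, assuming the usual 1.5 × IQR multiplier and applying only the upper fence, since only positive outliers were removed. The cut-off of 2.58 reported above came from the actual data; the values below are placeholders.

```python
import statistics

def remove_high_outliers(effect_sizes):
    """Drop values above Q3 + 1.5 * IQR (the upper Tukey fence),
    mirroring the removal of positive outliers described above."""
    q1, _, q3 = statistics.quantiles(effect_sizes, n=4)
    upper_fence = q3 + 1.5 * (q3 - q1)
    return [g for g in effect_sizes if g <= upper_fence]

# Illustrative effect sizes only -- not the actual BC-SMD data.
sizes = [0.4, 0.9, 1.1, 1.3, 1.6, 1.8, 2.1, 2.4, 4.2, 5.1]
print(remove_high_outliers(sizes))
```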

 

In total, 21 single-case studies with BC-SMD scores were located on the subject of CRA instruction. Studies were reviewed to see what grade they were for, what strand of math was taught, and what type of CRA was used; however, data was not available for all grades. All identified studies either used a unit-based CRA or did not specify the type of CRA instruction. One interesting component of this research was that 4 studies used a mastery approach, in which students were not able to advance to the next stage of instruction unless they had mastered the prerequisite curriculum. There is a plethora of research showing that this type of instruction yields above-average results, and the studies that used it showed, on average, 147% higher results. However, all of these results also constituted outliers. Due to the small number of studies and the outlier status of this data, it is hard to tell whether these results were high because the mastery approach genuinely works better or simply due to extreme variability.

 

Single-Case Study Results:

Discussion: 

The results of the single-case meta-analysis were very high, even when all outlier data was excluded, especially for arithmetic. However, these studies were not ideal for testing pedagogical efficacy. Moreover, the only experimental study comparing CRA to a non-CRA control showed low results. With this in mind, I would say I am cautiously optimistic about the efficacy of CRA lessons. That being said, as I am unaware of any other significant research on the topic, CRA remains, in my opinion, the only evidence-based math lesson format and should therefore be viewed as best practice, at least until future research suggests otherwise.

 

Written by Nathaniel Hansford and Joshua King

Special thanks to Dr. Corey Peltier who consulted on the article. 

Last edited: 2022/06/23

 

References: 

Arizona Software. (2010). GraphClick. Retrieved from http://www.arizona-software.ch/graphclick/

 

Baron, A., & Derenne, A. (2000). Quantitative summaries of single-subject studies: What do group comparisons tell us about individual performances? The Behavior Analyst, 23, 101–106. doi:10.1007/BF03392004

 

Becker, B. J. (2005). Failsafe N or file-drawer number. In H. R. Rothstein, A. J. Sutton, & M. Borenstein (Eds.), Publication bias in meta-analysis: Prevention, assessment, and adjustments (pp. 111–125). Hoboken, NJ: Wiley.

 

Begg, C. B., & Mazumdar, M. (1994). Operating characteristics of a rank correlation test for publication bias. Biometrics, 50, 1088–1101.

 

Borenstein, M., Hedges, L. V., Higgins, J., & Rothstein, H. (2005). Comprehensive meta-analysis 2.0. Englewood, NJ: Biostat.

 

Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2009). Introduction to meta-analysis. Hoboken, NJ: Wiley.

 

Bouck, E. C., & Park, J. (2018). A systematic review of the literature on mathematics manipulatives to support students with disabilities. Education and Treatment of Children, 41, 65–106. doi:10.1353/etc.2018.0003

 

Bouck, E. C., Satsangi, R., & Park, J. (2018). The concrete–representational–abstract approach for students with learning disabilities: An evidence-based practice synthesis. Remedial and Special Education, 39, 211–228. doi:10.1177/0741932517721712

 

Bouck, E. C., Working, C., & Bone, E. (2018). Manipulative apps to support students with disabilities in mathematics. Intervention in School and Clinic, 53, 177–182. doi:10.1177/1053451217702115

 

Boyle, M. A., Samaha, A. L., Rodewald, A. M., & Hoffmann, A. N. (2013). Evaluation of the reliability and validity of GraphClick as a data extraction program. Computers in Human Behavior, 29, 1023–1027. doi:10.1016/j.chb.2012.07.031

 

Brossart, D. F., Vannest, K. J., Davis, J. L., & Patience, M. A. (2014). Incorporating nonoverlap indices with visual analysis for quantifying intervention effectiveness in single-case experimental designs. Neuropsychological Rehabilitation, 24, 464–491. doi:10.1080/09602011.2013.868361

 

Bruner, J. S. (1964). The course of cognitive growth. American Psychologist, 19, 1–15. doi:10.1037/h0044160

Burns, M. K. (2012). Meta-analysis of single-case design research: Introduction to the special issue. Journal of Behavioral Education, 21, 175–184. doi:10.1007/s10864-012-9158-9

 

Busk, P. L., & Serlin, R. C. (1992). Meta-analysis for single-case research. In T. Kratochwill & J. Levin (Eds.), Single-case research design and analysis: New directions for psychology and education (pp. 187–212). New York, NY: Routledge.

 

Butler, F. M., Miller, S. P., Crehan, K., Babbitt, B., & Pierce, T. (2003). Fraction instruction for students with mathematics disabilities: Comparing two teaching sequences. Learning Disabilities Research & Practice, 18, 99–111. https://doi.org/10.1111/1540-5826.00066

 

Carbonneau, K. J., Marley, S. C., & Selig, J. P. (2013). A meta-analysis of the efficacy of teaching mathematics with concrete manipulatives. Journal of Educational Psychology, 105, 380–400. doi:10.1037/a0031084

 

Cochran, W. G. (1954). The combination of estimates from different experiments. Biometrics, 10, 101–129.

 

Cook, B. G., Buysse, V., Klingner, J., Landrum, T. J., McWilliam, R. A., Tankersley, M., & Test, D. W. (2015). CEC's standards for classifying the evidence base of practices in special education. Remedial and Special Education, 36, 220–234. doi:10.1177/0741932514557271

 

Cooper, H. (2016). Research synthesis and meta-analysis: A step-by-step approach (5th ed.). Thousand Oaks, CA: SAGE.

 

Gersten, R., Fuchs, L. S., Compton, D., Coyne, M., Greenwood, C., & Innocenti, M. S. (2005). Quality indicators for group experimental and quasi-experimental research in special education. Exceptional Children, 71, 149–164. doi:10.1177/001440230507100202

 

Hedges, L. V., Pustejovsky, J. E., & Shadish, W. R. (2012). A standardized mean difference effect size for single case designs. Research Synthesis Methods, 3, 224–239. doi:10.1002/jrsm.1052

 

Hedges, L. V., Pustejovsky, J. E., & Shadish, W. R. (2013). A standardized mean difference effect size for multiple baseline designs across individuals. Research Synthesis Methods, 4, 324–341. doi:10.1002/jrsm.1086

 

Higgins, J. P. T., & Thompson, S. G. (2002). Quantifying heterogeneity in a meta-analysis. Statistics in Medicine, 21, 1539–1558. doi:10.1002/sim.1186

 

Hopewell, S., Loudon, K., Clarke, M. J., Oxman, A. D., & Dickersin, K. (2009). Publication bias in clinical trials due to statistical significance or direction of trial results. Cochrane Database of Systematic Reviews, 90, 1631–1640.

 

Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S., & Wolery, M. (2005). The use of single-subject research to identify evidence-based practice in special education. Exceptional Children, 71, 165–179. doi:10.1177/001440290507100203

 

Kaminski, J. A., Sloutsky, V. M., & Heckler, A. F. (2009). Concrete instantiations of mathematics: A double-edged sword. Journal for Research in Mathematics Education, 40, 90–93.

 

Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2010). What Works Clearinghouse: Single-case design technical documentation. Retrieved from https://files.eric.ed.gov/fulltext/ED510743.pdf

 

Kratochwill, T. R., Hitchcock, J. H., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2013). Single-case intervention research design standards. Remedial and Special Education, 34, 26–38. doi:10.1177/0741932512452794

 

Losinski, M. L., Ennis, R. P., Sanders, S. A., & Nelson, J. A. (2019). A meta-analysis examining the evidence-base of mathematical interventions for students with emotional disturbances. The Journal of Special Education, 52, 228–241. doi:10.1177/0022466918796200

 

Maggin, D. M., Pustejovsky, J. E., & Johnson, A. H. (2017). A meta-analysis of school-based group contingency interventions for students with challenging behavior: An update. Remedial and Special Education, 38, 353–370. doi:10.1177/0741932517716900

 

Manolov, R., Guilera, G., & Solanas, A. (2017). Issues and advances in the systematic review of single-case research: A commentary on the exemplars. Remedial and Special Education, 38, 387–393. doi:10.1177/0741932517726143

 

McNeil, N. M., Uttal, D. H., Jarvin, L., & Sternberg, R. J. (2009). Should you show me the money? Concrete objects both hurt and help performance on mathematics problems. Learning and Instruction, 19, 171–184. doi:10.1016/j.learninstruc.2008.03.005

 

Morin, K. L., Ganz, J. B., Gregori, E. V., Foster, M. J., Gerow, S. L., Genc-Tosun, D., & Hong, E. R. (2018). A systematic quality review of high-tech AAC interventions as an evidence-based practice. Augmentative and Alternative Communication, 34, 104–117. doi:10.1080/07434618.2018.1458900

 

Moyer, P. S. (2001). Are we having fun yet? How teachers use manipulatives to teach mathematics. Educational Studies in Mathematics, 47, 175–197.

 

National Assessment of Educational Progress. (2017). The nation’s report card 2017: Mathematics and reading assessments. Available from https://www.nationsreportcard.gov/

 

National Mathematics Advisory Panel. (2007). Preliminary report: National mathematics advisory panel. Retrieved from https://ed.gov/about/bdscomm/list/mathpanel/pre-report.pdf

 

Parker, R. I., Vannest, K. J., & Davis, J. L. (2011). Effect size in single-case research: A review of nine nonoverlap techniques. Behavior Modification, 35, 303–322. doi:10.1177/0145445511399147

 

Peltier, C., Morin, K. L., Bouck, E. C., Lingo, M. E., Pulos, J. M., Scheffler, F. A., Suk, A., Mathews, L. A., Sinclair, T. E., & Deardorff, M. E. (2020). A meta-analysis of single-case research using mathematics manipulatives with students at risk or identified with a disability. The Journal of Special Education, 54(1), 3–15. https://doi.org/10.1177/0022466919844516


 

Piaget, J. (1962). Play, dreams, and imitation in childhood. New York, NY: W. W. Norton.

 

Pustejovsky, J. E. (2016). Between-case standardized mean difference estimator. Retrieved from https://jepusto.shinyapps.io/scdhlm/

 

Pustejovsky, J. E. (2018). Procedural sensitivities of effect sizes for single-case designs with directly observed behavioral outcome measures. Psychological Methods, 24(2), 217-235. Advance online publication. doi:10.1037/met0000179

 

Pustejovsky, J. E., Hedges, L. V., & Shadish, W. R. (2014). Design-comparable effect sizes in multiple baseline designs: A general modeling framework. Journal of Educational and Behavioral Statistics, 39, 368–393. doi:10.3102/1076998614547577

 

Riley-Tillman, T. C., & Burns, M. K. (2009). Single case design for measuring response to educational intervention. New York, NY: Guilford.

 

Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86, 638–641. doi:10.1037/0033-2909.86.3.638

 

Rosnow, R. L., & Rosenthal, R. (1989). Statistical procedures and the justification of knowledge in psychological science. American Psychologist, 44, 1276–1284.

 

Salzberg, C. L., Strain, P. S., & Baer, D. M. (1987). Meta-analysis for single-subject research: When does it clarify, when does it obscure? Remedial and Special Education, 8, 43–48. doi:10.1177/074193258700800209

 

Sarama, J., & Clements, D. H. (2016). Physical and virtual manipulatives: What is "concrete"? doi:10.1007/978-3-319-32718-1_4

 

Strickland, T., & Maccini, P. (2013). The Effects of the Concrete–Representational–Abstract Integration Strategy on the Ability of Students With Learning Disabilities to Multiply Linear Expressions Within Area Problems. Remedial and Special Education, 34(3), 142–153. https://doi.org/10.1177/0741932512441712
