The PNG Math List 2026
In the app below, you’ll find the results of 44 meta-analyses on mathematics instruction. Looking at education research through a meta-analytic lens is especially important because individual studies often produce very different results. Classrooms vary widely, students differ, teachers differ, and instructional contexts are rarely identical. As a result, findings from single studies can be difficult to interpret or replicate. Meta-analyses address this problem by examining results across many studies at once, providing a broader and more reliable picture of what the research suggests.
To compile this list, I searched the ERIC database using the terms “math” and “meta-analysis”, and also reviewed meta-analyses previously discussed on this blog. In total, 208 studies were screened, and 44 met the inclusion criteria. To be included, a study had to (a) synthesize results from at least four experimental studies of mathematics instruction, (b) be peer-reviewed, and (c) report Cohen’s d or Hedges’ g effect sizes.
I often receive questions about the peer-reviewed status of work like this. To be clear, this project is not original peer-reviewed research. Its purpose is to organize and present existing peer-reviewed research in a format that is more accessible to teachers. The studies included here are not meant to represent all mathematics research, but rather the body of meta-analytic research available within one major database.
In a previous synthesis, I received feedback about including moderator data for some studies but not others. This time, I made a greater effort to include moderator information whenever it was relevant and interpretable for educators. In some cases, moderator variables were still excluded when they were highly technical or unrelated to classroom decision-making. A new feature in this app allows users to click on a bar to view additional effect sizes reported in the original meta-analysis. That said, for full context, the best practice is always to consult the original study directly.
A central metric in meta-analyses is the effect size, typically reported as Cohen’s d or Hedges’ g. Effect sizes represent the magnitude of an instructional effect in standardized units, allowing results from different studies to be compared. While no benchmark system is perfect, the following guidelines are commonly used as a starting point:
- < 0.20 = Negligible effect
- ~0.40 = Moderate effect
- ~0.80 = Large effect
- > 1.20 = Very large effect
These benchmarks should be interpreted cautiously, but they provide a useful frame of reference.
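For readers who want to see the arithmetic behind these numbers, below is a minimal sketch (mine, not taken from any of the included meta-analyses) of how Cohen’s d and the Hedges’ g small-sample correction are computed from two group means, along with one possible way to map the rough benchmarks above onto labels. All of the group statistics are invented for illustration.

```python
# A minimal sketch of Cohen's d and Hedges' g, plus one reading of the
# rough benchmarks above. All numbers are invented for illustration.
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

def hedges_g(d, n_t, n_c):
    """Cohen's d with the standard small-sample bias correction."""
    return d * (1 - 3 / (4 * (n_t + n_c) - 9))

def benchmark(es):
    """One possible mapping of the guidelines above; interpret cautiously."""
    if es < 0.20:
        return "negligible"
    if es < 0.60:
        return "moderate"
    if es < 1.20:
        return "large"
    return "very large"

d = cohens_d(mean_t=78, mean_c=72, sd_t=10, sd_c=11, n_t=30, n_c=30)
g = hedges_g(d, n_t=30, n_c=30)
print(f"d = {d:.2f}, g = {g:.2f} ({benchmark(g)})")
```

In an actual meta-analysis, study-level effects like this are then pooled with weights that reflect each study’s precision, which is one reason sample size matters so much in the limitations discussed later in this article.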
About the Interactive Graph
Each bar in the graph represents a specific teaching approach (e.g., fluency instruction, word-problem instruction, student-centered teaching). Bars are color-coded to reflect the overall quality of the meta-analysis:
- Red: Lower quality (e.g., included studies without control groups or not peer-reviewed)
- Green: Medium quality (peer-reviewed and all studies included control groups)
- Blue: High quality (peer-reviewed, control groups required, and especially rigorous inclusion criteria such as standardized assessments and baseline equivalence)
Benchmark lines are displayed directly on the graph to help interpret effect sizes. Clicking on any bar reveals detailed information, including authors, sample sizes, confidence intervals, and links to the original study.
Unlike some previous syntheses, I have not combined results across meta-analyses that examined similar topics. Each bar represents a single meta-analysis. This means you may see multiple bars for the same instructional approach. The goal is to promote transparency and allow readers to judge the evidence for themselves.
When interpreting these results, I would encourage readers to avoid strong conclusions based on small differences in effect size, especially when comparing meta-analyses of similar quality. One consistent pattern in research synthesis is that the weaker the inclusion criteria, the larger the effect sizes tend to be. This is not necessarily because the instructional approach is more powerful, but because less rigorous studies are more likely to produce inflated results.
In this graph, red bars represent meta-analyses that included weaker study designs, such as case studies or studies without strong control groups. Very large effect sizes are expected in this category. Green bars represent studies with moderate methodological rigor, while blue bars represent the most rigorous meta-analyses, with strict inclusion criteria and stronger controls.
To illustrate this pattern, the largest effect size observed among the blue-rated meta-analyses was 0.33, while red- and green-rated analyses included effect sizes as large as 1.43 and 1.53, respectively. However, when looking at average effects, higher-quality studies tended to report substantially smaller and more conservative estimates overall. For this reason, a difference of less than 0.20 between two green bars is unlikely to be educationally meaningful. In contrast, a blue bar that exceeds the effect size of a red or green bar may be more informative, even if the numerical difference appears modest.
With this in mind, greater attention should be paid to instructional approaches that show large and consistent differences across the evidence base. For example, explicit instruction showed a mean effect size of 1.22, while ability grouping showed a mean effect size of –0.07. Given the size and direction of these effects, it is reasonable to conclude that explicit instruction is likely to be beneficial for most students, whereas ability grouping is unlikely to improve learning outcomes.
**Please note that the graph below is coded in HTML for PC. It will work far better on a personal computer than on a tablet or phone.
Interpreting the Results
To make meaningful sense of this literature, it is necessary to recognize that mathematics instruction involves not only different instructional approaches, but also distinct forms of mathematical knowledge. Most researchers agree that mathematical knowledge can be broadly divided into two primary domains: procedural knowledge and conceptual knowledge (Rittle-Johnson & Schneider, 2015).
Procedural knowledge refers to the ability to accurately and efficiently carry out mathematical procedures and algorithms. This includes knowledge of steps, rules, and symbol manipulations used to solve problems. Conceptual knowledge, in contrast, refers to an understanding of the underlying principles, relationships, and structures that explain why mathematical procedures work.
For example, dividing fractions procedurally involves knowing to multiply by the reciprocal of the divisor. Conceptually, however, division represents the question “how many of this quantity fit into that quantity,” and multiplying by the reciprocal functions as a way of counting how many units of the divisor fit within the dividend (Rittle-Johnson, 2011). While procedural knowledge enables correct execution, conceptual knowledge provides meaning and justification for the procedure.
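To make the two views concrete, here is a tiny illustrative check (my example, not drawn from Rittle-Johnson, 2011), using Python’s exact Fraction type: the procedural route multiplies by the reciprocal, while the conceptual route literally counts how many copies of the divisor fit into the dividend.

```python
# Illustrative only: 3/4 divided by 1/8, computed two ways.
from fractions import Fraction

dividend = Fraction(3, 4)   # the quantity being shared
divisor = Fraction(1, 8)    # the "unit" we are fitting into it

# Procedural view: multiply by the reciprocal of the divisor.
procedural = dividend * Fraction(divisor.denominator, divisor.numerator)

# Conceptual view: count how many copies of the divisor fit into the dividend
# (this simple loop assumes the quotient is a whole number).
count, remaining = 0, dividend
while remaining >= divisor:
    remaining -= divisor
    count += 1

print(procedural, count)  # both print 6: six eighths fit into three quarters
```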
While this two-domain distinction is well established, it can be further refined. Procedural knowledge can be usefully divided into:
- Computational knowledge – knowledge of basic facts and numerical combinations (e.g., multiplication facts, number bonds), and
- Operational knowledge – knowledge of multi-step algorithms and procedures (e.g., long division, fraction operations).
Similarly, conceptual knowledge can be divided into:
- Concept knowledge – understanding of mathematical principles and relationships (e.g., place value, equivalence, proportionality), and
- Application knowledge – the ability to flexibly apply conceptual and procedural knowledge to novel or non-routine problems that differ from those explicitly taught.
Importantly, application knowledge is not a separate domain of mathematics, but rather a performance outcome that depends on the integration of both conceptual and procedural knowledge. Successful transfer requires that procedures are sufficiently fluent to avoid cognitive overload and that concepts are sufficiently understood to guide problem interpretation.
In general education, mathematics instruction has often emphasized the development of application knowledge as the primary goal of learning. This emphasis has contributed to instructional approaches that prioritize complex, open-ended problems and student-led inquiry, frequently accompanied by a reduced emphasis on direct procedural instruction over the past two decades (Boaler & Dweck, 2016; Liljedahl, 2021; Small, 2009).
These approaches are commonly associated with constructivist instructional models, which emphasize meaning-making, inquiry, and problem solving over memorization and explicit teaching of procedures. A similar shift occurred in literacy education, where Whole Language and Balanced Literacy approaches prioritized reading comprehension and authentic texts while de-emphasizing explicit instruction in foundational skills such as phonics (Pressley, 2002).
In contrast, research traditions rooted in special education, behavioral science, and cognitive psychology have placed greater emphasis on explicit instruction and systematic procedural teaching, particularly for students with learning difficulties. These perspectives have informed policy papers and research syntheses produced by groups associated with the “Science of Math” movement (Codding, Peltier & Campbell, 2023).
Taken together, contemporary debates in mathematics education largely center on two interrelated questions:
- Conceptual versus procedural instruction, and
- Explicit versus implicit instruction.
Regarding the first question, the meta-analytic evidence summarized in Table 1 does not provide direct guidance, as no quantitative meta-analyses were identified that isolate conceptual-only versus procedural-only instruction. However, a landmark narrative review by Rittle-Johnson and Schneider (2015) concluded that mathematics achievement is highest when conceptual and procedural instruction are integrated. Their review further demonstrated a bidirectional and synergistic relationship, in which gains in one domain support growth in the other. Current evidence therefore does not support prioritizing one form of knowledge to the exclusion of the other.
The second question—explicit versus implicit instruction—can be addressed more directly through meta-analytic evidence. Three meta-analyses examining explicit mathematics instruction were identified in the ERIC database. Gersten et al. (2009) reported a large mean effect size (g = 1.22) across 11 studies involving students with learning disabilities. Baker, Gersten, and Lee (2002) examined four studies of low-achieving mathematics students and reported a mean effect size of g = 0.58. Stockard et al. (2018), analyzing 70 studies of Direct Instruction mathematics programs, reported a mean effect size of g = 0.55. Across studies, effect sizes ranged from moderate to very large, with the strongest effects observed among students with the highest instructional needs.
By comparison, two meta-analyses examined inquiry-based mathematics instruction. Xie, Wang, and Hu (2018) analyzed 26 studies of inquiry-based instruction with Chinese elementary students and reported a mean effect size of g = 0.52. However, Chinese mathematics education is typically characterized by strong procedural foundations and extensive explicit instruction, which may have enabled students to benefit more from inquiry-based approaches. Lazonder and Harmsen (2016) conducted a meta-analysis of five mathematics studies and reported a smaller mean effect size of g = 0.28.
Problem-based learning, often associated with constructivist instruction and characterized by minimally guided group problem solving, was also examined by Xie et al. (2018). Across 21 studies, they reported a mean effect size of g = 0.58. Notably, both Xie et al. (2018) and Lazonder and Harmsen (2016) found that inquiry-based approaches produced larger effects when accompanied by additional instructional guidance, suggesting that the effectiveness of inquiry depends substantially on the presence of structured support.
Overall, this body of research suggests that explicit instruction tends to be more effective in mathematics, especially for students who struggle or have significant learning gaps. For students with fewer difficulties, the advantage is less clear. That said, there are important limitations in this research that teachers should be aware of before drawing strong conclusions.
First, these meta-analyses do not adequately account for the types of tests used to measure learning. They do not distinguish between different strands of math, different kinds of knowledge (such as procedural versus conceptual), or whether the assessments were closely aligned with instruction or designed to measure broader transfer. They also do not separate norm-referenced tests from classroom-based or curriculum-aligned measures. This matters because what we choose to test strongly influences what “works”, and combining very different kinds of assessments can make results look clearer than they actually are.
Second, many of these studies only examine one instructional approach at a time, such as inquiry-based learning or direct instruction, rather than directly comparing the two. Because studies with positive findings are more likely to be published, both approaches often appear effective, even though they represent very different ways of teaching. In addition, many studies compare a new instructional approach to “business as usual,” rather than to a clearly defined alternative. This makes it difficult to determine whether an approach is truly more effective, or simply different from typical classroom practice.
Third, and perhaps most importantly, these studies rarely account for the size of the groups being studied and this turns out to matter a great deal. In a meta-analysis I currently have under peer review, about 80% of the variation in reported effect sizes was explained by sample size alone. Studies with more than 100 students showed effect sizes that were, on average, seven times smaller than those found in smaller studies. In practical terms, this means that small studies often make teaching approaches look far more powerful than they really are. Smaller samples are more likely to produce unstable and exaggerated results, especially when only positive findings are published. Larger studies tend to give more realistic estimates of impact.
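To see why this happens, here is a toy simulation (my own sketch, not the analysis under review, and assuming NumPy and SciPy are available). It assumes a modest true effect of d = 0.20 and a crude publication filter that keeps only statistically significant positive results; under those assumptions, the average published effect from small studies lands far above the truth.

```python
# Toy simulation of small-sample effect inflation under publication bias.
# Assumptions (not from the article): true d = 0.20, normal outcomes, and
# only significant positive results get "published".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
TRUE_D = 0.20

def mean_published_d(n_per_group, n_studies=2000):
    published = []
    for _ in range(n_studies):
        control = rng.normal(0.0, 1.0, n_per_group)
        treatment = rng.normal(TRUE_D, 1.0, n_per_group)
        result = stats.ttest_ind(treatment, control)
        if result.pvalue < 0.05 and result.statistic > 0:   # crude publication filter
            pooled_sd = np.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)
            published.append((treatment.mean() - control.mean()) / pooled_sd)
    return float(np.mean(published)) if published else float("nan")

for n in (15, 50, 200):
    print(f"n per group = {n:3d}: mean published d ≈ {mean_published_d(n):.2f}")
# Small studies must observe a much larger d to reach significance,
# so their published average drifts well above the true 0.20.
```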
This does not mean that instruction doesn’t matter. It means that very large effect sizes should be treated with caution, particularly when they come from small or tightly controlled studies. For teachers, the takeaway is simple: impressive numbers do not always translate into equally impressive classroom results.
Because these limitations are not addressed in most existing meta-analyses, it is hard to know whether their conclusions would hold up under more careful analysis. A more rigorous synthesis, one that compares instructional approaches directly, accounts for assessment differences, and gives greater weight to large, well-designed studies, would likely produce more modest but more trustworthy estimates. (Indeed, I would love to conduct such a meta-analysis this year. If you are interested in collaborating on such a project, please contact me.)
At present, the strongest evidence comparing explicit and constructivist approaches comes from reading research. Multiple large reviews have found that systematic, explicit instruction produces roughly twice the impact of less explicit, constructivist-oriented approaches (National Reading Panel, 2000; Stuebing et al., 2008; Hansford et al., 2025). Importantly, these reviews did not merely conclude that explicit instruction was more effective; they also emphasized that instruction should be systematic—that is, organized around a carefully designed scope and sequence rather than delivered in an ad-hoc or purely responsive manner.
While systematic instruction has been studied far less directly in mathematics, there are strong reasons to believe it is especially important in this domain. Mathematics involves a large number of interdependent concepts and algorithms, many of which rely on prior knowledge for successful application. Gaps early in instruction can therefore compound quickly. Although reading and mathematics are distinct subjects, both operate under the same fundamental learning constraints: limited working memory, the need for practice to build fluency, and the importance of mastering foundational skills before engaging in more complex tasks. For this reason, it is unlikely that students would benefit from fundamentally different instructional principles in reading and mathematics.
What Are the Pillars of Effective Math Instruction?
When educators think about reading research, they often reference the five pillars of reading: phonemic awareness, phonics, fluency, vocabulary, and comprehension. Although this framework has been critiqued for oversimplifying the complexity of reading, it has nevertheless proven useful for teachers as an organizing structure. Each pillar represents a broad strand of instruction, and each is supported by a substantial body of meta-analytic evidence demonstrating its contribution to reading achievement.
A similar organizing framework is useful in mathematics. Based on the research reviewed in Table 1, four broad components of mathematical knowledge emerge as foundational: math facts, operations, concepts, and application. These components correspond closely to the refined knowledge distinctions discussed earlier and capture both the content and performance demands of mathematics learning.
To identify instructional pillars within mathematics, I reviewed the meta-analyses summarized in Table 1 and examined which pedagogical approaches were (a) clearly aligned with one or more of these four domains and (b) associated with moderate to large effect sizes. This process informed the instructional framework presented below. **Please note that the pillars below are coded in HTML for PC. They will work far better on a personal computer than on a tablet or phone.
This framework is not intended to suggest that these are the only effective approaches to mathematics instruction. Rather, they represent some of the most consistently evidence-based pedagogies in the literature and can reasonably be viewed as forming the cornerstone of an effective mathematics program.
Notably, this framework aligns closely with the National Research Council’s Adding It Up synthesis, which identified Understanding, Computing, Applying, Reasoning, and Engaging as the central strands of mathematical proficiency (National Research Council, 2002). The present model similarly emphasizes the integration of knowledge, skill, and application while foregrounding instructional practices supported by empirical evidence.
Beyond the Pillars
One limitation of pillar-based frameworks is that they can imply that instruction operates in a pedagogical vacuum—independent of student motivation, engagement, and self-regulation. However, research increasingly suggests that these active components of learning play a critical role in academic outcomes.
As Burns, Duke, and Cartwright (2024) argue in their Active View of Reading, learning is not solely determined by instructional inputs, but also by how learners engage with, persist through, and regulate cognitive demands. Although their framework was developed in the context of reading, its implications extend directly to mathematics instruction. Effective math teaching requires not only evidence-based instructional practices, but also attention to the motivational and self-regulatory processes that allow students to benefit from instruction.
This broader perspective helps explain why debates around technology in mathematics instruction have become so polarized. In response to the documented harms of excessive screen time and social media use, many educators are understandably skeptical of digital tools. However, this skepticism often overlooks an important distinction: entertainment-driven technology and instructionally designed technology are not the same thing. In fact, instructional technology is among the most heavily studied variables in mathematics education.
Across the meta-analyses included in this review, five specifically examined the impact of technology on mathematics achievement (Aspiranti & Larwin, 2021; Myers et al., 2023; Saygılı & Çetin, 2021; Liu et al., 2022). All five reported positive effects, with several showing moderate to large effect sizes. These effects were strongest when technology was used to support explicit instruction, structured learning management systems, visual representations, word-problem instruction, and mathematical accuracy. In other words, technology appears most effective when it is used to amplify sound instructional practice, not replace it. Well-designed math platforms can be particularly helpful for several reasons.
- First, they enable meaningful differentiation at scale. Because software can provide immediate feedback, students are able to take the time they need to learn a concept while advancing quickly once mastery is demonstrated. In SAGE Math, for example, students move through content at their own pace, receiving feedback on every item rather than waiting for periodic checks or end-of-unit tests.
- Second, instructional software can apply assessment data with a level of precision that is difficult to replicate manually. When students complete a diagnostic in SAGE Math, the system automatically identifies specific areas of difficulty and uses that information to generate targeted practice and printable workbooks aligned to each student’s assessed needs. When a student answers multiple questions incorrectly in a row, their responses are analyzed to identify common misconceptions. If a student consistently forgets to find a common denominator when adding fractions, for instance, the system flags this pattern and provides explicit corrective feedback rather than generic hints. During practice, continued errors trigger checks for prerequisite skills, ensuring that students are not pushed forward without the foundation needed to succeed. (A simplified sketch of this kind of error-pattern check appears after this list.)
- Third, software can reliably store and deliver large amounts of curriculum knowledge in a consistent way. SAGE Math currently includes explanations, worked examples, and practice across more than 114 distinct math problem types. By comparison, a recent report from the University of California San Diego, which has a 70% exclusion rate, found that the majority of its students could not complete middle-school math questions; indeed, 60% of students could not even divide a fraction by 2 (UC San Diego Senate–Administration Workgroup on Admissions, 2025).
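Below is a simplified, hypothetical sketch of the kind of error-pattern check described in the second point above. To be clear, this is not SAGE Math’s actual code; the function name and rule are invented purely to show how a student’s answer can be matched against a known misconception for fraction addition.

```python
# Hypothetical illustration only; not SAGE Math's implementation.
# Flags answers to a/b + c/d that match the classic "no common denominator"
# error of adding numerators and denominators directly.
from fractions import Fraction

def diagnose_fraction_addition(a, b, c, d, student_answer):
    correct = Fraction(a, b) + Fraction(c, d)
    common_denominator_error = Fraction(a + c, b + d)
    if student_answer == correct:
        return "correct"
    if student_answer == common_denominator_error:
        return "misconception: added numerators and denominators instead of finding a common denominator"
    return "incorrect: consider checking prerequisite skills"

print(diagnose_fraction_addition(1, 2, 1, 3, Fraction(2, 5)))  # flags the misconception
print(diagnose_fraction_addition(1, 2, 1, 3, Fraction(5, 6)))  # correct
```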
What Does This Look Like in Practice?
I don’t want to tell teachers exactly how they should teach mathematics. I don’t believe the research base in math education is strong or precise enough to justify a rigid, prescriptive approach to instruction. That said, I also recognize that reading research summaries and meta-analyses can feel overwhelming, and it is not always obvious how those findings translate into day-to-day classroom practice. For that reason, I want to share how I personally teach mathematics.
I currently teach Grades 7–8, but over the course of my career I have taught mathematics from Grades 2 through 8. My approach has been shaped by both research and experience, and it continues to evolve as I reflect on student learning.
I am a strong believer in backwards planning. I keep copies of all of my math assessments for the entire year in a file folder on my desk. This is important to me because my instructional planning is grounded in helping students succeed on the skills and understanding those assessments are designed to measure. I am constantly thinking about what my next unit is, which assessment areas students have struggled with in the past, and how I can better support them before those gaps become entrenched.
Alongside this, I use formative assessment on a weekly basis to monitor student understanding. When a large number of students struggle with a type of question I believe I have taught, I treat that as a signal to reflect on my instruction rather than as a failure on the part of the students. For example, if I am teaching order of operations and notice that many students struggle when negative numbers are involved, I pause to revisit integer operations explicitly before moving on. This feedback loop helps me identify instructional gaps early and respond before misunderstandings compound.
As I plan units, I am also mindful that student needs vary across domains and developmental stages. The younger my students are, the more emphasis I place on numeracy and math fact fluency. At the beginning of a unit, my instruction is heavily focused on conceptual understanding. As the unit progresses, there is a gradual release of responsibility, and I increase my emphasis on procedural instruction and algorithms. Once students demonstrate sufficient mastery, I begin to introduce more challenging and varied applications.
As a concrete example, below is a unit plan for teaching the addition and subtraction of fractions. **Please note that the unit below is coded in HTML for PC. It will work far better on a personal computer than on a tablet or phone.
This unit is not meant to be a prescriptive template, but rather an illustration of how I try to balance conceptual and procedural instruction while teaching explicitly.
That balance is especially important when teaching algorithms. When I introduce a new algorithm, I follow a consistent instructional routine. First, I model the algorithm step by step. Next, we work through a problem together as a class. I then present a new problem and ask students to solve it independently. Once most pencils stop moving, I write the correct solution on the board and ask students to close their eyes and raise their hands if they solved it correctly.
If most students are successful, I assign several additional problems and repeat the process, circulating to support students who need help. If many students are unsuccessful, I pause and troubleshoot publicly. I might cold call students to share their answers and use those responses to identify common errors. For example, when teaching theoretical probability, if students say that the probability of rolling the same number on a die twice is 1/12, I know they are adding denominators instead of multiplying probabilities, and we address that misconception directly. We continue this cycle until students have completed roughly 10–20 problems with a high degree of accuracy.
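For the probability example above, the two answers can be checked quickly with exact fractions (an illustrative snippet, not part of my classroom routine): multiplying the independent probabilities gives 1/36 for a chosen number coming up on both rolls, while “adding the denominators” gives the 1/12 that signals the misconception.

```python
# Illustrative check of the probability misconception described above.
from fractions import Fraction

p_single = Fraction(1, 6)            # chance of a chosen face on one roll

correct = p_single * p_single        # multiply independent probabilities -> 1/36
misconception = Fraction(1, 6 + 6)   # "adding the denominators" -> 1/12

print(correct, misconception)        # 1/36 1/12
```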
While this specific routine has not been directly examined in the meta-analyses discussed earlier, I think of it as responsive teaching—instruction that is explicit and structured, but also flexible enough to respond to student understanding in real time.
One area where this approach has been especially powerful is in teaching word problems through schemas. Rather than treating each word problem as a completely new challenge, schema-based instruction teaches students to recognize common problem structures—such as part–whole problems, comparison problems, or rate problems—and to associate those structures with appropriate solution strategies. Myers et al. (2023) found that explicitly teaching students to identify the type of problem they are solving leads to meaningful improvements in problem-solving performance. For teachers, this often means shifting the focus from asking “What operation do I use?” to “What kind of situation is this?” Instead of relying on surface-level keywords like more or left, students learn to analyze the underlying structure of the problem. Once students can reliably identify the schema, the mathematics becomes far more manageable: the structure guides how the problem is set up, which operations are used, and how the final answer should be interpreted. In my experience, schema-based instruction reduces guessing, supports transfer to unfamiliar problems, and is particularly beneficial for students who otherwise find word problems overwhelming. Importantly, this approach preserves conceptual understanding while delivering it through explicit instruction, rather than expecting students to infer problem structures on their own.
To see what schema-based instruction looks like in practice, consider this word problem: You have ¾ of a pizza. Your friend eats ¼ of a pizza. How much pizza do you have left? Instead of jumping straight to an operation or searching for keywords, students are first encouraged to think about what is happening in the situation. In this problem, there is a starting amount, something is taken away, and we want to know what remains. That tells students this is a “taking away” or decrease type of problem. Once students recognize that structure, the math becomes much clearer. Because something is being taken away, subtraction makes sense. Students can then write the equation ¾ − ¼ and use what they know about subtracting fractions to find the answer. Solving gives 2⁄4, which simplifies to ½. Finally, students connect the answer back to the story and explain that one-half of the pizza is left. Teaching students to identify the type of problem first helps them feel less overwhelmed, reduces guessing, and makes word problems easier to solve—even when the numbers or context change.
A Practical Thank You
If you’ve made it this far, thank you for taking the time to engage seriously with the research and ideas in this article. I know that translating research into classroom practice can be challenging, so I wanted to offer a few practical resources that align directly with the instructional principles discussed above.
First, I’m offering a free one-week trial of my online learning platform, SAGE Online Academy. If you sign up at www.sageonlineacademy.ca and use the code “free trial”, you’ll receive full access for yourself and up to 30 students at no cost.
Below, you’ll also find three completely free demo apps from SAGE. The first is a Math Facts Fluency app, designed to help students build automaticity in addition, subtraction, multiplication, and division. The second is an explicit instruction app that teaches math algorithms for Grades 1–8, covering over 90 different problem types. It includes step-by-step explanations, a diagnostic assessment, and a manipulatives calculator. The third app allows teachers to print paper versions of all the worksheets from the algorithm app.
In the full version of SAGE Online Academy, students can save their progress, receive immediate feedback on errors, practice previously missed skills, and access support from an AI tutor. Teachers can also generate custom workbooks tailored to individual student needs in seconds. Moreover, when students answer questions in the online platform, they earn arcade tokens, which grant them access to engaging online games. These tools were built to support explicit, responsive instruction—not to replace teachers, but to make targeted support easier to deliver. **Please note that the apps below are coded in HTML for PC. They will work far better on a personal computer than on a tablet or phone.
Written by Nathaniel Hansford
Last Edited 2025/12/20
Want to support what we do at Pedagogy Non Grata? Consider donating on Patreon: patreon.com/u70587114
Want to learn more about the science of teaching? Check out my book: The Scientific Principles of Teaching
References:
Aspiranti, K. B., & Larwin, K. H. (2021). Investigating the effects of tablet-based math interventions: A meta-analysis. International Journal of Technology in Education and Science, 5(4), 629–647. https://doi.org/10.46328/ijtes.266
Baker, S., Gersten, R., & Lee, D. (2002). A synthesis of empirical research on teaching mathematics to low-achieving students. The Elementary School Journal, 103(1), 51–73.
Boaler, J. (2015, January 28). Fluency without fear: Research evidence on the best ways to learn math facts. YouCubed. https://www.youcubed.org/evidence/fluency-without-fear/
Boaler, J., & Dweck, C. S. (2016). Mathematical mindsets: Unleashing students’ potential through creative math, inspiring messages and innovative teaching. Jossey-Bass.
Burns, M. S., Duke, N. K., & Cartwright, K. B. (2024). The active view of reading: A framework for understanding the role of engagement in reading development. Reading Research Quarterly, 59(2), 189–206. https://doi.org/10.1002/rrq.514
Codding, R. S., Peltier, C., & Campbell, J. (2023). Introducing the science of math. Teaching Exceptional Children, 56(1), 44–53. https://doi.org/10.1177/00400599221121721
Gersten, R., Chard, D. J., Jayanthi, M., Baker, S. K., Morphy, P., & Flojo, J. (2009). Mathematics instruction for students with learning disabilities: A meta-analysis of instructional components. Review of Educational Research, 79(3), 1202–1242. https://doi.org/10.3102/0034654309334431
Hansford, N., Dueker, S., Garforth, K., Grande, J., King, J., & McGlynn, S. (2025). An exploratory quantitative analysis of research on balanced literacy and structured literacy. Discover Education, 4, Article 922. https://doi.org/10.1007/s44217-025-00922-8
Liljedahl, P. (2021). Building thinking classrooms in mathematics (Grades K–12): 14 teaching practices for enhancing learning. Corwin.
Liu, M., Pang, W., Guo, J., & Zhang, Y. (2022). A meta-analysis of the effect of multimedia technology on creative performance. Education and Information Technologies, 27(3), 3415–3437. https://doi.org/10.1007/s10639-022-10981-1
Myers, J. A., Hughes, E. M., Witzel, B. S., Anderson, R. D., & Owens, J. (2023). A meta-analysis of mathematical interventions for increasing the word problem-solving performance of upper elementary and secondary students with mathematics difficulties. Journal of Research on Educational Effectiveness, 16(1), 1–35. https://doi.org/10.1080/19345747.2022.2080131
National Institute of Child Health and Human Development. (2000). Report of the National Reading Panel: Teaching children to read—An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction (NIH Publication No. 00-4769). U.S. Government Printing Office. https://www1.nichd.nih.gov/publications/pubs/nrp/documents/report.pdf
National Research Council. (2002). Helping children learn mathematics. National Academies Press. https://doi.org/10.17226/10434
Pressley, M., Bogner, A., Raphael, K., & Dolezal, S. (2001). “Balanced literacy” instruction. Focus on Exceptional Children, 34(5), 1–14. https://doi.org/10.17161/fec.v34i5.6788
Rittle-Johnson, B., & Schneider, M. (2015). Developing conceptual and procedural knowledge of mathematics. In R. C. Kadosh & A. Dowker (Eds.), The Oxford handbook of numerical cognition (pp. 1118–1134). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199642342.013.014
Saygılı, H., & Çetin, H. (2021). The effects of learning management systems (LMS) on mathematics achievement: A meta-analysis study. Necatibey Faculty of Education Electronic Journal of Science and Mathematics Education, 15(2), 341–362. https://doi.org/10.17522/balikesirnef.1026534
Small, M. (2009). Good questions: Great ways to differentiate mathematics instruction. National Council of Teachers of Mathematics.
Stuebing, K. K., Barth, A. E., Cirino, P. T., Francis, D. J., & Fletcher, J. M. (2008). Effects of systematic phonics instruction are practically significant: A response to Camilli, Vargas, and Yurecko. Journal of Educational Psychology, 100(1), 123–134. https://doi.org/10.1037/0022-0663.100.1.123
UC San Diego Senate–Administration Workgroup on Admissions. (2025, November 6). Final report. University of California San Diego. https://senate.ucsd.edu/media/740347/sawg-report-on-admissions-review-docs.pdf
Xie, C., Wang, M., & Hu, H. (2018). Effects of constructivist and transmission instructional models on mathematics achievement in mainland China: A meta-analysis. Frontiers in Psychology, 9, Article 1923. https://doi.org/10.3389/fpsyg.2018.01923
