Multi-Sensory Instruction:

One complex question that has nagged at me is the issue of multi-sensory instruction. Reading Rockets defines multi-sensory instruction as “instruction [that] combines listening, speaking, reading, and a tactile or kinesthetic activity.” With this instructional approach, teachers often have students use manipulatives, make hand gestures, or even draw letters in sand. Within the science of reading community, this pedagogical concept is very popular, especially with dyslexia advocates. This likely goes back to the popularity of the Orton-Gillingham approaches, which have emphasized the importance of multi-sensory instruction. Indeed, one of the major Orton-Gillingham training companies calls itself the Institute for Multi-Sensory Education.

 

Admittedly, when I first started to research this topic, I was a bit skeptical. This was in part because the concept, while not identical, seems very similar to the idea of teaching to students' learning styles, which has been largely debunked within the scientific community. With regard to learning styles, meta-analyses typically show low outcomes, and most neuroscientists reject the model. Moreover, teaching to learning styles has logical flaws: the more different ways we try to teach a concept, the greater the chance we diminish the specificity of our instruction. For example, which is a more specific way to teach vocabulary, with a dance or with a whiteboard? Similarly, the more multi-sensory our instruction is, the greater the likelihood that it is less specific to the curricular goal. Writing letters with a pencil or a keyboard, for example, is more specific than writing them in sand, as it more closely resembles our end goal.
 

Of course, as I have previously pointed out, theorizing and rationalizing are not enough to determine if something is scientifically true. To validate the efficacy of a pedagogical program, meta-analytic data is necessary. However, to the best of my knowledge, there have not been any meta-analyses done specifically on the topic of multi-sensory education. That being said, I have seen many point to Dr. Elizabeth Stevens's meta-analysis of Orton-Gillingham phonics instruction as being determinative for this topic, as the Orton-Gillingham programs are typically multi-sensory.

 

Her meta-analysis found a mean effect size of .22 for outcomes in phonological awareness, phonics, fluency, and spelling, a mean effect size of .14 for outcomes in comprehension and vocabulary, and a mean overall effect size of .22. These results are barely statistically significant and indicate very low overall effectiveness. It is also important to note that these results are in line with other meta-analyses on the topic. The National Reading Panel (NRP) found a mean effect size of .21, and my own meta-analysis of Orton-Gillingham programs found a mean effect size of .37, which is noticeably higher than the NRP or Stevens results, but still lower than the mean effect size for phonics programs and below average for education studies in general. Across the literature, it seems quite clear that Orton-Gillingham program studies show lower results than most other phonics program studies (with the exception of SPIRE studies).

 

Please see the graphs below for reference:

The Orton-Gillingham approach is very popular, and its proponents have been instrumental in paving the way for both the science of reading and dyslexia advocacy. The programs appear, generally speaking, to align well with the science of reading, and admittedly I do not have a good explanation for why they show lower outcomes. I originally hypothesized (like Timothy Shanahan) that it was because the Orton-Gillingham studies were mostly on severely dyslexic students. However, in my own meta-analysis of reading programs, I found significantly higher results for phonics studies that looked at instruction for dyslexic students than for whole-class instruction. My next hypothesis was that these programs underperformed because of the multi-sensory component.


However, when I started to dive deeper into this particular area of research, I realized that most phonics programs include some element of multi-sensory instruction. With this fact in mind, I do not believe that a meta-analysis of Orton-Gillingham programs is necessarily a good direct measure of the efficacy of multi-sensory instruction. I wanted to be able to identify which phonics programs were multi-sensory and which were not. However, neatly dividing programs in this way proved impractical, as such a classification would be inaccurate. Indeed, there was less of a clear-cut dichotomy among phonics programs and more of a continuum, with some programs being purely multi-sensory and others being more multi-modal (meaning they taught phonics with a variety of different approaches but did not emphasize the sensory/tactile components as much). For me, this raised two challenging research problems:

  1. Was the impact of phonics programs due to phonics or due to multi-sensory instruction?

  2. How do we measure the effect of multi-sensory instruction within phonics studies, if most studies have some multi-sensory elements? 


To attempt to answer these questions, I decided to do a sub-analysis of my language programs meta-analysis. I ranked each program from 1 to 3 based on how multi-sensory it was, with 1 being the least multi-sensory and 3 being the most. Jolly Phonics and Wilson were ranked as 3. SPIRE, Words Their Way, Empower, and Reading Simplified were ranked as 2. Corrective Reading, Open Court, Reading Mastery, and Spelling Mastery were ranked as 1. In order to rank these programs, I reached out to several tutoring acquaintances who had used them. I first analyzed the results using a Pearson correlation, comparing each study's effect size with its program's multi-sensory rank. The higher the correlation, the more likely there would be a positive effect for multi-sensory instruction.
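To make that calculation concrete, here is a minimal sketch in Python. The ranks follow the coding described above, but the effect size values are invented for illustration; they are not the actual data from my meta-analysis.

```python
# Minimal sketch: correlate each study's multi-sensory rank (1-3) with its
# effect size. The effect sizes below are hypothetical, for illustration only.
from scipy.stats import pearsonr

# (multi_sensory_rank, study_effect_size) pairs -- invented values
studies = [
    (1, 0.62), (1, 0.71), (1, 0.66),   # rank 1: least multi-sensory
    (2, 0.45), (2, 0.52), (2, 0.48),   # rank 2
    (3, 0.30), (3, 0.88), (3, 0.34),   # rank 3: most multi-sensory
]

ranks = [rank for rank, _ in studies]
effect_sizes = [es for _, es in studies]

r, p_value = pearsonr(ranks, effect_sizes)
print(f"Pearson r = {r:.2f}, p = {p_value:.2f}")
```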


The resulting correlation was .19, suggesting a positive but statistically insignificant impact for programs that were more multi-sensory. I then repeated this analysis, but removed all core instruction data and included only studies of dyslexic or at-risk readers. The resulting correlation was -.14, which would suggest a small negative correlation between how multi-sensory the instruction was and the results for dyslexic students. The mean effect size for programs classified as 1 was .66; the mean effect size for programs classified as 2 was .48; the mean effect size for programs classified as 3 was .36. By comparison, the average impact for phonics overall was .45.
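The subgroup step can be sketched the same way, again with invented numbers: keep only the studies of dyslexic or at-risk readers, then compare the per-rank means and rerun the correlation.

```python
# Sketch of the subgroup analysis; all values in the data frame are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "rank":        [1, 1, 2, 2, 3, 3, 1, 2, 3],
    "population":  ["dyslexic", "core", "at_risk", "core", "dyslexic",
                    "at_risk", "at_risk", "dyslexic", "core"],
    "effect_size": [0.70, 0.60, 0.50, 0.45, 0.30, 0.40, 0.65, 0.48, 0.35],
})

# Drop core (whole-class) instruction studies; keep dyslexic / at-risk only.
subset = df[df["population"].isin(["dyslexic", "at_risk"])]

print(subset.groupby("rank")["effect_size"].mean())   # mean effect size per rank
print(subset["rank"].corr(subset["effect_size"]))     # Pearson correlation
```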

(Please note that the above graphs were weighted for the number of studies; the simple means quoted above therefore do not match the means displayed.) One reliability concern I have with this data is the inclusion of the Jolly Phonics data in the rank 3 classification, as the mean effect size for Jolly Phonics was 3.29 times higher than the mean for the other programs in that classification. Indeed, if we remove the Jolly Phonics data, the rank 3 classification has a mean result of .27, making it the lowest result. Moreover, the Pearson correlation becomes even less significant, suggesting that there is no meaningful correlation whatsoever.

Another concern for me is the overall subjectivity of the coding; because programs are ranked on a sliding scale, the coding is inherently more subjective. However, I do not think a yes/no coding would have been accurate, and I did my best to code the programs based on user testimony. Ultimately, this analysis suggests that multi-sensory instruction neither increases nor decreases experimental outcomes. One caveat I will make is that many dyslexia tutors have told me they felt multi-sensory instruction helped some of their students, but not all of them. If this were true, it might explain why the research outcomes do not support this claim, as experimental evidence tends to show what works for the majority of students, not necessarily for any individual student.
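For readers who want to see how the weighting and the Jolly Phonics sensitivity check work mechanically, here is a small sketch. The per-program means and study counts are invented placeholders, not my actual figures.

```python
# Study-count-weighted mean effect size, plus a leave-one-program-out check.
# All numbers below are hypothetical, for illustration only.
programs = [
    # (program, rank, mean_effect_size, n_studies)
    ("Jolly Phonics",   3, 0.90, 12),
    ("Wilson",          3, 0.30,  6),
    ("SPIRE",           2, 0.40,  4),
    ("Reading Mastery", 1, 0.66,  8),
]

def weighted_mean(rows):
    """Mean effect size weighted by each program's number of studies."""
    total_studies = sum(n for _, _, _, n in rows)
    return sum(es * n for _, _, es, n in rows) / total_studies

rank3 = [p for p in programs if p[1] == 3]
rank3_without_jolly = [p for p in rank3 if p[0] != "Jolly Phonics"]

print("Rank 3 weighted mean:", round(weighted_mean(rank3), 2))
print("Rank 3 without Jolly Phonics:", round(weighted_mean(rank3_without_jolly), 2))
```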

 

After conducting this research, I do not feel that the current body of evidence can be used to draw any firm conclusions either way. That being said, I recently spoke to the esteemed Dr. Steve Graham, who told me that he is currently working on a very extensive meta-analysis of the topic. He hypothesized that multi-sensory instruction would show a slight positive benefit due to the increased methods of instruction, but not due to the sensory or tactile elements. With this in mind, I may change my position once that meta-analysis is complete, but as it currently stands, I do not think there is a strong scientific argument for or against multi-sensory instruction.

 

While researching this topic, I noticed two very popular claims that are often made with great conviction:

 

1. That multi-sensory instruction is a necessary component of proper phonics instruction.

2. That multi-sensory instruction is especially useful for dyslexic students.

 

As far as I can tell, the scientific evidence for both claims is weak to non-existent. There are several phonics program studies that include little to no multi-sensory instruction and show high results, as well as several phonics program studies that include a great deal of multi-sensory instruction and show low results. If multi-sensory instruction were necessary for student learning, we should not see phonics studies without multi-sensory elements showing high results. Similarly, the data analysis I conducted for this article showed a negative correlation for teaching dyslexic students with multi-sensory methods. Indeed, multi-sensory studies on core instruction showed significantly higher results. One possible hypothesis I could offer to explain this is that dyslexic students may require more explicit and specific instruction to learn how to read than non-dyslexic students do, whereas non-dyslexic students may be better able to benefit from the engaging elements of multi-sensory instruction.

 

References: 

Stevens, E. A., Austin, C., Moore, C., Scammacca, N., Boucher, A. N., & Vaughn, S. (2021). Current State of the Evidence: Examining the Effects of Orton-Gillingham Reading Interventions for Students With or at Risk for Word-Level Reading Disabilities. Exceptional Children, 87(4), 397–417. https://doi-org.ezproxy.lakeheadu.ca/10.1177/0014402921993406
