Is RTI an Evidence-Based Concept?

Response to Intervention, or RTI for short, is defined by John Hattie as “a multi-tier approach to the early identification and support of students with learning and behavior needs. The RTI process begins with high-quality instruction and universal screening of all children in the general education classroom (Tier 1). Struggling learners are provided with interventions at increasing levels of intensity to accelerate their rate of learning. Those not making progress are then provided with increasingly intensive instruction usually in small groups (Tier 2). If still no progress, then students receive individualized, intensive interventions that target the students’ skill deficits (Tier 3).” 

 

That being said, the RTI model is actually quite complicated, and I often see people attempting to simplify it as a professional learning model, a formative assessment model, or an alternative to traditional special education models. However, I view these positions as reductive. I was trained as an RTI teacher, and it was through RTI that I was first exposed to the idea that teaching could be a science. Based on that personal experience and training, I would define RTI as an action research framework in which teachers:

 

-Collectively plan instruction and assessments

-Monitor student success via formative assessment and data collection

-Collectively mark assessments

-Collectively plan interventions for students not meeting learning expectations, via tiered instruction. 

 

In my opinion, this framework attempts to leverage the effects of collective self-efficacy, formative assessment, data collection, and individualized instruction. Of course, the most central feature of this framework is that students receive more instruction based on their needs. The whole class receives the standard core curriculum, labeled tier 1. Students who are struggling a little are given some additional instruction and support, labeled tier 2. And students who are struggling a lot are labeled tier 3 and given the most support. This is meant to differ from regular special education in that these labels and groupings are not static but fluid. Meaning, if a student is especially struggling with a particular learning goal, they are given tier 3 instruction. Conversely, if a student with a learning disability is not struggling with a specific learning goal, they are not necessarily given tier 3 instruction during that goal's instructional period. 

 

The fluid groupings of the RTI system are often the aspect that gets criticized most, as there is the concern that a student with dyslexia might receive less tier 3 instruction than they would within a traditional special education system. And of course, students diagnosed with dyslexia should absolutely be receiving additional tier 3 instruction. However, I personally like the fluid groupings for two reasons. First, I believe they help promote a growth mindset and avoid making students feel labeled; the students receive tier 3 instruction, they are not tier 3 students, and the same students will not always be receiving tier 3 instruction. Second, I think this approach can lead to more individualized instruction, as the instruction is about specific learning needs, not specific students.

For example, let's say we set learning the vowel sounds as a learning goal for the unit. The students who make the slowest progress in learning those sounds will receive the most instruction, until they learn those sounds, at which point they are bumped back into tier 1 instruction. This ensures that additional instruction is always specific to the curriculum goals and the specific needs of students. Moreover, as the instruction depends on students' knowledge of individual curriculum goals, the students receiving the additional instruction will change with each unit. This system can lead to fewer students receiving tier 3 instruction on average, but if done correctly, it should not lower learning results. Indeed, the research suggests that most students benefit from this system, as I will show below. It is also important to note that this system does not prevent students from receiving additional instruction specific to learning disorders or needs. I believe schools can both use an RTI system and mandate that dyslexic students receive additional support. It would be a false dichotomy to say that the choice has to be one or the other, though this is not to say that there are no schools in which that false dichotomy has been contrived. 
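The fluid, per-goal grouping described above can be sketched in a few lines of code. This is a hypothetical illustration, not an actual RTI screening tool: the `assign_tier` function, the benchmark and cutoff scores, and the student names are all invented for the example. The key idea is that tiers are recomputed from each unit's screening data, so the groupings change from goal to goal.

```python
def assign_tier(score, benchmark, intensive_cutoff):
    """Place a student for ONE learning goal, based on a screening score.

    Groupings are per-goal, not permanent labels: the same function is
    re-run on fresh screening data at the start of each unit.
    """
    if score >= benchmark:
        return 1  # core instruction only
    if score >= intensive_cutoff:
        return 2  # small-group supplemental instruction
    return 3  # individualized, intensive intervention

# Hypothetical vowel-sounds screening scores out of 20
scores = {"Ada": 18, "Ben": 12, "Cai": 6}
tiers = {name: assign_tier(s, benchmark=15, intensive_cutoff=10)
         for name, s in scores.items()}
print(tiers)  # → {'Ada': 1, 'Ben': 2, 'Cai': 3}
```

In the next unit, with a different learning goal and new screening scores, Cai might land back in tier 1 while Ada needs tier 3 support; the label attaches to the instruction, not the student.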

 

Of course, as always, my default position in determining the efficacy of a pedagogical strategy is to rely on the meta-analytic data. To the best of my knowledge, there have been five meta-analyses on RTI. All of them showed strong results. However, two of them were not peer reviewed, and most of them included results likely to inflate the mean effect size. I have graphed these effect sizes below; readers may notice that my reported effect sizes differ from the ones Hattie reports. This is because, where possible, I have removed the inflationary results. 

The first meta-analysis was conducted by Swanson et al. in 2001. This meta-analysis included 30 studies and made calculations using Cohen’s d. It did not exclude case studies, which would normally inflate results; however, it did break down results based on whether they were experimental studies or case studies. Surprisingly, the case study (within-design) effect sizes were much lower than the experimental (between-design) effect sizes. This meta-analysis showed high results for students younger than 13, average-achieving students, and underachieving students. It showed moderate results for learning disabled students, hearing impaired students, and intellectually disabled students. It showed low or negative results for RTI coaching models and for students in secondary school. 
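For readers unfamiliar with the metric, Cohen’s d (in its between-groups form) is simply the difference between two group means divided by their pooled standard deviation. A minimal sketch, using hypothetical post-test reading scores (not data from Swanson et al.):

```python
import math

def cohens_d(treatment, control):
    """Between-groups Cohen's d: mean difference over pooled SD."""
    n1, n2 = len(treatment), len(control)
    m1 = sum(treatment) / n1
    m2 = sum(control) / n2
    # Sample variances (n - 1 denominator)
    v1 = sum((x - m1) ** 2 for x in treatment) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in control) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical post-test scores for an intervention and a comparison group
tier3_scores = [52, 58, 61, 55, 60, 57]
comparison = [48, 50, 53, 49, 51, 52]
print(round(cohens_d(tier3_scores, comparison), 2))
```

By common rules of thumb, a d of 0.2 is small, 0.5 moderate, and 0.8 large, which is why the meta-analytic means reported in this article (around 0.5 to 1.3) read as strong results.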

The second meta-analysis was conducted by Burns et al. in 2005. This study found a mean effect size of 1.27. However, it did not exclude case studies, which likely inflated its results, and it did not include any pertinent covariate research. 

 

The third meta-analysis was conducted by Tran et al. in 2011. It was the only one that specifically looked at reading outcomes. However, it did not exclude case studies, which may have inflated results. This meta-analysis showed high results for word ID, reading comprehension, word attack, vocabulary, verbal IQ, phonological memory, orthographic skills, phonological awareness, and spelling. It showed moderate evidence for rapid naming speed, problem solving, and reading fluency. It showed a statistically insignificant positive benefit on IQ scores and a negative result for behavior. I think the behavior piece is particularly interesting. While working at an RTI school, I noticed substantial improvements to reading scores. However, many teachers felt it increased behavioral difficulties in the school. That being said, the behavior issues often stemmed from podding, a practice that is often associated with RTI but is not exclusive to it. Podding happens when teachers combine instruction: i.e., each week one teacher teaches tier 3, one teacher teaches tier 2, and one teacher teaches tier 1. Personally, I love the podding strategy, but it can definitely lead to unique behavioral challenges. 

The next paper was written by Bagasi, as part of a master's thesis. According to John Hattie, this meta-analysis included 19 studies and found a mean effect size of .47. That being said, I was unable to personally locate this paper. 

 

The last paper was written by Dr. Torres, for his PhD thesis. Dr. Torres offers several different effect sizes depending on the experimental design. John Hattie uses the random-effects control-group-design effect size as his reference for this paper, which was 1.32. This paper specifically looked at ELL students and, despite not being peer reviewed, was the only meta-analysis that excluded case studies. 
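The "random effects" figure mentioned above refers to how the individual study effect sizes are pooled. A random-effects model estimates the between-study variance (tau squared) and adds it to each study's sampling variance before weighting, so that no single large study dominates the mean. A minimal DerSimonian-Laird sketch, using hypothetical effect sizes and variances (not Dr. Torres's actual data):

```python
def random_effects_mean(effects, variances):
    """DerSimonian-Laird random-effects pooled effect size."""
    # Fixed-effect (inverse-variance) weights and mean
    w = [1 / v for v in variances]
    fixed_mean = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Heterogeneity statistic Q and between-study variance tau^2
    q = sum(wi * (e - fixed_mean) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    # Re-weight with tau^2 added to each study's variance
    w_star = [1 / (v + tau2) for v in variances]
    return sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)

# Hypothetical study-level effect sizes and their sampling variances
effects = [1.10, 0.85, 1.40, 0.60]
variances = [0.04, 0.06, 0.05, 0.08]
print(round(random_effects_mean(effects, variances), 2))
```

When the studies disagree a lot (high Q), tau squared grows and the weights even out, pulling the pooled mean toward a simple average of the studies; when they agree, the result approaches the fixed-effect mean.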

 

Discussion: 

RTI is often cited as an evidence-based practice in literacy instruction. While we have several meta-analyses showing a high effect size, only one of these papers excluded non-experimental studies, and only one specifically looked at literacy instruction. That being said, I think it is reasonable to say that we have some moderately impressive evidence for the use of RTI, but that from a scientific perspective we might want to be cautiously optimistic. 

 

Personally, RTI was a big part of what started my journey on the quest to answer the question of how we best help students learn. Anecdotally, I have found the framework incredibly powerful and fully believe it has the potential to transform educational results within a school. That being said, RTI is a complex system and requires training and teacher buy-in to be successful. I think there are perhaps simpler and more cost-effective ways to improve reading education within a school. Lastly, while RTI is often used specifically for literacy instruction, in my personal experience it makes even more sense for math instruction. 

 

Written by Nathaniel Hansford

Last Edited 2022-06-05

References: 

Hattie, J. (2022). RTI. Visible Learning, Metax. Retrieved from <https://www.visiblelearningmetax.com/influences/view/response_to_intervention>. 

 

Tran, L., Sanchez, T., Arellano, B., & Lee Swanson, H. (2011). A meta-analysis of the RTI literature for children at risk for reading disabilities. Journal of Learning Disabilities, 44(3), 283–295. https://doi.org/10.1177/0022219410378447

 

Swanson, H. L., & Lussier, C. M. (2001). A Selective Synthesis of the Experimental Literature on Dynamic Assessment. Review of Educational Research, 71(2), 321–363. https://doi.org/10.3102/00346543071002321

 

Burns, M. K., Appleton, J. J., & Stehouwer, J. D. (2005). Meta-Analytic Review of Responsiveness-To-Intervention Research: Examining Field-Based and Research-Implemented Models. Journal of Psychoeducational Assessment, 23(4), 381–394. https://doi.org/10.1177/073428290502300406