Wood, S. G., Moxley, J. H., Tighe, E. L., & Wagner, R. K. (2018). Does use of text-to-speech and related read-aloud tools improve reading comprehension for students with reading disabilities? A meta-analysis. Journal of Learning Disabilities, 51(1), 73–84. https://doi.org/10.1177/0022219416688170

Journal Article


Elementary; High school; K-12; Learning disabilities; Meta-analysis; Middle school; Multiple ages; Oral delivery; Oral delivery, live/in-person; Postsecondary; Reading; Recorded delivery (audio or video); Text-to-speech device/software; U.S. context





The oral delivery accommodations investigated included a reading pen (a digital device) and text-to-speech (TTS) software with a computer-synthesized voice. The researchers used the phrase "and related read-aloud tools" to acknowledge the embedded features that often accompany TTS, such as variable speed/reading rate settings, voice type options, and concurrent highlighting of words on the screen as they are read aloud. Both complete and partial oral delivery of reading passages and individual test items were used.


This meta-analysis comprised the compilation and systematic analysis of datasets from 22 studies, published during the period 1993–2013, that together included 2,942 participants. The students ranged from grade 3 through postsecondary levels, spanning elementary, secondary, and postsecondary education. Studies were included if they measured reading comprehension; reported effect sizes for students with dyslexia, reading disabilities, or learning disabilities (reading subtype); included an oral presentation condition; and were reported in English. All studies appeared to center on U.S. education contexts. Both between-subjects and within-subjects designs were included: between-subjects studies compared average performance across different samples of participants, whereas within-subjects studies compared the same participants' performance across conditions.

Dependent Variable

A variety of reading comprehension measures—including state assessments, standardized tests, and researcher-developed reading assessments—were used across the studies synthesized in this meta-analysis. The assessments included: Accelerated Reader (AR), the comprehension section of the California Achievement Test 5th Edition (CAT/5), Formal Reading Inventory (FRI), Gates-MacGinitie Reading Tests (GMRT), Gray Silent Reading Test (GSRT), the reading comprehension section of the Iowa Test of Basic Skills (ITBS), Jamestown Reading Series (JRS), Missouri Assessment Program (MAP), U.S. History and Civics tests on NAEP, a researcher-modified Neale Analysis of Reading Ability II (NARA II), Nelson-Denny Reading Test (NDRT), SAT Critical Reading, Six Way Paragraphs Middle Level (SWP), STAR Reading Assessment Test (STAR), Test of Silent Sentence Reading (LS60), Texas Assessment of Knowledge and Skills (TAKS), Timed Readings in Literature (TRL), and researcher-developed reading assessments. Data analyses were performed for several factors distinguishing subsets of studies, and effect sizes and significance levels (p values) were reported.


Three major points were reported in response to the research questions, concerning effect size, moderating factors, and research quality. The average weighted effect size for all oral delivery accommodations on reading comprehension was 0.35 for all students with reading-related learning disabilities, and 0.36 (p < .01) for K–12 students with reading-related learning disabilities. Oral presentation of text was found to be effective in supporting higher reading comprehension performance for students with reading-related disabilities. Moderator analyses yielded only one significant moderator: study design (between-subjects vs. within-subjects). The average weighted effect size for between-subjects studies alone was 0.61 (p < .001); for within-subjects studies, the effect size was 0.15 (p > .05). The researchers stated, "Possible reasons for this may include regression to the mean, attrition, and order effects due to lack of counterbalancing treatments" (p. 80, citing Li, 2014). The researchers concluded that the quality of the current body of research varied. Limitations of the meta-analysis were reported, and future research possibilities were suggested.
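The "average weighted effect size" reported in meta-analyses of this kind is typically an inverse-variance weighted mean: each study's effect size is weighted by the inverse of its sampling variance, so more precise studies count more. A minimal sketch of that calculation follows; the effect sizes and variances are hypothetical illustrations, not values drawn from Wood et al. (2018).

```python
# Sketch: inverse-variance weighted mean effect size, the standard
# fixed-effect aggregation used in meta-analysis. All numbers below
# are hypothetical, not from the Wood et al. (2018) dataset.

def weighted_mean_effect(effects, variances):
    """Combine per-study effect sizes using inverse-variance weights."""
    weights = [1.0 / v for v in variances]          # precise studies get larger weights
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    return pooled

# Hypothetical studies: (effect size d, variance of d)
effects = [0.61, 0.15, 0.40]
variances = [0.04, 0.02, 0.05]

print(round(weighted_mean_effect(effects, variances), 3))  # prints 0.324
```

Note that the pooled value (0.324) sits closer to 0.15 than a simple average would, because the second hypothetical study has the smallest variance and therefore the largest weight; this is why between-subjects and within-subjects studies can pull a pooled estimate in different directions, as the moderator analysis above found.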