Journal Article

Flowers, C., Kim, D. H., Lewis, P., & Davis, V. C. (2011). A comparison of computer-based testing and pencil-and-paper testing for students with a read-aloud accommodation. Journal of Special Education Technology, 26(1), 1–12. https://doi.org/10.1177/016264341102600102

Tags

Educator survey; Electronic administration; Elementary; High school; Individual; Intellectual disabilities; K-12; Learning disabilities; Math; Middle school; Multiple content; Oral delivery; Oral delivery, live/in-person; Physical disability; Reading; Science; Student survey; Text-to-speech device/software; U.S. context

URL

https://www.isetcec.org/journal-of-special-education-technology-jset/

Summary

Accommodation

Students completed assessment items with the oral delivery accommodation in either the paper-and-pencil or the computer-based format. In the paper format, test proctors administered the oral delivery live and in person, in an individual setting. On the computerized test, students were provided text-to-speech software (Read & Write Gold) to complete the test independently.

Participants

Students with disabilities who were eligible for a read-aloud accommodation participated. Disabilities were sorted into the following categories: health-impaired disability, mild mental disability, specific learning disability, and other disabilities. Data were drawn from a larger data set of scores from students in grade 3 through grade 11 in a southeastern U.S. state, then narrowed through propensity score matching to students in grade 7 (n=128) and grade 8 (n=322). Additional demographics included gender (males and females) and ethnicity (Black, White, and other).

Dependent Variable

Extant data from 2007 and 2008, comprising scores from 23,925 students on the state assessment in reading, science, and mathematics, were examined; propensity score matching narrowed the data set to the scores of 450 students. Students (n=602) in grade 3 through grade 11 completed 22-item surveys on their computerized testing experience. School staff members (n=259), including test proctors, testing coordinators, principals, and technology coordinators, completed 57-item surveys about their experiences with the paper-based and computer-based tests, including their observations of test-takers.

Findings

Results showed no differences in effect sizes across grade levels. Effect sizes did differ by subject, with larger effect sizes for reading than for math or science. There were small to moderate differences between the paper-and-pencil test (PPT) and computer-based test (CBT) conditions that tended to favor the PPT condition; that is, scores were generally lower in the CBT condition, though this may have been due to extraneous factors. Differential item functioning (DIF) analyses showed that items did not consistently favor either condition. Finally, school staff and students both reported that students preferred the CBT condition, although the results did not show better performance in that condition. The researchers concluded that computer-based testing has the potential to provide students with a fair alternative testing condition. Limitations of the study were reported.