
Journal Article

Taherbhai, H., Seo, D., & Bowman, T. (2012). Comparison of paper-pencil and online performances of students with learning disabilities. British Educational Research Journal, 38(1), 61–74. https://doi.org/10.1080/01411926.2010.526193

Tags

Electronic administration; K-12; Learning disabilities; Math; Middle school; Reading; U.S. context

URL

https://onlinelibrary.wiley.com/journal/14678535

Summary

Accommodation

Participants' performance was compared across paper-pencil and online test administration formats.

Participants

This study examined an extant data set of students in grades 7 and 8 (ages 13–16) with learning disabilities who had already been selected by IEP teams to participate in state assessments based on modified achievement standards in Maryland (U.S.).

Dependent Variable

The modified version of Maryland's state assessments of mathematics and reading served as the dependent variable. The tests were administered in two modes: the regular paper-pencil format and an online computerized format. The scores of two groups of students with learning disabilities, matched on ability level based on previous test performance, were compared at the item and test levels.

Findings

At the test level, there were no significant differences between scores on the paper-pencil and online test modes for either math or reading; that is, the online test mode did not benefit students with learning disabilities. At the item level, some individual items in both grade levels and both content areas behaved differently across the test modes. The paper-pencil format benefited participants on only a couple of math items, and the online format benefited participants on a few more math and reading items, all at a moderate level of differential item functioning (DIF). However, the researchers noted that the number of items showing these differences was smaller than might occur by chance, suggesting that the differences did not indicate anything meaningful about the test format. Limitations of the study were reported, and possibilities for future research were suggested.