Kingston, N. M. (2009). Comparability of computer- and paper-administered multiple-choice tests for K–12 populations: A synthesis. Applied Measurement in Education, 22(1), 22–37. https://doi.org/10.1080/08957340802558326
This study synthesizes the results of 14 studies conducted between 1997 and 2007, yielding 81 separate data points, that investigated the comparability of computer-administered and paper-administered tests.
Participants in this meta-analysis of 14 studies, comprising 81 separate data points (by grade and by subject), ranged from elementary through high school and represented various ethnicities and ability levels; the total number of participants across the combined studies was not specified. The research context comprised studies conducted throughout the United States.
The dependent variables in this meta-analysis were primarily assessments, along with some surveys, which tested knowledge and skills in many subject areas, including language arts, math, reading, science, and social studies.
Grade level appeared to have no effect on comparability. Subject, however, did appear to affect comparability: computer administration appeared to provide a small advantage on English language arts and social studies tests (effect sizes of 0.11 and 0.15, respectively), while paper administration appeared to provide a small advantage on mathematics tests (effect size of -0.06). Regarding student perceptions, all studies reviewed indicated that a majority of students prefer computerized assessments.