Pommerich, M. (2004). Developing computerized versions of paper-and-pencil tests: Mode effects for passage-based tests. The Journal of Technology, Learning, and Assessment, 2(6). https://ejournals.bc.edu/index.php/jtla
Notes
[No DOI available]; also downloadable from the ERIC online database: https://eric.ed.gov/?id=EJ905028
Summary
Accommodation
This study examined differences in student performance between paper and computer test administrations. Text highlighting, alignment of items with the passage text, passage layout, location of line breaks, and ease of navigation were discussed as potential contributors to performance differences across test modes.
Participants
Two studies were analyzed in this report: Comparability 1 (1998) and Comparability 2 (2000). Participants in both studies were in grades 11 and 12. In Comparability 1, approximately 8,600 students across 40 schools participated in testing; in Comparability 2, approximately 12,000 students across 61 schools participated. The testing location was not specified; however, the study was written in English.
Dependent Variable
In both studies, tests in the English, Reading, Science Reasoning, and Mathematics content areas were administered in paper-and-pencil and computer formats. The computer interface used in Comparability 1 (Interface 1) was modified based on findings and feedback and then used in Comparability 2 (Interface 2). Students were randomly assigned to either paper-and-pencil or computer administration of a fixed-form test. Within each administration mode, students were randomly assigned to a content area, creating eight administration conditions.
Findings
Completion rates, total score performance, and item-level performance were described in detail across all testing conditions in both studies (Comparability 1 and 2). Text highlighting, alignment of items with the passage text, passage layout, location of line breaks, and ease of navigation were also discussed as factors contributing to performance differences across students. Some items yielded no performance differences between administration formats; however, students responded differently to other items. These differing results were affected by the characteristics of each test and the position of items within the test. The researchers concluded that it would be beneficial to understand all factors that can affect examinee behavior and to design a computer interface accordingly. Doing so would help ensure that test takers respond to the test content rather than to computer features.