Russell, M., & Haney, W. (1997). Testing writing on computers: An experiment comparing student performance on tests conducted via computer and via paper-and-pencil. Education Policy Analysis Archives, 5(3). https://doi.org/10.14507/epaa.v5n3.1997
Summary
Accommodation
The accommodation examined was the mode of test administration: computer versus paper-and-pencil.
Participants
One hundred fourteen (114) middle school students (grades 6-8) participated.
Dependent Variable
Three kinds of assessment were used: (1) An open-ended (OE) assessment consisted of 14 items: two writing items, five science items, five math items, and two reading items. (2) A test composed of NAEP items was divided into three sections; 15 language arts items, 23 science items, and 1 math item were multiple choice, while 2 language arts items, 3 science items, and 1 math item were open-ended and required students to write a brief response to each item's prompt. (3) A performance writing assessment required an extended written response. Both groups completed the OE assessment in exactly the same manner, by hand via paper-and-pencil. The experimental group completed the NAEP test and the writing assessment on computer, and the control group completed both in the traditional manner, by hand on paper.
Findings
Unlike most previous research on the effects of computer-administered tests, which has focused on multiple choice tests and has generally found no or only small differences due to mode of administration, the researchers reported substantial effects due to mode of administration. The effect sizes were 0.94 on the extended writing task, and 0.99 and 1.25 on the NAEP language arts and science short answer items, respectively. Effect sizes of this magnitude were deemed unusually large and of sufficient size to be not merely of statistical, but also of practical, significance. Student responses to written items were judged by three raters on a scale of 1-4, in which scores of 1 and 2 represented a less than adequate response and scores of 3 and 4 represented an adequate or better response. The computer mode of administration increased the success rate on the performance writing item from around 30% to close to 70%.