Davis, L. L., Kong, X., McBride, Y., & Morrison, K. M. (2017). Device comparability of tablets and computers for assessment purposes. Applied Measurement in Education, 30(1), 16–26. https://doi.org/10.1080/08957347.2016.1243538

Journal Article



This study apparently used the same data set, with some differences in data analyses, as Davis, Morrison, Kong, & McBride (2017).


Calculation device or software (interactive); Electronic administration; Extra blank or specialized paper; High school; K-12; Math; Multiple accommodations; No disability; Reading; Science; Student survey; U.S. context





The impact of device type for presenting test items was investigated by comparing performance on a computer screen (desktop or laptop) with performance on an electronic tablet. Additional details were provided about the screen sizes, makes, and models of the laptop and tablet devices. The tablet format required students to respond via touch screen, while students answered computer-presented items using external keyboards. The only additional supports were flag-and-review capability, a four-function calculator tool permitted by the software, and scratch paper and pencils. Students completed low-stakes tests in different content areas with a variety of item types.


Students without disabilities (n=964) from high schools in five school districts in Virginia participated in spring 2014. Additional demographic data, such as gender and race/ethnicity, were also reported. Students were mostly assigned to testing conditions at the classroom level, and procedures were used to develop equivalent groups of students across conditions.

Dependent Variable

Tests for each participant comprised a total of 59 items in three content areas: mathematics, reading, and science. Items were drawn from various sources: math items from the computation, geometry, and pre-algebra strands of a national norm-referenced test; reading items from a formative assessment item bank, with passages of varying lengths and genres; and science items from two criterion-referenced tests in biology and chemistry. The tests were reviewed by content experts. The level of difficulty was set at grades 7 through 10 so that students could be expected to have already learned the material. There were seven item types: drag and drop, fill in the blank, graph point, hot spot, inline choice, multiple choice, and multiple select. Item types varied by content area: reading tests contained drag and drop, hot spot, multiple choice, and multiple select items; math tests contained drag and drop, fill in the blank, graph point, multiple choice, and multiple select items; and science tests contained drag and drop, fill in the blank, hot spot, inline choice, and multiple choice items. Most items permitted partial credit. Each participant also completed a 10-item survey about previous and present experiences with different devices. Researchers also collected students' state reading test scores to check the randomness of assignment.


Student participant groups did not perform differently by device in any content area, comparing the computer screen (non-interactive) with the tablet touch screen. Students answering computer-delivered items scored essentially the same on average as students testing on electronic tablets with touch-screen responding across math, reading, and science content. There were also no significant mean differences in response patterns across the various item types. However, two individual reading items showed a performance difference, with students using tablets scoring significantly higher than those using computers. Student participants reported previous experience using devices during large-scale assessment: nearly all (95%) had taken assessments on paper, most on desktop computers (85%) or laptops (75%), far fewer on electronic tablets (24%), and almost none on smartphones (5%). Student perceptions of test content difficulty were not significantly different between devices. Preferences for device options during assessments varied; the largest proportions of participants preferred paper only or paper and computer screen, while much smaller proportions preferred touch-screen responding. Participants reporting previous tablet-delivered test experience also expressed more positive perceptions of using tablets during testing. Limitations of the study were reported, and future research directions were suggested.