Eberhart, T. (2015). A comparison of multiple-choice and technology-enhanced item types administered on computer versus iPad (Publication No. 10008819) [Doctoral dissertation, University of Kansas]. ProQuest Dissertations and Theses Global. https://www.proquest.com/docview/1762721968
University of Kansas (Lawrence, KS); ProQuest document ID: 1762721968; also available online on KU ScholarWorks at https://kuscholarworks.ku.edu/handle/1808/21674
The study investigated the impact of device type on test item presentation, comparing a computer screen with an iPad electronic tablet.
The quantitative data were drawn from grade 7 students' scores (n = 38,010) from across a state in the Midwestern U.S.; the ELA test scores comprised 22,824 students, and the math test scores comprised 42,498 students. Additional demographic characteristics of the student assessment participants are not available; disability category, in particular, was not a focus of this study. For the qualitative data collection, ten grade 7 students in a single school met individually with the researcher for cognitive lab ("think aloud") interview and observation sessions. Additional demographic details, such as gender, ethnicity, class standing, and a socioeconomic status proxy, were reported for these interviewees.
An extant statewide assessment score data set for students taking both math and English language arts (ELA) was analyzed across two independent variables: item type (multiple choice versus technology-enhanced) and presentation platform (computer screen, whether desktop or laptop, versus iPad electronic tablet), the latter serving as the accommodation conditions under study in this summary. Technology-enhanced items offered various response formats, including background graphic, drop-down, matching, matrix, multiple drop buckets, ordering, select text, sticky-drop buckets, and straight line. The ELA items covered reading comprehension and writing skills, and the math items covered procedures, problem-solving, reasoning, and data modeling; these content standards were drawn from the Smarter Balanced Assessment Consortium. Qualitative data were gathered from transcripts of ten grade 7 students who participated in one-on-one sessions with the researcher, following a cognitive lab ("think aloud") interview protocol while responding to a subset of state assessment items. The researcher also documented observations of the students during these sessions, and students completed a short post-test survey about their demographic characteristics and familiarity with various technologies.
When comparing scores by presentation device, the researcher found main effects: statistically significant differences in mean scores for both ELA and math, with students scoring higher on computer than on tablet. Comparisons by device for multiple-choice items likewise showed higher performance on computer than on tablet; notably, there were no significant performance differences by device for the technology-enhanced items. The researcher noted that the limited screen space for the calculator on the iPad required more manipulation of this tool when completing math items, which could explain the lower math performance on the iPad. Students scored, on average, statistically significantly higher on multiple-choice items than on technology-enhanced items, with moderate to large differences. This pattern of higher scores on multiple-choice items has been attributed to the additional time and effort needed to navigate and scroll through the more complex technology-enhanced items. Interaction effects between device and item type were also found: when simultaneously comparing scores by item type and presentation device, the researcher found statistically significant interaction effects in mean scores for all three math test forms and for one of three ELA test forms. The researcher suggested these interactions might reflect construct-irrelevant variance introduced for math items because the technology-enhanced item templates were less suited to measuring math performance, better suited to measuring ELA performance, or both. Another explanation is that students were more familiar with using technology-enhanced item formats when reading to comprehend information (an ELA-related skill) than when working toward math problem solutions.
Among the ten cognitive lab participants, five preferred the laptop computer, one preferred the iPad, and four liked both devices when completing the math and ELA test items. The laptop preference was commonly explained by the mouse, which students used both to mark answers and to "guide them in reading the items and answer choices" (pp. 53-54); the touch-screen radio buttons were difficult for some to select. Overall, seven students preferred the multiple-choice items, one preferred the technology-enhanced items, and two liked both item types. Comments supporting the multiple-choice preference centered on their simplicity: students could easily eliminate some options and pick their answers, whereas the technology-enhanced items took longer to complete. Additional usability issues were identified and discussed. Limitations of the study were reported, and future research directions were suggested.