Journal Article

Davis, L., Morrison, K., Kong, X., & McBride, Y. (2017). Disaggregated effects of device on score comparability. Educational Measurement: Issues and Practice, 36(3), 35–45. https://doi.org/10.1111/emip.12158

Notes

This study apparently analyzed the same data set as Davis, Kong, McBride, and Morrison (2017), with some differences in the data analyses.

Tags

Electronic administration; High school; K-12; Math; Multiple content; No disability; Reading; Science; Student survey; U.S. context

URL

https://onlinelibrary.wiley.com/journal/17453992

Summary

Accommodation

Students' performance on a multiple-content assessment was compared across two device conditions: computers with keyboards and tablets with touchscreens.

Participants

A total of 964 high school students in five school districts in Virginia participated. All had completed or were enrolled in specific English, math, and science classes at the time of testing, and all had taken online tests prior to the study. Students' disability status was not reported. Demographics including gender and ethnicity were reported and were used in group-level comparisons of performance scores.

Dependent Variable

Students' performance on a specially developed assessment containing reading, math, and science items was measured, with device (computer or electronic tablet) as the independent variable. Students were randomly assigned to the computer or tablet condition, individually where possible and otherwise at the classroom level. Performance was also disaggregated by gender and ethnicity to examine group differences. The assessment included multiple-choice items and six technology-enhanced item types: hot spot (hovering the cursor over part of a data figure), drag and drop, fill in the blank, multiple select (choosing more than one answer from a set of options), inline choice (selecting the best word or phrase to complete a sentence), and graph point (plotting data points on a graph). Students also completed a survey about their experiences with the two device conditions, computer and electronic tablet.

Findings

The researchers compared raw scores for all participants and used analyses of variance (ANOVAs) to examine group differences and interaction effects. They found no significant score differences between computers and tablets on the math and science sections of the assessment. On the reading items, mean scores for male students were slightly higher when using tablets than when using computers. No significant differences in mean scores were found across ethnicity groups in either device condition. The researchers reported limited findings from the student surveys, primarily in relation to the performance results. Regarding the higher reading scores for male participants with tablets (compared to computers), they stated, "student survey responses did not reveal any differential use in devices between genders which would offer an explanation in terms of either experience level with devices or novelty of devices" (p. 44). The researchers also reported the study's limitations and suggested directions for future research.