Journal Article

Gilbert, K., Kranzler, J., & Benson, N. (2021). An independent examination of the equivalence of the standard and digital administration formats of the Wechsler Intelligence Scale for Children-5th Edition. Journal of School Psychology, 85, 113–124. https://doi.org/10.1016/j.jsp.2021.01.002

Tags

Electronic administration; Elementary; High school; K-12; Middle school; Multiple ages; No disability; U.S. context

URL

http://www.elsevier.com/locate/jschpsyc

Summary

Accommodation

Electronic administration (via a digital tablet) of an intelligence test was compared with the standard paper-based test format.

Participants

Students attending elementary, middle, and high schools (grades K–12) in the U.S. states of Florida (n=53), Tennessee (n=3), and Texas (n=9) participated. Data on age, sex (male/female), race/ethnicity, and socioeconomic status (with lunch subsidy status as a proxy) were reported. Student participants attended public, charter, or private schools; about half were from a developmental research school in Florida with a deliberately diverse population. Students ranged in academic ability. Disability status was not reported; however, students with sensory and mobility impairments were excluded.

Dependent Variable

Norm-referenced general cognitive ability was measured using subtest scores and full-scale IQ (FSIQ) scores from the Wechsler Intelligence Scale for Children, fifth edition (WISC-V; Wechsler, 2014).

Findings

Comparing the performance of all participants on the digital and paper forms of the WISC-V, the researchers analyzed construct equivalence and measurement unit equivalence across the two test formats. Student performance was higher on the digital form than on the paper form, especially for students who completed the digital form first. Further, there were significant mean differences in full-scale IQ and in the Processing Speed composite score, because the Coding subtest showed measurement unit non-equivalence across formats. These differences in score patterns indicated that the scores of these participants without disabilities did not appear to carry the same meaning across the two test formats. Because the study design did not include systematic comparisons of students with and without disabilities, conclusions about WISC-V performance across test formats cannot be drawn for students with disabilities. Limitations of the study were reported, and future research directions were suggested.
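
As a general illustration only (standard measurement-invariance notation, not the model specification used in the article), "measurement unit equivalence" for a subtest amounts to requiring that its factor loading be equal across administration formats:

```latex
% Illustrative common-factor model for subtest j in format group g (digital or paper);
% symbols follow standard invariance notation and are not taken from the article.
y_{jg} = \tau_{jg} + \lambda_{jg}\,\eta_{g} + \varepsilon_{jg}
% Measurement unit (metric) equivalence: \lambda_{j,\mathrm{digital}} = \lambda_{j,\mathrm{paper}}.
% When this constraint fails (as reported for the Coding subtest), equal raw-score
% differences do not correspond to equal differences in the underlying ability
% across the two formats.
```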