Seo, D. G., & Hao, S. (2016). Scale comparability between nonaccommodated and accommodated forms of a statewide high school assessment: Using Iz person-fit. Journal of Psychoeducational Assessment, 34(3), 230–243. https://doi.org/10.1177/0734282915596126
The accommodated form of the test, used as the comparison condition, included an oral presentation accommodation, provided either in person as a read-aloud or as a recording of a human voice played on a separate device (cassette or CD); the oral presentation accommodation was also provided to students using the braille format.
The data set comprised 19,788 grade 11 students who took the state science assessment in Michigan in 2010; 8,670 of these students received the oral presentation accommodation. Note: The researchers did not report how many test-takers were students with disabilities or English learners, or which of these students did or did not receive the accommodation, so the impact of the accommodation could not be compared across participant populations. [The focus was on developing a data analysis approach, the Iz person-fit index, for examining the fairness of testing conditions.]
Scores on the state science assessment (Michigan Merit Examination; MME) served as the performance measure for comparing the accommodated and nonaccommodated versions; each version included 32 items, plus 20 items selected from the ACT. The two versions covered the same science content but were developed independently: of the 52 items on each version, 7 were common to both versions and 45 were unique to that version. This difference in item content, along with the unequal numbers of students receiving and not receiving the accommodation (8,670 vs. 11,118), were the issues the person-fit analysis needed to address in making comparisons. Note: For the current study, the researchers used data from one of the six standardized (nonaccommodated) forms of the MME in science and one accommodated form.
The researchers discussed the concerns raised by other approaches to comparing the accommodated and nonaccommodated versions (e.g., differential item functioning [DIF] and differential test functioning [DTF]), and the ways that person-fit analysis was designed to address these issues. They reported that the percentages of fitting and misfitting response patterns were similar for the accommodated and nonaccommodated versions, demonstrating scale comparability between them. They further explained that the accommodated version validly measured student ability, in a manner equivalent to the nonaccommodated version of the science test. They concluded that, through their use of these data as an example, the Iz person-fit index showed promise for such comparative analyses when the circumstances do not suit t-test or DIF approaches. Limitations of the study were reported, and future research directions were suggested.
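The article's Iz index appears to follow the family of standardized log-likelihood person-fit statistics (often written lz), which flag examinees whose response patterns are improbable given their estimated ability. A minimal illustrative sketch under a 2PL IRT model follows; the item parameters, ability value, and simulated responses are hypothetical and are not taken from the study.

```python
import numpy as np

def lz_person_fit(responses, theta, a, b):
    """Standardized log-likelihood person-fit statistic under a 2PL model.

    responses: 0/1 item scores for one examinee
    theta: examinee ability estimate
    a, b: item discrimination and difficulty parameters
    """
    # 2PL probability of a correct response on each item
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    # Observed log-likelihood of the response pattern
    l0 = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    # Expected value and variance of the log-likelihood under the model
    expected = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    variance = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)
    # Standardize; large negative values indicate misfit
    return (l0 - expected) / np.sqrt(variance)

# Hypothetical example: 32 items (as on each MME science form), with a
# response pattern simulated to be consistent with theta = 0.5
rng = np.random.default_rng(0)
a = np.ones(32)
b = np.linspace(-2, 2, 32)
theta = 0.5
p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
u = (rng.random(32) < p).astype(int)
print(lz_person_fit(u, theta, a, b))
```

Comparing the proportions of examinees flagged as misfitting (e.g., statistic below a fixed cutoff) on the accommodated and nonaccommodated forms is the kind of comparison the study reports, even though the forms share only a subset of items.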