Koretz, D., & Hamilton, L. (2000). Assessment of students with disabilities in Kentucky: Inclusion, student performance, and validity. Educational Evaluation and Policy Analysis, 22(3), 255–272. https://doi.org/10.3102/01623737022003255
Students received a variety of accommodations that were permitted on the Kentucky Instructional Results Information System (KIRIS) assessment and deemed appropriate to their individual needs. These included dictation (scribe), spelling assistance, oral presentation (read-aloud), paraphrasing, cueing, technological aids, signing, and combinations of these accommodations.
Participants were students with disabilities throughout Kentucky (U.S.) who took the KIRIS assessment in 1995 and 1997, totalling 31,604 data points. In 1995, the 10,813 students with disabilities comprised 2,780 who did not receive accommodations and 8,033 who did; scores came from grades 4, 8, and 11. In 1997, the 20,791 students with disabilities comprised 4,234 who did not receive accommodations and 16,557 who did; scores came from grades 4, 5, 7, 8, and 11. Certain grades are tested in only a few content areas. Approximately 10% of all Kentucky students are served under IDEA, and about one-third of these students are identified as having learning disabilities. Other disabilities represented in the tested population included speech/language impairments, intellectual disabilities, emotional/behavioral disabilities, physical impairments, and hearing impairments.
Performance on the KIRIS assessment, in reading, mathematics, science, and social studies, served as the dependent variable. The assessments included both multiple-choice (MC) and open-response (OR) item formats.
The authors suggest that scores obtained by some students may not be trustworthy because of inappropriate use of accommodations; in particular, students receiving the dictation accommodation obtained implausibly high scores. For the open-response portions of the assessment, item-test correlations for students with disabilities (with or without accommodations) did not differ from those for students without disabilities, and differential item functioning (DIF) was relatively infrequent in some subject areas. However, there were numerous instances of sizeable DIF among students receiving accommodations, particularly in mathematics. Lower correlations among parts of the test for accommodated students also raised concern about the validity of the test for this group. [See also Koretz, 1997; Koretz & Hamilton, 1999]
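The validity checks named above (item-test correlations and DIF screening) can be illustrated with a minimal sketch. This is not the authors' code or data: the Pearson item-test correlation and the Mantel-Haenszel common odds ratio below are standard textbook formulations of these techniques, shown only to clarify what the analyses compute; all function names and inputs are hypothetical.

```python
# Illustrative sketch of two standard validity statistics (not from the study):
# (1) item-test correlation, (2) Mantel-Haenszel common odds ratio for DIF.
import math


def item_test_correlation(item_scores, total_scores):
    """Pearson correlation between one item's scores and total test scores.

    A markedly lower correlation for one group (e.g., accommodated students)
    flags a possible validity problem for that item."""
    n = len(item_scores)
    mx = sum(item_scores) / n
    my = sum(total_scores) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(item_scores, total_scores))
    sx = math.sqrt(sum((x - mx) ** 2 for x in item_scores))
    sy = math.sqrt(sum((y - my) ** 2 for y in total_scores))
    return cov / (sx * sy)


def mantel_haenszel_odds_ratio(strata):
    """Common odds ratio across ability strata for a dichotomous item.

    strata: list of (a, b, c, d) tuples, one per total-score stratum, where
      a = reference-group correct,  b = reference-group incorrect,
      c = focal-group correct,      d = focal-group incorrect.
    Values near 1.0 indicate little DIF; large departures suggest the item
    behaves differently for the focal group at the same ability level."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den
```

For example, a stratified table in which both groups answer the item correctly at the same rate within every ability stratum yields an odds ratio of 1.0 (no DIF).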