Südkamp, A., Pohl, S., & Weinert, S. (2015). Competence assessment of students with special educational needs — Identification of appropriate testing accommodations. Frontline Learning Research, 3(2), 1–26. https://doi.org/10.14786/flr.v3i2.130
Notes
Also available on the ERIC webpage at https://eric.ed.gov/?id=EJ1091035 or on the journal webpage at http://journals.sfu.ca/flr/index.php/journal/article/view/130/243
Summary
Accommodation
As further described in the Dependent Variable section, two accommodated test versions served as comparison conditions for the standard grade 5 reading literacy test (56 items). One version, a "reduced test" (37 items), removed items of high difficulty; because fewer items were completed in the same testing time, it also effectively provided extended time. The "easy" version (35 items) removed high-difficulty texts and items and replaced them with fewer, easier items developed for students in grade 3, resulting in an "out-of-level" test that likewise effectively permitted extended time.
Participants
Students with special educational needs in learning (SEN-L) comprised the group under study; they were compared with students without disabilities and with low-performing students. The researchers noted that students with SEN-L in Germany are a "heterogeneous group of students with multifaceted etiology" (p. 4) and are distinguished from students with specific learning disabilities (SLD): students with SEN-L have general cognitive impairments and attend separate special education programs, whereas students with SLD do not necessarily have impairments in general cognitive abilities. For this study, secondary data from a set of related studies were analyzed, comprising 433 grade 5 students with SEN-L and 5,208 grade 5 general education students, of whom a subgroup (n=700) attended the lowest academic track, plus an additional 490 students from the lowest academic track. Additional demographic information was reported for each participant group, including age and gender; about 25–30% of the participants spoke a language other than German at home.
Dependent Variable
Data from reading literacy testing in the German National Educational Panel Study (NEPS) in November and December 2010 comprised the dependent variable. The standard reading test for general education students incorporated five different texts, and test items employed multiple-choice and matching response formats. Two accommodated versions of the standard reading test were developed. One, the "reduced test," removed one text with its nine associated items as well as 10 other items of high difficulty, yielding 37 items to be completed in the same 30 minutes (thereby effectively providing an extended-time accommodation). The other accommodated version, the "easy test," removed the three most difficult texts and their 37 associated items and replaced them with three texts and 23 items developed for grade 3 students. Students with SEN-L were randomly assigned to complete the standard test (N=176), the reduced test (N=173), or the easy test (N=84). Students in general education (N=5,208) completed the standard reading test without accommodations, including the students in the lowest academic track (N=700). An additional 490 students in the lowest academic track were randomly assigned to complete only the reduced test (N=332) or the easy test (N=158). [Note: There were 37 anchor items in the reduced test and 12 anchor items in the easy test, linking the test versions with one another for statistical analyses, including analyses of differential item functioning (DIF).]
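The anchor items mentioned above are what make DIF checks across test versions possible. The article's own analysis code and exact scaling procedures are not reproduced in this summary; purely as an illustration of one common screening approach, the sketch below computes a Mantel-Haenszel DIF index for a single item from invented counts, with examinees matched on an anchor-based score. All numbers and names here are assumptions for demonstration, not data or methods from the study.

```python
import numpy as np

def mantel_haenszel_dif(correct_ref, total_ref, correct_foc, total_foc):
    """Mantel-Haenszel log-odds-ratio DIF index for one dichotomous item.

    Each argument is a list with one entry per ability stratum (examinees
    matched, e.g., on their anchor-item score):
      correct_ref / correct_foc : number answering correctly in the
                                  reference / focal group
      total_ref / total_foc     : group sizes in that stratum
    A value near 0 suggests little DIF against either group.
    """
    num = 0.0
    den = 0.0
    for a, n_r, c, n_f in zip(correct_ref, total_ref, correct_foc, total_foc):
        b = n_r - a          # reference group: incorrect
        d = n_f - c          # focal group: incorrect
        n = n_r + n_f        # stratum size
        if n == 0:
            continue
        num += a * d / n
        den += b * c / n
    return float(np.log(num / den))


# Invented counts for one item across three anchor-score strata
log_or = mantel_haenszel_dif(
    correct_ref=[40, 55, 70], total_ref=[60, 70, 80],
    correct_foc=[20, 30, 45], total_foc=[40, 45, 55],
)
ets_delta = -2.35 * log_or   # ETS D-DIF scale; absolute values >= 1.5 are commonly labeled large DIF
print(round(ets_delta, 2))
```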
Findings
The researchers provided substantial detail about test response patterns as well as correctness of answers for each participant group, distinguishing among items that were skipped ("omitted"), items that were "not reached," and invalid responses. Students had the fewest omitted responses on the easy test, and the fewest not-reached items and fewest invalid responses on the reduced test; the latter finding was attributed to the reduced test having the fewest items requiring matching. Further, for students with special educational needs in learning (SEN-L), item completion increased on the reduced and easy versions in comparison to the standard test. There were few differences in item fit on the standard test between the general education participants and the students in the lowest academic track (LAT), but a higher rate of item misfit for students with SEN-L. The reduced test demonstrated better item fit for students with SEN-L and for students in the LAT than the other test versions; in contrast, the easy test fit relatively well for students in the LAT but not for students with SEN-L. In terms of item difficulty, the easy test was found to be too easy for students in the LAT and too difficult for students with SEN-L. Differential item functioning (DIF) analyses found that, for students in the LAT (without disabilities), the test versions were comparable to one another; few items had more than slight DIF. In contrast, all three tests demonstrated many items with strong DIF for students with SEN-L. The researchers concluded that, due to item functioning and variance across the measures, scores for students with SEN-L were not comparable or valid across the three test versions. In addition, the versions other than the standard test were not suitable "for a valid comparison of competence levels between students with SEN-L and students in general education" (p. 19).
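The findings above repeatedly reference item fit and misfit. The authors' specific fit criteria are not reproduced in this summary; as a generic sketch of the kind of statistic typically used for this purpose in Rasch-based scaling, the snippet below computes outfit and infit mean squares for one dichotomous item from simulated data. Everything here is an illustrative assumption, not the study's procedure.

```python
import numpy as np

def rasch_item_fit(responses, theta, beta):
    """Outfit and infit mean-square fit statistics for one item
    under a dichotomous Rasch model (illustrative only).

    responses : array of 0/1 scored answers to the item
    theta     : array of person ability estimates (same length)
    beta      : difficulty estimate for this item
    Values near 1.0 indicate responses vary about as much as the model
    expects; clearly larger values are commonly read as item misfit.
    """
    responses = np.asarray(responses, dtype=float)
    theta = np.asarray(theta, dtype=float)
    p = 1.0 / (1.0 + np.exp(-(theta - beta)))      # model-expected probability correct
    var = p * (1.0 - p)                            # model variance per response
    z2 = (responses - p) ** 2 / var                # squared standardized residuals
    outfit = float(z2.mean())                      # unweighted mean square
    infit = float(np.sum(var * z2) / np.sum(var))  # information-weighted mean square
    return outfit, infit


# Simulated data purely for demonstration
rng = np.random.default_rng(0)
theta = rng.normal(size=500)                       # invented person abilities
p_true = 1.0 / (1.0 + np.exp(-(theta - 0.4)))      # item with difficulty 0.4
responses = (rng.random(500) < p_true).astype(int)
print(rasch_item_fit(responses, theta, beta=0.4))
```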