Dissertation
Kavanaugh, M. (2017). Examining the impact of accommodations and universal design on test accessibility and validity (Publication No. 10264765) [Doctoral dissertation, Boston College]. ProQuest Dissertations and Theses Global. https://www.proquest.com/docview/1886122090

Notes

Boston College (Boston, MA); ProQuest document ID: 1886122090; also available on the Boston College website: https://dlib.bc.edu/islandora/object/bc-ir:107317

Tags

Disabilities Not Specified; Electronic administration; High school; K-12; No disability; Science; U.S. context

URL

https://www.proquest.com/docview/1886122090

Summary

Accommodation

Test performance was systematically compared across testing conditions that varied by format (paper or computer) and by the provision of accessibility features and accommodations. The paper-based accommodated condition (2,343 students) was compared with a computer-administered format with embedded accommodations, delivered through the "NimbleTools" system (656 students), and with a paper-based unaccommodated condition (2,000 students). The accessibility features and accommodations were individually assigned to students with support needs, whether or not they had identified disabilities. The researcher noted that state policy permitted 32 accommodations for this grade 11 science test.

Participants

An extant data set of 4,999 New England Common Assessment Program (NECAP) science assessment scores from grade 11 students in New Hampshire, Rhode Island, and Vermont, using or not using accessibility features and accommodations, was examined. The data set included students with IEP-assigned accommodations of various types who completed the science assessment either on paper (1,509 students) or via computer administration (503 students), along with 126 students with disabilities who did not receive accommodations. The numbers of students with specific disabilities were not reported, except that students with learning disabilities made up a large contingent. Among students without disabilities, 834 received accessibility features on the paper-based format, 153 completed the computerized test with accessibility features, and 1,874 completed the paper-based test without supports.

Dependent Variable

Scores from the 2009 NECAP grade 11 science assessment were analyzed for the performance effects of various (unspecified) combinations of accessibility features and accommodations across the two test formats. The data analyses focused on the impact of test format on item responses; in other words, the researcher sought to determine whether the test's construct validity was affected by computer versus paper administration. Both differential item functioning (DIF) analyses and confirmatory factor analysis (CFA) were employed to compare performance across groups of accommodated and unaccommodated test-takers.
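
For readers unfamiliar with the DIF procedure, a Mantel-Haenszel check matches examinees on total test score and then asks whether, within each score stratum, examinees in one condition answer a given item correctly more often than matched examinees in the other condition. The Python sketch below is a generic illustration of that computation, not the dissertation's actual analysis; the array names and the paper/computer group labels are hypothetical.

    # Minimal sketch of a Mantel-Haenszel DIF check for one item, assuming
    # numpy arrays: `item` (0/1 responses), `total` (total scores), and
    # `group` (condition labels). Generic illustration only.
    import numpy as np

    def mantel_haenszel_dif(item, total, group, ref="paper", focal="computer"):
        """Return the MH common odds ratio and ETS delta statistic."""
        num = 0.0  # sum of A_k * D_k / N_k over score strata
        den = 0.0  # sum of B_k * C_k / N_k over score strata
        for k in np.unique(total):
            stratum = total == k
            r = stratum & (group == ref)
            f = stratum & (group == focal)
            A = np.sum(item[r] == 1)   # reference group, correct
            B = np.sum(item[r] == 0)   # reference group, incorrect
            C = np.sum(item[f] == 1)   # focal group, correct
            D = np.sum(item[f] == 0)   # focal group, incorrect
            N = A + B + C + D
            if N == 0:
                continue
            num += A * D / N
            den += B * C / N
        alpha = num / den if den > 0 else float("nan")  # MH odds ratio
        delta = -2.35 * np.log(alpha)  # ETS delta scale; |delta| >= 1.5 is a
        return alpha, delta            # common flag for moderate-to-large DIF

A delta value near zero indicates that the item functions similarly for matched examinees in both conditions, which is the pattern the dissertation reported for most items.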

Findings

Accommodations provided to students who completed the paper form of the state science assessment differed in some cases from those provided to students who used NimbleTools in the computer-administered format. For instance, oral delivery was provided by a test administrator during the paper test but through a speech synthesizer as an embedded tool on the computerized test; further, about 16% of students used oral delivery on the paper form, whereas about 87% of students used the computer-presented oral delivery support. The researcher indicated that the "overall item functioning and underlying factor structure was consistent across accommodated and unaccommodated conditions" (p. 155) and in both paper-based and computer-administered assessments. In other words, a similar construct, although possibly not well defined, was measured in both accommodated formats. Most items functioned similarly across test versions, with only a few exceptions, and these exceptions did not systematically favor one condition. The researcher concluded that the computer-administered assessment with embedded supports did not alter the science construct, and highlighted the positive features of the NimbleTools system. Limitations of the study were reported, and future research directions were suggested.