Journal Article

Lam, E. A., Rose, S., & McMaster, K. L. (2020). Technical characteristics of curriculum-based measurement with students who are deaf. Journal of Deaf Studies and Deaf Education, 25(3), 318–333. https://doi.org/10.1093/deafed/enaa003

Tags

Educator survey; Electronic administration; Elementary; Hearing impairment (including deafness); High school; K-12; Middle school; Multiple ages; No age; No disability; Student survey; U.S. context

URL

https://academic.oup.com/jdsde

Summary

Accommodation

Electronically based (e-based) assessments incorporated a set of accommodations: (a) directions and practice items presented in a visual simulation format, requiring no spoken or signed directions; (b) immediate computer-generated corrective feedback after each item response; (c) text movement features triggered by students' response selections; (d) removal of the prior response when students changed their answers; (e) an on-screen countdown timer; and (f) a motivational feature indicating progress through the test items. This set of e-based test accommodations was compared with standard paper-and-pencil administration of the test tasks.

Participants

Forty (40) students with hearing impairments, including deafness, in grades 2–12 participated: 33 elementary, 5 middle school, and 2 high school students, drawn from one independent school district or one of two collaborative districts in a metropolitan area in the Upper Midwest (U.S.). Students' reading levels ranged from grade 2 to grade 5. Twenty-one (21) teachers also participated: 10 chose a "low participation" option, which entailed facilitating scheduling only, and 11 chose a "high involvement" option, which entailed administering the assessments to the 40 students on their caseloads. Detailed student demographics were reported, including race/ethnicity, sex, language, IEP status, free/reduced-price lunch status, type of hearing loss, and amplification use.

Dependent Variable

Several data collection tools were used to measure reading skills and reading preferences and to gather criterion validity evidence. The curriculum-based measures (CBMs) included a maze task assessing reading comprehension; for this task, the researchers drew four reading passages anchored to the grade 3 reading level from the EdCheckup, LLC and Children's Educational Services (2005) database. The CBM silent reading fluency (SRF) task assessed word recognition and used three grade 3 reading passages from the Test of Silent Contextual Reading Fluency, Second Edition (TOSCRF-2; Hammill, Wiederholt, & Allen, 2006) and one grade 3 reading passage from the Reading Milestones Placement and Monitoring assessment (RMPM; McAnally & Rose, 2011). Criterion achievement tests included the Measures of Academic Progress (MAP; Northwest Evaluation Association, 2003) and the Woodcock-Johnson III (WJ-III) Passage Comprehension test (Woodcock, McGrew, & Mather, 2001). The researchers also informally documented their observations of student test sessions. Finally, ratings of feasibility were collected. On a researcher-developed survey, students reported how helpful each of the six accommodations was and indicated their preferences between the electronic and paper-and-pencil test forms. Teachers completed the Usage Rating Profile-Assessment (URP-A; Chafouleas, Miller, Briesch, Neugebauer, & Riley-Tillman, 2012), which consists of 28 items divided into six factors: accessibility, understanding, home-school collaboration, feasibility, system climate, and system support. Teachers completed the URP-A after administering both the paper-pencil and e-based assessments.

Findings

The performance of students with hearing impairments, including deafness, on the maze curriculum-based measure (CBM) of reading comprehension did not differ significantly between the paper-pencil and electronic administration conditions. Further, participants performed similarly on the maze CBM and the criterion achievement test of reading comprehension, indicating that the maze had sufficient criterion validity. Students scored higher on the paper-pencil form of the silent reading fluency (SRF) CBM, which assessed word recognition, than on the electronic form. However, discrepancies between the CBM scores and the criterion achievement test scores for word recognition suggested that the SRF probes were not measuring fluency as intended. The potential for the e-based accommodations to assist students with hearing impairments was thus inconclusive.

Student survey results indicated that most students (62%) preferred the e-based format for the maze, while fewer (43%) preferred the electronic format for the SRF; very few preferred the paper-pencil format, and many expressed no preference. Regarding the helpfulness of the six accommodative features, 89% or more of the students rated five of them positively, yet nearly half indicated that the on-screen timer was not helpful. Further, students may not have been fully aware of the features, suggesting that the expected benefits were not fully realized due to limited use of, and limited familiarity with, the available features.

Teacher survey ratings varied in strength of agreement across aspects of the CBMs. Most relevant here, teachers indicated a clear understanding of how to implement CBMs with their students and found that using the CBMs, including the e-based accommodations, was feasible given typical time and resource constraints in their settings. However, as a group, teachers were only slightly supportive of employing CBMs with their students. The researchers noted aspects of electronically administered maze-type CBMs that suggest feasible implementation, such as saving test administration time. Informal researcher observations raised the concern that not all students could accurately move and place the mouse cursor when navigating the e-based format, which slowed item response rates. The researchers asserted that students need to be well practiced with electronically based assessments in order to derive performance benefits. Limitations of the study were reported, and future research directions were suggested.