Dissertation
Derr-Minneci, T. F. (1990). A behavioral evaluation of curriculum-based assessment for reading: Tester, setting, and task demand effects on high- vs. average- vs. low-level readers (Publication No. 9030319) [Doctoral dissertation, Lehigh University]. ProQuest Dissertations and Theses Global. https://www.proquest.com/docview/303877909

Notes

Lehigh University (Bethlehem, PA); ProQuest document ID: 303877909

Tags

Elementary; Examiner familiarity; Extended time; Individual; K-12; No disability; Reading; Small group; U.S. context

URL

https://www.proquest.com/docview/303877909

Summary

Accommodation

Test administration was varied along three dimensions: tester (classroom teacher vs. school psychologist), setting (reading group vs. teacher's desk vs. office outside the classroom), and timing (timed vs. untimed).

Participants

Participants included 100 third- and fourth-grade general education students: 35 reading below grade level, 31 reading at grade level, and 34 reading above grade level.

Dependent Variable

A curriculum-based assessment based on Cone's (1981, 1987) elaboration of a methodology for validating behavioral assessment procedures was used. The assessment measured correct words per minute (CWPM) and percentage of errors.

Findings

Students read more CWPM when assessed by their own teacher. Regarding setting, students read more CWPM in the reading group than at the teacher's desk, and more at the teacher's desk than in the office. Timed students read more CWPM than untimed students. Error rates followed the opposite pattern: students committed more errors in the office than at the teacher's desk, and more at the teacher's desk than in the reading group. These tester, setting, and timing effects were similar across reading levels.