Scheuneman, J. D., Camara, W. J., Cascallar, A. S., Wendler, C., & Lawrence, I. (2002). Calculator access, use, and type in relation to performance in the SAT I: Reasoning test in mathematics. Applied Measurement in Education, 15(1), 95–112. https://doi.org/10.1207/S15324818AME1501_06
Summary
Accommodation
After completing the test, participants were asked to answer three questions about their use of a calculator during the test.
Participants
All participants in this study were high school students in grade 11 or grade 12: 241,743 students took the November 1996 administration of the exam and 253,576 took the November 1997 administration. Of these, responses from 417,425 students (202,391 in 1996 and 215,034 in 1997) were retained for analysis of this nationwide (U.S.) population.
Dependent Variable
Participants took the SAT I: Reasoning Test in Mathematics at domestic (U.S.) test centers. Questions about calculator use on the test were included on the answer sheets for the November 1996 and November 1997 administrations.
Findings
Almost 95% of students brought calculators to the November administration in both years, and about 65% used their calculators on one third or more of the items. Group differences in calculator use were detected: girls used calculators more frequently than boys, and White and Asian American students used them more often than students from other racial groups. Although calculator access, frequency of use, and calculator type were all correlated with test scores, this relation appears to result from more able students using calculators differently than less able students. Regression analyses showed that calculator access and calculator type accounted for only a small percentage of the variance in test scores. Differential item functioning (DIF) analyses identified items favoring both frequent and infrequent calculator use. Completion-rate data indicated that students who used calculators less often were more likely to finish the exam.