Journal Article

Wang, S., Jiao, H., Young, M. J., Brooks, T., & Olson, J. (2007). A meta-analysis of testing mode effects in grade K–12 mathematics tests. Educational and Psychological Measurement, 67(2), 219–238. https://doi.org/10.1177/0013164406288166

Tags

Electronic administration; Elementary; High school; K-12; Math; Meta-analysis; Middle school; No disability; U.S. context

URL

http://epm.sagepub.com/cgi/reprint/67/2/219

Summary

Accommodation

Computer-based versus paper-and-pencil test administration was examined using meta-analytic procedures.

Participants

This study was a meta-analysis of data sets from K–12 students; the included studies appeared to have been conducted within the U.S. educational system.

Dependent Variable

Meta-analytic procedures were applied to K–12 mathematics test scores from the included studies.

Findings

The results, based on the final selected studies with homogeneous effect sizes, showed that administration mode had no statistically significant effect on K–12 students' mathematics test scores. Only the moderator variable of computer delivery algorithm contributed to predicting effect size: score differences between test modes were larger for linear tests than for adaptive tests. Other variables, such as study design, grade level, sample size, type of test, computer delivery method, and computer practice, did not lead to differences in student mathematics scores between computer-based and paper-and-pencil modes.