Test–Retest Reliability of the Adaptive Chemistry Assessment Survey for Teachers: Measurement Error and Alternatives to Correlation
Journal contribution posted on 09.02.2016 by Jordan Harshman, Ellen Yezierski
Determining the error of measurement is a necessity for researchers engaged in bench chemistry, chemistry education research (CER), and a multitude of other fields. Discussions of which constructs measurement error entails and how best to measure them have occurred, but critiques of traditional measures have yielded few alternatives. We present a brief review of psychometric reliability from its beginnings in the early 1900s as well as a summary of critiques of the test–retest reliability coefficient. We then posit a novel measurement, the zeta-range estimator, to assist in quantifying and accounting for measurement error, which educational researchers will find beneficial. We provide a proof-of-concept using simulated data and then analyze the reliability of items on the Adaptive Chemistry Assessment Survey for Teachers, a survey designed to characterize data-driven inquiry. While the focus here is CER, the zeta-range estimator also holds significant value for those outside educational research, as future work can expand our proof-of-concept to account for more than two measurements. While this estimator is a promising starting point, we discuss its limitations and hope future research can use the ideas presented here to explore new frontiers in measurement error determination.
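As background for the critiques summarized above: the traditional test–retest reliability coefficient is simply the Pearson correlation between scores from two administrations of the same instrument. The sketch below illustrates this conventional coefficient on simulated data (it is not the zeta-range estimator, which is defined in the full article); the true-score standard deviation and error standard deviation are illustrative assumptions.

```python
import random

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

random.seed(0)
# Simulate stable "true" scores for 200 respondents (assumed SD = 10),
# then two administrations contaminated by independent error (assumed SD = 5).
true_scores = [random.gauss(50, 10) for _ in range(200)]
test = [t + random.gauss(0, 5) for t in true_scores]
retest = [t + random.gauss(0, 5) for t in true_scores]

r = pearson_r(test, retest)
print(f"test-retest r = {r:.2f}")
```

Under classical test theory this correlation approaches true-score variance divided by observed-score variance (here roughly 100/125 = 0.8), but as the critiques reviewed in the article note, a single correlation conflates several sources of error, which motivates alternatives such as the zeta-range estimator.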