Expected Agreement Coefficient for Norm-Referenced Tests With Classical Test Theory.
Since the first distinction between norm-referenced and criterion-referenced interpretations of test results, many researchers, including Glaser and Nitko and Popham and Husek, have argued that reliability coefficients in classical test theory are appropriate for norm-referenced tests, because these coefficients depend on the relative standing of an examinee within a norm group.
This paper introduces an expected agreement coefficient for norm-referenced interpretations of test scores within the classical test theory framework. It presents the context and assumptions of randomly equivalent test forms that are necessary to develop the expected agreement coefficient.
To derive the expected agreement coefficient within the context of classical test theory, we first introduce the concept of randomly equivalent test forms in place of the classical concept of equivalent test forms. Random equivalence holds when the test developer can build a very large, or infinite, number of different test forms from a large pool of items measuring the psychological construct.
Every component of the expressions for the expected agreement/reliability coefficients takes the form of an expected value of some term, taken over random sets of items drawn from the domain of items and over random samples of examinees drawn from the population of examinees.
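To make this "expectation over random item sets and random examinee samples" concrete, the following sketch (a hypothetical simulation, not part of the paper) draws many random test forms from a large item pool and many random examinee samples from a population, computes coefficient alpha for each pairing, and averages the results; the item pool size, response model, and all parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical domain: 500 items of varying difficulty,
# and a population of 2000 examinees of varying ability.
N_POOL, N_POP = 500, 2000
difficulty = rng.normal(0.0, 1.0, N_POOL)
ability = rng.normal(0.0, 1.0, N_POP)

def coefficient_alpha(scores):
    """Cronbach's alpha for an examinees-by-items score matrix."""
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def random_form_alpha(n_items=40, n_examinees=200):
    # One random test form and one random examinee sample.
    items = rng.choice(N_POOL, n_items, replace=False)
    people = rng.choice(N_POP, n_examinees, replace=False)
    # Simple binary response model: P(correct) rises with ability - difficulty.
    p = 1 / (1 + np.exp(-(ability[people, None] - difficulty[None, items])))
    return coefficient_alpha((rng.random(p.shape) < p).astype(float))

# The expected coefficient is approximated by averaging over
# many random form/sample pairings.
alphas = [random_form_alpha() for _ in range(50)]
print(round(float(np.mean(alphas)), 3))
```

Averaging over replications is a Monte Carlo stand-in for the double expectation over items and examinees described above.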
This result supports the argument of Glaser and Nitko and Popham and Husek that reliability coefficients in classical test theory, such as coefficient alpha and KR-20, are appropriate for norm-referenced tests. The error score associated with coefficient alpha is the relative error score, whose variance reflects the difference between an individual examinee's performance and the performance of the peers who took the same test.
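As a small numerical illustration (hypothetical data, not from the paper), the sketch below computes coefficient alpha, which equals KR-20 for dichotomous items, together with each examinee's relative (deviation) score, i.e. total score expressed as a departure from the group mean:

```python
import numpy as np

# Hypothetical 0/1 item scores for 5 examinees on 4 items.
scores = np.array([
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
], dtype=float)

k = scores.shape[1]
totals = scores.sum(axis=1)

# Coefficient alpha (identical to KR-20 for dichotomous items).
alpha = k / (k - 1) * (1 - scores.var(axis=0, ddof=1).sum()
                       / totals.var(ddof=1))

# Relative (deviation) scores: each examinee's standing versus peers.
# By construction these deviations sum to zero.
relative = totals - totals.mean()

print(round(float(alpha), 3))  # → 0.519
print(relative)                # → [ 0.6  0.6 -1.4  1.6 -1.4]
```

The deviation scores make the norm-referenced interpretation explicit: only an examinee's standing relative to the group mean enters the comparison.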
Psychol Cogn Sci Open J. 2016; 2(1): 11-14. doi: 10.17140/PCSOJ-2-110