Measuring interrater reliability among multiple raters: an example of methods for nominal data

Stat Med. 1990 Sep;9(9):1103-15. doi: 10.1002/sim.4780090917.

Abstract

This paper reviews and critiques various approaches to the measurement of reliability among multiple raters in the case of nominal data. We consider measurement of the overall reliability of a group of raters (using kappa-like statistics) as well as the reliability of individual raters with respect to a group. We introduce modifications of previously published estimators appropriate for measurement of reliability in the case of stratified sampling frames, and we interpret these measures in view of standard errors computed using the jackknife. Analyses of a set of 48 anaesthesia case histories in which 42 anaesthesiologists independently rated the appropriateness of care on a nominal scale serve as an example.
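
The stratified-sampling modifications introduced in the paper are not reproduced here, but as an illustration of the two ingredients the abstract names (a kappa-like multirater agreement statistic and a jackknife standard error over subjects), the following is a minimal sketch in Python. It assumes a complete subjects-by-categories count table built from the ratings; the function names `fleiss_kappa` and `jackknife_se`, the three appropriateness categories, and the simulated 48 x 42 ratings matrix are hypothetical stand-ins, not taken from the paper.

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss-style kappa for a subjects x categories table of rating counts.

    counts[i, j] = number of raters assigning subject i to category j;
    every row is assumed to sum to the same number of raters.
    """
    counts = np.asarray(counts, dtype=float)
    n_subjects, _ = counts.shape
    n_raters = counts[0].sum()

    # Overall proportion of all ratings falling in each category.
    p_j = counts.sum(axis=0) / (n_subjects * n_raters)

    # Per-subject observed agreement among the pairs of ratings.
    p_i = np.sum(counts * (counts - 1), axis=1) / (n_raters * (n_raters - 1))

    p_bar = p_i.mean()       # mean observed agreement
    p_e = np.sum(p_j ** 2)   # chance-expected agreement
    return (p_bar - p_e) / (1.0 - p_e)


def jackknife_se(counts, statistic=fleiss_kappa):
    """Leave-one-subject-out jackknife standard error of a kappa-like statistic."""
    counts = np.asarray(counts, dtype=float)
    n = counts.shape[0]
    # Recompute the statistic with each subject (case history) deleted in turn.
    leave_one_out = np.array(
        [statistic(np.delete(counts, i, axis=0)) for i in range(n)]
    )
    return np.sqrt((n - 1) / n * np.sum((leave_one_out - leave_one_out.mean()) ** 2))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical data shaped like the example in the paper:
    # 48 case histories, 42 raters, 3 nominal categories of appropriateness.
    ratings = rng.integers(0, 3, size=(48, 42))
    counts = np.stack([np.bincount(row, minlength=3) for row in ratings])

    kappa = fleiss_kappa(counts)
    se = jackknife_se(counts)
    print(f"kappa = {kappa:.3f}, jackknife SE = {se:.3f}")
```

With the random ratings above, kappa should come out near zero, since agreement beyond chance is absent by construction; real appropriateness-of-care ratings would be expected to yield a larger value with a jackknife interval excluding zero.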

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Reproducibility of Results*
  • Statistics as Topic*