Definition
Inter-rater reliability is the extent to which two or more raters (or observers, coders, examiners) agree. It addresses the consistency with which a rating system is implemented. Inter-rater reliability can be evaluated using a number of different statistics; some of the more common are percentage agreement, kappa, the product–moment correlation, and the intraclass correlation coefficient. High inter-rater reliability values indicate a high degree of agreement between examiners, whereas low values indicate a low degree of agreement. Examples of the use of inter-rater reliability in neuropsychology include (a) the evaluation of the consistency of clinicians' neuropsychological diagnoses, (b) the evaluation of scoring parameters on drawing tasks such as the Rey Complex Figure Test or the Visual Reproduction subtest, and (c) the...
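To illustrate two of these statistics: percentage agreement is the proportion of cases on which two raters assign the same rating, and Cohen's kappa corrects that proportion for agreement expected by chance, kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and p_e is the agreement expected from each rater's marginal frequencies. The short Python sketch below computes both indices for two raters; the rating data and category labels are hypothetical and serve only as an example.

# Minimal sketch (Python): percentage agreement and Cohen's kappa for two
# raters assigning categorical ratings. The ratings below are hypothetical
# and used only for illustration.
from collections import Counter

rater_a = ["impaired", "normal", "normal", "impaired", "normal", "normal"]
rater_b = ["impaired", "normal", "impaired", "impaired", "normal", "normal"]
n = len(rater_a)

# Percentage agreement: proportion of cases on which the raters agree.
p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Agreement expected by chance, based on each rater's marginal frequencies.
freq_a, freq_b = Counter(rater_a), Counter(rater_b)
p_expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                 for c in set(rater_a) | set(rater_b))

# Cohen's kappa corrects observed agreement for chance agreement.
kappa = (p_observed - p_expected) / (1 - p_expected)

print(f"Percentage agreement: {p_observed:.2f}")   # 0.83
print(f"Cohen's kappa:        {kappa:.2f}")        # 0.67

When ratings are continuous rather than categorical, the product–moment correlation or the intraclass correlation coefficient is typically used instead of these categorical indices.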