Abstract
Intra-observer (within-observer) and inter-observer (between-observer) variability of the Oxford Clinical Cataract Classification and Grading System were studied. Twenty cataracts were examined and scored independently by four observers. On a separate occasion, two of the observers repeated the assessments of the same cataracts without access to their initial observations. The chance-corrected and weighted kappa statistics for observer agreement, both inter-observer and intra-observer, demonstrated satisfactory repeatability of the cataract grading system. The overall intra-observer mean weighted kappa was κw = +0.68 (range of SE κ: 0.012–0.052) and the overall inter-observer mean weighted kappa was κw = +0.55 (range of SE κ: 0.011–0.043).
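The weighted kappa reported above (Cohen, 1968) corrects raw agreement for chance while crediting near-misses between adjacent grades. A minimal sketch of the statistic with linear disagreement weights is given below; the observer ratings are hypothetical and do not reproduce the study's data.

```python
def weighted_kappa(rater_a, rater_b, n_categories):
    """Cohen's weighted kappa with linear disagreement weights."""
    n = len(rater_a)
    # Observed joint proportion for each pair of assigned grades.
    obs = [[0.0] * n_categories for _ in range(n_categories)]
    for a, b in zip(rater_a, rater_b):
        obs[a][b] += 1.0 / n
    # Marginal proportions for each rater.
    pa = [sum(obs[i][j] for j in range(n_categories)) for i in range(n_categories)]
    pb = [sum(obs[i][j] for i in range(n_categories)) for j in range(n_categories)]
    # Linear weights: zero on the diagonal, growing with grade distance.
    w = [[abs(i - j) / (n_categories - 1) for j in range(n_categories)]
         for i in range(n_categories)]
    observed = sum(w[i][j] * obs[i][j]
                   for i in range(n_categories) for j in range(n_categories))
    expected = sum(w[i][j] * pa[i] * pb[j]
                   for i in range(n_categories) for j in range(n_categories))
    return 1.0 - observed / expected

# Hypothetical grades (0-4) assigned by two observers to ten lenses.
a = [0, 1, 2, 2, 3, 4, 1, 0, 3, 2]
b = [0, 1, 2, 3, 3, 4, 2, 0, 3, 2]
print(round(weighted_kappa(a, b, 5), 2))
```

Because disagreements are weighted by grade distance, a one-step disagreement (e.g. grade 2 versus grade 3) penalises the statistic far less than a gross one, which suits an ordinal grading scale such as this one.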
Sparrow, J.M., Ayliffe, W., Bron, A.J. et al. Inter-observer and intra-observer variability of the Oxford clinical cataract classification and grading system. Int Ophthalmol 11, 151–157 (1988). https://doi.org/10.1007/BF00130616