Abstract
The quality of a measurement tool for evaluating professional competence and competence development depends largely on the extent to which the raters' evaluations of the participants' individual solutions converge or diverge (interrater reliability).
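To make the notion of interrater reliability concrete, the sketch below computes one widely used index, the intraclass correlation ICC(2,1) (two-way random effects, single rater, absolute agreement), from a matrix of ratings. The function name, the data layout, and the sample ratings are illustrative assumptions for this sketch, not part of the COMET procedure itself.

```python
# Minimal ICC(2,1) sketch: rows are rated solutions, columns are raters.
# All names and data here are illustrative, not from the COMET procedure.

def icc_2_1(scores):
    """Intraclass correlation ICC(2,1) via a two-way ANOVA decomposition."""
    n = len(scores)        # number of rated solutions (targets)
    k = len(scores[0])     # number of raters
    grand = sum(sum(row) for row in scores) / (n * k)

    row_means = [sum(row) / k for row in scores]
    col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]

    # Mean squares for targets (rows), raters (columns), and residual error.
    ms_rows = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ms_cols = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_err = ss_total - ms_rows * (n - 1) - ms_cols * (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    # Shrout & Fleiss ICC(2,1): agreement of a single rater, absolute sense.
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Three raters scoring four solutions; identical ratings give ICC = 1.0.
ratings = [[4, 4, 4], [2, 2, 2], [5, 5, 5], [3, 3, 3]]
print(round(icc_2_1(ratings), 3))  # → 1.0
```

Values near 1 indicate that raters order and score the solutions almost identically; values near 0 indicate that rater disagreement dominates the differences between solutions.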
Copyright information
© 2013 Springer Science+Business Media Dordrecht
About this chapter
Cite this chapter
Rauner, F., Heinemann, L., Maurer, A., Haasler, B., Erdwien, B., Martens, T. (2013). The COMET Rating Procedure in Practice: Some Conclusions. In: Competence Development and Assessment in TVET (COMET). Technical and Vocational Education and Training: Issues, Concerns and Prospects, vol 16. Springer, Dordrecht. https://doi.org/10.1007/978-94-007-4725-8_9
Publisher Name: Springer, Dordrecht
Print ISBN: 978-94-007-4724-1
Online ISBN: 978-94-007-4725-8