Design of peer assessment rubrics for ICT topics


Abstract

Peer assessment is the evaluation of students by their peers, following criteria or rubrics provided by the teacher that specify how students are to be evaluated so that they achieve the desired competencies. A measurement instrument must meet two essential quality criteria: validity and reliability. In this research, we explored the educational value of peer assessment rubrics by analyzing rubric quality through studies of content validity, reliability, and internal consistency. Our main purpose was to design an appropriate rubric for grading tasks in the field of information engineering and to validate its content through a group of experts. The study was carried out in three phases: (1) construction of a rubric, with its criteria, characteristics, and levels of achievement; (2) content validation by five experts in the field; and (3) application of the rubric to ascertain students’ perceptions of its validity and their satisfaction with it. The relevance of the criteria and the definition of their characteristics scored higher than 3.75 out of 4 on a Likert scale. The content validity ratio (CVR), content validity index (CVI), and general content validity index (GIVC) reached the maximum value of +1. The results indicate that the rubric is adequate, with Aiken’s V higher than 0.87 for all its criteria. The rubric was applied to 326 students across 4 subjects, and its reliability, measured with Cronbach’s alpha, was 0.839. Students’ ratings of the rubric’s perceived validity and of their satisfaction with it exceeded 0.78. As future work, we intend to design a rubric validation engine based on the procedure applied here.
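To make the abstract’s statistics concrete, the following minimal sketch (our own illustration from the standard definitions, not the authors’ code; all function names are ours) computes Lawshe’s CVR, a CVI, Aiken’s V, and Cronbach’s alpha in Python:

import numpy as np

def cvr(n_essential: int, n_experts: int) -> float:
    # Lawshe's content validity ratio: (n_e - N/2) / (N/2).
    return (n_essential - n_experts / 2) / (n_experts / 2)

def cvi(cvrs: list[float]) -> float:
    # Content validity index: mean CVR over the retained items.
    return sum(cvrs) / len(cvrs)

def aikens_v(ratings: list[int], lo: int = 1, hi: int = 4) -> float:
    # Aiken's V for one item rated by several judges on a lo..hi scale.
    return sum(r - lo for r in ratings) / (len(ratings) * (hi - lo))

def cronbach_alpha(scores: np.ndarray) -> float:
    # Cronbach's alpha for an (n_respondents x n_items) score matrix.
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Five experts all rating an item "essential" gives CVR = +1.0,
# matching the maximum values reported above.
print(cvr(n_essential=5, n_experts=5))   # 1.0
# Ratings of 4, 4, 3, 4, 4 on the 1-4 Likert scale give V = 0.93,
# above the 0.87 level reported for all criteria.
print(aikens_v([4, 4, 3, 4, 4]))         # 0.93

With a panel of only five experts, Lawshe’s critical CVR is close to +1, so near-unanimous “essential” judgments are required, which is consistent with the maximum values reported above.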


References

  • Aiken, L. R. (1980). Content Validity and Reliability of Single Items or Questionnaires. Educational and Psychological Measurement, 40(4), 955–959. https://doi.org/10.1177/001316448004000419

  • Almanasreh, E., Moles, R., & Chen, T. F. (2019). Evaluation of methods used for estimating content validity. Research in Social and Administrative Pharmacy, 15(2), 214–221. https://doi.org/10.1016/j.sapharm.2018.03.066

  • Alsina, Á., Ayllón, S., Colomer, J., Fernández-Peña, R., Fullana, J., Pallisera, M., & Serra, L. (2017). Improving and evaluating reflective narratives: A rubric for higher education students. Teaching and Teacher Education, 63, 148–158. https://doi.org/10.1016/j.tate.2016.12.015

  • ArchMiller, A., Fieberg, J., Walker, J. D., & Holm, N. (2017). Group peer assessment for summative evaluation in a graduate-level statistics course for ecologists. Assessment and Evaluation in Higher Education, 42(8), 1208–1220. https://doi.org/10.1080/02602938.2016.1243219

  • Ashton, S., & Davies, R. S. (2015). Using scaffolded rubrics to improve peer assessment in a MOOC writing course. Distance Education, 36(3), 312–334. https://doi.org/10.1080/01587919.2015.1081733

  • Ayre, C., & Scally, A. J. (2014). Critical values for Lawshe’s content validity ratio: Revisiting the original methods of calculation. Measurement and Evaluation in Counseling and Development, 47(1), 79–86. https://doi.org/10.1177/0748175613513808

  • Bernat-Adell, M. D., Moles-Julio, P., Esteve-Clavero, A., & Collado-Boira, E. J. (2019). Psychometric evaluation of a rubric to assess basic performance during simulation in nursing. Nursing Education Perspectives, 40(2), E3–E6. https://doi.org/10.1097/01.NEP.0000000000000436

  • Bowen, L., Pinargote, M., Meza, J., & Ventura, S. (2020). Trends in the use of artificial intelligence techniques for peer assessment. ACM International Conference Proceeding Series. https://doi.org/10.1145/3410352.3410837

  • Cabero Almenara, J., & Llorente Cejudo, M. C. (2013). La aplicación del juicio de experto como técnica de evaluación de las tecnologías de la información y comunicación (TIC). Eduweb: Revista de Tecnología de Información y Comunicación en Educación, 7(2), 11–22.

  • Capuano, N., Caballé, S., Percannella, G., & Ritrovato, P. (2020). FOPA-MC: Fuzzy multi-criteria group decision making for peer assessment. Soft Computing, 24(23), 17679–17692. https://doi.org/10.1007/s00500-020-05155-5

  • Chia, D., Kulkarni, C., Cheng, J., Wei, K. P., Klemmer, S. R., Le, H., & Papadopoulos, K. (2014). Peer and self assessment in massive online classes. ACM Transactions on Computer-Human Interaction, 20(6), 1–31. https://doi.org/10.1145/2505057

  • Escobar-Pérez, J., & Cuervo-Martínez, Á. (2008). Validez de contenido y juicio de expertos: Una aproximación a su utilización. Avances en Medición, 6, 27–36.

  • Espinoza Fernández, M. (2018). La evaluación de competencias clínicas en estudiantes de enfermería, un nuevo paradigma. Universitat Jaume I.

  • Fernández-Gómez, E., Martín-Salvador, A., Luque-Vara, T., Sánchez-Ojeda, M. A., Navarro-Prado, S., & Enrique-Mirón, C. (2020). Content validation through expert judgement of an instrument on the nutritional knowledge, beliefs, and habits of pregnant women. Nutrients, 12(4). https://doi.org/10.3390/nu12041136

  • Galicia Alarcón, L. A., Balderrama Trápaga, J. A., & Edel Navarro, R. (2017). Validez de contenido por juicio de expertos: Propuesta de una herramienta virtual [Content validity by experts judgment: Proposal for a virtual tool]. Apertura, 9(2). https://doi.org/10.32870/Ap.v9n2.993

  • García Duarte, A. (2017). La rúbrica, en la evaluación eficiente, de las asignaturas de programación de la carrera de Ingeniería en Sistemas de Información (UNAN-Managua, FAREM-Estelí.; 151). Retrieved from http://repositorio.unan.edu.ni/id/eprint/4204

  • García-Ros, R. (2011). Análisis y validación de una rúbrica para evaluar habilidades de presentación oral en contextos universitarios. Electronic Journal of Research in Educational Psychology, 9(3), 1043–1062. https://doi.org/10.25115/ejrep.v9i25.1468

  • Grant, J. S., & Davis, L. L. (1997). Focus on Quantitative Methods: Selection and Use of Content Experts for Instrument Development. Research in Nursing and Health, 20(3), 269–274. https://doi.org/10.1002/(sici)1098-240x(199706)20:3%3c269::aid-nur9%3e3.3.co;2-3

  • Guevara Rodriguez, G., Veytia Bucheli, M. G., & Sánchez Macías, A. (2020). Validez y confiabilidad para evaluar la rúbrica analítica socioformativa del diseño de secuencias didácticas. Revista Espacios, 41(9), 12. Retrieved from https://www.revistaespacios.com/a20v41n09/20410912.html

  • Hernández Sampieri, R., Fernández-Collado, C., & Baptista-Lucio, P. (2014). Metodología de la investigación (6a ed.). México: McGraw-Hill Interamericana.

  • Jonsson, A., & Svingby, G. (2007). The use of scoring rubrics: Reliability, validity and educational consequences. Educational Research Review, 2(2), 130–144. https://doi.org/10.1016/j.edurev.2007.05.002

  • Ketonen, L., Hähkiöniemi, M., Nieminen, P., & Viiri, J. (2020). Pathways through peer assessment: implementing peer assessment in a lower secondary physics classroom. International Journal of Science and Mathematics Education, 18(8), 1465–1484. https://doi.org/10.1007/s10763-019-10030-3

  • Lacave Rodero, C., Molina Díaz, A., Fernández Guerrero, M., & Redondo Duque, M. (2016). Análisis de la fiabilidad y validez de un cuestionario docente. ReVisión, 9(1), 2.

  • Lawshe, C. H. (1975). A Quantitative Approach to Content Validity. Personnel Psychology, 28(4), 563–575. https://doi.org/10.1111/j.1744-6570.1975.tb01393.x

  • Lee, J. E., Recker, M., & Yuan, M. (2020). The validity and instructional value of a rubric for evaluating online course quality: An empirical study. Online Learning Journal, 24(1), 245–263. https://doi.org/10.24059/olj.v24i1.1949

  • Lindblom-Ylänne, S., Pihlajamäki, H., & Kotkas, T. (2006). Self-, peer- and teacher-assessment of student essays. Active Learning in Higher Education, 7(1), 51–62. https://doi.org/10.1177/1469787406061148

  • Liu, J., Guo, X., Gao, R., Fram, P., Ling, Y., Zhang, H., & Wang, J. (2019). Students’ learning outcomes and peer rating accuracy in compulsory and voluntary online peer assessment. Assessment and Evaluation in Higher Education, 44(6), 835–847. https://doi.org/10.1080/02602938.2018.1542659

  • Luaces, O., Díez, J., & Bahamonde, A. (2018). A peer assessment method to provide feedback, consistent grading and reduce students’ burden in massive teaching settings. Computers and Education, 126, 283–295. https://doi.org/10.1016/j.compedu.2018.07.016

  • Lynn, M. R. (1986). Determination and quantification of content validity. Nursing Research, 35(6), 382–386. https://doi.org/10.1097/00006199-198611000-00017

  • Magdaleno Arreola, L., Dino-Morales, L. I., & Davis Vega, L. (2021). Diseño y validación de un instrumento para evaluar la práctica profesional docente desde la socioformación. Atenas: Revista Científico Pedagógica, 2(54), 85–99. Retrieved from http://www.uajournals.com/campusvirtuales/journal/19/3.pdf

  • Montagner, I. S. (2019). An experience with peer assessment in the context of a Computer Systems course. 2019 IEEE Frontiers in Education Conference (FIE), 1–5. https://doi.org/10.1109/FIE43999.2019.9028508

  • Muñiz, J., & Fonseca-Pedrero, E. (2019). Ten steps for test development. Psicothema, 31(1), 7–16. https://doi.org/10.7334/psicothema2018.291

  • Nicol, D., Thomson, A., & Breslin, C. (2013). Rethinking feedback in higher education: A peer review perspective. Assessment and Evaluation in Higher Education, 39(1), 102–122.

  • Nisbet, G., Jorm, C., Roberts, C., Gordon, C. J., & Chen, T. F. (2017). Content validation of an interprofessional learning video peer assessment tool. BMC Medical Education, 17(1), 1–10. https://doi.org/10.1186/s12909-017-1099-5

  • Panadero, E., & Brown, G. T. L. (2017). Teachers’ reasons for using peer assessment: Positive experience predicts use. European Journal of Psychology of Education, 32(1), 133–156. https://doi.org/10.1007/s10212-015-0282-5

  • Panadero, E., & Jonsson, A. (2013). The use of scoring rubrics for formative assessment purposes revisited: A review. Educational Research Review, 9, 129–144. https://doi.org/10.1016/j.edurev.2013.01.002

  • Pedrosa, I., Suárez-Álvarez, J., & García-Cueto, E. (2013a). Content validity evidences: Theoretical advances and estimation methods. Acción Psicológica, 10(2), 3–18. https://doi.org/10.5944/ap.10.2.11820

  • Pedrosa, I., Suárez Álvarez, J., & García Cueto, E. (2013b). Evidencias sobre la Validez de Contenido: Avances Teóricos y Métodos para su Estimación. Acción Psicológica, 10(2), 4–11.

  • Planas Lladó, A., Feliu Soley, L., Fraguell Sansbelló, R. M., Arbat Pujolras, G., Pujol Planella, J., Roura-Pascual, N., & Montoro Moreno, L. (2014). Student perceptions of peer assessment: An interdisciplinary study. Assessment and Evaluation in Higher Education, 39(5), 592–610. https://doi.org/10.1080/02602938.2013.860077

  • Polit, D. F., Hungler, B. P., Palacios Martinez, R., & Feher de la Torre, G. (2000). Investigación científica en ciencias de la salud: Principios y métodos (6a ed.). McGraw-Hill Interamericana.

  • Puerta Sierra, L., & Marín Vargas, M. E. (2015). Análisis de validez de contenido de un instrumento de transferencia de tecnología. XX Congreso Internacional de Contaduría, Administración e Informática.

  • Quintana, A. M. V., Rogado, A. B. G., Gavilán, A. B. R., Martín, I. R., Esteban, M. A. R., Zorrilla, T. A., & Izard, J. F. M. (2014). Application of new assessment tools in engineering studies: The rubric. Revista Iberoamericana De Tecnologias Del Aprendizaje, 9(4), 139–143. https://doi.org/10.1109/RITA.2014.2363008

  • Raposo-Rivas, M., & Gallego-Arrufat, M.-J. (2016). University students’ perceptions of electronic rubric-based assessment. Digital Education Review, 30, 220–233.

  • Reddy, Y. M., & Andrade, H. (2010). A review of rubric use in higher education. Assessment and Evaluation in Higher Education, 35(4), 435–448. https://doi.org/10.1080/02602930902862859

  • Song, Y., Hu, Z., Guo, Y., & Gehringer, E. F. (2016). An experiment with separate formative and summative rubrics in educational peer assessment. Proceedings of the 2016 IEEE Frontiers in Education Conference (FIE). https://doi.org/10.1109/FIE.2016.7757597

  • Topping, K. J. (2009). Peer assessment. Theory into Practice, 48(1), 20–27. https://doi.org/10.1080/00405840802577569

  • van Helvoort, J., Brand-Gruwel, S., Huysmans, F., & Sjoer, E. (2017). Reliability and validity test of a Scoring Rubric for Information Literacy. Journal of Documentation, 73(2), 305–316. https://doi.org/10.1108/JD-05-2016-0066

  • Vaughan, B., Yoxall, J., & Grace, S. (2019). Peer assessment of teamwork in group projects: Evaluation of a rubric. Issues in Educational Research, 29(3), 961–978.

  • Vinces Balanzátegui, L. (2018). Diseño y validación de una e-rúbrica para evaluar las competencias clínicas específicas en diagnóstico diferencial en pediatría. Universidad Casa Grande.

  • Weaver, K. F., Morales, V., Nelson, M., Weaver, P. F., Toledo, A., & Godde, K. (2016). The benefits of peer review and a multisemester capstone writing series on inquiry and analysis skills in an undergraduate thesis. CBE Life Sciences Education, 15(4), ar51.1-ar51.9. https://doi.org/10.1187/cbe.16-01-0072

Author information

Corresponding author

Correspondence to Lorena Bowen-Mendoza.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Appendix N° 1. The Task rubric for peer assessment

See Table 14

SCAFFOLDING

DOCUMENT

Cover page: the assignment should not contain the names of the group or the students.

Statement:

Few misspellings: fewer than 3

Some spelling errors: fewer than 10

Various spelling errors: more than 10

STRUCTURE

Elements: the entry conditions and the results are as requested.

Syntax: the syntax is adequate and correctly written.

Order: the structure must be tidy, with each element separated in a visible, unambiguous way; program code must use indentation to differentiate its content.

PROCESS

Procedures: all execution options must be contemplated so that the correct answer is always produced.

Logical sequence: the solution must be designed with a logical sequence according to what is requested, with correct semantics.

FUNCTIONALITY

Functionality: the solution correctly implements all the described requirements.

Results: the results are shown clearly for the different options.

Table 14 The Task rubric for peer assessment
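As a reading aid, the criterion-to-characteristic structure listed above (the same items the experts rate in Appendix N° 3) can be captured in a small data structure. The sketch below is our own representation of what the rubric validation engine mentioned in the abstract might consume, not the authors’ implementation:

# Criteria of Table 14 mapped to the characteristics peer assessors rate.
RUBRIC = {
    "Document":      ["Title page", "Statement"],
    "Structure":     ["Element", "Syntax", "Order"],
    "Process":       ["Processes", "Logical sequence"],
    "Functionality": ["Functionality", "Requirements", "Results"],
}

# Spelling-error bands from the "Statement" descriptor above.
def spelling_band(errors: int) -> str:
    if errors < 3:
        return "Few misspellings"
    if errors < 10:
        return "Some spelling errors"
    return "Various spelling errors"

print(spelling_band(7))   # Some spelling errors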

Appendix N° 2. Invitation to Expert Judgment.

Dear expert:

I am writing to invite you to participate as an expert in the evaluation of the rubric being designed as an instrument for my Ph.D. project on the topic “Fuzzy model for peer assessment”. Validating the instrument is of great importance to ensure the quality of the results and that they make a valuable contribution to the research being carried out.

Please enter your personal details below. They will be used only to present a profile of the evaluating experts:

Names and surnames:

Academic training:

Areas of professional experience:

Current position:

Years of experience at the university:

Thank you in advance for your attention to this request.

Appendix N° 3. Expert Judgment Template.

Respected judge: You have been selected to evaluate the instrument “Task rubric for peer assessment” that is part of the doctoral project with the topic “Fuzzy model for peer assessment”. The evaluation of the instruments is of great relevance to ensure that they are valid and that the results obtained from them are used efficiently. I appreciate your valuable collaboration.

Research objective: To design the artifacts of the fuzzy classification model for peer assessment.

Objective of the expert judgment: To validate the construction of the designed rubric, considering the selected items, their quality, pertinence, and relevance.

Test objective: The results obtained from applying the instrument will lead to a higher-quality rubric.

According to the following indicators, rate each of the items as appropriate.

1: Does not meet the criteria

2: Low Level

3: Moderate Level

4: High Level

Qualification scale by category (1 = lowest, 4 = highest):

Sufficiency: the items belonging to this criterion are enough to measure it.

1. The items are not enough to measure the criterion.
2. The items measure some aspect of the criterion but do not cover the criterion completely.
3. Some items must be added in order to fully evaluate the criterion.
4. The items are enough.

Clarity: the items are easily understood; their syntax and semantics are adequate.

1. The items are not clear.
2. The items require many modifications, or one very large modification, in the choice of words according to their meaning or in their ordering.
3. A very specific modification of some terms of the items is required.
4. The items are clear, with adequate semantics and syntax.

Coherence: the items have a logical relationship with the dimension or indicator being measured.

1. The items have no logical relationship with the criterion.
2. The items have a tangential relationship with the criterion.
3. The items have a moderate relationship with the criterion being measured.
4. The items are completely related to the criterion being measured.

Relevance: the item is essential or important and must be included.

1. The item can be eliminated without affecting the measurement of the criterion.
2. The item has some relevance, but another item may already cover what this one measures.
3. The item is relatively important.
4. The item is very relevant and must be included.

Please indicate your evaluation for each criterion: 1 (Does not meet the criterion), 2 (Low Level), 3 (Moderate Level), 4 (High Level).

CRITERION       ITEM               SUFFICIENCY   CLARITY   COHERENCE   RELEVANCE   OBSERVATION

Document        Title page
                Statement
Structure       Element
                Syntax
                Order
Process         Processes
                Logical sequence
Functionality   Functionality
                Requirements
                Results

(The rating and observation cells are left blank for the judge to fill in.)

Is there a dimension that is part of the construct and was not evaluated? ____.

Which? ________________________________________________________.
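To illustrate how completed templates can be aggregated, here is a sketch with hypothetical ratings (the data and helper names are ours, not the authors’): each item receives the five judges’ 1–4 scores per category, and Aiken’s V flags anything below the 0.87 level reported in the abstract.

JUDGES, LO, HI = 5, 1, 4

def aikens_v(scores):
    # Aiken's V on a LO..HI scale: sum of (score - LO) over n * (HI - LO).
    return sum(s - LO for s in scores) / (len(scores) * (HI - LO))

# Hypothetical ratings for two items of the "Document" criterion.
ratings = {
    ("Document", "Title page"): {
        "sufficiency": [4, 4, 4, 3, 4],
        "clarity":     [4, 4, 4, 4, 4],
        "coherence":   [4, 3, 4, 4, 4],
        "relevance":   [4, 4, 4, 4, 4],
    },
    ("Document", "Statement"): {
        "sufficiency": [3, 3, 4, 3, 4],
        "clarity":     [4, 3, 3, 4, 4],
        "coherence":   [4, 4, 3, 3, 4],
        "relevance":   [3, 4, 4, 3, 4],
    },
}

for (criterion, item), categories in ratings.items():
    for category, scores in categories.items():
        v = aikens_v(scores)
        flag = "" if v >= 0.87 else "  <- revise"
        print(f"{criterion} / {item} / {category}: V = {v:.2f}{flag}")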

Appendix N° 4.

See Table 15


Table 15 The rubric of Tasks for Peer Assessment, modified

Quantifier    Threshold        Score range

Most          At least 80%     80-99
Some          At least 50%     50-79
Few           At least 30%     30-49
(no label)    Less than 30%    0-29
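The band edges above translate directly into a score-to-quantifier mapping; a minimal sketch follows (the function name is ours):

def quantifier_level(score: int) -> str:
    # Map a 0-99 task score to the qualitative quantifier of Table 15.
    if 80 <= score <= 99:
        return "Most (at least 80%)"
    if 50 <= score <= 79:
        return "Some (at least 50%)"
    if 30 <= score <= 49:
        return "Few (at least 30%)"
    return "Less than 30%"

print(quantifier_level(85))   # Most (at least 80%)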

Appendix N° 5.

See Table 16

Table 16 Rubric validation questionnaire

Appendix N° 6.

See Table 17

Table 17 Satisfaction questionnaire of the Rubric

About this article

Cite this article

Bowen-Mendoza, L., Pinargote-Ortega, M., Meza, J. et al. Design of peer assessment rubrics for ICT topics. J Comput High Educ 34, 211–241 (2022). https://doi.org/10.1007/s12528-021-09297-9

