
Profiling Teacher/Teaching Using Descriptors Derived from Qualitative Feedback: Formative and Summative Applications

Published in: Research in Higher Education

Abstract

Considerable work has been done on student evaluation of teaching and teachers, but reservations remain about its use for summative purposes. Student ratings are not universally accepted as reliable, nor can they, on their own, provide truly meaningful information. Qualitative comments can provide a better understanding, but they tend not to be user-friendly because they lack structure and connectedness. This study devises a method for ‘quantifying’ students’ comments to increase their usefulness in complementing and confirming ratings. The quantified results enable the construction of a profile of what students regard as an effective or ineffective teacher, and enable identification of strengths and weaknesses. Our findings counter some commonly held assumptions, including the beliefs that high ratings depend on small class size and on ‘dumbing down’ courses, with the consequent expectation of high grades. The findings also indicate that students value teaching quality more than teacher characteristics, suggesting that they are able to make valid judgments about teaching effectiveness.


Notes

  1. Refer to Marsh (2007) for a list of Braskamp’s relevant articles.

  2. The faculties of Dentistry and Medicine at NUS do not follow the modular system and hence do not collect data at the end of each semester.

  3. Teachers who taught more than one module were assigned to the highest or lowest 20% cohort based on the average of their scores on overall teaching effectiveness, rather than the average module score.

  4. Perry et al. (1974) suggested that prior expectations of teaching performance could influence ratings of professors, and that altering certain kinds of information caused drastic changes in personality evaluations. However, research by Bejar and Doyle (1976) on this expectation–evaluation relationship showed that students were capable of rating their instructors independently of expectations held prior to the course, and that student interest in the subject matter was often independent of what instructors did.

  5. NUS has two university-level teaching awards: the Annual Teaching Excellence Award (ATEA) and the even more highly selective Outstanding Educator Award (OEA). The first recognizes excellence in teaching in the year under review, as evidenced by high-quality teaching practices (including classroom teaching, assessment practices, module development, range of teaching, and supervision of student projects) and activities in professional self-development. The OEA recognizes someone who is not only a consistently excellent teacher but who has also made exceptional contributions to the educational culture and practices at NUS. In addition to teaching practices and professional development, the OEA considers educational leadership (within NUS), educational impact (national/international), scholarship in teaching, teaching materials adopted by external parties, and other relevant activities. Nominations may be made by self or peers, and the selection process is a very rigorous one involving both Faculty Teaching Excellence Committees and the University Teaching Excellence Committee, based on information provided by student feedback, peer reviews, and self-reports through the Teaching Portfolio.

  6. See Marsh (2007) for a summary of relations between student evaluation of teaching and potential biases.

References

  • Abrami, P. C., d’Apollonia, S., & Rosenfield, S. (1996). The dimensionality of student ratings of instruction: What we know and what we do not. In J. C. Smart (Ed.), Higher education: Handbook of theory and research (pp. 213–264). New York: Agathon Press.

  • Abrami, P. C., d’Apollonia, S., & Rosenfield, S. (2007). The dimensionality of student ratings of instruction: What we know and what we do not. In R. P. Perry & J. C. Smart (Eds.), The scholarship of teaching and learning in higher education: An evidence-based perspective (pp. 385–456). New York: Springer.

  • Abrami, P. C., d’Appolonia, S., & Cohen, P. A. (1990). The validity of student ratings of instruction: what we know and what we don’t. Journal of Educational Psychology, 82, 219–231.

  • Abrami, et al. (1980). Do teacher standards for assigning grades affect student evaluations of instruction? Journal of Educational Psychology, 72, 107–118.

  • Aleamoni, L. M. (1987). Typical faculty concerns about student evaluation of teaching. In L. M. Aleamoni (Ed.), Techniques for evaluation and improving instruction, new directions for teaching and learning (No. 31, pp. 25–31). San Francisco: Jossey-Bass.

  • Aleamoni, L. M., & Hexner, P. Z. (1980). A review of the research on student evaluation and a report on the effect of different sets of instructions on student course and instructor evaluation. Instructional Science, 9, 67–84.

  • Apodaca, P., & Grad, H. (2005). The dimensionality of student ratings of teaching: integration of uni- and multidimensional models. Studies in Higher Education, 30(6), 723–748.

  • Arreola, R. A. (2000). Developing a comprehensive faculty evaluation system (2nd ed.). Bolton, MA: Anker Publishing Company.

  • Bejar, I. I., & Doyle, K. O. (1976). The effect of prior expectations on the structure of student ratings of instruction. Journal of Educational Measurement, 13(2), 151–154.

  • Cashin, W. E. (1992). Student ratings: The need for comparative data. Instructional Evaluation and Faculty Development, 12(2), 1–6.

  • Centra, J. A. (1981). Determining faculty effectiveness. The Journal of Higher Education, 52(3), 328–330.

  • Centra, J. A. (1987). Formative and summative evaluation: Parody or paradox? In L. M. Aleamoni (Ed.), Techniques for evaluation and improving instruction, new directions for teaching and learning (No. 31, pp. 47–55). San Francisco: Jossey-Bass.

  • Centra, J. A. (1993). Reflective faculty evaluation: Enhancing teaching and determining faculty effectiveness. San Francisco: Jossey-Bass.

  • Chiu, S. (1999). Use of the unbalanced nested ANOVA to examine factors influencing student ratings of instructional quality. Unpublished doctoral dissertation, University of Illinois at Urbana-Champaign.

  • Emery, C., Kramer, T., & Tian, R. (2003). Return to academic standards: Challenge the student evaluation of teaching effectiveness. Retrieved May 6, 2007, from www.bus.lsu.edu/academics/accounting/faculty/lcrumbley/stu_rat_of_%20instr.htm.

  • Erdle, S., Murray, H. G., & Rushton, J. P. (1985). Personality, classroom behavior and student ratings of college teaching effectiveness: A path analysis. Journal of Educational Psychology, 77(4), 394–407.

  • Feldman, K. A. (1976). Grades and college students’ evaluations of their courses and teachers. Research in Higher Education, 4, 69–111.

  • Feldman, K. A. (1997). Identifying exemplary teachers and teaching: Evidence from student ratings. In R. P. Perry & J. C. Smart (Eds.), Effective teaching in higher education: Research and practice (pp. 368–408). New York: Agathon Press.

  • Feldman, K. A. (2007). Identifying exemplary teachers and teaching: Evidence from student ratings. In R. P. Perry & J. C. Smart (Eds.), The scholarship of teaching and learning in higher education: An evidence-based perspective (pp. 93–143). New York: Springer.

  • Franklin, J., & Theall, M. (1989). Who reads ratings: Knowledge, attitude, and practice of users of student ratings of instruction. Paper presented at the annual meeting of the American Educational Research Association, San Francisco, CA.

  • Gallagher, T. (2000). Embracing student evaluations of teaching: A case study. Teaching Sociology, 28(2), 140–147.

  • Greenwald, A. G., & Gillmore, G. M. (1997). Grading leniency is a removable contaminant of student ratings. American Psychologist, 52, 1209–1217.

  • Guthrie, E. R. (1954). The evaluation of teaching: A progress report. Seattle: University of Washington.

  • Harrison, P. D., Douglas, D. K., & Burdsal, C. A. (2004). The relative merits of different types of overall evaluations of teaching effectiveness. Research in Higher Education, 45(3), 311–323.

  • Howard, G. S., & Maxwell, S. E. (1980). The correlation between student satisfaction and grades: A case of mistaken causation. Journal of Educational Psychology, 72, 810–820.

  • Johnson, T., & Sorenson, L. (2004). Online student ratings of instruction, new directions for teaching and learning (No. 97). San Francisco: Jossey-Bass.

  • Kember, D., & Wong, A. (2000). Implications for evaluation from a study of students’ perceptions of good and poor teaching. Higher Education, 40(1), 69–97.

  • Lewis, K. G. (Ed.). (2001). Techniques and strategies for interpreting student evaluations, new directions for teaching and learning (No. 87). San Francisco: Jossey-Bass.

  • Lin, Y., McKeachie, W. J., & Tucker, D. G. (1984). The use of student ratings in promotion decisions. Journal of Higher Education, 55, 583–589.

  • Lowman, J. (1996). Characteristics of exemplary teachers, new directions for teaching and learning (No. 65, pp. 33–50). San Francisco: Jossey-Bass.

  • Marsh, H. W. (1982). SEEQ: A reliable, valid, and useful instrument for collecting students’ evaluations of university teaching. British Journal of Educational Psychology, 52(1), 77–95.

  • Marsh, H. W. (1983). Multidimensional ratings of teaching effectiveness by students for different academic settings and their relationship to student/course/instructor characteristics. Journal of Educational Psychology, 75(1), 150–166.

  • Marsh, H. W. (1984). Students’ evaluations of university teaching: Dimensionality, reliability, validity, potential biases and utility. Journal of Educational Psychology, 76, 707–754.

  • Marsh, H. W. (1987). Students’ evaluations of university teaching: Research findings, methodological issues, and directions for future research. International Journal of Educational Research, 11, 253–388.

  • Marsh, H. W. (1991). Multidimensional students’ evaluations of teaching effectiveness: A test of alternative higher-order structures. Journal of Educational Psychology, 83, 285–296.

  • Marsh, H. W. (2007). Students’ evaluations of university teaching: Dimensionality, reliability, validity, potential biases and usefulness. In R. P. Perry & J. C. Smart (Eds.), The scholarship of teaching and learning in higher education: An evidence-based perspective (pp. 319–383). New York: Springer.

  • Marsh, H. W., & Bailey, M. (1993). Multidimensionality of students’ evaluations of teaching effectiveness: A profile analysis. Journal of Higher Education, 64, 1–18.

  • Marsh, H. W., & Dunkin, M. J. (1997). Students’ evaluation of university teaching: A multidimensional perspective. In R. P. Perry & J. C. Smart (Eds.), Effective teaching in higher education: Research and practice (pp. 241–320). New York: Agathon Press.

  • McKeachie, W. J. (1979). Student ratings of faculty: A reprise. Academe, 65, 384–397.

  • Millman, J. (Ed.). (1981). Handbook of teacher evaluation. Beverly Hills, CA: Sage Publications.

  • Ory, J. C. (2000). Teaching evaluation: Past, present, and future, new directions for teaching & learning (No. 83, pp. 13–18). San Francisco: Jossey-Bass.

  • Ory, J. C., & Braskamp, L. A. (1981). Faculty perceptions of the quality and usefulness of three types of evaluative information. Research in Higher Education, 15, 271–282.

  • Ory, J. C., & Ryan, K. (2001). How do student ratings measure up to a new validity framework? In M. Theall, P. Abrami, & L. Mets (Eds.), The student ratings debate: Are they valid? How can we best use them? New directions for institutional research (No. 109, pp. 27–44). San Francisco: Jossey-Bass.

  • Patton, M. Q. (1990). Qualitative evaluation and research methods (2nd ed.). Newbury Park, CA: Sage Publications.

  • Perry, R. P., Niemi, R. R., & Jones, K. (1974). Effect of prior teaching evaluations and lecture presentation on ratings of teaching performance. Journal of Educational Psychology, 66(6), 851–856.

  • Remmers, H. H. (1949). Are student ratings of instructors related to their grades? Student achievement and instructor evaluation in chemistry. Studies in Higher Education, 66, 18–26.

  • Ryan, J. M., & Harrison, P. D. (1995). The relationship between individual instructional characteristics and the overall assessment of teaching effectiveness across different instructional contexts. Research in Higher Education, 36, 213–228.

  • Theall, M., & Franklin, J. (Eds.). (1990). Student ratings of instruction: Issues for improving practice, new directions for teaching and learning (No. 43). San Francisco: Jossey-Bass.

  • Theall, M., & Franklin, J. (1999). Faculty thinking about the design and evaluation of instruction. In P. Goodyear & N. Hativa (Eds.), Teacher thinking, beliefs, and knowledge in higher education. The Netherlands: Kluwer Academic Publishers.

  • Wirtz, J. (2004). Student feedback collection tools that can help to continuously improve your teaching. Retrieved May 6, 2007, from http://www.cdtl.nus.edu.sg/link/mar2004/cover.htm.

  • Wulff, D. H., & Nyquist, J. D. (2001). Using qualitative methods to generate data for instructional development. In K. G. Lewis & J. P. Lunde (Eds.), Face to face: A sourcebook of individual consultation techniques for faculty/instructional developers (2nd ed.). Stillwater, OK: New Forums Press.

Author information

Corresponding author

Correspondence to Daphne Pan.

Cite this article

Pan, D., Tan, G.S.H., Ragupathi, K. et al. Profiling Teacher/Teaching Using Descriptors Derived from Qualitative Feedback: Formative and Summative Applications. Res High Educ 50, 73–100 (2009). https://doi.org/10.1007/s11162-008-9109-4
