
Automatic Scoring of an Analytical Response-To-Text Assessment

Conference paper: Intelligent Tutoring Systems (ITS 2014)

Part of the book series: Lecture Notes in Computer Science (LNPSE, volume 8474)


Abstract

In analytical writing in response to text, students read a complex text and adopt an analytic stance in their writing about it. To evaluate this type of writing at scale, an automated approach for Response to Text Assessment (RTA) is needed. With the long-term goal of producing informative feedback for students and teachers, we design a new set of interpretable features that operationalize the Evidence rubric of RTA. When evaluated on a corpus of essays written by students in grades 4-6, our results show that our features outperform baselines based on well-performing features from other types of essay assessments.
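
The abstract stops short of listing the features themselves, so the minimal Python sketch below only illustrates the general idea of interpretable, rubric-aligned Evidence features: hand-built topic word lists drawn from the source text, with per-topic mention counts and a breadth-of-coverage count that a downstream scoring model could consume. Everything here (the SOURCE_TOPICS word lists, the feature names, the tokenizer) is a hypothetical stand-in for illustration, not the paper's actual feature set.

```python
# Hypothetical sketch of interpretable "Evidence" features for essay scoring.
# The topic word lists and feature names below are illustrative assumptions,
# not the features described in the paper.

import re
from collections import Counter

# Assumed: each rubric topic is a hand-built set of words from the source text.
SOURCE_TOPICS = {
    "hospital": {"hospital", "clinic", "malaria", "medicine"},
    "school": {"school", "students", "teachers", "classroom"},
}

def tokenize(text: str) -> list[str]:
    """Lowercase the essay and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def evidence_features(essay: str) -> dict[str, int]:
    """Count per-topic mentions and the number of topics covered."""
    tokens = Counter(tokenize(essay))
    feats = {}
    covered = 0
    for topic, words in SOURCE_TOPICS.items():
        hits = sum(tokens[w] for w in words)
        feats[f"mentions_{topic}"] = hits  # raw count of topic-word mentions
        covered += hits > 0
    feats["topics_covered"] = covered      # breadth of evidence across topics
    return feats

if __name__ == "__main__":
    essay = "The students walked far to school because the hospital had no medicine."
    print(evidence_features(essay))
```

Features of this form stay human-readable, which is what makes the long-term goal of rubric-level feedback to students and teachers plausible: each feature maps directly to something a teacher could point at in the essay.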





Copyright information

© 2014 Springer International Publishing Switzerland

About this paper

Cite this paper

Rahimi, Z., Litman, D.J., Correnti, R., Matsumura, L.C., Wang, E., Kisa, Z. (2014). Automatic Scoring of an Analytical Response-To-Text Assessment. In: Trausan-Matu, S., Boyer, K.E., Crosby, M., Panourgia, K. (eds) Intelligent Tutoring Systems. ITS 2014. Lecture Notes in Computer Science, vol 8474. Springer, Cham. https://doi.org/10.1007/978-3-319-07221-0_76


  • DOI: https://doi.org/10.1007/978-3-319-07221-0_76

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-07220-3

  • Online ISBN: 978-3-319-07221-0

  • eBook Packages: Computer Science, Computer Science (R0)
