
Comparing Student Models in Different Formalisms by Predicting Their Impact on Help Success

  • Conference paper
Artificial Intelligence in Education (AIED 2013)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 7926)


Abstract

We describe a method to evaluate how student models affect ITS decision quality – their raison d’être. Given logs of randomized tutorial decisions and ensuing student performance, we train a classifier to predict tutor decision outcomes (success or failure) based on situation features, such as the student and the task. We define a decision policy that selects whichever tutor action the trained classifier predicts is likeliest, in the current situation, to lead to a successful outcome. The ideal but costly way to evaluate such a policy is to implement it in the tutor and collect new data, which may require months of tutor use by hundreds of students. Instead, we use historical data to simulate a policy by extrapolating its effects from the subset of randomized decisions that happened to follow the policy. We then compare policies based on alternative student models by their simulated impact on the success rate of tutorial decisions. We test the method on data logged by Project LISTEN’s Reading Tutor, which chooses randomly which type of help to give on a word. We report the cross-validated accuracy of predictions based on four types of student models, and compare the resulting policies’ expected success and coverage. The method provides a utility-relevant metric to compare student models expressed in different formalisms.
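The evaluation idea in the abstract can be sketched in a few lines. This is a hedged reconstruction on synthetic data, not the authors' code: all variable names, the feature encoding, and the use of logistic regression are my assumptions. It trains a classifier to predict P(success | situation, action) from logged randomized decisions, derives a policy that picks the action predicted most likely to succeed, then simulates that policy on the historical log by keeping only the trials whose randomized action happens to agree with the policy (the "coverage" subset) and measuring the success rate there.

```python
# Hypothetical sketch of policy simulation from logged randomized decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic log: one situation feature x (e.g., word difficulty), a help
# action in {0, 1} chosen uniformly at random, and a binary success outcome.
n = 2000
x = rng.normal(size=(n, 1))
action = rng.integers(0, 2, size=n)
# Ground truth (unknown to the method): action 1 helps when x > 0.
p_success = 1 / (1 + np.exp(-(x[:, 0] * (2 * action - 1))))
success = rng.random(n) < p_success

# Classifier predicts P(success | situation, action); the interaction term
# lets the model learn which action suits which situation.
X = np.column_stack([x, action, x[:, 0] * action])
clf = LogisticRegression().fit(X, success)

def policy(xi):
    """Pick the action the classifier predicts is likeliest to succeed."""
    feats = np.array([[xi, a, xi * a] for a in (0, 1)])
    return int(np.argmax(clf.predict_proba(feats)[:, 1]))

# Simulate the policy on historical data: restrict to trials where the
# randomized action agrees with the policy's choice, and report the
# empirical success rate on that subset versus the random baseline.
agree = np.array([policy(xi) == a for xi, a in zip(x[:, 0], action)])
coverage = agree.mean()
policy_success = success[agree].mean()
baseline_success = success.mean()
print(f"coverage={coverage:.2f}  "
      f"policy={policy_success:.2f}  random={baseline_success:.2f}")
```

Because the logged actions were chosen uniformly at random, the agreeing subset is an unbiased sample of what the policy would have done, which is what makes this offline comparison of policies (and hence of the student models behind them) valid.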




Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Lallé, S., Mostow, J., Luengo, V., Guin, N. (2013). Comparing Student Models in Different Formalisms by Predicting Their Impact on Help Success. In: Lane, H.C., Yacef, K., Mostow, J., Pavlik, P. (eds.) Artificial Intelligence in Education. AIED 2013. Lecture Notes in Computer Science (LNAI), vol. 7926. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-39112-5_17


  • DOI: https://doi.org/10.1007/978-3-642-39112-5_17

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-39111-8

  • Online ISBN: 978-3-642-39112-5

