
Deep (Un)Learning: Using Neural Networks to Model Retention and Forgetting in an Adaptive Learning System

  • Conference paper
Artificial Intelligence in Education (AIED 2019)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11625)

Abstract

ALEKS, which stands for “Assessment and LEarning in Knowledge Spaces”, is a web-based, artificially intelligent, adaptive learning and assessment system. Previous work has shown that student knowledge retention within the ALEKS system exhibits the characteristics of the classic Ebbinghaus forgetting curve. In this study, we analyze in detail the factors affecting the retention and forgetting of knowledge within ALEKS. From a dataset composed of over 3.3 million ALEKS assessment questions, we first identify several informative variables for predicting the knowledge retention of ALEKS problem types (where each problem type covers a discrete unit of an academic course). Based on these variables, we use an artificial neural network to build a comprehensive model of the retention of knowledge within ALEKS. In order to interpret the results of this neural network model, we apply a technique called permutation feature importance to measure the relative importance of each feature to the model. We find that while the details of a student’s learning activity are as important as the time that has passed since the initial learning event, the most important information for our model resides in the specific problem type under consideration.
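The abstract's interpretability technique, permutation feature importance, can be sketched in a few lines: permute one feature column at a time, re-score the fixed model, and take the degradation in the metric as that feature's importance. The model, data, and metric below are toy stand-ins chosen for illustration, not the authors' network or dataset; a linear least-squares fit plays the role of the trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y depends strongly on feature 0, weakly on feature 1,
# and not at all on feature 2.
X = rng.normal(size=(1000, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=1000)

# A fixed "model": an ordinary least-squares fit standing in for a
# trained neural network. Only predictions are needed from here on.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda X: X @ coef

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

baseline = mse(y, predict(X))

importances = []
for j in range(X.shape[1]):
    X_perm = X.copy()
    # Shuffling column j breaks its association with y while leaving
    # its marginal distribution (and all other columns) intact.
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    importances.append(mse(y, predict(X_perm)) - baseline)

print(importances)  # feature 0 dominates, feature 2 is near zero
```

The same recipe applies unchanged to a neural network: only `predict` and the metric change, which is what makes the technique model-agnostic.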



Author information

Correspondence to Jeffrey Matayoshi.



Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Matayoshi, J., Uzun, H., Cosyn, E. (2019). Deep (Un)Learning: Using Neural Networks to Model Retention and Forgetting in an Adaptive Learning System. In: Isotani, S., Millán, E., Ogan, A., Hastings, P., McLaren, B., Luckin, R. (eds) Artificial Intelligence in Education. AIED 2019. Lecture Notes in Computer Science, vol 11625. Springer, Cham. https://doi.org/10.1007/978-3-030-23204-7_22


  • DOI: https://doi.org/10.1007/978-3-030-23204-7_22

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-23203-0

  • Online ISBN: 978-3-030-23204-7

  • eBook Packages: Computer Science, Computer Science (R0)
