
A Hybrid Engineering Process for Semi-automatic Item Generation

  • Conference paper

Technology Enhanced Assessment (TEA 2016)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 653)

Abstract

Test authors can generate test items (semi-)automatically with different approaches. On the one hand, bottom-up approaches generate items from sources such as texts or domain models; however, relating the generated items to competence models, which define the required knowledge and skills on a proficiency scale, remains a challenge. On the other hand, top-down approaches use cognitive models and competence constructs to specify the knowledge and skills to be assessed; unfortunately, at this high level of abstraction it is impossible to identify which item elements can actually be generated automatically. In this paper we present a hybrid process that integrates both approaches. It aims to secure traceability between the specification levels and to make it possible to influence item generation at runtime, i.e., after all the intermediate models have been designed. In the context of the European project EAGLE, we use this process to generate items for information literacy, with a focus on text comprehension.
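The bottom-up side of the process can be pictured as template-based item generation: an item model (a stem with variable slots) is instantiated with content drawn from a source text or domain model. The sketch below is a minimal illustration of that general idea only, not the paper's actual implementation; the template, slot names, and values are all hypothetical.

```python
# Minimal sketch of item-model instantiation: one stem template with
# variable slots, filled from small content pools. Each combination of
# slot values yields one candidate test item. All names are hypothetical.
from itertools import product

def generate_items(template, slot_values):
    """Yield one item per combination of slot values."""
    keys = list(slot_values)
    for combo in product(*(slot_values[k] for k in keys)):
        yield template.format(**dict(zip(keys, combo)))

stem = "In {section}, what does the term '{term}' refer to?"
items = list(generate_items(stem, {
    "section": ["the introduction", "the conclusion"],
    "term": ["information literacy", "text comprehension"],
}))
# 2 sections x 2 terms -> 4 candidate items
```

In a hybrid process such as the one described here, each slot would additionally carry a link back to the competence model, so that every generated item stays traceable to the knowledge and skills it is meant to assess.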


Notes

  1. http://www.eagle-learning.eu/.



Acknowledgements

This work was carried out in the context of the EAGLE funded by the European Union’s Seventh Framework Programme for research, technological development and demonstration under grant agreement No. 619347.

Author information

Correspondence to Eric Ras.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Ras, E., Baudet, A., Foulonneau, M. (2017). A Hybrid Engineering Process for Semi-automatic Item Generation. In: Joosten-ten Brinke, D., Laanpere, M. (eds) Technology Enhanced Assessment. TEA 2016. Communications in Computer and Information Science, vol 653. Springer, Cham. https://doi.org/10.1007/978-3-319-57744-9_10

  • DOI: https://doi.org/10.1007/978-3-319-57744-9_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-57743-2

  • Online ISBN: 978-3-319-57744-9

