Abstract
Test authors can generate test items semi-automatically or automatically with different approaches. On the one hand, bottom-up approaches generate items from sources such as texts or domain models; however, relating the generated items to competence models, which define the required knowledge and skills on a proficiency scale, remains a challenge. On the other hand, top-down approaches use cognitive models and competence constructs to specify the knowledge and skills to be assessed; at this high level of abstraction, however, it is impossible to identify which item elements can actually be generated automatically. In this paper we present a hybrid process that integrates both approaches. It aims at ensuring traceability between the specification levels and at making it possible to influence item generation at runtime, i.e., after all the intermediate models have been designed. In the context of the European project EAGLE, we use this process to generate items for information literacy with a focus on text comprehension.
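The bottom-up generation the abstract refers to is commonly realized with item models: item templates whose variable slots are instantiated from content sources. The following is a minimal sketch of that idea in Python; the stem, slot values, and answer options are illustrative inventions for an information-literacy item, not material from the paper itself.

```python
from itertools import product

# A toy "item model": a multiple-choice stem with one variable slot.
STEM = "Which source is most appropriate for finding {info_need}?"

# Hypothetical slot values (in a real pipeline these would come from
# texts or a domain model, as in the bottom-up approaches discussed).
SLOT_VALUES = {
    "info_need": ["peer-reviewed research", "current news coverage"],
}

# Options per slot value; by convention here, the first option is the key.
OPTIONS = {
    "peer-reviewed research": ["an academic database", "a social network", "a blog"],
    "current news coverage": ["a newspaper website", "an encyclopedia", "a textbook"],
}

def generate_items():
    """Instantiate the item model once per combination of slot values."""
    items = []
    for values in product(*SLOT_VALUES.values()):
        bindings = dict(zip(SLOT_VALUES.keys(), values))
        options = OPTIONS[bindings["info_need"]]
        items.append({
            "stem": STEM.format(**bindings),
            "options": options,
            "key": options[0],
        })
    return items

items = generate_items()
```

The challenge the abstract raises is visible even in this toy: nothing in the template ties a generated item to a competence model or proficiency level, which is what the proposed hybrid process adds on top.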
Acknowledgements
This work was carried out in the context of the EAGLE project, funded by the European Union's Seventh Framework Programme for research, technological development and demonstration under grant agreement No. 619347.
Copyright information
© 2017 Springer International Publishing AG
Cite this paper
Ras, E., Baudet, A., Foulonneau, M. (2017). A Hybrid Engineering Process for Semi-automatic Item Generation. In: Joosten-ten Brinke, D., Laanpere, M. (eds) Technology Enhanced Assessment. TEA 2016. Communications in Computer and Information Science, vol 653. Springer, Cham. https://doi.org/10.1007/978-3-319-57744-9_10
Print ISBN: 978-3-319-57743-2
Online ISBN: 978-3-319-57744-9