Challenges for Automated, Model-Based Test Scenario Generation

Conference paper, published in: Information and Software Technologies (ICIST 2019)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1078)

Abstract

This paper focuses on the challenges of automatically generating test suites from formal models of software systems. Popular tools and methods are discussed together with their limitations. Data cohesion, meaningfulness of derived behavior, usefulness for debugging, coverage evenness, coverage overlap, fault-detection ability, and the size of the generated test suite are considered as quality indicators for generated tests. A novel composite, weight-based heuristic method for improving the quality of automatically generated test scenarios is proposed.
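
A minimal, illustrative sketch of how such a composite, weight-based score might rank candidate scenarios is given below. It is only a guess at the general shape of such an approach, not the authors' published algorithm: the indicators (new-coverage gain, overlap with already selected scenarios, scenario length as a suite-size proxy), their weights, and the greedy selection loop are all assumptions introduced here for illustration.

```python
# Hypothetical sketch of composite, weight-based scenario selection.
# Indicators and weights are illustrative assumptions, not the paper's method.
from dataclasses import dataclass


@dataclass(frozen=True)
class Scenario:
    name: str
    covered: frozenset  # model elements (e.g., transitions) the scenario covers
    length: int         # number of steps; a proxy for suite size and debugging effort


def composite_score(s: Scenario, already_covered: set,
                    w_gain: float = 1.0, w_overlap: float = 0.5,
                    w_length: float = 0.1) -> float:
    """Weighted sum: reward new coverage, penalize overlap and long scenarios."""
    gain = len(s.covered - already_covered)     # new elements (coverage evenness)
    overlap = len(s.covered & already_covered)  # coverage-overlap indicator
    return w_gain * gain - w_overlap * overlap - w_length * s.length


def select_suite(candidates: list, goals: set) -> list:
    """Greedily add the best-scoring scenario until all coverage goals are met."""
    suite, covered = [], set()
    remaining = list(candidates)
    while not goals.issubset(covered) and remaining:
        best = max(remaining, key=lambda s: composite_score(s, covered))
        if not (best.covered - covered):  # no candidate adds coverage; stop
            break
        suite.append(best)
        covered |= best.covered
        remaining.remove(best)
    return suite


if __name__ == "__main__":
    goals = set("abcdef")  # coverage goals, e.g., all transitions of a model
    candidates = [
        Scenario("t1", frozenset("abc"), length=3),
        Scenario("t2", frozenset("abcd"), length=8),
        Scenario("t3", frozenset("ef"), length=2),
    ]
    for s in select_suite(candidates, goals):
        print(s.name, sorted(s.covered))  # t2 then t3: full coverage, no overlap
```

The point of the weighting is the trade-off the abstract hints at: maximizing raw coverage alone tends to produce long, overlapping scenarios, so penalty terms for overlap and length push the selection toward smaller, more evenly covering suites.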

Author information

Correspondence to Alexander Kolchin.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Kolchin, A., Potiyenko, S., Weigert, T. (2019). Challenges for Automated, Model-Based Test Scenario Generation. In: Damaševičius, R., Vasiljevienė, G. (eds.) Information and Software Technologies. ICIST 2019. Communications in Computer and Information Science, vol. 1078. Springer, Cham. https://doi.org/10.1007/978-3-030-30275-7_15

  • DOI: https://doi.org/10.1007/978-3-030-30275-7_15

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-30274-0

  • Online ISBN: 978-3-030-30275-7

  • eBook Packages: Computer Science, Computer Science (R0)
