State Coverage: Software Validation Metrics beyond Code Coverage

  • Conference paper
SOFSEM 2012: Theory and Practice of Computer Science (SOFSEM 2012)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 7147)

Abstract

Currently, testing is still the most important approach to reducing the number of software defects. Software quality metrics help to prioritize where additional testing is necessary by measuring the quality of the code. Most approaches to estimating whether some unit of code is sufficiently tested are based on code coverage, which measures which code fragments are exercised by the test suite. Unfortunately, code coverage does not measure to what extent the test suite checks the intended functionality.

We propose state coverage, a metric that measures the ratio of state updates that are read by assertions to the total number of state updates, and we present efficient algorithms to measure it. Like code coverage, state coverage is simple to understand, and we show that it can be measured efficiently and aggregated easily. During a preliminary evaluation on several open-source libraries, state coverage helped to identify multiple unchecked properties and to detect several bugs.
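The metric described in the abstract is a ratio over state updates. As a rough illustration (not the paper's implementation, which instruments .NET code), the computation can be sketched as follows; the function name and the site identifiers are hypothetical:

```python
def state_coverage(updates, asserted_reads):
    """Ratio of state-update sites whose written values are read by at
    least one assertion, over all state-update sites exercised by the
    test suite. Both arguments are sets of update-site identifiers."""
    if not updates:
        # No state updates exercised: vacuously fully covered.
        return 1.0
    return len(updates & asserted_reads) / len(updates)

# Hypothetical example: the tests exercise four field writes, but the
# assertions only read the values produced by three of them.
updates = {"Stack.top", "Stack.count", "Stack.items", "Stack.version"}
checked = {"Stack.top", "Stack.count", "Stack.items"}
print(state_coverage(updates, checked))  # 0.75
```

An update site that no assertion ever reads (here, `Stack.version`) lowers the score even if code coverage is 100%, which is the gap between exercising code and checking its intended behavior that the metric targets.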




Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Vanoverberghe, D., de Halleux, J., Tillmann, N., Piessens, F. (2012). State Coverage: Software Validation Metrics beyond Code Coverage. In: Bieliková, M., Friedrich, G., Gottlob, G., Katzenbeisser, S., Turán, G. (eds) SOFSEM 2012: Theory and Practice of Computer Science. SOFSEM 2012. Lecture Notes in Computer Science, vol 7147. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-27660-6_44

  • DOI: https://doi.org/10.1007/978-3-642-27660-6_44

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-27659-0

  • Online ISBN: 978-3-642-27660-6

  • eBook Packages: Computer Science (R0)
