A MOF-Based Framework for Defining Metrics to Measure the Quality of Models

  • Conference paper
Modelling Foundations and Applications (ECMFA 2014)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 8569)


Abstract

Controlled experiments in model-based software engineering, especially those in which human subjects perform modeling tasks, often require comparing the models produced by experiment subjects with reference models that are considered correct and complete. The purpose of such comparisons is to assess the quality of the subjects' models so that experiment hypotheses can be accepted or rejected. Model quality is typically measured quantitatively using metrics. Manually defining such metrics for a rich modeling language is cumbersome and error-prone; it can also yield metrics that do not systematically account for relevant details and may therefore produce biased results. In this paper, we present a framework that automatically generates quality metrics for MOF-based metamodels, which can in turn be used to measure the quality of models (instances of those metamodels). We evaluated the framework by comparing its results with manually derived quality metrics for UML class and sequence diagrams, and we have used it to derive metrics for measuring the quality of UML state machine diagrams. The results show that defining quality metrics with the framework is more efficient and systematic than doing so manually.
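The core idea described in the abstract can be illustrated with a minimal sketch (not the authors' implementation): given a metamodel, automatically derive one metric pair per metaclass, then score a subject model against a reference model. All names here (`derive_metrics`, the toy metamodel, completeness/correctness) are illustrative assumptions for a much simpler setting than MOF.

```python
# A toy "metamodel": in real MOF this would be metaclasses with structural
# features; here it is just a list of element kinds a model may contain.
METAMODEL = ["Class", "Attribute", "Association"]

def derive_metrics(metamodel):
    """Automatically generate a (completeness, correctness) metric
    for every metaclass, mirroring the framework's key idea: metrics
    are derived from the metamodel, not hand-written per diagram type."""
    def make_metric(metaclass):
        def metric(subject, reference):
            # Models are dicts: metaclass -> collection of element names.
            subj = set(subject.get(metaclass, ()))
            ref = set(reference.get(metaclass, ()))
            matched = len(subj & ref)
            # Completeness: fraction of reference elements the subject found.
            completeness = matched / len(ref) if ref else 1.0
            # Correctness: fraction of subject elements that are right.
            correctness = matched / len(subj) if subj else 1.0
            return completeness, correctness
        return metric
    return {mc: make_metric(mc) for mc in metamodel}

metrics = derive_metrics(METAMODEL)

reference = {"Class": ["Order", "Customer"], "Attribute": ["Order.total"]}
subject = {"Class": ["Order", "Cart"], "Attribute": ["Order.total"]}

for mc, metric in metrics.items():
    comp, corr = metric(subject, reference)
    print(mc, round(comp, 2), round(corr, 2))
```

Because the metrics are generated from the metamodel rather than written by hand, adding a new metaclass (say, `State` for state machine diagrams) automatically yields corresponding metrics, which is the systematic advantage the paper claims over manual definition.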




Copyright information

© 2014 Springer International Publishing Switzerland

About this paper

Cite this paper

Yue, T., Ali, S. (2014). A MOF-Based Framework for Defining Metrics to Measure the Quality of Models. In: Cabot, J., Rubin, J. (eds) Modelling Foundations and Applications. ECMFA 2014. Lecture Notes in Computer Science, vol 8569. Springer, Cham. https://doi.org/10.1007/978-3-319-09195-2_14

  • DOI: https://doi.org/10.1007/978-3-319-09195-2_14

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-09194-5

  • Online ISBN: 978-3-319-09195-2

  • eBook Packages: Computer Science (R0)
