An Alternative to ROC and AUC Analysis of Classifiers

  • Conference paper
Advances in Intelligent Data Analysis X (IDA 2011)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 7014)

Included in the conference series: International Symposium on Intelligent Data Analysis (IDA)

Abstract

Performance evaluation of classifiers is a crucial step in selecting the best classifier, or the best parameter settings for a classifier. The misclassification rate alone is often too simple a measure because it ignores that misclassifications of different classes can have more or less serious consequences. On the other hand, it is often difficult to specify the consequences or costs of misclassifications exactly. ROC and AUC analysis try to overcome these problems, but have their own disadvantages and even inconsistencies. We propose a visualisation technique for classifier performance evaluation and comparison that avoids the problems of ROC and AUC analysis.
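
For readers unfamiliar with the baseline being critiqued, the following is a minimal sketch of the standard binary ROC/AUC computation, not the authors' proposed technique (which is only described in the full paper): sweep a decision threshold over the classifier's scores, trace out (false-positive rate, true-positive rate) points, and integrate the resulting curve with the trapezoidal rule. The function names and toy data are hypothetical, chosen for illustration.

    # Standard ROC/AUC computation for a binary classifier (a sketch of
    # the baseline the paper critiques, not the authors' alternative):
    # sort examples by score and sweep the threshold from high to low.

    def roc_points(labels, scores):
        """Return (FPR, TPR) points of the ROC curve.

        labels: list of 0/1 ground-truth labels; scores: real-valued
        classifier outputs where higher means "more likely positive".
        """
        pairs = sorted(zip(scores, labels), reverse=True)  # descending score
        pos = sum(labels)               # number of positives
        neg = len(pairs) - pos          # number of negatives
        tp = fp = 0
        points = []
        prev = None
        for score, label in pairs:
            if score != prev:           # new threshold level: emit a point
                points.append((fp / neg, tp / pos))
                prev = score
            if label == 1:
                tp += 1
            else:
                fp += 1
        points.append((fp / neg, tp / pos))  # final point is (1.0, 1.0)
        return points

    def auc(points):
        """Area under the ROC curve via the trapezoidal rule."""
        return sum((x1 - x0) * (y0 + y1) / 2.0
                   for (x0, y0), (x1, y1) in zip(points, points[1:]))

    if __name__ == "__main__":
        # Hypothetical toy data: three positives, three negatives.
        y = [1, 1, 0, 1, 0, 0]
        s = [0.9, 0.8, 0.7, 0.6, 0.4, 0.2]
        print(auc(roc_points(y, s)))    # 0.888... (= 8/9)

The misclassification rate corresponds to a single point on this curve, while the AUC summarises all thresholds at once; it is the hidden cost assumptions in such summaries that motivate alternatives like the visualisation technique proposed here.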

Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Klawonn, F., Höppner, F., May, S. (2011). An Alternative to ROC and AUC Analysis of Classifiers. In: Gama, J., Bradley, E., Hollmén, J. (eds) Advances in Intelligent Data Analysis X. IDA 2011. Lecture Notes in Computer Science, vol 7014. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-24800-9_21

  • DOI: https://doi.org/10.1007/978-3-642-24800-9_21

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-24799-6

  • Online ISBN: 978-3-642-24800-9

  • eBook Packages: Computer Science, Computer Science (R0)
