A Meta Classifier by Clustering of Classifiers

  • Conference paper
Nature-Inspired Computation and Machine Learning (MICAI 2014)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 8857)

Abstract

Many classifiers have been introduced for learning problems. Each has strengths and weaknesses that make it suitable for certain problems, but there is no reliable way to tell which classifier is the best (or even a good) choice for a particular problem. Fortunately, ensemble learning provides a powerful approach to building a near-optimal classification system for any given problem. The central challenge in classifier ensembles is how to create a suitable ensemble of base classifiers. An ensemble vitally needs diversity: for a pool of classifiers to succeed as an ensemble, its members must be diverse enough to cover one another's errors. Ensemble construction therefore requires a mechanism that guarantees diversity among the member classifiers. One such mechanism is to select, or remove, a subset of the produced base classifiers so as to maintain diversity within the ensemble. This paper proposes an innovative ensemble-creation method named Classifier Selection Based on Clustering (CSBC). CSBC guarantees the necessary diversity among ensemble members by clustering the classifiers themselves. It uses bagging to generate the base classifiers; after producing a large number of them, CSBC partitions the pool with a clustering algorithm and then forms the final ensemble by selecting one classifier from each cluster. Weighted majority voting serves as the ensemble's aggregation function. We investigate how the number of clusters affects the performance of CSBC and how a good approximate value for it can be chosen adaptively for any dataset. We conduct experiments on a large number of real datasets from the UCI repository to reach a decisive conclusion.
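
To make the procedure concrete, the following is a minimal sketch of a CSBC-style pipeline in Python with scikit-learn. It is an illustration under stated assumptions, not the authors' implementation: the abstract does not fix the clustering algorithm, the representation by which classifiers are clustered, or the rule for picking one classifier per cluster. The sketch assumes k-means over validation-set prediction vectors, keeps each cluster's most accurate member, and uses validation accuracy as the vote weight; all helper names (e.g., weighted_vote) and the cluster count are illustrative.

```python
# Minimal CSBC-style sketch (assumptions noted above; helper names are ours).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

# Train / validation / test split; the validation part is used to describe,
# select, and weight the base classifiers.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X_train, y_train, random_state=0)

# 1) Bagging generates a large pool of base classifiers
#    (scikit-learn's default base estimator is a decision tree).
pool = BaggingClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)

# 2) Represent each classifier by its label vector on the validation set and
#    cluster those vectors; k-means is an assumption, the abstract only says
#    "a clustering algorithm".
val_preds = np.array([est.predict(X_val) for est in pool.estimators_])
k = 7  # arbitrary here; the paper studies how to choose this adaptively
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(val_preds)

# 3) Keep one classifier per cluster (here: the member with the best
#    validation accuracy) and use that accuracy as its vote weight.
selected, weights = [], []
for c in range(k):
    members = np.flatnonzero(labels == c)
    accs = (val_preds[members] == y_val).mean(axis=1)
    selected.append(pool.estimators_[members[np.argmax(accs)]])
    weights.append(accs.max())

# 4) Weighted majority vote over the selected classifiers.
def weighted_vote(X):
    votes = np.array([clf.predict(X) for clf in selected])  # shape (k, n)
    w = np.asarray(weights)[:, None]
    scores = [(w * (votes == c)).sum(axis=0) for c in np.unique(y)]
    return np.unique(y)[np.argmax(scores, axis=0)]

acc = (weighted_vote(X_test) == y_test).mean()
print(f"CSBC test accuracy: {acc:.3f}")
```

Clustering classifiers by their prediction vectors groups together classifiers that err on the same examples, so keeping only one member per cluster is a direct way to enforce the diversity the abstract calls for, and the weights let the more accurate survivors dominate the vote.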

References

  1. Blake, C.L., Merz, C.J.: UCI Repository of machine learning databases (1998), http://www.ics.uci.edu/~mlearn/MLRepository.html

  2. Breiman, L.: Bagging Predictors. Machine Learning 24(2), 123–140 (1996)

  3. Breiman, L.: Random Forests. Machine Learning 45(1), 5–32 (2001)

  4. Freund, Y., Schapire, R.E.: A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting. Journal of Computer and System Sciences 55(1), 119–139 (1997)

  5. Giacinto, G., Roli, F.: An approach to the automatic design of multiple classifier systems. Pattern Recognition Letters 22, 25–33 (2001)

  6. Günter, S., Bunke, H.: Creation of Classifier Ensembles for Handwritten Word Recognition Using Feature Selection Algorithms. In: Proceedings of the Eighth International Workshop on Frontiers in Handwriting Recognition, p. 183 (2002)

  7. Kuncheva, L.I.: Combining Pattern Classifiers: Methods and Algorithms. Wiley, New York (2005)

  8. Minaei-Bidgoli, B., Parvin, H., Alinejad-Rokny, H., Alizadeh, H., Punch, W.F.: Effects of resampling method and adaptation on clustering ensemble efficacy. Artificial Intelligence Review 41(1), 27–48 (2014)

  9. Parvin, H., Minaei-Bidgoli, B., Shahpar, H.: Classifier Selection by Clustering. In: Martínez-Trinidad, J.F., Carrasco-Ochoa, J.A., Ben-Youssef Brants, C., Hancock, E.R. (eds.) MCPR 2011. LNCS, vol. 6718, pp. 60–66. Springer, Heidelberg (2011)

  10. Parvin, H., Alinejad-Rokny, H., Minaei-Bidgoli, B., Parvin, S.: A new classifier ensemble methodology based on subspace learning. Journal of Experimental & Theoretical Artificial Intelligence 25(2), 227–250 (2013)

  11. Parvin, H., Minaei-Bidgoli, B., Alinejad-Rokny, H., Punch, W.F.: Data weighing mechanisms for clustering ensembles. Computers & Electrical Engineering 39(5), 1433–1450 (2013)

  12. Parvin, H., Minaei-Bidgoli, B.: A clustering ensemble framework based on elite selection of weighted clusters. Advances in Data Analysis and Classification 7(2), 181–208 (2013)

  13. Parvin, H., Beigi, A., Mozayani, N.: A Clustering Ensemble Learning Method Based on the Ant Colony Clustering Algorithm. Applied and Computational Mathematics 11, 286–302 (2012)

  14. Parvin, H., MirnabiBaboli, M., Alinejad, H.: Proposing a Classifier Ensemble Framework based on Classifier Selection and Decision Tree. Engineering Applications of Artificial Intelligence, 34–42 (2014)

  15. Peña, J.M.: Finding Consensus Bayesian Network Structures. Journal of Artificial Intelligence Research 42, 661–687 (2011)

  16. Khashei, M., Bijari, M.: An Artificial Neural Network (p, d, q) Model for Timeseries Forecasting. Expert Systems with Applications 37, 479–489 (2010)

  17. Pazos, A.B.P., Gonzalez, A.A., Pazos, F.M.: Artificial NeuroGlial Networks. In: Encyclopedia of Artificial Intelligence, New York, pp. 167–171 (2009)

  18. Kuncheva, L.I., Whitaker, C.: Measures of diversity in classifier ensembles and their relationship with ensemble accuracy. Machine Learning 51(2), 181–207 (2003)

Copyright information

© 2014 Springer International Publishing Switzerland

About this paper

Cite this paper

Jamnejad, M.I., Parvin, S., Heidarzadegan, A., Moshki, M. (2014). A Meta Classifier by Clustering of Classifiers. In: Gelbukh, A., Espinoza, F.C., Galicia-Haro, S.N. (eds) Nature-Inspired Computation and Machine Learning. MICAI 2014. Lecture Notes in Computer Science, vol 8857. Springer, Cham. https://doi.org/10.1007/978-3-319-13650-9_13

  • DOI: https://doi.org/10.1007/978-3-319-13650-9_13

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-13649-3

  • Online ISBN: 978-3-319-13650-9

  • eBook Packages: Computer Science (R0)
