
Making Diversity Enhancement Based on Multiple Classifier System by Weight Tuning

Published in Neural Processing Letters

Abstract

This article presents a new method for constructing a multiple classifier system from diverse base classifiers produced by weight tuning. The base classifiers are multilayer perceptrons, and diversity among them is created through a three-step procedure. In the first step, the base classifiers are trained to an acceptable accuracy. In the second step, a weight-tuning process adjusts the weights of each classifier so that it distinguishes one class of the input data from the others with the highest possible accuracy; an evolutionary method is used in this step to optimize how effectively each base classifier separates its assigned class. In the third step, a new method combines the outputs of the base classifiers. Diversity is measured and monitored throughout the entire procedure using a confusion matrix. The superiority of the proposed method is demonstrated through comparisons with several well-known classifier fusion methods on standard benchmark datasets.
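As a rough illustration of this three-step pipeline, the sketch below trains one scikit-learn MLP per class, specializes each one with a simple (1+1) evolution strategy, and fuses the specialists with a max-confidence rule. The evolution strategy, the fusion rule, and the helper names (class_accuracy, tune_for_class) are illustrative assumptions; the abstract does not specify the authors' exact evolutionary operator or combination method.

```python
# Minimal sketch of the three-step construction described in the abstract,
# assuming scikit-learn MLPs. The (1+1) evolution strategy and the
# max-confidence fusion rule are illustrative placeholders, not the
# authors' actual operators.
import numpy as np
from copy import deepcopy
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def class_accuracy(clf, X, y, c):
    """One-vs-rest accuracy: how well clf separates class c from the rest."""
    pred = clf.predict(X)
    return float(np.mean((pred == c) == (y == c)))

def tune_for_class(clf, X, y, c, iters=100, sigma=0.02):
    """Step 2 (sketch): perturb the trained MLP weights with Gaussian noise
    and keep a mutation whenever it improves one-vs-rest accuracy for c."""
    best, best_score = deepcopy(clf), class_accuracy(clf, X, y, c)
    for _ in range(iters):
        cand = deepcopy(best)
        for W in cand.coefs_ + cand.intercepts_:
            W += rng.normal(0.0, sigma, size=W.shape)  # mutate in place
        score = class_accuracy(cand, X, y, c)
        if score >= best_score:
            best, best_score = cand, score
    return best

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
classes = np.unique(y_tr)

# Step 1: train one base MLP per class to an acceptable baseline accuracy.
base = [MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                      random_state=int(c)).fit(X_tr, y_tr) for c in classes]

# Step 2: weight tuning turns each base classifier into a one-class specialist.
tuned = [tune_for_class(clf, X_tr, y_tr, c) for clf, c in zip(base, classes)]

# Step 3 (assumed fusion rule): each specialist reports the posterior of its
# own class; the ensemble predicts the class whose specialist is most confident.
cols = [list(clf.classes_).index(c) for clf, c in zip(tuned, classes)]
probs = np.stack([clf.predict_proba(X_te)[:, j]
                  for clf, j in zip(tuned, cols)], axis=1)
print("ensemble accuracy:", np.mean(classes[np.argmax(probs, axis=1)] == y_te))
```

Per-classifier diversity could be monitored throughout such a pipeline with a confusion matrix (e.g. sklearn.metrics.confusion_matrix), in the spirit of the measure the abstract describes.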



Author information

Correspondence to Mehdi Salkhordeh Haghighi.


About this article

Cite this article

Salkhordeh Haghighi, M., Vahedian, A. & Sadoghi Yazdi, H. Making Diversity Enhancement Based on Multiple Classifier System by Weight Tuning. Neural Process Lett 35, 61–80 (2012). https://doi.org/10.1007/s11063-011-9204-y
