Abstract
This article presents a new method for constructing a multiple classifier system whose base classifiers are made diverse through weight tuning. The base classifiers are multilayer perceptrons, and diversity is created by a three-step procedure. In the first step, the base classifiers are trained to an acceptable accuracy. In the second step, a weight-tuning process adjusts the weights of each classifier so that it distinguishes one class of the input data from the others with the highest possible accuracy; an evolutionary method optimizes each base classifier's ability to separate its assigned class. In the third step, a new method combines the results of the base classifiers. Diversity is measured and monitored throughout the entire procedure using a confusion matrix. The superiority of the proposed method is demonstrated by comparison with several well-known classifier fusion methods on well-known benchmark datasets.
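The three-step procedure in the abstract can be sketched as follows. This is a minimal illustrative assumption, not the authors' implementation: single-layer linear classifiers stand in for the MLP base classifiers, and a simple (1+1) mutation-and-select loop stands in for the evolutionary weight-tuning step. All function names and parameters here are hypothetical.

```python
# Hedged sketch of the three-step procedure: one specialist
# classifier per class, evolutionary one-vs-rest weight tuning,
# then combination by comparing specialist scores.
import numpy as np

rng = np.random.default_rng(0)

def scores(W, X):
    # Linear scores of one base classifier (bias folded into W).
    return X @ W

def one_vs_rest_accuracy(W, X, y, cls):
    # Step-2 fitness: how well this classifier separates class
    # `cls` from all the other classes.
    pred = (scores(W, X) > 0).astype(int)
    target = (y == cls).astype(int)
    return float(np.mean(pred == target))

def tune_weights(W, X, y, cls, generations=300, sigma=0.5):
    # Evolutionary weight tuning: keep a mutated candidate whenever
    # it matches or improves the one-vs-rest fitness.
    best, best_fit = W, one_vs_rest_accuracy(W, X, y, cls)
    for _ in range(generations):
        cand = best + sigma * rng.standard_normal(best.shape)
        fit = one_vs_rest_accuracy(cand, X, y, cls)
        if fit >= best_fit:
            best, best_fit = cand, fit
    return best

def fit_ensemble(X, y, n_classes):
    # Step 1 (crudely, "acceptable accuracy"): random initial
    # weights; Step 2: tune each classifier toward one class.
    return [tune_weights(rng.standard_normal(X.shape[1]), X, y, c)
            for c in range(n_classes)]

def predict(ensemble, X):
    # Step 3: combine by letting each specialist score "its" class
    # and picking the class with the largest score.
    all_scores = np.stack([scores(W, X) for W in ensemble], axis=1)
    return np.argmax(all_scores, axis=1)

# Toy two-class problem: two well-separated Gaussian blobs,
# with a constant bias feature appended.
X0 = rng.normal(-2.0, 1.0, size=(50, 2))
X1 = rng.normal(+2.0, 1.0, size=(50, 2))
X = np.hstack([np.vstack([X0, X1]), np.ones((100, 1))])
y = np.array([0] * 50 + [1] * 50)

ensemble = fit_ensemble(X, y, n_classes=2)
acc = float(np.mean(predict(ensemble, X) == y))
```

The paper's diversity monitoring via confusion matrices and its specific combination rule are not reproduced here; the sketch only shows how per-class specialization by evolutionary weight tuning can yield complementary base classifiers.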
Cite this article
Salkhordeh Haghighi, M., Vahedian, A. & Sadoghi Yazdi, H. Making Diversity Enhancement Based on Multiple Classifier System by Weight Tuning. Neural Process Lett 35, 61–80 (2012). https://doi.org/10.1007/s11063-011-9204-y