Pre-emphasizing Binarized Ensembles to Improve Classification Performance

  • Conference paper in: Advances in Computational Intelligence (IWANN 2017)
  • Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10305)

Abstract

Machine ensembles are learning architectures that offer high expressive capacity and, consequently, remarkable performance, owing to their large number of trainable parameters.

In this paper, we explore and discuss whether binarization techniques are effective in improving standard diversification methods, and whether a simple additional trick, weighting the training examples, yields better results. Experimental results on three selected classification problems show that binarization allows standard direct diversification methods (bagging, in particular) to achieve better results, with even larger performance improvements when the training samples are pre-emphasized. Some research avenues opened by this finding are mentioned in the conclusions.
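To make the combination concrete, the following is a minimal sketch of how a binarized, pre-emphasized bagging ensemble could be assembled with scikit-learn-style components. It assumes one-vs-rest binarization, small MLP base learners, and a generic emphasis function that mixes an "erroneous sample" term with a "near the boundary" term; the function names (emphasis_weights, fit_preemphasized_bagging) and the specific emphasis formula are illustrative assumptions, not the exact scheme evaluated in the paper.

import numpy as np
from sklearn.base import clone
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import label_binarize


def emphasis_weights(margins, lam=0.5):
    # Illustrative emphasis: mix an "erroneous" term (large for misclassified
    # samples, i.e. negative margins) with a "critical" term (large near the
    # decision boundary, i.e. margins close to zero).
    error_term = np.exp(-margins)
    boundary_term = np.exp(-margins ** 2)
    w = lam * error_term + (1.0 - lam) * boundary_term
    return w / w.sum()


def fit_preemphasized_bagging(X, y, n_learners=11, seed=0):
    X, y = np.asarray(X), np.asarray(y)
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    Y = label_binarize(y, classes=classes)      # one-vs-rest binarization
    if Y.shape[1] == 1:                         # binary problems come back as one column
        Y = np.hstack([1 - Y, Y])
    ensembles = []
    for k in range(Y.shape[1]):                 # one bagged ensemble per dichotomy
        yk = 2 * Y[:, k] - 1                    # targets in {-1, +1}
        # A pilot learner supplies the margins used to pre-emphasize the samples.
        pilot = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500,
                              random_state=seed).fit(X, yk)
        margins = yk * (2.0 * pilot.predict_proba(X)[:, 1] - 1.0)
        p = emphasis_weights(margins)
        learners = []
        for _ in range(n_learners):
            # Emphasized bootstrap: resampling probabilities follow the weights.
            idx = rng.choice(len(X), size=len(X), replace=True, p=p)
            learners.append(clone(pilot).fit(X[idx], yk[idx]))
        ensembles.append(learners)
    return classes, ensembles


def predict(X, classes, ensembles):
    # Average each dichotomy's ensemble outputs and choose the strongest class.
    scores = np.column_stack([
        np.mean([m.predict_proba(X)[:, 1] for m in learners], axis=0)
        for learners in ensembles
    ])
    return classes[np.argmax(scores, axis=1)]

In this sketch, pre-emphasis is applied by tilting the bootstrap sampling probabilities toward the emphasized samples, which keeps the standard bagging machinery intact; an alternative design could instead pass the weights directly to base learners that accept per-sample weights.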

Acknowledgments

This work has been partly supported by research grants CASI-CAM-CM (S2013/ICE-2845, DGUI-CM and FEDER) and Macro-ADOBE (TEC2015-67719-P, MINECO).

Author information

Correspondence to Lorena Álvarez-Pérez.

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Álvarez-Pérez, L., Ahachad, A., Figueiras-Vidal, A.R. (2017). Pre-emphasizing Binarized Ensembles to Improve Classification Performance. In: Rojas, I., Joya, G., Catala, A. (eds.) Advances in Computational Intelligence. IWANN 2017. Lecture Notes in Computer Science, vol. 10305. Springer, Cham. https://doi.org/10.1007/978-3-319-59153-7_30

  • DOI: https://doi.org/10.1007/978-3-319-59153-7_30

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-59152-0

  • Online ISBN: 978-3-319-59153-7
