
Combination of Linear Classifiers Using Score Function – Analysis of Possible Combination Strategies

  • Conference paper
Progress in Computer Recognition Systems (CORES 2019)

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 977)


Abstract

In this work, we address the problem of combining linear classifiers using their score functions, where the value of the score function depends on the distance of a sample from the decision boundary. Two score functions were tested and four different combination strategies were investigated. In the experimental study, the proposed approach was applied to a heterogeneous ensemble and compared with two reference methods: majority voting and model averaging. The comparison was made in terms of seven different quality criteria. The results show that, among the geometrical combination strategies, those based on the simple average and the trimmed average perform best.
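The core idea can be sketched in a few lines: each linear classifier scores a sample by its signed distance from its decision boundary, and the ensemble aggregates these scores with a simple average or a trimmed average, with majority voting on the score signs as a reference. This is only a minimal illustration under assumed definitions; the paper's actual score functions, base classifiers, and experimental setup are not reproduced here, and the toy classifiers below are hypothetical.

```python
import numpy as np

def signed_distance(w, b, x):
    # Signed distance of x to the hyperplane w.x + b = 0:
    # the sign gives the predicted class, the magnitude a confidence score.
    return (np.dot(w, x) + b) / np.linalg.norm(w)

def combine(scores, strategy="mean", trim=1):
    # Aggregate base-classifier scores and return the ensemble decision (+1/-1).
    s = np.sort(np.asarray(scores, dtype=float))
    if strategy == "mean":
        agg = s.mean()
    elif strategy == "trimmed":
        # Drop the `trim` smallest and largest scores before averaging.
        agg = s[trim:len(s) - trim].mean()
    elif strategy == "vote":
        # Reference method: majority voting on the signs of the scores.
        agg = np.sign(np.sign(s).sum())
    else:
        raise ValueError(strategy)
    return 1 if agg >= 0 else -1

# Three toy linear classifiers in 2D, each given as (weights w, bias b).
ensemble = [(np.array([1.0, 0.0]), -0.5),
            (np.array([0.0, 1.0]), -0.5),
            (np.array([1.0, 1.0]), -2.0)]
x = np.array([1.0, 1.0])

scores = [signed_distance(w, b, x) for w, b in ensemble]
pred_mean = combine(scores, "mean")      # simple average of scores
pred_trim = combine(scores, "trimmed")   # trimmed average of scores
pred_vote = combine(scores, "vote")      # majority voting baseline
```

The three strategies can disagree when a few classifiers produce extreme scores: the trimmed average discards such outliers before aggregating, which is one motivation for comparing it against the plain average.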


Notes

  1. https://github.com/ptrajdos/piecewiseLinearClassifiers/tree/master
  2. https://sci2s.ugr.es/keel/category.php?cat=clas
  3. https://github.com/ptrajdos/MLResults/blob/master/data/slDataFull.zip
  4. https://github.com/ptrajdos/MLResults/blob/master/Boundaries/bounds_hetero_15.01.2019E4_m_R.zip


Acknowledgments

This work was supported in part by the National Science Centre, Poland under the grant no. 2017/25/B/ST6/01750.

Corresponding author

Correspondence to Robert Burduk.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Trajdos, P., Burduk, R. (2020). Combination of Linear Classifiers Using Score Function – Analysis of Possible Combination Strategies. In: Burduk, R., Kurzynski, M., Wozniak, M. (eds) Progress in Computer Recognition Systems. CORES 2019. Advances in Intelligent Systems and Computing, vol 977. Springer, Cham. https://doi.org/10.1007/978-3-030-19738-4_35
