Pushing the Limits Against the No Free Lunch Theorem: Towards Building General-Purpose (GenP) Classification Systems

Chapter in Advances in Selected Artificial Intelligence Areas

Part of the book series: Learning and Analytics in Intelligent Systems (LAIS, volume 24)

Abstract

In this chapter, we provide an overview of the tools our research group is exploiting to build general-purpose (GenP) classification systems. Although the “no free lunch” (NFL) theorem claims, in effect, that generating a universal classifier is impossible, the goals of GenP systems are more modest: they should require little to no parameter tuning while performing competitively across a range of tasks within a domain, or with specific data types, such as images, that span several fields. The tools outlined here for building GenP systems include methods for building ensembles, matrix representations of data treated as images, deep learning approaches, data augmentation, and classification within dissimilarity spaces. Each of these tools is explained in detail and illustrated with a few examples taken from our work building GenP systems, which spans nearly fifteen years. We note both our successes and some of our limitations. The chapter ends by pointing out developments in quantum computing and quantum-inspired algorithms that may allow researchers to push the limits hypothesized by the NFL theorem even further.
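The first tool named above, ensemble methods, can be made concrete with a short sketch. The code below is a minimal illustration under assumptions of our own (scikit-learn, its bundled digits dataset, and arbitrary fixed hyperparameters), not the authors' implementation: a heterogeneous ensemble fuses an SVM and a random forest by soft voting, with every hyperparameter held fixed rather than tuned per task, which is the GenP goal the abstract describes.

```python
# Minimal sketch of a heterogeneous ensemble in the GenP spirit:
# several classifiers with fixed, untuned hyperparameters are fused
# by averaging their class probabilities (soft voting).
# Assumptions: scikit-learn, its digits dataset, arbitrary settings.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        # probability=True lets the SVM contribute to soft voting
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)
print(f"held-out accuracy: {ensemble.score(X_test, y_test):.3f}")
```

The same fixed configuration could be applied unchanged to other datasets; in a GenP evaluation, the question is how competitively it performs across all of them without per-task tuning.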


Abbreviations

ACT: Activation Layer
CBD: Compact Binary Descriptor
CLASS: Classification Layer
CLBP: Complete LBP
CNN: Convolutional Neural Network
CONV: Convolutional Layer
DCT: Discrete Cosine Transform
DT: Decision Trees
FC: Fully-Connected Layer
GenP: General-Purpose (classifier)
GOLD: Gaussians of Local Descriptors
GWT: Gabor Wavelet Transform
HASC: Heterogeneous Auto-Similarities of Characteristics
IDE: Input Decimated Ensemble
INC: Inception Module
LBP: Local Binary Pattern
LDA: Linear Discriminant Analysis
LPQ: Local Phase Quantization
LTP: Local Ternary Pattern
ML: Machine Learning
MRELBP: Median Robust Extended LBP
NFL: No Free Lunch (theorem)
PCA: Principal Component Analysis
PCAN: Principal Component Analysis Network
PDV: Pixel Difference Vectors (generated in CBD)
POOL: Pooling Layer
QC: Quantum Computation
QI: Quantum-Inspired (algorithms)
RES: Residual Layer
RF: Rotation Forest
RICLBP: Rotation Invariant Co-occurrence LBP
RLBP: Rotated LBP
RS: Random Subspace
STFT: Short-Time Fourier Transform
SVM: Support Vector Machine
TL: Transfer Learning


Author information

Corresponding author

Correspondence to Sheryl Brahnam.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Lumini, A., Nanni, L., Brahnam, S. (2022). Pushing the Limits Against the No Free Lunch Theorem: Towards Building General-Purpose (GenP) Classification Systems. In: Virvou, M., Tsihrintzis, G.A., Jain, L.C. (eds) Advances in Selected Artificial Intelligence Areas. Learning and Analytics in Intelligent Systems, vol 24. Springer, Cham. https://doi.org/10.1007/978-3-030-93052-3_5
