Scale-Space Theory, F-transform Kernels and CNN Realization

Conference paper, published in Advances in Computational Intelligence (IWANN 2019).

Abstract

We present a scale-space- and F-transform-inspired modification of convolutional neural networks. The proposed modification improves classification accuracy by combining a multi-scale image representation with pre-training on F-transform kernels. We evaluate the model on two databases and show that it outperforms networks without F-transform pre-training.
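
To make the construction concrete, here is a minimal sketch in Python of how such kernels can be obtained: 2D F-transform kernels built from the triangular basis functions of a uniform fuzzy partition. This is an illustration under our reading of the method, not the authors' implementation; the helper names (triangular_basis, ftransform_kernel) and the kernel sizes are assumptions.

    # Minimal sketch (an assumption, not the authors' code): 2D F-transform
    # kernels from triangular basis functions of a uniform fuzzy partition.
    import numpy as np

    def triangular_basis(size):
        """1D triangular membership values, sampled strictly inside (-1, 1)
        so that the border weights stay positive."""
        x = np.linspace(-1.0, 1.0, size + 2)[1:-1]
        return 1.0 - np.abs(x)

    def ftransform_kernel(size=5):
        """2D kernel as the outer product of 1D triangular basis functions,
        normalized to sum to 1: the weighted mean appearing in a direct
        F-transform component."""
        a = triangular_basis(size)
        k = np.outer(a, a)
        return k / k.sum()

    # Kernels of increasing size give a coarse-to-fine, scale-space-like
    # stack of smoothed images; such kernels can also serve as the initial
    # (pre-trained) weights of the first convolutional layer.
    kernels = [ftransform_kernel(s) for s in (3, 5, 7)]

Since each kernel is a normalized weighted mean, convolving with it acts as a low-pass filter, which is what links the F-transform components to Gaussian scale-space smoothing.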

Supported by the University of Ostrava.

Notes

  1. www.image-net.org/challenges/LSVRC/2014/.

  2. The subsampling operation originates from Hubel and Wiesel [7]; a comparison of pooling operations can be found in [21].

  3. We have employed dropout to reduce network overfitting.

References

  1. Arora, S., Bhaskara, A., Ge, R., Ma, T.: Provable bounds for learning some deep representations. In: International Conference on Machine Learning, pp. 584–592 (2014)

  2. Chen, T., Goodfellow, I., Shlens, J.: Net2Net: accelerating learning via knowledge transfer. arXiv preprint arXiv:1511.05641 (2015)

  3. Cruz Jr, G.V., Du, Y., Taylor, M.E.: Pre-training neural networks with human demonstrations for deep reinforcement learning. arXiv preprint arXiv:1709.04083 (2017)

  4. Deng, L.: The MNIST database of handwritten digit images for machine learning research (Best of the Web). IEEE Signal Process. Mag. 29(6), 141–142 (2012)

  5. Erhan, D., Bengio, Y., Courville, A., Manzagol, P.A., Vincent, P., Bengio, S.: Why does unsupervised pre-training help deep learning? J. Mach. Learn. Res. 11(Feb), 625–660 (2010)

  6. Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. In: Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 249–256 (2010)

  7. Hubel, D.H., Wiesel, T.N.: Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. J. Physiol. 160(1), 106–154 (1962)

  8. Krähenbühl, P., Doersch, C., Donahue, J., Darrell, T.: Data-dependent initializations of convolutional neural networks. arXiv preprint arXiv:1511.06856 (2015)

  9. Krizhevsky, A., Nair, V., Hinton, G.: The CIFAR-10 dataset. http://www.cs.toronto.edu/kriz/cifar.html (2014)

  10. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436 (2015)

  11. Lindeberg, T.: Scale-space theory: a basic tool for analyzing structures at different scales. J. Appl. Stat. 21(1–2), 225–270 (1994)

  12. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 60(2), 91–110 (2004)

  13. Mishkin, D., Matas, J.: All you need is a good init. arXiv preprint arXiv:1511.06422 (2015)

  14. Molek, V., Perfilieva, I.: Convolutional neural networks with the F-transform kernels. In: Rojas, I., Joya, G., Catala, A. (eds.) IWANN 2017. LNCS, vol. 10305, pp. 396–407. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-59153-7_35

  15. Nesterov, Y.E.: A method for solving the convex programming problem with convergence rate \(\mathcal{O}(1/k^2)\). Dokl. Akad. Nauk SSSR 269, 543–547 (1983). https://ci.nii.ac.jp/naid/10029946121/en/

  16. Oquab, M., Bottou, L., Laptev, I., Sivic, J.: Learning and transferring mid-level image representations using convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1717–1724 (2014)

  17. Pal, K.K., Sudeep, K.: Preprocessing for image classification by convolutional neural networks. In: IEEE International Conference on Recent Trends in Electronics, Information & Communication Technology (RTEICT), pp. 1778–1781. IEEE (2016)

  18. Perfilieva, I., Haldeeva, E.: Fuzzy transformation. In: Proceedings Joint 9th IFSA World Congress and 20th NAFIPS International Conference (Cat. No. 01TH8569), vol. 4, pp. 1946–1948. IEEE (2001)

  19. Perfilieva, I., Holčapek, M., Kreinovich, V.: A new reconstruction from the F-transform components. Fuzzy Sets Syst. 288, 3–25 (2016)

  20. Ruder, S.: An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747 (2016)

  21. Scherer, D., Müller, A., Behnke, S.: Evaluation of pooling operations in convolutional architectures for object recognition. In: Diamantaras, K., Duch, W., Iliadis, L.S. (eds.) ICANN 2010. LNCS, vol. 6354, pp. 92–101. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-15825-4_10

  22. Serre, T., Wolf, L., Bileschi, S., Riesenhuber, M., Poggio, T.: Robust object recognition with cortex-like mechanisms. IEEE Trans. Pattern Anal. Mach. Intell. 29(3), 411–426 (2007)

  23. Springenberg, J.T., Dosovitskiy, A., Brox, T., Riedmiller, M.: Striving for simplicity: the all convolutional net. arXiv preprint arXiv:1412.6806 (2014)

  24. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15(1), 1929–1958 (2014)

  25. Szegedy, C., et al.: Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9 (2015)

  26. Vlašánek, P., Perfilieva, I.: The F-transform in terms of image processing tools. J. Fuzzy Set Valued Anal. 2016(1), 54–62 (2016)

  27. Yosinski, J., Clune, J., Bengio, Y., Lipson, H.: How transferable are features in deep neural networks? In: Advances in Neural Information Processing Systems, pp. 3320–3328 (2014)

  28. Zeiler, M.D., Fergus, R.: Visualizing and understanding convolutional networks. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014. LNCS, vol. 8689, pp. 818–833. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10590-1_53

Acknowledgment

The work was supported by ERDF/ESF “Centre for the development of Artificial Intelligence Methods for the Automotive Industry of the region” (No. CZ.02.1.01/0.0/0.0/17_049/0008414).

Author information

Correspondence to Vojtech Molek.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Molek, V., Perfilieva, I. (2019). Scale-Space Theory, F-transform Kernels and CNN Realization. In: Rojas, I., Joya, G., Catala, A. (eds) Advances in Computational Intelligence. IWANN 2019. Lecture Notes in Computer Science(), vol 11507. Springer, Cham. https://doi.org/10.1007/978-3-030-20518-8_4

  • DOI: https://doi.org/10.1007/978-3-030-20518-8_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-20517-1

  • Online ISBN: 978-3-030-20518-8

  • eBook Packages: Computer Science, Computer Science (R0)
