
Learning Invariance in Deep Neural Networks

  • Conference paper
Computational Science – ICCS 2021 (ICCS 2021)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 12744)


Abstract

One of the long-standing difficulties in machine learning involves distortions present in data: different input feature vectors may represent the same entity. This observation has led to the introduction of invariant machine learning methods, for example techniques that ignore shifts, rotations, or changes in lighting and pose in images. These approaches typically utilize pre-defined invariant features or invariant kernels, and require the designer to analyze what types of distortions are to be expected. While specifying possible sources of variance is straightforward for images, it is more difficult in other domains. Here, we focus on learning an invariant representation from data without any information about what distortions are present in the data, based only on whether any two samples are distorted variants of the same entity or not. In principle, standard neural network architectures should be able to learn such invariance from data, given a sufficient number of examples of it. We report that, somewhat surprisingly, learning to approximate even simple types of invariant representations is difficult. We then propose a new type of layer, with a richer output representation, that is better suited for learning invariances from data.
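The abstract describes a pairwise supervision setup: the learner is told only whether two samples are distorted variants of the same entity. As a rough illustration of that setup (not the layer the paper proposes), here is a minimal PyTorch-style sketch assuming a Siamese encoder trained with a contrastive margin loss; the encoder architecture, margin, and all names below are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Plain feed-forward encoder standing in for a standard architecture."""
    def __init__(self, in_dim=32, rep_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64),
            nn.ReLU(),
            nn.Linear(64, rep_dim),
        )

    def forward(self, x):
        return self.net(x)

def contrastive_loss(z1, z2, same, margin=1.0):
    """Pull representations of variants of the same entity together;
    push representations of different entities at least `margin` apart."""
    d = F.pairwise_distance(z1, z2)
    return (same * d.pow(2) + (1.0 - same) * F.relu(margin - d).pow(2)).mean()

# Toy training loop on random pairs (x1, x2) with a binary label `same`
# indicating whether the two inputs are variants of the same entity.
encoder = Encoder()
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)
for step in range(100):
    x1, x2 = torch.randn(128, 32), torch.randn(128, 32)
    same = torch.randint(0, 2, (128,)).float()
    loss = contrastive_loss(encoder(x1), encoder(x2), same)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The abstract's claim is that a plain encoder trained this way struggles to become invariant even to simple distortion types; the proposed layer with a richer output representation is meant to take the encoder's place in such a setup.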




Acknowledgements

T.A. is supported by NSF grant IIS-1453658.

Author information


Corresponding author

Correspondence to Tomasz Arodz.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Zhang, H., Arodz, T. (2021). Learning Invariance in Deep Neural Networks. In: Paszynski, M., Kranzlmüller, D., Krzhizhanovskaya, V.V., Dongarra, J.J., Sloot, P.M. (eds) Computational Science – ICCS 2021. ICCS 2021. Lecture Notes in Computer Science, vol. 12744. Springer, Cham. https://doi.org/10.1007/978-3-030-77967-2_6


  • DOI: https://doi.org/10.1007/978-3-030-77967-2_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-77966-5

  • Online ISBN: 978-3-030-77967-2

