Geometry of Deep Neural Networks

Chapter in the book Geometry of Deep Learning

Part of the book series: Mathematics in Industry (MATHINDUSTRY, volume 37)

Abstract

In this chapter, which is mathematically intensive, we try to answer perhaps the most important questions in machine learning: what does a deep neural network learn, and how does a deep neural network, especially a CNN, accomplish it? The full answer to these basic questions is still a long way off; here we present some of the insights obtained on the way towards that destination. In particular, we explain why classic approaches to machine learning, such as the single-layer perceptron or kernel machines, are not sufficient to achieve the goal, and why the modern CNN turns out to be a promising tool.
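To make the first claim concrete: a single-layer perceptron cannot represent the XOR function, because XOR's labels are not linearly separable, whereas a network with one hidden layer of ReLU units computes it exactly. Below is a minimal Python sketch of this classic example; the hand-picked weights follow a standard textbook construction and are not taken from the chapter.

```python
import numpy as np

# XOR truth table. No single-layer perceptron sign(w @ x + b) can
# classify these four points, since they are not linearly separable.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def relu(z):
    return np.maximum(z, 0.0)

# One hidden ReLU layer suffices (standard construction, assumed here):
#   XOR(x1, x2) = ReLU(x1 + x2) - 2 * ReLU(x1 + x2 - 1)
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])   # hidden layer: two units, each sums the inputs
b1 = np.array([0.0, -1.0])    # second unit fires only when both inputs are 1
w2 = np.array([1.0, -2.0])    # output layer subtracts the "both on" case

out = relu(X @ W1.T + b1) @ w2
print(out)                    # [0. 1. 1. 0.] -- matches y exactly
```

The hidden layer folds the input space so that the four XOR points become linearly separable, a small instance of the geometric picture developed in this chapter.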

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter

Cite this chapter

Ye, J.C. (2022). Geometry of Deep Neural Networks. In: Geometry of Deep Learning. Mathematics in Industry, vol 37. Springer, Singapore. https://doi.org/10.1007/978-981-16-6046-7_10
