Abstract
In this mathematically intensive chapter, we try to answer perhaps the most important questions of machine learning: what does a deep neural network learn, and how does a deep neural network, in particular a CNN, accomplish it? The full answer to these basic questions is still a long way off; here we present some of the insights we have obtained while traveling towards that destination. In particular, we explain why classic approaches to machine learning, such as the single-layer perceptron or kernel machines, are not enough to achieve this goal, and why a modern CNN turns out to be a promising tool.
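The gap between classic single-layer models and deeper networks can be made concrete with the textbook XOR example. The sketch below is an illustration added here, not code from the chapter: all weights are hand-picked for exposition, and NumPy is assumed. It shows that a single-layer perceptron, which draws a single linear decision boundary, cannot reproduce the XOR labels, whereas a two-layer ReLU network with only two hidden units represents them exactly.

```python
import numpy as np

# Four XOR points and their labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

def perceptron(x, w, b):
    """Single-layer perceptron: one linear cut of the input plane."""
    return (x @ w + b > 0).astype(int)

def two_layer_relu(x):
    """Hand-picked two-hidden-unit ReLU network representing XOR exactly:
    h1 = relu(x1 + x2), h2 = relu(x1 + x2 - 1), output = h1 - 2*h2."""
    W1 = np.array([[1.0, 1.0],
                   [1.0, 1.0]])       # both hidden units sum the two inputs
    b1 = np.array([0.0, -1.0])        # the second unit is shifted by -1
    w2 = np.array([1.0, -2.0])        # output layer
    h = np.maximum(x @ W1 + b1, 0.0)  # ReLU folds the input space
    return (h @ w2 > 0.5).astype(int)

print(two_layer_relu(X))              # [0 1 1 0] -- matches the XOR labels

# Brute-force check: no perceptron weights/bias on a coarse grid fit XOR,
# reflecting the fact that XOR is not linearly separable at all.
grid = np.linspace(-2.0, 2.0, 9)
fits = any(np.array_equal(perceptron(X, np.array([w1, w2]), b), y)
           for w1 in grid for w2 in grid for b in grid)
print(fits)                           # False
```

The mechanism at work, each ReLU unit folding the input space into linear regions whose composition grows with depth, is the kind of geometric effect this chapter develops in detail.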
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this chapter
Cite this chapter
Ye, J.C. (2022). Geometry of Deep Neural Networks. In: Geometry of Deep Learning. Mathematics in Industry, vol 37. Springer, Singapore. https://doi.org/10.1007/978-981-16-6046-7_10
DOI: https://doi.org/10.1007/978-981-16-6046-7_10
Publisher Name: Springer, Singapore
Print ISBN: 978-981-16-6045-0
Online ISBN: 978-981-16-6046-7
eBook Packages: Mathematics and Statistics (R0)