
The Challenge of Classification Confidence Estimation in Dynamically-Adaptive Neural Networks

  • Conference paper
  • In: Embedded Computer Systems: Architectures, Modeling, and Simulation (SAMOS 2021)
  • Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13227)

Abstract

An emerging trend to improve the power efficiency of neural network computations is to dynamically adapt the network architecture or parameters to each input. In particular, many such dynamic network models can emit predictions for 'easy' samples at early exits if a confidence-based criterion is satisfied. Traditional methods to estimate the inference confidence of a monitored neural network, or of its intermediate predictions, include the maximum element of the SoftMax output (the score) and the difference between the largest and second-largest score values (the score margin). Such methods rely only on a small and position-agnostic subset of the information available at the output of the monitored classifier. For the first time, this paper reports on the lessons learned while trying to extract confidence information from the whole distribution of the classifier outputs rather than from the top scores only. Our experimental campaign indicates that capturing specific patterns associated with misclassifications is nontrivial, due to counterintuitive empirical evidence. Rather than disqualifying the approach, this paper calls for further fine-tuning to unfold its potential, and is a first step toward a systematic assessment of confidence-based criteria for dynamically-adaptive neural network computations.
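
As a concrete illustration of the two baseline estimators named in the abstract, the sketch below computes the score and the score margin from a vector of classifier logits and applies one of them as an early-exit test, alongside one possible whole-distribution measure (negative Shannon entropy). This is a minimal illustrative sketch, not code from the paper: the function names, the example logits, the 0.5 threshold, and the entropy-based measure are placeholders and assumptions of this summary.

    import numpy as np

    def softmax(logits):
        """Numerically stable SoftMax over the class scores."""
        z = logits - np.max(logits)
        e = np.exp(z)
        return e / np.sum(e)

    def score(probs):
        """Confidence = the largest SoftMax output (top-1 probability)."""
        return float(np.max(probs))

    def score_margin(probs):
        """Confidence = difference between the largest and the
        second-largest SoftMax outputs."""
        top2 = np.sort(probs)[-2:]          # two largest probabilities, ascending
        return float(top2[1] - top2[0])

    def negative_entropy(probs):
        """A (hypothetical) whole-distribution measure: negative Shannon
        entropy, which depends on every output rather than on the top
        scores only. It stands in for, but is not, the paper's approach."""
        return float(np.sum(probs * np.log(probs + 1e-12)))

    # Early-exit test at an intermediate classifier of a dynamic network.
    logits = np.array([2.1, 0.3, -1.0, 0.5])   # hypothetical intermediate output
    probs = softmax(logits)
    if score_margin(probs) > 0.5:               # placeholder threshold
        prediction = int(np.argmax(probs))      # exit early with this class

Note that both score and score_margin read only one or two entries of the output vector and ignore which class positions carry them; this is exactly the "small and position-agnostic subset" limitation that motivates looking at the whole output distribution.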



Author information

Correspondence to Francesco Dall’Occo.


Copyright information

© 2022 Springer Nature Switzerland AG

About this paper

Cite this paper

Dall’Occo, F., Bueno-Crespo, A., Abellán, J.L., Bertozzi, D., Favalli, M. (2022). The Challenge of Classification Confidence Estimation in Dynamically-Adaptive Neural Networks. In: Orailoglu, A., Jung, M., Reichenbach, M. (eds) Embedded Computer Systems: Architectures, Modeling, and Simulation. SAMOS 2021. Lecture Notes in Computer Science, vol 13227. Springer, Cham. https://doi.org/10.1007/978-3-031-04580-6_34

  • DOI: https://doi.org/10.1007/978-3-031-04580-6_34

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-04579-0

  • Online ISBN: 978-3-031-04580-6

  • eBook Packages: Computer Science, Computer Science (R0)
