
Uncertainty Quantification and Calibration of Imitation Learning Policy in Autonomous Driving

  • Conference paper
  • In: Trustworthy AI - Integrating Learning, Optimization and Reasoning (TAILOR 2020)

Abstract

Current state-of-the-art imitation learning policies in autonomous driving, despite achieving good driving performance, do not consider the uncertainty in their predicted actions. Acting on such predictions from a black-box machine learning system without any measure of confidence can compromise safety and reliability in safety-critical applications such as autonomous driving. In this paper, we propose three different uncertainty-aware policies to capture epistemic and aleatoric uncertainty over the continuous control commands. More specifically, we extend a state-of-the-art policy with three common uncertainty estimation methods: heteroscedastic aleatoric, MC-Dropout, and Deep Ensembles. To provide accurate and calibrated uncertainty estimates, we further combine our agents with isotonic regression, an existing calibration method for regression tasks. We benchmark and compare the driving performance of our uncertainty-aware agents in complex urban driving environments, and we evaluate the quality of the predicted uncertainty before and after recalibration. The experimental results show that our Ensemble agent combined with isotonic regression not only provides accurate uncertainty for its predictions but also significantly outperforms the state-of-the-art baseline in driving performance.
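To make the three estimation methods above concrete, the sketch below shows, in PyTorch, (i) a heteroscedastic head that predicts a per-command mean and log-variance and is trained with a Gaussian negative log-likelihood, (ii) MC-Dropout sampling at test time, and (iii) prediction averaging over a Deep Ensemble. This is a minimal illustration under our own assumptions; the layer sizes, dropout rate, and helper names are hypothetical and not taken from the paper's architecture.

```python
# Minimal PyTorch sketch of the three uncertainty estimation methods named in
# the abstract. Layer sizes, the dropout rate, and all helper names are
# illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn


class HeteroscedasticPolicy(nn.Module):
    """Predicts a mean and a log-variance per control command (aleatoric)."""

    def __init__(self, feat_dim=512, n_actions=2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Dropout(p=0.5),                       # reused by MC-Dropout below
            nn.Linear(256, 128), nn.ReLU(),
        )
        self.mean = nn.Linear(128, n_actions)
        self.log_var = nn.Linear(128, n_actions)     # log sigma^2 for stability

    def forward(self, x):
        h = self.backbone(x)
        return self.mean(h), self.log_var(h)


def heteroscedastic_nll(mean, log_var, target):
    # Gaussian negative log-likelihood: the exp(-log_var) factor down-weights
    # noisy samples, while the +log_var term penalizes blanket over-estimation.
    return (0.5 * torch.exp(-log_var) * (target - mean) ** 2 + 0.5 * log_var).mean()


@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=20):
    # Epistemic uncertainty via MC-Dropout: keep dropout active at test time
    # and read the spread of the sampled means as model uncertainty.
    model.train()                                    # enables dropout layers
    means = torch.stack([model(x)[0] for _ in range(n_samples)])
    model.eval()
    return means.mean(0), means.var(0)


@torch.no_grad()
def ensemble_predict(models, x):
    # Deep Ensembles: average independently trained members; total predictive
    # variance follows from the law of total variance over the mixture.
    outs = [m(x) for m in models]
    mu = torch.stack([m for m, _ in outs])           # member means
    var = torch.stack([lv.exp() for _, lv in outs])  # member aleatoric variances
    mean = mu.mean(0)
    total_var = (var + mu ** 2).mean(0) - mean ** 2
    return mean, total_var
```

In this reading, the dropout and ensemble spreads capture epistemic uncertainty while the predicted log-variance captures aleatoric noise; the recalibration step mentioned in the abstract is sketched next.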
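Isotonic recalibration for regression, as popularized by Kuleshov et al. (ICML 2018), can be sketched with scikit-learn as follows: on a held-out calibration set, compare the model's predicted CDF values at the true labels against their empirical frequencies, and fit a monotone map between the two. The Gaussian predictive form and all variable names below are our illustrative assumptions, not the paper's code.

```python
# A sketch of isotonic recalibration for regression on a held-out calibration
# set, roughly following Kuleshov et al. (ICML 2018) with scikit-learn. The
# Gaussian predictive form and all variable names are our assumptions.
import numpy as np
from scipy.stats import norm
from sklearn.isotonic import IsotonicRegression


def fit_recalibrator(mu, sigma, y):
    """Fit a monotone map from predicted to empirical CDF values.

    mu, sigma: per-sample Gaussian predictive parameters on the calibration set
    y: the corresponding ground-truth control commands
    """
    p_pred = norm.cdf(y, loc=mu, scale=sigma)        # predicted CDF at each label
    # Empirical frequency of each predicted CDF value being reached on the
    # calibration set; perfect calibration would give the identity map.
    p_emp = np.array([(p_pred <= p).mean() for p in p_pred])
    return IsotonicRegression(y_min=0.0, y_max=1.0,
                              out_of_bounds="clip").fit(p_pred, p_emp)


# Usage: push a new predicted CDF value through the fitted map to obtain a
# calibrated confidence level, e.g. for calibrated prediction intervals.
# recal = fit_recalibrator(mu_cal, sigma_cal, y_cal)
# calibrated_p = recal.predict(norm.cdf(y_grid, loc=mu_new, scale=sigma_new))
```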

This research was supported by the German Federal Ministry for Education and Research (BMB+F) in the project REACT.



Author information

Correspondence to Farzad Nozarian.



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Nozarian, F., Müller, C., Slusallek, P. (2021). Uncertainty Quantification and Calibration of Imitation Learning Policy in Autonomous Driving. In: Heintz, F., Milano, M., O'Sullivan, B. (eds.) Trustworthy AI - Integrating Learning, Optimization and Reasoning. TAILOR 2020. Lecture Notes in Computer Science, vol. 12641. Springer, Cham. https://doi.org/10.1007/978-3-030-73959-1_14


  • DOI: https://doi.org/10.1007/978-3-030-73959-1_14


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-73958-4

  • Online ISBN: 978-3-030-73959-1

  • eBook Packages: Computer Science, Computer Science (R0)
