Adversarial Attacks on Deep Learning Models of Computer Vision: A Survey

Conference paper

In: Algorithms and Architectures for Parallel Processing (ICA3PP 2020)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 12454)

Abstract

Deep learning plays a significant role in academic and commercial fields. However, deep neural networks are vulnerable to adversarial attacks, which limits their application in safety-critical areas such as autonomous driving, surveillance, drones, and robotics. Due to the rapid development of adversarial examples in computer vision, many novel and interesting adversarial attacks are not covered by existing surveys and cannot be categorized according to existing taxonomies. In this paper, we present an improved taxonomy for adversarial attacks, which subsumes existing taxonomies, and we comprehensively investigate and summarize the latest attacks in computer vision with respect to this improved taxonomy. Finally, we discuss some potential research directions.
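
To make the object of study concrete: an adversarial attack perturbs a correctly classified input just enough to change the model's prediction. Below is a minimal PyTorch sketch of the fast gradient sign method (FGSM) of Goodfellow et al. [12], one of the earliest gradient-based attacks covered by the taxonomy; the function name and the perturbation budget eps are illustrative choices, not fixed by the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    # Single gradient-sign step (Goodfellow et al. [12]):
    #   x_adv = x + eps * sign(grad_x L(model(x), y)),
    # which keeps the perturbation inside an L-infinity ball of radius eps.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep pixels in the valid [0, 1] range
```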

Notes

  1. The other method, SIM, proposed by Lin et al. [24], is discussed in Sect. 4.4; a minimal sketch of its core idea follows.
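
For context, SIM (the scale-invariance method) exploits the observation that deep models assign similar predictions to scaled copies of an image: it averages the loss gradient over copies x / 2^i of the input before taking the attack step, which improves transferability. A minimal PyTorch sketch, assuming a classifier model and cross-entropy loss; the function name and the number of scale copies m are illustrative choices:

```python
import torch
import torch.nn.functional as F

def scale_invariant_gradient(model, x, y, m=5):
    # Scale-invariant gradient (Lin et al. [24]): sum the loss over
    # m down-scaled copies x / 2^i of the input, backpropagate once,
    # and average. The result replaces the plain gradient inside an
    # iterative attack such as I-FGSM.
    x = x.clone().detach().requires_grad_(True)
    loss = sum(F.cross_entropy(model(x / 2 ** i), y) for i in range(m))
    loss.backward()
    return x.grad / m
```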

References

  1. Akhtar, N., Mian, A.: Threat of adversarial attacks on deep learning in computer vision: a survey. IEEE Access 6, 14410–14430 (2018)

  2. Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks. In: 2017 IEEE Symposium on Security and Privacy (SP), pp. 39–57. IEEE (2017)

  3. Chen, L., et al.: A survey of adversarial learning on graphs. arXiv preprint arXiv:2003.05730 (2020)

  4. Chen, P., Zhang, H., Sharma, Y., Yi, J., Hsieh, C.: ZOO: zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In: Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security (AISec@CCS 2017), pp. 15–26 (2017)

  5. Croce, F., Hein, M.: Sparse and imperceivable adversarial attacks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 4724–4732 (2019)

  6. Dong, Y., et al.: Boosting adversarial attacks with momentum. In: 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9185–9193 (2018)

  7. Dong, Y., Pang, T., Su, H., Zhu, J.: Evading defenses to transferable adversarial examples by translation-invariant attacks. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2019), pp. 4312–4321 (2019)

  8. Engstrom, L., Tsipras, D., Schmidt, L., Madry, A.: A rotation and a translation suffice: fooling CNNs with simple transformations. CoRR abs/1712.02779 (2017)

  9. Fawzi, A., Moosavi-Dezfooli, S., Frossard, P.: The robustness of deep networks: a geometrical perspective. IEEE Signal Process. Mag. 34(6), 50–62 (2017)

  10. Finlay, C., Pooladian, A., Oberman, A.M.: The logbarrier adversarial attack: making effective use of decision boundary information. In: 2019 IEEE/CVF International Conference on Computer Vision (ICCV 2019), pp. 4861–4869 (2019)

  11. Giusti, A., et al.: A machine learning approach to visual perception of forest trails for mobile robots. IEEE Robot. Autom. Lett. 1(2), 661–667 (2016)

  12. Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. In: 3rd International Conference on Learning Representations (ICLR 2015) (2015)

  13. He, K., Zhang, X., Ren, S., Sun, J.: Identity mappings in deep residual networks. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9908, pp. 630–645. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46493-0_38

  14. Hu, S., Shang, X., Qin, Z., Li, M., Wang, Q., Wang, C.: Adversarial examples for automatic speech recognition: attacks and countermeasures. IEEE Commun. Mag. 57(10), 120–126 (2019)

  15. Huang, Q., Katsman, I., He, H., Gu, Z., Belongie, S., Lim, S.: Enhancing adversarial example transferability with an intermediate level attack. In: 2019 IEEE/CVF International Conference on Computer Vision (ICCV 2019), pp. 4732–4741 (2019)

  16. Huang, Z., Zhang, T.: Black-box adversarial attack with transferable model-based embedding. In: 8th International Conference on Learning Representations (ICLR 2020) (2020)

  17. Ibitoye, O., Abou-Khamis, R., Matrawy, A., Shafiq, M.O.: The threat of adversarial attacks on machine learning in network security-a survey. arXiv preprint arXiv:1911.02621 (2019)

  18. John, T.S., Thomas, T.: Adversarial attacks and defenses in malware detection classifiers. In: Handbook of Research on Cloud Computing and Big Data Applications in IoT, pp. 127–150. IGI global (2019)

  19. Kanbak, C., Moosavi-Dezfooli, S., Frossard, P.: Geometric robustness of deep networks: analysis and improvement. In: 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2018), pp. 4441–4449. IEEE Computer Society (2018)

  20. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems 2012, pp. 1106–1114 (2012)

  21. Kurakin, A., Goodfellow, I.J., Bengio, S.: Adversarial examples in the physical world. In: 5th International Conference on Learning Representations (ICLR 2017). OpenReview.net (2017)

  22. Laidlaw, C., Feizi, S.: Functional adversarial attacks. In: Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019 (NeurIPS 2019), pp. 10408–10418 (2019)

  23. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436–444 (2015). https://doi.org/10.1038/nature14539

  24. Lin, J., Song, C., He, K., Wang, L., Hopcroft, J.E.: Nesterov accelerated gradient and scale invariance for adversarial attacks. In: 8th International Conference on Learning Representations (ICLR 2020). OpenReview.net (2020)

  25. Liu, Y., Chen, X., Liu, C., Song, D.: Delving into transferable adversarial examples and black-box attacks. In: 5th International Conference on Learning Representations (ICLR 2017) (2017)

  26. Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.: Towards deep learning models resistant to adversarial attacks. In: 6th International Conference on Learning Representations (ICLR 2018) (2018)

  27. Martins, N., Cruz, J.M., Cruz, T., Abreu, P.H.: Adversarial machine learning applied to intrusion and malware scenarios: a systematic review. IEEE Access 8, 35403–35419 (2020)

  28. Mnih, V., et al.: Human-level control through deep reinforcement learning. Nature 518(7540), 529–533 (2015)

  29. Moosavi-Dezfooli, S., Fawzi, A., Frossard, P.: DeepFool: a simple and accurate method to fool deep neural networks. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2574–2582 (2016)

  30. Moosavi-Dezfooli, S., Fawzi, A., Fawzi, O., Frossard, P.: Universal adversarial perturbations. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 86–94 (2017)

  31. Najafabadi, M.M., Villanustre, F., Khoshgoftaar, T.M., Seliya, N., Wald, R., Muharemagic, E.: Deep learning applications and challenges in big data analytics. J. Big Data 2(1), 1–21 (2015). https://doi.org/10.1186/s40537-014-0007-7

  32. Nesterov, Y.: A method for unconstrained convex minimization problem with the rate of convergence O(1/k^2). Doklady AN USSR 269, 543–547 (1983)

  33. Nocedal, J., Wright, S.J.: Numerical Optimization, 2nd edn. Springer, New York (2006). https://doi.org/10.1007/978-0-387-40065-5

  34. Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z.B., Swami, A.: Practical black-box attacks against machine learning. In: Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pp. 506–519 (2017)

  35. Papernot, N., McDaniel, P.D., Goodfellow, I.J.: Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. CoRR abs/1605.07277 (2016)

  36. Ren, K., Wang, Q., Wang, C., Qin, Z., Lin, X.: The security of autonomous driving: threats, defenses, and future directions. Proc. IEEE 108(2), 357–372 (2019)

  37. Ru, B., Cobb, A., Blaas, A., Gal, Y.: BayesOpt adversarial attack. In: 8th International Conference on Learning Representations (ICLR 2020) (2020)

  38. Serban, A.C., Poll, E., Visser, J.: Adversarial examples-a complete characterisation of the phenomenon. arXiv preprint arXiv:1810.01185 (2018)

  39. Su, J., Vargas, D.V., Sakurai, K.: One pixel attack for fooling deep neural networks. IEEE Trans. Evol. Comput. 23(5), 828–841 (2019)

  40. Sutskever, I., Martens, J., Dahl, G., Hinton, G.: On the importance of initialization and momentum in deep learning. In: International Conference on Machine Learning, pp. 1139–1147 (2013)

  41. Sutskever, I., Vinyals, O., Le, Q.V.: Sequence to sequence learning with neural networks. In: Advances in Neural Information Processing Systems, pp. 3104–3112 (2014)

  42. Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., Fergus, R.: Intriguing properties of neural networks. In: 2nd International Conference on Learning Representations (ICLR 2014) (2014)

  43. Tu, C., et al.: AutoZOOM: autoencoder-based zeroth order optimization method for attacking black-box neural networks. In: The Thirty-Third AAAI Conference on Artificial Intelligence (AAAI 2019), The Thirty-First Innovative Applications of Artificial Intelligence Conference (IAAI 2019), The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI 2019), pp. 742–749. AAAI Press (2019)

  44. Vaswani, A., et al.: Attention is all you need. In: Advances in Neural Information Processing Systems, pp. 5998–6008 (2017)

  45. Wierstra, D., Schaul, T., Glasmachers, T., Sun, Y., Peters, J., Schmidhuber, J.: Natural evolution strategies. J. Mach. Learn. Res. 15(1), 949–980 (2014)

  46. Xiao, C., Zhu, J., Li, B., He, W., Liu, M., Song, D.: Spatially transformed adversarial examples. In: 6th International Conference on Learning Representations (ICLR 2018). OpenReview.net (2018)

  47. Xie, C., Zhang, Z., Zhou, Y., Bai, S., Wang, J., Ren, Z., Yuille, A.L.: Improving transferability of adversarial examples with input diversity. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2019), pp. 2730–2739. Computer Vision Foundation/IEEE (2019)

  48. Zhang, Z., Geiger, J., Pohjalainen, J., Mousa, A.E.D., Jin, W., Schuller, B.: Deep learning for environmentally robust speech recognition: an overview of recent developments. ACM Trans. Intell. Syst. Technol. (TIST) 9(5), 1–28 (2018)

  49. Zhou, Y., Han, M., Liu, L., He, J., Gao, X.: The adversarial attacks threats on computer vision: a survey. In: 2019 IEEE 16th International Conference on Mobile Ad Hoc and Sensor Systems Workshops (MASSW), pp. 25–30. IEEE (2019)

Acknowledgements

This work was partially supported by the National Natural Science Foundation of China under Grants No. 61972260, 61772347, 61836005; Guangdong Basic and Applied Basic Research Foundation under Grant No. 2019A1515011577; and Guangdong Science and Technology Department under Grant No. 2018B010107004.

Author information

Correspondence to Zhiwu Xu.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Ding, J., Xu, Z. (2020). Adversarial Attacks on Deep Learning Models of Computer Vision: A Survey. In: Qiu, M. (eds) Algorithms and Architectures for Parallel Processing. ICA3PP 2020. Lecture Notes in Computer Science, vol. 12454. Springer, Cham. https://doi.org/10.1007/978-3-030-60248-2_27
