Gradient-Based Adversarial Attacks Against Malware Detection by Instruction Replacement

  • Conference paper
  • Wireless Algorithms, Systems, and Applications (WASA 2022)

Abstract

Deep learning plays a vital role in malware detection. MalConv is a well-known open-source deep learning framework for malware detection that is trained on the raw bytes of binaries. Researchers have proposed adversarial example generation strategies that evade MalConv by modifying the PE header or appending bytes to the end of the malware. However, because these strategies focus on non-executable portions of the file, they can easily be neutralized by pre-processing before classification. We therefore propose a new instruction-replacement strategy that overcomes these flaws. This paper reviews recent research on adversarial example generation strategies against MalConv, analyzes why MalConv can be evaded by adversarial examples, identifies two layers of MalConv that can be attacked, proposes EFGSM, a gradient-based instruction-replacement strategy that enhances the Fast Gradient Sign Method (FGSM), and sheds light on future work on adversarial example defense strategies for MalConv. We assess the performance of EFGSM and existing adversarial example generation strategies on 200 malware samples. The evaluation shows that our strategy improves the success rate from 68% to 81.5% and takes less time to generate adversarial examples. We also assess the evasion performance of the adversarial examples against three antivirus engines; the results show that our strategy achieves state-of-the-art performance.
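
As background for the approach described in the abstract, the following is a minimal, hypothetical sketch of the FGSM step [6] that EFGSM builds on. Because MalConv consumes discrete bytes, the gradient is taken with respect to the embedded input rather than the bytes themselves; the model interface, the epsilon value, and the omitted projection back to valid instructions are all assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def fgsm_embedding_step(model, emb, label, epsilon=0.1):
    """One FGSM step in embedding space (illustrative sketch only).

    Args:
        model:   a detector mapping an embedded byte sequence to logits
                 (hypothetical interface; MalConv normally embeds internally)
        emb:     embedded input, shape (batch, seq_len, emb_dim)
        label:   true class indices (e.g., 1 = malware)
        epsilon: per-coordinate perturbation magnitude (assumed value)
    """
    emb = emb.clone().detach().requires_grad_(True)
    logits = model(emb)
    # Increase the loss w.r.t. the true label, pushing the sample
    # toward misclassification.
    loss = F.cross_entropy(logits, label)
    loss.backward()
    # Core FGSM update: step each coordinate by epsilon in the
    # direction of the sign of the gradient.
    return (emb + epsilon * emb.grad.sign()).detach()
```

A plain FGSM step like this leaves the perturbed point between valid byte embeddings, so a byte-level attack must project each perturbed position back onto a concrete byte, and an instruction-level attack such as EFGSM must further restrict that projection to semantically equivalent instructions (for example, replacing `mov eax, 0` with `xor eax, eax`) so that the binary remains functional.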


Notes

  1. MD5: c37b02e060fa169e4d6f0c6e77ddb500.

References

  1. Kolosnjaji, B., et al.: Adversarial malware binaries: evading deep learning for malware detection in executables. In: 2018 26th European Signal Processing Conference (EUSIPCO), pp. 533–537. IEEE (2018)

  2. Demetrio, L., Biggio, B., Lagorio, G., Roli, F., Armando, A.: Explaining vulnerabilities of deep learning to adversarial malware binaries. In: 3rd Italian Conference on Cyber Security (ITASEC 2019), vol. 2315 (2019)

  3. Suciu, O., Coull, S.E., Johns, J.: Exploring adversarial examples in malware detection. In: 2019 IEEE Security and Privacy Workshops (SPW), pp. 8–14. IEEE (2019)

  4. Demetrio, L., Biggio, B., Lagorio, G., Roli, F., Armando, A.: Functionality-preserving black-box optimization of adversarial windows malware. IEEE Trans. Inf. Forensics Secur. 16, 3469–3478 (2021)

  5. Demetrio, L., Coull, S.E., Biggio, B., Lagorio, G., Armando, A., Roli, F.: Adversarial EXEmples: a survey and experimental evaluation of practical attacks on machine learning for Windows malware detection. ACM Trans. Priv. Secur. (TOPS) 24(4), 1–31 (2021)

  6. Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 (2014)

  7. Raff, E., Barker, J., Sylvester, J., Brandon, R., Catanzaro, B., Nicholas, C.K.: Malware detection by eating a whole EXE. In: Workshops at the Thirty-Second AAAI Conference on Artificial Intelligence (2018)

  8. Szegedy, C., et al.: Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199 (2013)

  9. Anderson, H.S., Roth, P.: EMBER: an open dataset for training static PE malware machine learning models. arXiv preprint arXiv:1804.04637 (2018)

  10. Yuan, J., Zhou, S., Lin, L., Wang, F., Cui, J.: Black-box adversarial attacks against deep learning based malware binaries detection with GAN. In: ECAI 2020, pp. 2536–2542. IOS Press (2020)

  11. Park, D., Khan, H., Yener, B.: Generation & evaluation of adversarial examples for malware obfuscation. In: 2019 18th IEEE International Conference on Machine Learning And Applications (ICMLA), pp. 1283–1290. IEEE (2019)

Author information

Corresponding author

Correspondence to Zhiqiang Shi.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Zhao, J. et al. (2022). Gradient-Based Adversarial Attacks Against Malware Detection by Instruction Replacement. In: Wang, L., Segal, M., Chen, J., Qiu, T. (eds) Wireless Algorithms, Systems, and Applications. WASA 2022. Lecture Notes in Computer Science, vol 13471. Springer, Cham. https://doi.org/10.1007/978-3-031-19208-1_50

  • DOI: https://doi.org/10.1007/978-3-031-19208-1_50

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-19207-4

  • Online ISBN: 978-3-031-19208-1

  • eBook Packages: Computer Science, Computer Science (R0)
