Abstract
Over the past decade, adversarial examples have demonstrated a growing ability to fool neural networks. However, most adversarial examples are easily detected, especially under statistical analysis. Ensuring undetectability is crucial for adversarial examples to succeed in practice. In this paper, we borrow the idea of the embedding suitability map from steganography and employ it to modulate adversarial perturbations. In this way, the perturbations are concentrated in hard-to-detect regions and attenuated in predictable (smooth) regions. Extensive experiments show that the proposed scheme is compatible with various existing attacks and significantly boosts the undetectability of adversarial examples against both human inspection and statistical analysis, while preserving the same attack ability. The code is available at github.com/zengh5/Undetectable-attack.
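To make the idea concrete, the sketch below is a minimal illustration, not the authors' implementation (see the linked GitHub repository for that): it derives a per-pixel suitability map from a HILL-like steganographic cost (an assumption on our part) and uses it to weight an FGSM-style perturbation so that stronger changes fall in textured, hard-to-detect regions. The function names suitability_map and modulated_fgsm are hypothetical.

```python
# Minimal sketch (assumed HILL-like cost; NOT the paper's exact scheme).
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def suitability_map(gray, eps=1e-8):
    """HILL-style embedding cost: high-pass residual, then two averaging
    filters; suitability is taken as the inverse of the cost."""
    khp = np.array([[-1,  2, -1],
                    [ 2, -4,  2],
                    [-1,  2, -1]], dtype=np.float32)        # KB high-pass kernel
    residual = np.abs(convolve(gray.astype(np.float32), khp, mode='reflect'))
    cost = 1.0 / (uniform_filter(residual, size=3) + eps)   # low cost in texture
    cost = uniform_filter(cost, size=15)                    # spread the cost
    suit = 1.0 / (cost + eps)                               # high in texture
    return suit / (suit.max() + eps)                        # normalize to [0, 1]

def modulated_fgsm(image, grad, epsilon):
    """FGSM step whose per-pixel amplitude is weighted by the suitability
    map; `image` is assumed to be in [0, 255]."""
    gray = image.mean(axis=-1) if image.ndim == 3 else image
    s = suitability_map(gray)
    if image.ndim == 3:
        s = s[..., None]                    # broadcast over color channels
    pert = epsilon * np.sign(grad) * s      # strong in texture, weak in flat areas
    return np.clip(image + pert, 0.0, 255.0)
```

In an iterative attack (e.g., I-FGSM or MI-FGSM), the same map could simply scale each step's update before clipping to the overall budget.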
Acknowledgements
This work is supported by the Opening Project of the Guangdong Province Key Laboratory of Information Security Technology (No. 2020B1212060078) and the Network Emergency Management Research Special Topic (No. WLYJGL2023ZD003).
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Zeng, H., Chen, B., Yang, R., Li, C., Peng, A. (2024). Towards Undetectable Adversarial Examples: A Steganographic Perspective. In: Luo, B., Cheng, L., Wu, ZG., Li, H., Li, C. (eds) Neural Information Processing. ICONIP 2023. Lecture Notes in Computer Science, vol 14450. Springer, Singapore. https://doi.org/10.1007/978-981-99-8070-3_14
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-8069-7
Online ISBN: 978-981-99-8070-3