
RCA-IUnet: a residual cross-spatial attention-guided inception U-Net model for tumor segmentation in breast ultrasound imaging

  • Original Paper
  • Published in Machine Vision and Applications

Abstract

Advancements in deep learning have contributed immensely to biomedical image analysis. Breast cancer is among the deadliest diseases in women, and early detection is the key to improving survivability. Medical imaging modalities such as ultrasound provide an excellent visual representation of organ function; however, analysing such scans is challenging and time consuming for radiologists, which delays diagnosis. Although various deep learning-based approaches have been proposed and achieved promising results, this article introduces an efficient residual cross-spatial attention-guided inception U-Net (RCA-IUnet) model with minimal training parameters for tumor segmentation in breast ultrasound imaging, further improving segmentation performance across varying tumor sizes. The RCA-IUnet model follows the U-Net topology with residual inception depth-wise separable convolution and hybrid pooling (max pooling and spectral pooling) layers. In addition, cross-spatial attention filters are added to suppress irrelevant features and focus on the target structure. The segmentation performance of the proposed model is validated on two publicly available datasets using standard segmentation evaluation metrics, on which it outperforms other state-of-the-art segmentation models.
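The hybrid pooling layer mentioned in the abstract combines max pooling with spectral pooling, which downsamples a feature map by keeping only the low-frequency centre of its Fourier spectrum. The paper does not publish an implementation here; the following is a minimal single-channel numpy sketch under that reading, with an illustrative `alpha` blending weight that is an assumption, not the paper's parameterization:

```python
import numpy as np

def spectral_pool(x, out_h, out_w):
    """Downsample a 2-D map by cropping the low-frequency centre of its
    Fourier spectrum and inverting (spectral pooling, Rippel et al. 2015)."""
    h, w = x.shape
    f = np.fft.fftshift(np.fft.fft2(x))            # move DC component to the centre
    top, left = (h - out_h) // 2, (w - out_w) // 2
    cropped = f[top:top + out_h, left:left + out_w]
    cropped = cropped * (out_h * out_w) / (h * w)  # keep amplitudes comparable
    return np.real(np.fft.ifft2(np.fft.ifftshift(cropped)))

def max_pool2(x):
    """Plain 2x2 max pooling (input sides assumed even)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def hybrid_pool(x, alpha=0.5):
    """Blend max pooling and spectral pooling at the same output size."""
    h, w = x.shape
    return alpha * max_pool2(x) + (1 - alpha) * spectral_pool(x, h // 2, w // 2)
```

Cropping the centred spectrum preserves the coarse structure of the map while max pooling preserves strong local activations; blending the two gives the downsampled map both properties.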


Notes

  1. https://github.com/lsh1994/keras-segmentation

  2. https://github.com/kannyjyk/Nested-UNet

  3. https://github.com/ozan-oktay/Attention-Gated-Networks

  4. https://github.com/clguo/Dense_Unet_Keras
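The baseline implementations above are compared against the proposed model using standard segmentation evaluation metrics. The abstract does not enumerate them; the Dice similarity coefficient and intersection over union (IoU) are the usual choices for binary tumor masks, sketched here in numpy (function names are illustrative, not from the paper's code):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    """Intersection over union (Jaccard index) between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)
```

For example, masks `[1, 1, 0, 0]` and `[0, 1, 1, 0]` share one foreground pixel, giving a Dice score of 0.5 and an IoU of 1/3; identical masks score 1.0 on both.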


Acknowledgements

We thank our institute, Indian Institute of Information Technology Allahabad (IIITA), India, and Big Data Analytics (BDA) laboratory for allocating the centralized computing facility and other necessary resources to perform this research. We extend our thanks to our colleagues for their valuable guidance and suggestions.

Author information


Corresponding author

Correspondence to Narinder Singh Punn.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Cite this article

Punn, N.S., Agarwal, S. RCA-IUnet: a residual cross-spatial attention-guided inception U-Net model for tumor segmentation in breast ultrasound imaging. Machine Vision and Applications 33, 27 (2022). https://doi.org/10.1007/s00138-022-01280-3
