Multi-head Attention-Based Masked Sequence Model for Mapping Functional Brain Networks

  • Conference paper
  • In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2022 (MICCAI 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13431)

Abstract

It has been of great interest in the neuroimaging community to discover functional brain networks (FBNs) from task-based functional magnetic resonance imaging (tfMRI). A variety of methods have been used to model tfMRI sequences so far, such as recurrent neural networks (RNNs) and autoencoders. However, these models are not designed to incorporate a key characteristic of tfMRI sequences: the same signal value at different time points in an fMRI time series may represent different states and meanings. Inspired by cloze learning and the human ability to interpret polysemous words from context, we proposed a self-supervised Multi-head Attention-based Masked Sequence Model (MAMSM), analogous to the way the BERT model uses Masked Language Modeling (MLM) and multi-head attention to learn the different meanings of the same word in different sentences. MAMSM masks and encodes tfMRI time series, uses multi-head attention to compute the different meanings corresponding to the same signal value in an fMRI sequence, and obtains context information through MSM pre-training. Furthermore, this work defined a new loss function to extract FBNs according to the task design information of the tfMRI data. The model was applied to the Human Connectome Project (HCP) task fMRI dataset and achieved state-of-the-art performance in modeling brain temporal dynamics: the Pearson correlation coefficient between the learned features and the task design curves exceeded 0.95, and the model extracted additional meaningful networks beyond the known task-related brain networks.
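To make the masked-sequence idea in the abstract concrete, the following is a minimal PyTorch-style sketch of BERT-style masked modeling over an fMRI time series. It is an illustration under stated assumptions, not the authors' implementation: the class name, layer sizes, mask ratio, and sequence length are all hypothetical choices.

```python
import torch
import torch.nn as nn

class MaskedSequenceModel(nn.Module):
    """Sketch of masked sequence modeling over an fMRI time series:
    embed scalar signal values, replace a random subset of time points
    with a learnable mask token, and let a multi-head self-attention
    encoder reconstruct the masked values from their context."""

    def __init__(self, seq_len, d_model=64, n_heads=4, n_layers=2, mask_ratio=0.15):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.embed = nn.Linear(1, d_model)                          # scalar signal -> token
        self.pos = nn.Parameter(torch.zeros(1, seq_len, d_model))   # learned positions
        self.mask_token = nn.Parameter(torch.zeros(1, 1, d_model))  # shared [MASK] embedding
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)                           # reconstruct signal value

    def forward(self, x):                                           # x: (batch, seq_len)
        tokens = self.embed(x.unsqueeze(-1)) + self.pos
        mask = torch.rand(x.shape, device=x.device) < self.mask_ratio
        tokens = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(tokens), tokens)
        recon = self.head(self.encoder(tokens)).squeeze(-1)
        return recon, mask

model = MaskedSequenceModel(seq_len=176)
x = torch.randn(8, 176)                  # 8 toy voxel time series, 176 time points
recon, mask = model(x)
loss = ((recon - x)[mask] ** 2).mean()   # reconstruction loss on masked points only
```

The evaluation the abstract reports is a Pearson correlation between learned temporal features and the task design curves; with NumPy that comparison is a one-liner (the arrays below are random stand-ins, not HCP data):

```python
import numpy as np

feature = np.random.randn(176)           # stand-in for a learned temporal feature
design = np.random.randn(176)            # stand-in for an HRF-convolved task regressor
r = np.corrcoef(feature, design)[0, 1]   # the paper reports r > 0.95 for task-related FBNs
```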

References

  1. Cabral, J., Kringelbach, M.L., Deco, G.: Exploring the network dynamics underlying brain activity during rest. Prog. Neurobiol. 114, 102–131 (2014)

  2. Kanwisher, N.: Functional specificity in the human brain: a window into the functional architecture of the mind. Proc. Natl. Acad. Sci. 107(25), 11163–11170 (2010)

  3. Beckmann, C.F., et al.: General multilevel linear modeling for group analysis in FMRI. Neuroimage 20(2), 1052–1063 (2003)

  4. Jiang, X., et al.: Sparse representation of HCP grayordinate data reveals novel functional architecture of cerebral cortex. Hum. Brain Mapp. 36(12), 5301–5319 (2015)

  5. Lv, J., et al.: Holistic atlases of functional networks and interactions reveal reciprocal organizational architecture of cortical function. IEEE Trans. Biomed. Eng. 62(4), 1120–1131 (2015)

  6. Li, X., et al.: Multiple-demand system identification and characterization via sparse representations of fMRI data. In: 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI). IEEE (2016)

  7. Smith, S.M., et al.: Correspondence of the brain’s functional architecture during activation and rest. Proc. Natl. Acad. Sci. 106(31), 13040–13045 (2009)

  8. Huang, H., et al.: Modeling task fMRI data via mixture of deep expert networks. In: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018). IEEE (2018)

  9. Huang, H., et al.: Modeling task fMRI data via deep convolutional autoencoder. IEEE Trans. Med. Imaging 37(7), 1551–1561 (2018)

  10. Zhao, Y., et al.: Automatic recognition of fMRI-derived functional networks using 3-D convolutional neural networks. IEEE Trans. Biomed. Eng. 65(9), 1975–1984 (2018)

  11. Li, Q., et al.: Simultaneous spatial-temporal decomposition of connectome-scale brain networks by deep sparse recurrent auto-encoders. In: Chung, A.C.S., Gee, J.C., Yushkevich, P.A., Bao, S. (eds.) IPMI 2019. LNCS, vol. 11492, pp. 579–591. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-20351-1_45

  12. Sak, H., et al.: Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In: Fifteenth Annual Conference of the International Speech Communication Association (2014)

  13. Wang, H., et al.: Recognizing brain states using deep sparse recurrent neural network. IEEE Trans. Med. Imaging 38, 1058–1068 (2018)

  14. Barch, D.M., et al.: Function in the human connectome: task-fMRI and individual differences in behavior. Neuroimage 80, 169–189 (2013)

  15. Glasser, M.F., et al.: The minimal preprocessing pipelines for the Human Connectome Project. Neuroimage 80, 105–124 (2013)

  16. Vaswani, A., et al.: Attention is all you need. In: Advances in Neural Information Processing Systems (2017)

  17. Devlin, J., et al.: BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)

  18. Xie, Z., et al.: SimMIM: a simple framework for masked image modeling. arXiv preprint arXiv:2111.09886 (2021)

  19. Dong, Q., et al.: Spatiotemporal attention autoencoder (STAAE) for ADHD classification. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-59728-3_50

  20. He, K., et al.: Masked autoencoders are scalable vision learners. arXiv preprint arXiv:2111.06377 (2021)

  21. Tang, G., et al.: Why self-attention? A targeted evaluation of neural machine translation architectures. In: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, pp. 4263–4272 (2018)

  22. Abraham, A., et al.: Machine learning for neuroimaging with scikit-learn. Front. Neuroinform. 8, 14 (2014)

Acknowledgement

This work was supported by the National Natural Science Foundation of China (NSFC 61976131 and NSFC 61936007).

Author information

Corresponding author

Correspondence to Bao Ge.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

He, M. et al. (2022). Multi-head Attention-Based Masked Sequence Model for Mapping Functional Brain Networks. In: Wang, L., Dou, Q., Fletcher, P.T., Speidel, S., Li, S. (eds) Medical Image Computing and Computer Assisted Intervention – MICCAI 2022. MICCAI 2022. Lecture Notes in Computer Science, vol 13431. Springer, Cham. https://doi.org/10.1007/978-3-031-16431-6_28

  • DOI: https://doi.org/10.1007/978-3-031-16431-6_28

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-16430-9

  • Online ISBN: 978-3-031-16431-6

  • eBook Packages: Computer Science, Computer Science (R0)
