
Concurrent Transformer for Spatial-Temporal Graph Modeling

  • Conference paper

Database Systems for Advanced Applications (DASFAA 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13247)


Abstract

Previous studies have shown that concurrently extracting spatial and temporal information is an effective way to model spatial-temporal data. However, in these studies the receptive field used to construct the carrier of concurrent extraction is fixed, which limits the flexibility of receptive-field selection and sacrifices the scalability needed to capture long-range temporal dependencies. Moreover, these studies rely on static weights, which are insufficient to describe complex spatial and temporal dependencies. In this paper, we propose the Concurrent Spatial-Temporal Transformer (CSTT), which keeps the carrier of concurrent extraction dense so that messages under different receptive fields can be passed more efficiently; this makes the selection of receptive fields more flexible and guarantees scalability for capturing long-range temporal dependencies. In addition, a unified self-attention mechanism is applied to the carrier of concurrent extraction to capture spatial and temporal information, preserving the dependencies between the two dimensions under different contextual information. On this basis, we design an iterative strategy to further handle long sequences. Experiments on four real-world traffic datasets show that our algorithm achieves significant improvements on the classical spatial-temporal modeling task.
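The abstract describes a unified self-attention applied to a joint spatial-temporal carrier, so that spatial and temporal dependencies are captured concurrently rather than by alternating spatial and temporal modules. The following PyTorch snippet is a minimal sketch of that idea only, not the authors' implementation: the class name `UnifiedSTAttention`, the tensor layout `(batch, time, node, feature)`, and all dimensions are illustrative assumptions.

```python
# Minimal sketch (not the CSTT code): one self-attention pass over tokens
# indexed jointly by (node, time step), so spatial and temporal dependencies
# are extracted concurrently under a shared attention weighting.
import torch
import torch.nn as nn


class UnifiedSTAttention(nn.Module):
    """Unified self-attention over a flattened spatial-temporal token set (assumed layout)."""

    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, T, N, d_model) -- T time steps, N graph nodes.
        b, t, n, d = x.shape
        tokens = x.reshape(b, t * n, d)             # flatten into T*N joint tokens
        out, _ = self.attn(tokens, tokens, tokens)  # one attention pass over space AND time
        out = self.norm(tokens + out)               # residual connection + layer norm
        return out.reshape(b, t, n, d)


if __name__ == "__main__":
    # Toy traffic-style input: 2 samples, 12 time steps, 20 sensors, 64 features.
    x = torch.randn(2, 12, 20, 64)
    y = UnifiedSTAttention()(x)
    print(y.shape)  # torch.Size([2, 12, 20, 64])
```

This toy version attends over all T*N tokens at once; the paper's dense carrier, receptive-field selection, and iterative strategy for long sequences are not reproduced here.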



Acknowledgements

This work is funded in part by the National Natural Science Foundation of China Project No. U1936213, and supported in part by NSF under grants III-1763325, III-1909323, III-2106758, and SaTC-1930941.

Author information


Corresponding author

Correspondence to Yun Xiong.



Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Xie, Y. et al. (2022). Concurrent Transformer for Spatial-Temporal Graph Modeling. In: Bhattacharya, A., et al. Database Systems for Advanced Applications. DASFAA 2022. Lecture Notes in Computer Science, vol 13247. Springer, Cham. https://doi.org/10.1007/978-3-031-00129-1_26


  • DOI: https://doi.org/10.1007/978-3-031-00129-1_26

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-00128-4

  • Online ISBN: 978-3-031-00129-1

  • eBook Packages: Computer Science (R0)
