
Multi-scale Spatial-Temporal Attention for Action Recognition

  • Conference paper
  • Pattern Recognition and Computer Vision (PRCV 2019)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 11857)
Abstract

In this paper, we propose a new attention model that integrates multi-scale features to recognize human actions. We introduce multi-scale features through convolution kernels of different sizes in both the spatial and temporal fields. The spatial attention model considers the relationship between the details and the whole of a human action, so our model can focus on the significant parts of the action in the spatial field. The temporal attention model considers the speed of the action, so our model can concentrate on the pivotal clips of the action in the temporal field. We verify the validity of multi-scale features on benchmark action recognition datasets, including UCF-101 (\(88.8\%\)), HMDB-51 (\(60.0\%\)) and Penn (\(96.3\%\)). As a result, our model outperforms previous methods in accuracy.

The first author of this paper is an undergraduate.
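The abstract only outlines the mechanism, so below is a minimal PyTorch-style sketch of how multi-scale attention built from convolution kernels of different sizes could look. The module names, the (1, 3, 5) kernel sizes, the sigmoid/softmax gating, and the averaging fusion are illustrative assumptions, not the authors' reported architecture.

```python
# Hypothetical sketch of multi-scale spatial and temporal attention;
# kernel sizes, gating, and fusion are assumptions, not the paper's design.
import torch
import torch.nn as nn


class MultiScaleSpatialAttention(nn.Module):
    """Fuse attention maps computed with convolution kernels of several sizes."""

    def __init__(self, channels, kernel_sizes=(1, 3, 5)):
        super().__init__()
        # One branch per scale; padding k // 2 preserves spatial resolution.
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, 1, k, padding=k // 2) for k in kernel_sizes]
        )

    def forward(self, x):  # x: (N, C, H, W) per-frame features
        maps = [torch.sigmoid(branch(x)) for branch in self.branches]
        attn = torch.stack(maps).mean(dim=0)  # average across scales
        return x * attn  # reweight spatial positions


class MultiScaleTemporalAttention(nn.Module):
    """Score clips with 1-D convolutions of several temporal extents."""

    def __init__(self, channels, kernel_sizes=(1, 3, 5)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv1d(channels, 1, k, padding=k // 2) for k in kernel_sizes]
        )

    def forward(self, x):  # x: (N, C, T) spatially pooled clip features
        maps = [torch.softmax(branch(x), dim=-1) for branch in self.branches]
        attn = torch.stack(maps).mean(dim=0)  # (N, 1, T)
        return (x * attn).sum(dim=-1)  # attention-weighted temporal pooling


# Example: batch of 2 videos, 512-channel features, 14x14 frames, 8 clips.
spatial = MultiScaleSpatialAttention(512)
temporal = MultiScaleTemporalAttention(512)
frames = torch.randn(2, 512, 14, 14)
clips = torch.randn(2, 512, 8)
print(spatial(frames).shape, temporal(clips).shape)  # (2, 512, 14, 14) (2, 512)
```

Averaging the per-scale maps is one plausible fusion; a learned weighting over scales would be an equally reasonable variant of the same idea.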



Acknowledgments

This work is supported by the National Natural Science Foundation of China (Grant Numbers 61773377 and 61573352).

Author information


Corresponding author

Correspondence to Lingfeng Wang.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Zhang, Q., Yan, H., Wang, L. (2019). Multi-scale Spatial-Temporal Attention for Action Recognition. In: Lin, Z., et al. Pattern Recognition and Computer Vision. PRCV 2019. Lecture Notes in Computer Science, vol 11857. Springer, Cham. https://doi.org/10.1007/978-3-030-31654-9_3


  • DOI: https://doi.org/10.1007/978-3-030-31654-9_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-31653-2

  • Online ISBN: 978-3-030-31654-9

  • eBook Packages: Computer Science, Computer Science (R0)
