
A Particle Filter Framework for Object Tracking Using Visual-Saliency Information

Chapter in: Intelligent Multimedia Surveillance

Abstract

Automated processing of video streams is central to modern surveillance systems, and object detection and tracking are its basic building blocks. Tracking results are further analyzed to detect events and activities for situation assessment. Many approaches to object detection and tracking are based on background modeling, which makes them vulnerable to noise, illumination changes, and similar disturbances. Moreover, an object's appearance may change over an image sequence due to variations in orientation, lighting, and occlusion. In this chapter, we explore the application of a neurobiological model of visual saliency to object detection and tracking with particle filters. We combine low-level features such as color, luminance, and edge information with motion cues to track a single person. Experimental results show that the approach is illumination invariant and can track persons under varying lighting conditions.
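To make the pipeline concrete, below is a minimal sketch of a saliency-weighted particle filter for single-object tracking in Python/NumPy. It assumes a per-frame saliency map with values in [0, 1] is already available (the chapter builds such maps from color, luminance, edge, and motion cues); the function names, the Gaussian random-walk motion model, and the scoring window size are illustrative choices, not the authors' exact implementation.

```python
# Minimal sketch of a saliency-weighted particle filter for single-object
# tracking. Assumes a per-frame saliency map in [0, 1] is precomputed
# elsewhere; names and the motion model are illustrative, not the chapter's
# exact implementation.
import numpy as np


def saliency_likelihood(saliency_map, particles, patch=8):
    """Mean saliency in a (2*patch+1)^2 window around each (row, col) particle."""
    h, w = saliency_map.shape
    scores = np.zeros(len(particles))
    for i, (r, c) in enumerate(particles):
        r, c = int(round(r)), int(round(c))
        r0, r1 = max(0, r - patch), min(h, r + patch + 1)
        c0, c1 = max(0, c - patch), min(w, c + patch + 1)
        window = saliency_map[r0:r1, c0:c1]
        if window.size:
            scores[i] = window.mean()
    return scores


def track_step(particles, weights, saliency_map, motion_std=5.0, rng=None):
    """One predict / update / resample cycle of the particle filter."""
    if rng is None:
        rng = np.random.default_rng()
    # Predict: Gaussian random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: reweight particles by the saliency-based likelihood.
    weights = weights * (saliency_likelihood(saliency_map, particles) + 1e-12)
    weights = weights / weights.sum()
    # State estimate: weighted mean of the particle cloud.
    estimate = weights @ particles
    # Systematic resampling to avoid weight degeneracy.
    n = len(weights)
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0
    idx = np.searchsorted(cumulative, (rng.random() + np.arange(n)) / n)
    return particles[idx], np.full(n, 1.0 / n), estimate


if __name__ == "__main__":
    # Demo on a synthetic saliency map: a Gaussian blob drifting across the frame.
    rng = np.random.default_rng(0)
    h, w, target = 240, 320, np.array([120.0, 160.0])
    particles = target + rng.normal(0.0, 10.0, (300, 2))
    weights = np.full(300, 1.0 / 300)
    yy, xx = np.mgrid[0:h, 0:w]
    for _ in range(20):
        target = target + np.array([1.0, 2.0])  # object moves down and right
        sal = np.exp(-((yy - target[0]) ** 2 + (xx - target[1]) ** 2) / (2 * 15.0 ** 2))
        particles, weights, estimate = track_step(particles, weights, sal, rng=rng)
    print("true position:", target, "estimate:", estimate.round(1))
```

The demo tracks a drifting Gaussian blob standing in for a real saliency map; substituting maps computed from the low-level feature and motion channels would recover the setting described in the chapter.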



Author information

Correspondence to Dwarikanath Mahapatra.

Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Mahapatra, D., Saini, M. (2013). A Particle Filter Framework for Object Tracking Using Visual-Saliency Information. In: Atrey, P., Kankanhalli, M., Cavallaro, A. (eds) Intelligent Multimedia Surveillance. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-41512-8_7

  • DOI: https://doi.org/10.1007/978-3-642-41512-8_7

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-41511-1

  • Online ISBN: 978-3-642-41512-8

  • eBook Packages: Computer Science (R0)
