Spatio-temporal attention model for foreground detection in cross-scene surveillance videos

Dong Liang*, Jiaxing Pan, Han Sun, Huiyu Zhou

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

5 Citations (Scopus)
5 Downloads (Pure)

Abstract

Foreground detection is an important theme in video surveillance. Conventional background modeling approaches build sophisticated temporal statistical models to detect foreground from low-level features, while modern semantic/instance segmentation approaches generate high-level foreground annotations but ignore the temporal relevance among consecutive frames. In this paper, we propose a Spatio-Temporal Attention Model (STAM) for cross-scene foreground detection. To fill the semantic gap between low- and high-level features, appearance and optical flow features are synthesized by attention modules during the feature learning procedure. Experimental results on the CDnet 2014 benchmark show that STAM outperforms many state-of-the-art methods on seven evaluation metrics. The attention modules and optical flow increase the F-measure by 9% and 6%, respectively. Without any tuning, the model also generalizes across scenes on the Wallflower and PETS datasets. The processing speed is 10.8 fps at a frame size of 256 × 256.
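The fusion described in the abstract — attention modules weighting appearance features against optical-flow features per pixel — can be sketched in a minimal, illustrative form. This is not the paper's learned network; the attention weights here are computed from simple per-pixel scores rather than trained end-to-end, and the function names (`attention_fuse`, `softmax`) are invented for this sketch.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(appearance, flow):
    """Fuse appearance and optical-flow feature maps (H, W, C) with a
    per-pixel attention gate over the two modalities. Illustrative only:
    STAM learns such weights inside a CNN rather than from channel means."""
    # One scalar score per modality per pixel: shape (H, W, 2).
    scores = np.stack([appearance.mean(-1), flow.mean(-1)], axis=-1)
    weights = softmax(scores, axis=-1)          # attention over modalities
    fused = (weights[..., 0:1] * appearance
             + weights[..., 1:2] * flow)        # weighted sum, (H, W, C)
    return fused, weights

# Toy example: 4 x 4 feature maps with 8 channels per modality.
rng = np.random.default_rng(0)
app = rng.standard_normal((4, 4, 8))
flo = rng.standard_normal((4, 4, 8))
fused, w = attention_fuse(app, flo)
print(fused.shape, w.shape)  # (4, 4, 8) (4, 4, 2)
```

The key property of such a gate is that the two modality weights sum to one at every pixel, so the fused map stays on the same scale as its inputs while letting motion cues dominate where appearance is ambiguous.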

Original language: English
Article number: 5142
Journal: Sensors (Switzerland)
Volume: 19
Issue number: 23
DOIs
Publication status: Published - 24 Nov 2019
Externally published: Yes

Bibliographical note

Funding Information:
This work is supported by the National Key R&D Program of China under Grant 2017YFB0802300 and the National Natural Science Foundation of China under Grant 61601223. H. Zhou was supported by UK EPSRC under Grant EP/N011074/1, Royal Society-Newton Advanced Fellowship under Grant NA160342, and the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 720325.

Publisher Copyright:
© 2019 by the authors. Licensee MDPI, Basel, Switzerland.


Keywords

  • Attention model
  • Background modeling
  • Foreground detection
  • Optical flow

ASJC Scopus subject areas

  • Analytical Chemistry
  • Biochemistry
  • Atomic and Molecular Physics, and Optics
  • Instrumentation
  • Electrical and Electronic Engineering
