Multi-Camera Saliency

Yan Luo, Ming Jiang, Yongkang Wong, Qi Zhao

Research output: Contribution to journal › Article › peer-review

12 Scopus citations

Abstract

A significant body of literature on saliency modeling predicts where humans look in a single image or video. Beyond the scientific goal of understanding how information from multiple visual sources is fused to identify regions of interest in a holistic manner, multi-camera saliency has tremendous engineering applications due to the widespread deployment of cameras. This paper proposes a principled framework that smoothly integrates visual information from multiple views into a global scene map and employs a saliency algorithm incorporating high-level features to identify the most important regions in the fused view. The proposed method has three key distinguishing features compared with its counterparts: (1) saliency detection is global, so regions that are salient in one local view may not be important in the global context; (2) it requires neither special camera deployment nor overlapping fields of view; and (3) the saliency algorithm is effective at highlighting interesting object regions even though no single object detector is used. Experiments on several data sets confirm the effectiveness of the proposed framework.
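The abstract describes projecting visual information from multiple camera views into a single global scene map before identifying salient regions. A minimal sketch of such multi-view fusion, assuming each view is related to the global map by a planar homography; the function names, the nearest-neighbor warp, and the element-wise-max fusion rule here are illustrative simplifications, not the paper's actual algorithm:

```python
import numpy as np

def project_to_global(view_saliency, homography, global_shape):
    """Warp one camera view's saliency map into global scene coordinates
    using inverse mapping through a 3x3 planar homography (nearest neighbor)."""
    H_inv = np.linalg.inv(homography)
    gy, gx = np.mgrid[0:global_shape[0], 0:global_shape[1]]
    pts = np.stack([gx.ravel(), gy.ravel(), np.ones(gx.size)])  # homogeneous coords
    src = H_inv @ pts
    src /= src[2]                                  # dehomogenize
    x = np.round(src[0]).astype(int)
    y = np.round(src[1]).astype(int)
    h, w = view_saliency.shape
    valid = (x >= 0) & (x < w) & (y >= 0) & (y < h)  # pixels that land inside the view
    out = np.zeros(global_shape)
    out.ravel()[valid.nonzero()[0]] = view_saliency[y[valid], x[valid]]
    return out

def fuse_global_saliency(view_maps, homographies, global_shape):
    """Fuse per-view saliency maps into one global map; element-wise max
    stands in for the paper's more principled fusion."""
    fused = np.zeros(global_shape)
    for s, H in zip(view_maps, homographies):
        fused = np.maximum(fused, project_to_global(s, H, global_shape))
    return fused
```

With an identity homography a view maps onto itself, so a single view passes through unchanged; overlapping views simply reinforce each other under the max rule.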

Original language: English (US)
Article number: 7010978
Pages (from-to): 2057-2070
Number of pages: 14
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 37
Issue number: 10
State: Published - Oct 1 2015

Bibliographical note

Publisher Copyright:
© 2015 IEEE.

Keywords

  • Global Saliency
  • High-Level Feature Saliency
  • Label Consistent K-SVD
  • Multi-Camera Eye Tracking Dataset
  • Multi-Camera Saliency
  • Region Competition
