A graph-based foreground representation and its application in example based people matching in video

Kedar A. Patwardhan, Guillermo Sapiro, Vassilios Morellas

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

6 Scopus citations

Abstract

In this work, we propose a framework for foreground representation in video and illustrate it with a multi-camera people matching application. We first decompose the video into foreground and background. A low-level coarse segmentation of the foreground is then used to generate a simple graph representation. A vertex in the graph represents the "appearance" of a corresponding segment in the foreground, while the relationship between two segments is encoded by an edge between the corresponding vertices. This provides a simple yet powerful and general representation of the foreground, which can be very useful in problems such as people detection and tracking. We illustrate the effectiveness of this model using an "example based query" type of application for people matching in videos. Matching results are provided in multiple-camera situations and also under occlusion.
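As a rough illustration only (not the paper's implementation), the graph representation described in the abstract could be sketched as follows. The segment fields, the bounding-box adjacency test, and the mean-color "appearance" used here are hypothetical placeholders standing in for whatever coarse segmentation and appearance model the paper actually uses:

```python
from itertools import combinations


def _boxes_adjacent(p, q, tol=1):
    """Hypothetical spatial-relationship test: True if two boxes
    (x0, y0, x1, y1) overlap or lie within `tol` pixels of each other."""
    ax0, ay0, ax1, ay1 = p
    bx0, by0, bx1, by1 = q
    return not (ax1 + tol < bx0 or bx1 + tol < ax0 or
                ay1 + tol < by0 or by1 + tol < ay0)


def build_foreground_graph(segments):
    """Build a simple graph over coarse foreground segments.

    segments: dict mapping segment id -> {'appearance': ..., 'bbox': ...}.
    Each vertex stores the segment's appearance descriptor; an edge
    connects any two segments whose extents touch or overlap.
    Returns (vertices, edges).
    """
    vertices = {sid: seg['appearance'] for sid, seg in segments.items()}
    edges = set()
    for a, b in combinations(segments, 2):
        if _boxes_adjacent(segments[a]['bbox'], segments[b]['bbox']):
            edges.add(frozenset((a, b)))
    return vertices, edges


# Toy example: three vertically stacked segments of one person.
segments = {
    'head':  {'appearance': (200, 180, 160), 'bbox': (10, 0, 30, 20)},
    'torso': {'appearance': (50, 50, 200),   'bbox': (5, 20, 35, 60)},
    'legs':  {'appearance': (20, 20, 20),    'bbox': (8, 61, 32, 100)},
}
vertices, edges = build_foreground_graph(segments)
```

In this toy example the head touches the torso and the torso touches the legs, so the graph gets exactly those two edges; matching two such graphs (e.g. across cameras) would then compare vertex appearances and edge structure jointly.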

Original language: English (US)
Title of host publication: 2007 IEEE International Conference on Image Processing, ICIP 2007 Proceedings
Publisher: IEEE Computer Society
Pages: V37-V40
ISBN (Print): 1424414377, 9781424414376
DOIs
State: Published - Jan 1 2007
Event: 14th IEEE International Conference on Image Processing, ICIP 2007 - San Antonio, TX, United States
Duration: Sep 16 2007 - Sep 19 2007

Publication series

Name: Proceedings - International Conference on Image Processing, ICIP
Volume: 5
ISSN (Print): 1522-4880

Other

Other: 14th IEEE International Conference on Image Processing, ICIP 2007
Country: United States
City: San Antonio, TX
Period: 9/16/07 - 9/19/07

Keywords

  • Image analysis
  • Image matching
  • Machine vision
  • Video

