Classification and counting of composite objects in traffic scenes using global and local image analysis

Guruprasad Somasundaram, Ravishankar Sivalingam, Vassilios Morellas, Nikolaos Papanikolopoulos

Research output: Contribution to journal › Article › peer-review

16 Scopus citations

Abstract

Object recognition algorithms often focus on determining the class of a detected object in a scene. Object recognition usually involves two significant phases. The first is the object representation phase, in which the features that provide the best discriminative power under constraints such as lighting, resolution, scale, and view variations are chosen to describe the objects. The second is to use this representation space to develop models for each object class using discriminative classifiers. In this paper, we focus on composite objects, i.e., objects composed of two or more simpler objects interconnected in a complicated manner. One classic example is a bicyclist, which consists of a bicycle and the human riding it. When faced with the task of classifying bicyclists and pedestrians, it is counterintuitive and often hard to come up with a discriminative classifier that distinguishes the two classes. We explore global image analysis based on a bag-of-visual-words representation and compare the results with local image analysis, in which we attempt to distinguish the individual parts of the composite object. We also propose a unified naive Bayes framework and a combined histogram feature method for combining the individual classifiers for enhanced performance.
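The naive Bayes fusion the abstract mentions can be illustrated with a short sketch. The details below are assumptions, not the paper's implementation: we suppose two local part classifiers (e.g. one for the upper/rider region, one for the lower/bicycle region) each output a class posterior, and we combine them under a conditional-independence assumption. All function names and the example probabilities are hypothetical.

```python
import math

def naive_bayes_fuse(posteriors, priors):
    """Fuse per-classifier class posteriors under a naive
    (conditional-independence) assumption.

    With independent feature sets f1, f2:
        p(c | f1, f2) ∝ p(c) p(f1 | c) p(f2 | c)
                      ∝ p(c | f1) p(c | f2) / p(c)
    so we multiply the posteriors and divide out the prior once
    per extra classifier, then renormalize.
    """
    log_scores = {}
    for c in priors:
        # divide out the prior (len - 1) times, in log space
        s = -(len(posteriors) - 1) * math.log(priors[c])
        for post in posteriors:
            s += math.log(post[c])
        log_scores[c] = s
    total = sum(math.exp(v) for v in log_scores.values())
    return {c: math.exp(v) / total for c, v in log_scores.items()}

# Hypothetical example: both local classifiers lean toward "bicyclist".
priors = {"bicyclist": 0.5, "pedestrian": 0.5}
rider_post = {"bicyclist": 0.7, "pedestrian": 0.3}   # upper-region classifier
wheel_post = {"bicyclist": 0.8, "pedestrian": 0.2}   # lower-region classifier
fused = naive_bayes_fuse([rider_post, wheel_post], priors)
```

Fusing in log space avoids underflow when more than two part classifiers are combined; with equal priors the fused posterior is simply proportional to the product of the individual posteriors.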

Original language: English (US)
Article number: 6291788
Pages (from-to): 69-81
Number of pages: 13
Journal: IEEE Transactions on Intelligent Transportation Systems
Volume: 14
Issue number: 1
State: Published - 2013

Keywords

  • Classification
  • dictionary learning
  • pedestrian tracking
  • visual tracking
