Anticipating Where People Will Look Using Adversarial Networks

Mengmi Zhang, Keng Teck Ma, Joo Hwee Lim, Qi Zhao, Jiashi Feng

Research output: Contribution to journal › Article › peer-review

Abstract

We introduce the new problem of gaze anticipation on future frames, which extends the conventional gaze prediction problem beyond the current frame. To solve this problem, we propose a new generative adversarial network based model, Deep Future Gaze (DFG), encompassing two pathways: DFG-P anticipates gaze prior maps conditioned on the input frame, which provides task influences, while DFG-G learns to model both semantic and motion information for future frame generation. DFG-P and DFG-G are then fused to anticipate future gazes. DFG-G consists of two networks: a generator and a discriminator. The generator uses a two-stream spatial-temporal convolutional architecture (3D-CNN) that explicitly untangles the foreground and background to generate future frames; another 3D-CNN is then attached for gaze anticipation based on these synthetic frames. The discriminator plays against the generator by distinguishing the generator's synthetic frames from real frames. Experimental results on publicly available egocentric and third-person video datasets show that DFG significantly outperforms all competitive baselines. We also demonstrate that DFG achieves better gaze prediction performance on current frames in egocentric and third-person videos than state-of-the-art methods.
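
To make the architecture described in the abstract concrete, the listing below gives a minimal sketch of the two-pathway design in PyTorch: a two-stream 3D-CNN generator that composites foreground and background into synthetic future frames, a 3D-CNN gaze head over those frames, a 3D-CNN discriminator, and a simple fusion with a prior map. All module names, layer sizes, and the fusion rule are illustrative assumptions rather than the published implementation.

# Minimal sketch of the two-pathway idea described above (illustrative only;
# module names, layer sizes, and the fusion rule are assumptions, not the
# authors' implementation).
import torch
import torch.nn as nn


class FutureFrameGenerator(nn.Module):
    # Two-stream spatial-temporal 3D-CNN: a foreground stream with a soft mask
    # and a background stream, composited into synthetic future frames.
    def __init__(self, channels=3):
        super().__init__()
        def stream(out_ch, act):
            return nn.Sequential(
                nn.Conv3d(channels, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv3d(32, out_ch, kernel_size=3, padding=1), act)
        self.foreground = stream(channels, nn.Tanh())
        self.background = stream(channels, nn.Tanh())
        self.mask = stream(1, nn.Sigmoid())

    def forward(self, frames):                      # frames: (B, C, T, H, W)
        m = self.mask(frames)
        return m * self.foreground(frames) + (1.0 - m) * self.background(frames)


class GazeAnticipationHead(nn.Module):
    # 3D-CNN mapping synthetic future frames to per-frame gaze logits.
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 1, kernel_size=3, padding=1))

    def forward(self, future_frames):
        return self.net(future_frames)


class FrameDiscriminator(nn.Module):
    # 3D-CNN that scores a clip as real or synthetic (the adversarial signal).
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(channels, 32, kernel_size=4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(32, 64, kernel_size=4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(64, 1))

    def forward(self, clip):
        return self.net(clip)


def fuse_with_prior(gaze_logits, prior_map, eps=1e-8):
    # Fuse the generation pathway's gaze maps with a task-dependent prior map
    # (here: pointwise multiplication followed by per-frame renormalisation).
    b, c, t, h, w = gaze_logits.shape
    probs = torch.softmax(gaze_logits.view(b, c, t, h * w), dim=-1).view(b, c, t, h, w)
    fused = probs * prior_map
    return fused / (fused.sum(dim=(-2, -1), keepdim=True) + eps)


if __name__ == "__main__":
    clip = torch.randn(2, 3, 8, 64, 64)             # two 8-frame RGB clips
    prior = torch.rand(2, 1, 1, 64, 64)             # prior map from the input frame
    gen, head, disc = FutureFrameGenerator(), GazeAnticipationHead(), FrameDiscriminator()
    future = gen(clip)                              # synthetic future frames
    gaze = fuse_with_prior(head(future), prior)     # anticipated gaze maps
    print(future.shape, gaze.shape, disc(future).shape)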

Original language: English (US)
Article number: 8471119
Pages (from-to): 1783-1796
Number of pages: 14
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 41
Issue number: 8
DOIs
State: Published - Aug 1 2019

Bibliographical note

Funding Information:
This work was supported by the Reverse Engineering Visual Intelligence for cognitiVe Enhancement (REVIVE) programme (1335H00098) funded by A*STAR, National University of Singapore startup grant R-263-000-C08-133, and Ministry of Education of Singapore AcRF Tier One grant R-263-000-C21-112. We would also like to thank Yin Li, Sayed Hossein Khatoonabadi, and Victor Leboran for their help in replicating the experimental setups in [3], [33], [34].

Publisher Copyright:
© 2018 IEEE.

Keywords

  • Egocentric videos
  • gaze anticipation
  • generative adversarial network
  • saliency
  • visual attention
