Visual attention has proven useful in image captioning, enabling a captioning model to selectively focus on regions of interest. Existing models typically rely on top-down language information and learn attention implicitly by optimizing the captioning objectives. While somewhat effective, the learned top-down attention can fail to focus on the correct regions of interest without direct supervision of attention. Inspired by the human visual system, which is driven not only by task-specific top-down signals but also by visual stimuli, in this work we propose to use both types of attention for image captioning. In particular, we highlight the complementary nature of the two types of attention and develop a model (Boosted Attention) that integrates them for image captioning. We validate the proposed approach with state-of-the-art performance across various evaluation metrics.
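To make the integration concrete, below is a minimal PyTorch sketch of one plausible way to combine a learned top-down attention distribution with a bottom-up saliency prior: element-wise multiplication followed by renormalization. The module name `FusedAttention`, the fusion rule, and the tensor shapes are illustrative assumptions, not the paper's exact Boosted Attention formulation.

```python
# Illustrative sketch only: the fusion rule (multiply-and-renormalize) and
# all names/shapes are assumptions, not the paper's exact method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusedAttention(nn.Module):
    def __init__(self, feat_dim: int, hidden_dim: int):
        super().__init__()
        # Top-down attention scores conditioned on the language state.
        self.feat_proj = nn.Linear(feat_dim, hidden_dim)
        self.state_proj = nn.Linear(hidden_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, feats, state, saliency):
        # feats:    (B, R, feat_dim)  region features
        # state:    (B, hidden_dim)   decoder hidden state (top-down signal)
        # saliency: (B, R)            bottom-up saliency per region, in (0, 1]
        e = self.score(torch.tanh(
            self.feat_proj(feats) + self.state_proj(state).unsqueeze(1)
        )).squeeze(-1)                          # (B, R) top-down logits
        top_down = F.softmax(e, dim=-1)         # task-driven attention
        # Boost the top-down weights with the stimulus-driven prior,
        # then renormalize so the weights remain a distribution.
        fused = top_down * saliency
        fused = fused / (fused.sum(dim=-1, keepdim=True) + 1e-8)
        context = torch.bmm(fused.unsqueeze(1), feats).squeeze(1)  # (B, feat_dim)
        return context, fused
```

Multiplying by saliency before renormalizing lets the stimulus-driven signal reweight, rather than replace, the task-driven attention, consistent with the abstract's claim that the two signals are complementary.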
|Original language|English (US)|
|Title of host publication|Computer Vision – ECCV 2018: 15th European Conference, Proceedings|
|Editors|Vittorio Ferrari, Cristian Sminchisescu, Yair Weiss, Martial Hebert|
|Number of pages|17|
|State|Published - 2018|
|Event|15th European Conference on Computer Vision, ECCV 2018, Munich, Germany; September 8–14, 2018|
|Publication series|Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)|
|Bibliographical note|Funding information: This work is supported by NSF Grant 1763761 and the University of Minnesota Department of Computer Science and Engineering Start-up Fund (QZ).|
Keywords:
- Human attention
- Image captioning
- Visual attention