Abstract
This paper presents a method to co-segment an object from wide baseline multiview images using cross-view self-supervision. A key challenge in wide baseline images lies in the fragility of photometric matching. Inspired by shape-from-silhouette, which does not require photometric matching, we formulate a new theory of shape belief transfer: the segmentation belief in one image can be used to predict that of another image through epipolar geometry. This formulation is differentiable, and therefore end-to-end training is possible. We analyze shape belief transfer to identify theoretical upper and lower bounds on the segmentation of unlabeled data, which characterize the degenerate cases of co-segmentation. We design a novel triple network that embeds this shape belief transfer and is agnostic to visual appearance and baseline. The resulting network is validated by recognizing a target object in real-world visual data, including non-human species and a subject of interest in social videos, where attaining large-scale annotated data is challenging.
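The core idea, analogous to shape-from-silhouette, is that foreground evidence in one view constrains the foreground belief along epipolar lines in another view. The sketch below is a minimal, illustrative NumPy reading of that epipolar belief transfer; the function name, nearest-neighbour sampling, and hard max are hypothetical simplifications, whereas the paper's formulation is differentiable and embedded in the network.

```python
import numpy as np

def transfer_belief(belief_a, F_ab, shape_b, n_samples=128):
    """Hypothetical sketch of shape belief transfer between two views.

    belief_a : (H_A, W_A) soft foreground belief map of view A in [0, 1].
    F_ab     : 3x3 fundamental matrix mapping a homogeneous pixel x_b in
               view B to its epipolar line in view A (x_a^T F_ab x_b = 0).
    shape_b  : (H_B, W_B) size of view B.

    As in shape-from-silhouette, the foreground belief at a pixel in view B
    cannot exceed the largest belief found anywhere along its epipolar line
    in view A, so we return that per-pixel maximum as the transferred belief.
    """
    h_a, w_a = belief_a.shape
    h_b, w_b = shape_b
    pred_b = np.zeros((h_b, w_b), dtype=belief_a.dtype)

    for y in range(h_b):
        for x in range(w_b):
            # Epipolar line a*u + b*v + c = 0 in view A for pixel (x, y) of view B.
            a, b, c = F_ab @ np.array([x, y, 1.0])
            if abs(b) < 1e-9:
                continue  # near-vertical line; skipped in this simplified sketch
            us = np.linspace(0.0, w_a - 1, n_samples)
            vs = -(a * us + c) / b
            valid = (vs >= 0) & (vs <= h_a - 1)
            if valid.any():
                # Nearest-neighbour sampling with a hard max; a differentiable
                # variant would use bilinear sampling and a soft maximum.
                pred_b[y, x] = belief_a[vs[valid].round().astype(int),
                                        us[valid].round().astype(int)].max()
    return pred_b
```

In a cross-view self-supervised setting, the discrepancy between such a transferred belief and the segmentation predicted directly in view B could serve as a consistency signal; the actual loss and the upper/lower bound analysis follow the paper, not this sketch.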
Original language | English (US)
---|---
Title of host publication | Proceedings - 2020 IEEE Winter Conference on Applications of Computer Vision, WACV 2020
Publisher | Institute of Electrical and Electronics Engineers Inc.
Pages | 1931-1940
Number of pages | 10
ISBN (Electronic) | 9781728165530
DOIs |
State | Published - Mar 2020
Event | 2020 IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2020 - Snowmass Village, United States. Duration: Mar 1 2020 → Mar 5 2020
Publication series

Name | Proceedings - 2020 IEEE Winter Conference on Applications of Computer Vision, WACV 2020
---|---
Conference
Conference | 2020 IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2020 |
---|---
Country/Territory | United States |
City | Snowmass Village |
Period | 3/1/20 → 3/5/20 |
Bibliographical note
Funding Information: This work is supported by NSF IIS 1846031 and NSF IIS 1755895.
Publisher Copyright:
© 2020 IEEE.