This paper presents an approach for autonomous underwater robots to visually detect and identify divers. The approach enables an autonomous underwater robot to detect multiple divers in a visual scene and distinguish between them. Such a capability is useful, for example, when a robot must identify a human leader in multi-human/robot teams where only designated individuals are allowed to command or lead the robots. Initial diver detection is performed with the Faster R-CNN algorithm, whose region proposal network produces bounding boxes around the divers' locations. A suite of spatial- and frequency-domain descriptors is then extracted from each bounding box to form a feature vector. Finally, a K-Means clustering algorithm, with k set to the number of detected bounding boxes, identifies the detected divers based on these feature vectors. We evaluate the performance of the proposed approach on video footage of divers swimming in front of a mobile robot and demonstrate its accuracy.
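The identification stage described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the helper names (`diver_features`, `identify_divers`) are hypothetical, and the descriptors used here (per-channel color means plus low-frequency 2-D FFT magnitudes) are simple stand-ins for the paper's suite of spatial- and frequency-domain descriptors. The K-Means step, with k equal to the number of detected bounding boxes, mirrors the described method.

```python
import numpy as np
from sklearn.cluster import KMeans

def diver_features(crop):
    """Build a feature vector for one detected bounding-box crop.

    Illustrative descriptors only: mean R/G/B values (spatial domain)
    concatenated with low-frequency FFT magnitudes (frequency domain).
    """
    gray = crop.mean(axis=2)                      # (H, W) grayscale
    spatial = crop.reshape(-1, 3).mean(axis=0)    # mean R, G, B
    spectrum = np.abs(np.fft.fft2(gray))
    freq = spectrum[:4, :4].ravel()               # low-frequency block
    return np.concatenate([spatial, freq])

def identify_divers(crops):
    """Cluster one feature vector per detection; k = number of detections."""
    X = np.stack([diver_features(c) for c in crops])
    k = len(crops)
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)

# Two synthetic "diver" crops with clearly different appearance
rng = np.random.default_rng(0)
crops = [rng.uniform(0.0, 0.3, (32, 32, 3)),
         rng.uniform(0.7, 1.0, (32, 32, 3))]
labels = identify_divers(crops)
print(labels)  # two detections -> two distinct cluster labels
```

In practice the crops would come from the Faster R-CNN bounding boxes rather than synthetic arrays, and the descriptor suite would be richer than this two-part sketch.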
|Original language||English (US)|
|Title of host publication||2019 International Conference on Robotics and Automation, ICRA 2019|
|Publisher||Institute of Electrical and Electronics Engineers Inc.|
|Number of pages||7|
|State||Published - May 2019|
|Event||2019 International Conference on Robotics and Automation, ICRA 2019 - Montreal, Canada|
Duration: May 20 2019 → May 24 2019
|Name||Proceedings - IEEE International Conference on Robotics and Automation|
|Conference||2019 International Conference on Robotics and Automation, ICRA 2019|
|Period||5/20/19 → 5/24/19|
|Bibliographical note||Funding Information:|
We gratefully acknowledge the support of NVIDIA Corporation for the donation of the Titan Xp GPU used for this research and the support of the MnDRIVE initiative. We also acknowledge colleagues Marc Ho, Julian Lagman, and Hannah Dubois for assisting with pool trials and providing test datasets.
© 2019 IEEE.