Learning Object Grasping for Soft Robot Hands

Changhyun Choi, Wilko Schwarting, Joseph Delpreto, Daniela Rus

Research output: Contribution to journal › Article › peer-review

124 Scopus citations

Abstract

We present a three-dimensional deep convolutional neural network (3D CNN) approach for grasping unknown objects with soft hands. Soft hands are compliant and capable of handling uncertainty in sensing and actuation, but this comes at the cost of unpredictable deformation of the soft fingers. Traditional model-driven grasping approaches, which assume known models for objects, robot hands, and stable grasps with expected contacts, are inapplicable to such soft hands, since predicting contact points between objects and soft hands is not straightforward. Our solution adopts a deep CNN approach to find good caging grasps for previously unseen objects by learning effective features and a classifier from point cloud data. Unlike recent CNN models applied to robotic grasping, which have been trained on 2D or 2.5D images and limited to a fixed top grasping direction, we exploit the power of a 3D CNN model to estimate suitable grasp poses from multiple grasping directions (top and side) and wrist orientations, which has great potential for geometry-related robotic tasks. Guided by the 3D CNN, our soft hands achieve an 87% grasp success rate on previously unseen objects. A set of comparative evaluations shows the robustness of our approach to noise and occlusions.
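For intuition, here is a minimal sketch (in PyTorch, which the abstract does not specify) of the kind of 3D CNN grasp classifier described above: the object's point cloud is voxelized in the reference frame of a candidate grasp pose, so the grasp direction and wrist orientation are encoded by the grid itself, and the network outputs the probability of a successful caging grasp. All names, layer sizes, and voxelization parameters below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GraspQualityCNN3D(nn.Module):
    """Scores one grasp candidate. The input occupancy grid is the point
    cloud voxelized in that candidate's gripper frame, so approach direction
    and wrist orientation are baked into the grid itself."""

    def __init__(self, grid_size=32):
        super().__init__()
        # 3D convolutions over a binary occupancy grid (illustrative sizes)
        self.features = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(64, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Sequential(
            nn.Linear((grid_size // 8) ** 3 * 64, 256), nn.ReLU(),
            nn.Linear(256, 1),  # logit of P(successful caging grasp)
        )

    def forward(self, voxels):  # voxels: (B, 1, G, G, G)
        return self.classifier(self.features(voxels).flatten(1))

def voxelize_in_grasp_frame(points, R, t, grid_size=32, extent=0.3):
    """points: (N, 3) cloud in the world frame; (R, t) is the candidate
    grasp pose mapping gripper frame to world. Returns a (1, G, G, G)
    occupancy grid of the cloud expressed in the gripper frame, cropped
    to a cube of `extent` meters."""
    local = (points - t) @ R  # invert the pose: world -> gripper frame
    idx = torch.floor((local / extent + 0.5) * grid_size).long()
    inside = ((idx >= 0) & (idx < grid_size)).all(dim=1)
    idx = idx[inside]
    grid = torch.zeros(1, grid_size, grid_size, grid_size)
    grid[0, idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid
```

At grasp time, one would enumerate a handful of candidate poses (top and side approach directions combined with several wrist rotations), voxelize the cloud once per candidate, score each with a forward pass, and execute the highest-scoring grasp.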

Original language: English (US)
Pages (from-to): 2370-2377
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 3
Issue number: 3
DOIs
State: Published - Jul 2018
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2018 IEEE.

Keywords

  • Perception for grasping and manipulation
  • Deep learning in robotics and automation
