Grasping of static and moving objects using a vision-based control approach

Christopher E. Smith, Nikolaos P. Papanikolopoulos

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Robotic systems require sensing to enable flexible operation in uncalibrated or partially calibrated environments. Recent work combining robotics with vision has emphasized an active vision paradigm, where the system changes the pose of the camera to improve environmental knowledge or to establish and preserve a desired relationship between the robot and objects in the environment. Much of this work has concentrated upon the active observation of objects by the robotic agent. In this paper, we present extensions to the Controlled Active Vision framework that focus upon the autonomous grasping of a moving or static object in the manipulator's workspace. Our work extends the capabilities of an eye-in-hand system beyond those of a 'pointer' or a 'camera orienter' to provide the flexibility required to robustly interact with the environment in the presence of uncertainty. The proposed work is experimentally verified using the Minnesota Robotic Visual Tracker (MRVT) to automatically select object features, to derive estimates of unknown environmental parameters, and to supply a control vector based upon these estimates to guide the manipulator in grasping a moving or static object.
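The pipeline the abstract describes (track image features, estimate unknown environmental parameters such as depth, and compute a control vector for the manipulator) can be illustrated with a generic image-based visual servoing step. The sketch below is an illustrative stand-in, not the Controlled Active Vision controller itself: it assumes the classic point-feature image Jacobian with a simple proportional law, and the function names, gain, and per-feature depth estimates are hypothetical.

```python
import numpy as np

def interaction_matrix(u, v, z, f=1.0):
    """Image Jacobian for one point feature at image coordinates (u, v)
    and depth z. Maps the camera twist (vx, vy, vz, wx, wy, wz) to the
    feature's image-plane velocity (u_dot, v_dot); f is the focal length.
    """
    return np.array([
        [-f / z, 0.0,   u / z, u * v / f,    -(f + u**2 / f),  v],
        [0.0,   -f / z, v / z, f + v**2 / f, -u * v / f,      -u],
    ])

def servo_step(features, targets, depths, gain=0.5):
    """One proportional visual-servoing step.

    Stacks the per-feature Jacobians and solves a least-squares problem
    for the camera twist that drives the image-plane error
    (targets - features) toward zero.
    """
    L = np.vstack([
        interaction_matrix(u, v, z)
        for (u, v), z in zip(features, depths)
    ])
    error = (np.asarray(targets) - np.asarray(features)).ravel()
    # Moore-Penrose pseudo-inverse via least squares: twist = gain * L^+ e
    twist, *_ = np.linalg.lstsq(L, gain * error, rcond=None)
    return twist  # camera-frame (vx, vy, vz, wx, wy, wz)

# Example: four tracked corners, all assumed to lie ~1 m from the camera
features = [(50.0, 40.0), (-45.0, 42.0), (-48.0, -38.0), (52.0, -41.0)]
targets = [(60.0, 60.0), (-60.0, 60.0), (-60.0, -60.0), (60.0, -60.0)]
print(servo_step(features, targets, depths=[1.0] * 4))
```

In a full system along the lines the abstract sketches, the depth estimates would come from the parameter-estimation stage rather than being assumed, and the resulting camera-frame twist would be mapped through the manipulator Jacobian into joint commands for the eye-in-hand arm.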

Original language: English (US)
Title of host publication: IEEE International Conference on Intelligent Robots and Systems
Editors: Anon
Publisher: IEEE
Pages: 329-334
Number of pages: 6
Volume: 1
State: Published - Jan 1 1995
Event: Proceedings of the 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems, Part 3 (of 3) - Pittsburgh, PA, USA
Duration: Aug 5, 1995 - Aug 9, 1995


