Abstract
Robotic systems require sensing to operate flexibly in uncalibrated or partially calibrated environments. Recent work combining robotics with vision has emphasized an active vision paradigm, in which the system changes the pose of the camera to improve environmental knowledge or to establish and preserve a desired relationship between the robot and objects in the environment. Much of this work has concentrated on the active observation of objects by the robotic agent. In this paper, we present extensions to the Controlled Active Vision framework that focus on the autonomous grasping of a moving or static object in the manipulator's workspace. Our work extends the capabilities of an eye-in-hand system beyond those of a 'pointer' or a 'camera orienter', providing the flexibility required to interact robustly with the environment in the presence of uncertainty. The proposed approach is experimentally verified on the Minnesota Robotic Visual Tracker (MRVT), which automatically selects object features, derives estimates of unknown environmental parameters, and supplies a control vector based upon these estimates to guide the manipulator in grasping a moving or static object.
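The "control vector" derived from image measurements is characteristic of eye-in-hand visual servoing. As a minimal, hedged sketch only — not the paper's actual formulation, which builds on the Controlled Active Vision framework with its own feature tracking and estimation — a generic image-based proportional servoing step could look like the following. The function names, the constant gain, and the assumption of known point-feature depths are all illustrative choices, not details from the source.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Standard interaction (image Jacobian) matrix for one point feature
    at normalized image coordinates (x, y) with estimated depth Z.
    Maps the 6-DOF camera velocity screw to image-plane feature velocity."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def servo_velocity(features, desired, depths, gain=0.5):
    """One image-based servoing step: v = -gain * pinv(L) @ (s - s*),
    stacking one interaction matrix per tracked point feature.
    `features`/`desired` are lists of (x, y) pairs; `depths` are the
    (assumed known) depth estimates for each feature."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    # Proportional control driving the image-feature error toward zero.
    return -gain * np.linalg.pinv(L) @ error
```

With the feature error at zero the commanded camera velocity is zero, and a nonzero error yields a corrective 6-DOF velocity screw; a real system such as the one described above would close this loop around tracked features while also estimating the unknown environmental parameters (e.g. depth) online.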
| Original language | English (US) |
|---|---|
| Title of host publication | IEEE International Conference on Intelligent Robots and Systems |
| Editors | Anon |
| Publisher | IEEE |
| Pages | 329-334 |
| Number of pages | 6 |
| Volume | 1 |
| State | Published - Jan 1 1995 |
| Event | Proceedings of the 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems. Part 3 (of 3) - Pittsburgh, PA, USA |
| Duration | Aug 5 1995 → Aug 9 1995 |
Other
| Other | Proceedings of the 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems. Part 3 (of 3) |
|---|---|
| City | Pittsburgh, PA, USA |
| Period | 8/5/95 → 8/9/95 |