Tumbling locomotion allows small robots to traverse comparatively rough terrain; however, their motion is complex and difficult to control. Existing tumbling robot control methods involve manual control or the assumption of flat terrain. Reinforcement learning allows for the exploration and exploitation of diverse environments. By utilizing reinforcement learning with domain randomization, a robust control policy can be learned in simulation and then transferred to the real world. In this paper, we demonstrate autonomous setpoint navigation with a tumbling robot prototype on flat and non-flat terrain. The flexibility of this system improves the viability of nontraditional robots for navigational tasks.
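The domain-randomization idea described above can be illustrated with a minimal sketch: simulator parameters are resampled each training episode so the learned policy must cope with a range of dynamics before being deployed on real hardware. The parameter names, ranges, and loop structure below are illustrative assumptions, not details from the paper.

```python
import random

def sample_randomized_params():
    """Draw one set of simulator parameters for a training episode.
    All names and ranges are hypothetical, for illustration only."""
    return {
        "terrain_roughness": random.uniform(0.0, 0.05),  # bump height (m)
        "ground_friction":   random.uniform(0.4, 1.2),   # friction coefficient
        "mass_scale":        random.uniform(0.9, 1.1),   # robot mass multiplier
        "torque_scale":      random.uniform(0.8, 1.2),   # actuator strength multiplier
    }

def train(num_episodes, run_episode):
    """Outer loop: each episode runs in a freshly randomized simulator.
    `run_episode` stands in for one rollout plus a policy-update step."""
    episode_returns = []
    for _ in range(num_episodes):
        params = sample_randomized_params()
        episode_returns.append(run_episode(params))
    return episode_returns
```

Because no single parameter setting is seen repeatedly, the policy cannot overfit to one simulated world, which is what makes the simulation-to-real transfer robust.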
|Original language||English (US)|
|Title of host publication||2020 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2020|
|Publisher||Institute of Electrical and Electronics Engineers Inc.|
|Number of pages||7|
|State||Published - Oct 24 2020|
|Event||2020 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2020 - Las Vegas, United States|
Duration: Oct 24 2020 → Jan 24 2021
|Name||IEEE International Conference on Intelligent Robots and Systems|
|Conference||2020 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2020|
|Period||10/24/20 → 1/24/21|
Bibliographical note: Funding Information:
VI. ACKNOWLEDGEMENTS
The authors would like to thank all the members of the Center for Distributed Robotics Laboratory for their help. This material is based upon work partially supported by the Corn Growers Association of MN, the Minnesota Robotics Institute (MnRI), Honeywell, and the National Science Foundation through grants CNS-1439728, CNS-1531330, and CNS-1939033. USDA/NIFA has also supported this work through the grant 2020-67021-30755. The source code used for URDF generation is provided in a repository at https://github.com/MOLLYBAS/urdf randomizer.
© 2020 IEEE.