Many situations could benefit from mapping human arm motions to robot arm motions. For example, a person could control a robot in real time by having it mimic the operator's arm motions, or a person could teach a robot by providing an effective demonstration with their own hand in the robot's workspace. While such a mapping could have many applications, a robot's vastly different motion capabilities, arising from differences in scale, speed, geometry, or even number of degrees of freedom, make a direct mapping infeasible. In this line of work, we propose methods that bridge the gap between human motion and robot motion as well as possible for control and teaching applications.
In our first thread of research, we develop novel interfaces that allow novice users to control robot manipulators effectively and intuitively. The premise of our methods is that an interface that lets users direct a robot using the natural 6-DOF space of their own hand affords effective direct control of a robot arm. While a direct mapping is infeasible for the reasons above, our key idea is that by relaxing the requirement that the end-effector configuration exactly match the hand's position and orientation, a system can preserve the feeling of direct control while still meeting practical telemanipulation requirements such as motion smoothness and singularity avoidance. In several user studies, we demonstrate that novice users complete a range of tasks more efficiently and enjoyably with our relaxed-mimicry interfaces than with standard interfaces. We are also exploring ways of modifying this human-to-robot motion mapping to serve as a communication channel with the human observer, for example, to convey haptic properties such as weight.
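One way to make this relaxation concrete: at each time step, rather than solving exact inverse kinematics, the system can minimize a weighted sum of competing objectives. The formulation below is a schematic sketch; the weights $w_i$, the rotational distance $d_{\mathrm{rot}}$, and the singularity cost $c_{\mathrm{sing}}$ are illustrative stand-ins, and our published formulations (e.g., RelaxedIK) use different and additional terms.

\[
\theta_t \;=\; \arg\min_{\theta}\;
w_1 \left\lVert \mathrm{FK}_{\mathrm{pos}}(\theta) - p_t \right\rVert^2
\;+\; w_2\, d_{\mathrm{rot}}\!\left(\mathrm{FK}_{\mathrm{ori}}(\theta),\, q_t\right)
\;+\; w_3 \left\lVert \theta - \theta_{t-1} \right\rVert^2
\;+\; w_4\, c_{\mathrm{sing}}(\theta)
\]

Here $\theta$ is the robot's joint configuration, $\mathrm{FK}$ is forward kinematics, and $p_t$ and $q_t$ are the tracked position and orientation of the operator's hand. The third term encourages smooth motion between consecutive solutions, and the last penalizes configurations near kinematic singularities; raising $w_3$ and $w_4$ trades exact end-effector tracking for smoothness and robustness, which is precisely the relaxation described above.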
In our second thread of research, we develop techniques for teaching a robot effectively through natural human-arm motion demonstrations. We introduce a novel input device, instrumented tongs, that untrained people find intuitive when showing a robot examples of tasks. The design of the tongs encourages manipulation strategies that are relevant to, and map easily onto, robots with two-finger parallel grippers. The tongs accurately capture position, orientation, force, and torque throughout a demonstration trace. From these data, we compute a feasible execution trace for the robot so that it robustly performs the demonstrated task.
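As a rough illustration of the data involved, a demonstration trace can be viewed as a time series of pose and wrench readings that must then be adapted to the robot's capabilities. The Python sketch below uses hypothetical names (DemoSample, retime_for_robot, max_speed) and shows only one simple feasibility adaptation, uniform time-scaling to respect a Cartesian speed limit; the method in our papers is more involved.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class DemoSample:
    """One time-stamped reading from the instrumented tongs."""
    t: float                 # seconds since the start of the demonstration
    position: np.ndarray     # (3,) tong position in the workspace frame, meters
    orientation: np.ndarray  # (4,) unit quaternion (w, x, y, z)
    force: np.ndarray        # (3,) measured force, newtons
    torque: np.ndarray       # (3,) measured torque, newton-meters

def retime_for_robot(samples, max_speed=0.25):
    """Uniformly slow down a demonstration so the end-effector path never
    exceeds the robot's Cartesian speed limit (m/s).

    This is one simple way to obtain a kinematically feasible execution
    trace; it is an illustrative stand-in, not our published method.
    """
    times = np.array([s.t for s in samples])
    positions = np.stack([s.position for s in samples])
    # Peak speed along the demonstrated path (finite differences).
    speeds = np.linalg.norm(np.diff(positions, axis=0), axis=1) / np.diff(times)
    scale = max(1.0, speeds.max() / max_speed)
    # Stretch the time axis by the scale factor; the poses are unchanged.
    return [(s.t * scale, s) for s in samples]
```

Time-scaling preserves the demonstrated path and contact sequence while respecting speed limits, which is why it is a natural first step before handling the robot's joint limits and workspace constraints.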
This work has been supported in part by National Science Foundation award 1830242 and a UW2020 award from the University of Wisconsin-Madison Office of the Vice Chancellor for Research and Graduate Education.
- Praveena, P., L. Molina, Y. Wang, E. Senft, B. Mutlu, and M. Gleicher. “Understanding Control Frames in Multi-Camera Robot Telemanipulation”. Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction, IEEE Press, 2022, pp. 432–440.
- Praveena, P., D. Rakita, B. Mutlu, and M. Gleicher. “Supporting Perception of Weight through Motion-Induced Sensory Conflicts in Robot Teleoperation”. Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, 2020, pp. 509–517.
- Rakita, D., B. Mutlu, and M. Gleicher. “Effects of Onset Latency and Robot Speed Delays on Mimicry-Control Teleoperation”. Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, 2020, pp. 519–527.
- Praveena, P., G. Subramani, B. Mutlu, and M. Gleicher. “Characterizing Input Methods for Human-to-Robot Demonstrations”. Proceedings of the 2019 ACM/IEEE International Conference on Human-Robot Interaction, IEEE, 2019, pp. 344–353.
- Praveena, P., D. Rakita, B. Mutlu, and M. Gleicher. “User-Guided Offline Synthesis of Robot Arm Motion from 6-DoF Paths”. 2019 International Conference on Robotics and Automation (ICRA), IEEE, 2019, pp. 8825–8831.
- Rakita, D., B. Mutlu, M. Gleicher, and L. Hiatt. “Shared Dynamic Curves: A Shared-Control Telemanipulation Method for Motor Task Training”. Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, ACM, 2018.
- Bodden, C., D. Rakita, B. Mutlu, and M. Gleicher. “A Flexible Optimization-Based Method for Synthesizing Intent-Expressive Robot Arm Motion”. The International Journal of Robotics Research, Vol. 37, no. 11, 2018, pp. 1376–1394.
- Rakita, D., B. Mutlu, and M. Gleicher. “An Analysis of RelaxedIK: An Optimization-Based Framework for Generating Accurate and Feasible Robot Arm Motions”. Autonomous Robots, Vol. 44, no. 7, Springer US, 2020, pp. 1341–1358.