Collaborative robots (cobots) deployed in industry offer the potential for a paradigm shift in how human operators work with their robotic co-workers, compared with traditional robotic automation. The goal of our research into human-robot teaming is to understand how to facilitate collaborative interactions that address the skillsets of each team member. We pursue this goal in three threads.
The first thread of our work develops collaborative-task authoring that considers the skills of both humans and robots. We implemented an authoring environment that allows engineers to express their manufacturing processes using a common work-analysis structure; once a process is specified, the tool optimally allocates its tasks between humans and robots. The tool produces a robot program and a human work plan that can be tested in simulation, as illustrated by the sketch below.
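To make the allocation step concrete, here is a minimal Python sketch of per-task allocation. It is illustrative only: the Task fields, the cost model, and the greedy rule are hypothetical simplifications of whatever optimization the authoring tool actually performs.

```python
# Hypothetical sketch: assign each process step to the cheaper capable agent.
# Field names and the cost model are assumptions, not the tool's actual design.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    human_cost: float           # e.g., estimated cycle time in seconds
    robot_cost: float
    robot_capable: bool = True  # some steps (fine manipulation) may be human-only

def allocate(tasks: list[Task]) -> dict[str, str]:
    """Greedy allocation: give each task to the cheaper agent that can do it."""
    plan = {}
    for t in tasks:
        if t.robot_capable and t.robot_cost <= t.human_cost:
            plan[t.name] = "robot"
        else:
            plan[t.name] = "human"
    return plan

if __name__ == "__main__":
    process = [
        Task("fetch housing", human_cost=8.0, robot_cost=5.0),
        Task("insert gasket", human_cost=4.0, robot_cost=9.0, robot_capable=False),
        Task("drive screws", human_cost=12.0, robot_cost=7.0),
    ]
    print(allocate(process))
    # {'fetch housing': 'robot', 'insert gasket': 'human', 'drive screws': 'robot'}
```

A real allocator would also respect task precedence and balance total workload rather than deciding each task independently; the greedy rule here only conveys the idea.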
Our second thread focuses on human operators performing collaborative tasks, such as those generated by the first thread. We evaluated participants working with cobots to understand which levels of task interdependence are suitable for collaborative assembly. We also evaluated a supervisory task to understand how attention aids affect workers' performance and cognitive load.
Our third thread addresses the various skills gaps surrounding cobot integration. In a recent ethnography, we found that cobots are being treated as uncaged traditional robots rather than being used under newer, more collaborative work paradigms. Potential factors behind this uncaged approach include industrial-robotics training that focuses on traditional automation skills, concerns over human safety, and the ease of developing automated or semi-automated solutions. Compounding this problem, there is tension between operators' desire to adjust a cobot's program and engineers' concern over the safety ramifications. To address this tension, we are developing an educational environment that allows operators to safely learn how to program the robot, learn about cobot safety, and learn the various business objectives that influence program design. In future work, we intend to further explore these three threads to better augment operators' and engineers' abilities in their manufacturing roles.
-
Schoen, A., N. White, C. Henrichs, A. Siebert-Evenstone, D. Shaffer, and B. Mutlu. “CoFrame: A System for Training Novice Cobot Programmers”. Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction, IEEE Press, 2022, pp. 185–194.
Abstract
The introduction of collaborative robots (cobots) into the workplace has presented both opportunities and challenges for those seeking to utilize their functionality. Prior research has shown that despite the capabilities afforded by cobots, there is a disconnect between those capabilities and the applications that they currently are deployed in, partially due to a lack of effective cobot-focused instruction in the field. Experts who work successfully within this collaborative domain could offer insight into the considerations and process they use to more effectively capture this cobot capability. Using an analysis of expert insights in the collaborative interaction design space, we developed a set of Expert Frames based on these insights and integrated these Expert Frames into a new training and programming system that can be used to teach novice operators to think, program, and troubleshoot in ways that experts do. We present our system and case studies that demonstrate how Expert Frames provide novice users with the ability to analyze and learn from complex cobot application scenarios.
DOI: 10.5555/3523760.3523788
BibTeX
@inproceedings{10.5555/3523760.3523788,
  author    = {Schoen, Andrew and White, Nathan and Henrichs, Curt and Siebert-Evenstone, Amanda and Shaffer, David and Mutlu, Bilge},
  title     = {CoFrame: A System for Training Novice Cobot Programmers},
  booktitle = {Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction},
  series    = {HRI '22},
  year      = {2022},
  publisher = {IEEE Press},
  location  = {Sapporo, Hokkaido, Japan},
  pages     = {185--194},
  numpages  = {10},
  keywords  = {robot programming interfaces, novice users, robotics operator training, expert models, collaborative robots}
}
-
Schoen, A., D. Sullivan, H. Zhang, D. Rakita, and B. Mutlu. “Lively: Enabling Multimodal, Lifelike, and Extensible Real-Time Robot Motion”. Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’23), ACM, 2023.
Abstract
Robots designed to interact with people in collaborative or social scenarios must move in ways that are consistent with the robot’s task and communication goals. However, combining these goals in a naïve manner can result in mutually exclusive solutions, or infeasible or problematic states and actions. In this paper, we present Lively, a framework which supports configurable, real-time, task-based and communicative or socially-expressive motion for collaborative and social robotics across multiple levels of programmatic accessibility. Lively supports a wide range of control methods (i.e., position, orientation, and joint-space goals), and balances them with complex procedural behaviors for natural, lifelike motion that are effective in collaborative and social contexts. We discuss the design of three levels of programmatic accessibility of Lively, including a graphical user interface for visual design called LivelyStudio, the core library Lively for full access to its capabilities for developers, and an extensible architecture for greater customizability and capability.
DOI: 10.1145/3568162.3576982
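As a conceptual illustration of the idea of balancing task goals with lifelike motion (and not the Lively API itself), the following Python sketch blends a task posture with a small, smoothly wandering “lifelike” offset. The noise function, weights, and gains are hypothetical stand-ins.

```python
# Conceptual sketch only: not the Lively API. It shows the idea of blending a
# task-demanded posture with a low-weight, smoothly varying lifelike offset.
import math

def smooth_noise(t: float, seed: float = 0.0) -> float:
    """Cheap smooth-noise stand-in (real systems often use Perlin-style noise);
    returns a value in roughly [-1, 1]."""
    return 0.6 * math.sin(2.1 * t + seed) + 0.4 * math.sin(0.7 * t + 2.0 * seed)

def step(q, q_task, t, w_lively=0.05, gain=0.2):
    """One control tick: offset the task posture by a small lifelike wander,
    then move each joint a fraction of the way toward the blended target."""
    q_goal = [qt + w_lively * smooth_noise(t, i) for i, qt in enumerate(q_task)]
    return [qi + gain * (g - qi) for qi, g in zip(q, q_goal)]

if __name__ == "__main__":
    q = [0.0, 0.0, 0.0]        # current joint angles (radians)
    q_task = [0.4, -0.2, 1.1]  # posture demanded by the task
    for tick in range(5):
        q = step(q, q_task, tick * 0.1)
    print(q)
```

Keeping the lifelike term low-weight mirrors the paper’s point that expressive motion must not override task goals; when the two conflict, the task posture dominates.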