Collaborative robots (cobots) deployed in industry offer the potential for a paradigm shift, relative to traditional robotic automation, in the way human operators work with their robotic co-workers. The goal of our research into human-robot teaming is to understand how we can facilitate collaborative interactions that address the skillsets of each member of the team; we pursue this goal in three threads.
The first thread of our work develops collaborative-task authoring that considers the skills of both humans and robots. We implemented an authoring environment that allows engineers to express their manufacturing processes using a common work-analysis structure; once a process is specified, the tool optimally allocates its tasks between humans and robots and produces a robot program and a human workplan that can be tested in simulation.
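As a rough illustration of the allocation step (the tool's actual method is not reproduced here, and all step names and costs below are hypothetical), a minimal Python sketch might assign each independent process step to whichever agent completes it at the lowest cost:

    import numpy as np

    steps = ["pick_part", "align_fixture", "fasten_bolts", "inspect"]
    agents = ["human", "robot"]

    # Hypothetical per-step completion costs (e.g., seconds) for each agent.
    cost = np.array([
        [4.0, 2.0, 9.0, 3.0],   # human
        [2.5, 6.0, 3.0, 8.0],   # robot
    ])

    # With independent steps, per-step minimization yields the optimal
    # allocation; real processes add ordering and safety constraints.
    allocation = {step: agents[int(np.argmin(cost[:, j]))]
                  for j, step in enumerate(steps)}
    print(allocation)
    # {'pick_part': 'robot', 'align_fixture': 'human',
    #  'fasten_bolts': 'robot', 'inspect': 'human'}

In practice, the tool must also respect precedence constraints between steps and the capabilities of each agent, which turns allocation into a richer scheduling problem.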
Our second thread focuses on human operators performing collaborative tasks, such as those generated in the first thread. We evaluated participants working with cobots to understand which levels of task interdependence are suitable for collaborative assembly. We also evaluated a supervisory task to understand how attention aids affect workers' performance and cognitive load.
Our third thread addresses the various skills gaps surrounding cobot integration. In a recent ethnography, we found that cobots are being treated as uncaged traditional robots rather than being used under newer, more collaborative work paradigms. Potential factors behind this uncaged approach include industrial-robotics training that focuses on traditional automation skills, concerns over human safety, and the ease of developing automated/semi-automated solutions. Compounding this problem, there is tension between operators' desire to adjust a cobot's program and engineers' concern over the safety ramifications. To address this tension, we are developing an educational environment that allows operators to safely learn how to program the robot, learn about cobot safety, and learn about the various business objectives that influence program design. In future work, we intend to further explore these three threads to better augment operators' and engineers' abilities in their manufacturing roles.
-
Michaelis, J., A. Siebert-Evenstone, D. Shaffer, and B. Mutlu. “Collaborative or Simply Uncaged? Understanding Human-Cobot Interactions in Automation”. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 2020, pp. 1-12.
Abstract
Collaborative robots, or cobots, represent a breakthrough technology designed for high-level (e.g. collaborative) interactions between workers and robots with capabilities for flexible deployment in industries such as manufacturing. Understanding how workers and companies use and integrate cobots is important to inform the future design of cobot systems and educational technologies that facilitate effective worker-cobot interaction. Yet, little is known about typical training for collaboration and the application of cobots in manufacturing. To close this gap, we interviewed nine experts in manufacturing about their experience with cobots. Our thematic analysis revealed that, contrary to the envisioned use, experts described most cobot applications as only low-level (e.g. pressing start/stop buttons) interactions with little flexible deployment, and experts felt traditional robotics skills were needed for collaborative and flexible interaction with cobots. We conclude with design recommendations for improved future robots, including programming and interface designs, and educational technologies to support collaborative use.
DOI: 10.1145/3313831.3376547
BibTeX
@inproceedings{Michaelis_2020, doi = {10.1145/3313831.3376547}, url = {https://doi.org/10.1145%2F3313831.3376547}, year = 2020, month = {apr}, publisher = {{ACM}}, author = {Joseph E. Michaelis and Amanda Siebert-Evenstone and David Williamson Shaffer and Bilge Mutlu}, title = {Collaborative or Simply Uncaged? Understanding Human-Cobot Interactions in Automation}, booktitle = {Proceedings of the 2020 {CHI} Conference on Human Factors in Computing Systems} }
-
Schoen, A., C. Henrichs, M. Strohkirch, and B. Mutlu. “Authr: A Task Authoring Environment for Human-Robot Teams”. Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology, 2020, pp. 1194-1208.
Abstract
Collaborative robots promise to transform work across many industries and promote human-robot teaming as a novel paradigm. However, realizing this promise requires the understanding of how existing tasks, developed for and performed by humans, can be effectively translated into tasks that robots can singularly or human-robot teams can collaboratively perform. In the interest of developing tools that facilitate this process, we present Authr, an end-to-end task authoring environment that assists engineers at manufacturing facilities in translating existing manual tasks into plans applicable for human-robot teams and simulates these plans as they would be performed by the human and robot. We evaluated Authr with two user studies, which demonstrate the usability and effectiveness of Authr as an interface and the benefits of assistive task allocation methods for designing complex tasks for human-robot teams. We discuss the implications of these findings for the design of software tools for authoring human-robot collaborative plans.
DOI: 10.1145/3379337.3415872
BibTeX
@inproceedings{schoen2020authr, title={Authr: A Task Authoring Environment for Human-Robot Teams}, author={Schoen, Andrew and Henrichs, Curt and Strohkirch, Mathias and Mutlu, Bilge}, booktitle={UIST’20: Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology}, year={2020} }
-
Zhao, F., C. Henrichs, and B. Mutlu. “Task Interdependence in Human-Robot Teaming”. 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), IEEE, 2020, pp. 1143-1149.
Abstract
Human-robot teaming is becoming increasingly common within manufacturing processes. A key aspect practitioners need to decide on when developing effective processes is the level of task interdependence between human and robot team members. Task interdependence refers to the extent to which one’s behavior affects the performance of others in a team. In this work, we examine the effects of three levels of task interdependence – pooled, sequential, reciprocal – in human-robot teaming on human workers’ mental states, task performance, and perceptions of the robot. Participants worked with the robot in an assembly task while their heart rate variability was being recorded. Results suggested that human workers in the reciprocal interdependence level experienced less stress and perceived the robot more as a collaborator than in the other two levels. Task interdependence did not affect perceived safety. Our findings highlight the importance of considering task structure in human-robot teaming and inform future research on and industry practices for human-robot task allocation.
DOI: 10.1109/ro-man47096.2020.9223555
BibTeX
@inproceedings{Zhao_2020, doi = {10.1109/ro-man47096.2020.9223555}, url = {https://doi.org/10.1109%2Fro-man47096.2020.9223555}, year = 2020, month = {aug}, publisher = {{IEEE}}, author = {Fangyun Zhao and Curt Henrichs and Bilge Mutlu}, title = {Task Interdependence in Human-Robot Teaming}, booktitle = {2020 29th {IEEE} International Conference on Robot and Human Interactive Communication ({RO}-{MAN})} }
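To make the three interdependence levels from this abstract concrete, the sketch below models each as a directed dependency graph over human (H) and robot (R) steps; all step names are hypothetical illustrations, not the tasks used in the study:

    # Pooled: H and R work independently; results merge only at the end.
    pooled = {
        "H:build_subassembly_A": [],
        "R:build_subassembly_B": [],
        "H:merge": ["H:build_subassembly_A", "R:build_subassembly_B"],
    }

    # Sequential: the robot's output feeds the human's step, one way only.
    sequential = {
        "R:kit_parts": [],
        "H:assemble": ["R:kit_parts"],
    }

    # Reciprocal: work passes back and forth between human and robot.
    reciprocal = {
        "H:place_base": [],
        "R:fasten": ["H:place_base"],
        "H:add_cover": ["R:fasten"],
        "R:inspect": ["H:add_cover"],
    }

Each key is a step and each value lists the steps it waits on; moving from pooled to reciprocal increases how often one team member's behavior directly gates the other's.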
-
Hagenow, M., E. Senft, R. Radwin, M. Gleicher, B. Mutlu, and M. Zinn. “Corrective Shared Autonomy for Addressing Task Variability”. IEEE Robotics and Automation Letters, Vol. 6, no. 2, 2021, pp. 3720-3727.
Abstract
Many tasks, particularly those involving interaction with the environment, are characterized by high variability, making robotic autonomy difficult. One flexible solution is to introduce the input of a human with superior experience and cognitive abilities as part of a shared autonomy policy. However, current methods for shared autonomy are not designed to address the wide range of necessary corrections (e.g., positions, forces, execution rate, etc.) that the user may need to provide to address task variability. In this letter, we present corrective shared autonomy, where users provide corrections to key robot state variables on top of an otherwise autonomous task model. We provide an instantiation of this shared autonomy paradigm and demonstrate its viability and benefits, such as low user effort and physical demand, via a system-level user study on three tasks involving variability situated in aircraft manufacturing.
DOI: 10.1109/LRA.2021.3064500
BibTeX
@ARTICLE{hagenow2021corrective, author={M. {Hagenow} and E. {Senft} and R. {Radwin} and M. {Gleicher} and B. {Mutlu} and M. {Zinn}}, journal={IEEE Robotics and Automation Letters}, title={Corrective Shared Autonomy for Addressing Task Variability}, year={2021}, volume={6}, number={2}, pages={3720-3727}, doi={10.1109/LRA.2021.3064500}}
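As a loose illustration of the corrective shared autonomy idea (this is not the paper's controller; the blending rule, bounds, and variable names are assumptions for illustration), a user's correction can be applied on top of an otherwise autonomous command:

    import numpy as np

    def corrected_command(nominal_pos, nominal_rate, user_offset, user_rate_scale):
        # nominal_pos and nominal_rate come from the autonomous task model;
        # user_offset and user_rate_scale are live corrections (e.g., from an
        # input device). Clipping keeps corrections within assumed safe bounds.
        pos = np.asarray(nominal_pos, dtype=float) + np.asarray(user_offset, dtype=float)
        rate = float(nominal_rate) * float(np.clip(user_rate_scale, 0.1, 2.0))
        return pos, rate

    # Nudge the tool 1 cm along y and slow execution to 80% of nominal.
    pos, rate = corrected_command([0.40, 0.00, 0.20], 1.0, [0.00, 0.01, 0.00], 0.8)

The key property is that the human corrects only selected state variables (here, a position offset and the execution rate) while the autonomous task model remains responsible for the rest of the behavior.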
-
Schoen, A., N. White, C. Henrichs, A. Siebert-Evenstone, D. Shaffer, and B. Mutlu. “CoFrame: A System for Training Novice Cobot Programmers”. Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction, IEEE Press, 2022, pp. 185–194.
Abstract
The introduction of collaborative robots (cobots) into the workplace has presented both opportunities and challenges for those seeking to utilize their functionality. Prior research has shown that despite the capabilities afforded by cobots, there is a disconnect between those capabilities and the applications that they currently are deployed in, partially due to a lack of effective cobot-focused instruction in the field. Experts who work successfully within this collaborative domain could offer insight into the considerations and process they use to more effectively capture this cobot capability. Using an analysis of expert insights in the collaborative interaction design space, we developed a set of Expert Frames based on these insights and integrated these Expert Frames into a new training and programming system that can be used to teach novice operators to think, program, and troubleshoot in ways that experts do. We present our system and case studies that demonstrate how Expert Frames provide novice users with the ability to analyze and learn from complex cobot application scenarios.
DOI: 10.5555/3523760.3523788
BibTeX
@inproceedings{10.5555/3523760.3523788, author = {Schoen, Andrew and White, Nathan and Henrichs, Curt and Siebert-Evenstone, Amanda and Shaffer, David and Mutlu, Bilge}, title = {CoFrame: A System for Training Novice Cobot Programmers}, year = {2022}, publisher = {IEEE Press}, booktitle = {Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction}, pages = {185–194}, numpages = {10}, keywords = {robot programming interfaces, novice users, robotics operator training, expert models, collaborative robots}, location = {Sapporo, Hokkaido, Japan}, series = {HRI '22} }