Conversational gaze mechanisms for humanlike robots

Mutlu, B., T. Kanda, J. Forlizzi, J. Hodgins, and H. Ishiguro. “Conversational Gaze Mechanisms for Humanlike Robots”. ACM Transactions on Interactive Intelligent Systems, vol. 1, no. 2, ACM, Jan. 2012, Article 12, pp. 1–33.

Abstract

During conversations, speakers employ a number of verbal and nonverbal mechanisms to establish who participates in the conversation, when, and in what capacity. Gaze cues and mechanisms are particularly instrumental in establishing the participant roles of interlocutors, managing speaker turns, and signaling discourse structure. If humanlike robots are to have fluent conversations with people, they will need to use these gaze mechanisms effectively. The current work investigates people’s use of key conversational gaze mechanisms, how they might be designed for and implemented in humanlike robots, and whether these signals effectively shape human-robot conversations. We focus particularly on whether humanlike gaze mechanisms might help robots signal different participant roles, manage turn-exchanges, and shape how interlocutors perceive the robot and the conversation. The evaluation of these mechanisms involved 36 trials of three-party human-robot conversations. In these trials, the robot used gaze mechanisms to signal to its conversational partners their roles either of two addressees, an addressee and a bystander, or an addressee and a nonparticipant. Results showed that participants conformed to these intended roles 97% of the time. Their conversational roles affected their rapport with the robot, feelings of groupness with their conversational partners, and attention to the task.

DOI: 10.1145/2070719.2070725

BibTex

@article{Mutlu_2012,
	doi = {10.1145/2070719.2070725},
	url = {https://doi.org/10.1145%2F2070719.2070725},
	year = 2012,
	month = {jan},
	publisher = {Association for Computing Machinery ({ACM})},
	volume = {1},
	number = {2},
	pages = {1--33},
	author = {Bilge Mutlu and Takayuki Kanda and Jodi Forlizzi and Jessica Hodgins and Hiroshi Ishiguro},
	title = {Conversational gaze mechanisms for humanlike robots},
	journal = {{ACM} Transactions on Interactive Intelligent Systems}
}