Robot deictics: How gesture and context shape referential communication (Inproceedings)

Sauppé, A., and B. Mutlu. "Robot Deictics: How Gesture and Context Shape Referential Communication". Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction, ACM, 2014, pp. 342–349.


As robots collaborate with humans in increasingly diverse environments, they will need to effectively refer to objects of joint interest and adapt their references to various physical, environmental, and task conditions. Humans use a broad range of deictic gestures (gestures that direct attention to collocated objects, persons, or spaces) that include pointing, touching, and exhibiting to help their listeners understand their references. These gestures offer varying levels of support under different conditions, making some gestures more or less suitable for different settings. While these gestures offer a rich space for designing communicative behaviors for robots, a better understanding of how different deictic gestures affect communication under different conditions is critical for achieving effective human-robot interaction. In this paper, we seek to build such an understanding by implementing six deictic gestures on a humanlike robot and evaluating their communicative effectiveness in six diverse settings that represent physical, environmental, and task conditions under which robots are expected to employ deictic communication. Our results show that gestures that come into physical contact with the object offer the highest overall communicative accuracy and that specific settings benefit from the use of particular types of gestures. Our results highlight the rich design space for deictic gestures and inform how robots might adapt their gestures to specific physical, environmental, and task conditions.

DOI: 10.1145/2559636.2559657


@inproceedings{sauppe2014robot,
	doi = {10.1145/2559636.2559657},
	url = {https://doi.org/10.1145/2559636.2559657},
	year = 2014,
	month = {mar},
	pages = {342--349},
	publisher = {{ACM}},
	author = {Allison Saupp{\'{e}} and Bilge Mutlu},
	title = {Robot Deictics: How Gesture and Context Shape Referential Communication},
	booktitle = {Proceedings of the 2014 {ACM}/{IEEE} International Conference on Human-Robot Interaction},
}