Abstract
Gestures support and enrich speech across various forms of communication. Effective use of gestures by speakers improves not only the listeners’ comprehension of the spoken material but also their perceptions of the speaker. How might robots use gestures to improve human-robot interaction? What gestures are most effective in achieving such improvements? This paper seeks answers to these questions by presenting a model of human gestures and a system-level evaluation of how robots might selectively use different types of gestures to improve interaction outcomes, such as user task performance and perceptions of the robot, in a narrative performance scenario. The results show that robot deictic gestures consistently predict users’ information recall, that all types of gestures affect user perceptions of the robot’s performance as a narrator, and that males and females show significant differences in their responses to robot gestures. These results have strong implications for designing effective robot gestures that improve human-robot interaction.
DOI: 10.15607/rss.2013.ix.026
BibTeX
@inproceedings{Huang_2013,
  author    = {Chien-Ming Huang and Bilge Mutlu},
  title     = {Modeling and Evaluating Narrative Gestures for Humanlike Robots},
  booktitle = {Robotics: Science and Systems {IX}},
  publisher = {Robotics: Science and Systems Foundation},
  year      = {2013},
  month     = {jun},
  doi       = {10.15607/rss.2013.ix.026},
  url       = {https://doi.org/10.15607%2Frss.2013.ix.026}
}