Abstract
Successful collaboration relies on the coordination and alignment of communicative cues. In this paper, we present mechanisms of bidirectional gaze – the coordinated production and detection of gaze cues – by which a virtual character can coordinate its gaze cues with those of its human user. We implement these mechanisms in a hybrid stochastic/heuristic model synthesized from data collected in human–human interactions. In three lab studies in which a virtual character instructs participants in a sandwich-making task, we demonstrate how bidirectional gaze can lead to positive outcomes in error rate, completion time, and the agent's ability to produce quick, effective nonverbal references. The first study paired an on-screen agent with participants wearing eye-tracking glasses. The second study demonstrates that the same positive outcomes can be achieved using head-pose estimation in place of full eye tracking. The third study demonstrates that these effects also transfer to virtual-reality interactions.
DOI: 10.1145/3025453.3026033
BibTeX
@inproceedings{Andrist_2017,
  author    = {Sean Andrist and Michael Gleicher and Bilge Mutlu},
  title     = {Looking Coordinated},
  booktitle = {Proceedings of the 2017 {CHI} Conference on Human Factors in Computing Systems},
  publisher = {{ACM}},
  year      = {2017},
  month     = {may},
  doi       = {10.1145/3025453.3026033},
  url       = {https://doi.org/10.1145/3025453.3026033}
}