TY - GEN
T1 - Shadowplay
T2 - 4th ACM/IEEE International Conference on Human-Robot Interaction, HRI'09
AU - Meisner, Eric
AU - Šabanović, Selma
AU - Isler, Volkan
AU - Caporael, Linnda R.
AU - Trinkle, Jeff
PY - 2009
Y1 - 2009
N2 - Humans rely on a finely tuned ability to recognize and adapt to socially relevant patterns in their everyday face-to-face interactions. This allows them to anticipate the actions of others, coordinate their behaviors, and create shared meaning: to communicate. Social robots must likewise be able to recognize and perform relevant social patterns, including interactional synchrony, imitation, and particular sequences of behaviors. We use existing empirical work in the social sciences and observations of human interaction to develop nonverbal interactive capabilities for a robot in the context of shadow puppet play, where people interact through shadows of hands cast against a wall. We show how information-theoretic quantities can be used to model interaction between humans and to generate interactive controllers for a robot. Finally, we evaluate the resulting model in an embodied human-robot interaction study. We show the benefit of modeling interaction as a joint process rather than modeling individual agents.
KW - Control architecture
KW - Gesture recognition
KW - Interaction synchrony
KW - Modeling social situations
KW - Nonverbal interaction
UR - http://www.scopus.com/inward/record.url?scp=67650671562&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=67650671562&partnerID=8YFLogxK
U2 - 10.1145/1514095.1514118
DO - 10.1145/1514095.1514118
M3 - Conference contribution
AN - SCOPUS:67650671562
SN - 9781605584041
T3 - Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction, HRI'09
SP - 117
EP - 124
BT - Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction, HRI'09
Y2 - 11 March 2009 through 13 March 2009
ER -