TY - GEN
T1 - Determining child orientation from overhead video
T2 - 2017 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2017
AU - Manner, Marie D.
AU - Jiang, Ming
AU - Zhao, Qi
AU - Gini, Maria L.
AU - Elison, Jed T.
PY - 2017/11/27
Y1 - 2017/11/27
N2 - Our goal is to automatically detect which direction a child is facing based on a single, simple overhead picture, and to track that direction across time. Engaging in joint attention, the shared focus of two individuals on some object of interest, is a strong cue of typical development in children, and its absence can be an indicator of autism spectrum disorder or another pervasive developmental disorder. Therefore, the goal of many psychology experiments with children is to determine when, for how long, and toward what the child looks after some bid for attention. While much research addresses the orientation of faces in frontal or profile pictures, or of larger, non-morphable objects like cars, fewer studies work in the setting of minimally invasive overhead gaze or orientation detection. To automatically detect the child's orientation during a human-robot interaction experiment, we mount a camera on the ceiling of a child development laboratory and analyze the video footage. We use multiple kernel learning over eight potential orientation directions to determine a child's orientation during the video-recorded interaction. We also contribute the labelled dataset we used on this challenging problem.
UR - http://www.scopus.com/inward/record.url?scp=85044390887&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85044390887&partnerID=8YFLogxK
U2 - 10.1109/SMC.2017.8123169
DO - 10.1109/SMC.2017.8123169
M3 - Conference contribution
AN - SCOPUS:85044390887
T3 - 2017 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2017
SP - 3477
EP - 3482
BT - 2017 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2017
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 5 October 2017 through 8 October 2017
ER -