Our goal is to automatically detect which direction a child is facing from a single overhead image and to track that direction over time. Joint attention, the shared focus of two individuals on an object of interest, is a strong cue of typical development in children, and its absence can be an indicator of autism spectrum disorder or another pervasive developmental disorder. Many psychology experiments with children therefore aim to determine when, for how long, and toward what a child looks after some bid for attention or reaction. While much research addresses the orientation of faces in frontal or profile images, or of rigid, larger objects such as cars, fewer studies tackle minimally invasive overhead detection of a person's gaze or orientation. To automatically detect a child's orientation during a human-robot interaction experiment, we mount a camera on the ceiling of a child development laboratory and analyze the video footage. We use multiple kernel learning over eight candidate orientation directions to determine the child's orientation throughout the video-recorded interaction. We also contribute the labeled dataset we used for this challenging problem.
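Since the exact features and kernel-learning procedure are not described here, the classification step can only be sketched: below is a minimal illustration of multi-class classification over eight orientation labels using a fixed-weight combination of two base kernels fed to a support vector machine. The synthetic features, kernel choices, and weights are illustrative placeholders, not the authors' pipeline (full multiple kernel learning would additionally learn the combination weights).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel

rng = np.random.default_rng(0)

# Synthetic stand-in for per-frame image features: 8 orientation classes.
n_per_class, n_features = 20, 16
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(n_per_class, n_features))
               for c in range(8)])
y = np.repeat(np.arange(8), n_per_class)

def combined_kernel(A, B, w=(0.5, 0.5)):
    # Fixed-weight sum of two base kernels; in true MKL the weights w
    # would be optimized jointly with the classifier.
    return w[0] * linear_kernel(A, B) + w[1] * rbf_kernel(A, B, gamma=0.1)

clf = SVC(kernel="precomputed")
clf.fit(combined_kernel(X, X), y)
preds = clf.predict(combined_kernel(X, X))
print("training accuracy:", (preds == y).mean())
```

A precomputed Gram matrix is used so that arbitrary kernel combinations can be plugged into a standard SVM solver without modifying it.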