Animated digital characters play an important role in virtual experiences. In this work, we use data from a large-scale user study as training data for a generative model that produces a variety of animated smiles. Our method is a four-stage process: it samples a variety of facial expressions and annotates them with perceived happiness ratings from the user study. The expressions are then transformed into a standardized space and used by a non-parametric classifier to predict the happiness of new smiles.
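The pipeline in the abstract (sample expressions, annotate with perceived happiness, standardize, predict with a non-parametric classifier) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes k-nearest-neighbors as the non-parametric classifier, z-scoring as the standardization step, and uses synthetic data in place of the user-study annotations; all names and values are hypothetical.

```python
import numpy as np

def standardize(expressions):
    # Map raw expression feature vectors into a standardized space
    # (illustrative z-scoring; the paper's transform may differ).
    mean = expressions.mean(axis=0)
    std = expressions.std(axis=0) + 1e-8
    return (expressions - mean) / std, mean, std

def knn_predict(train_X, train_y, query, k=3):
    # Non-parametric prediction: average the perceived-happiness
    # ratings of the k nearest annotated expressions.
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return train_y[nearest].mean()

# Toy stand-in for sampled expressions with annotated happiness ratings.
rng = np.random.default_rng(0)
raw = rng.uniform(0.0, 1.0, size=(20, 5))      # 20 expressions, 5 features
ratings = raw[:, 0] * 4.0 + 1.0                # fake ratings on a 1-5 scale

X, mu, sigma = standardize(raw)
query = (rng.uniform(0.0, 1.0, size=5) - mu) / sigma
score = knn_predict(X, ratings, query)
print(f"predicted happiness: {score:.2f}")
```

Because the prediction is an average of training ratings, the output stays within the annotated rating range by construction.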
| Field | Value |
| --- | --- |
| Original language | English (US) |
| Title of host publication | Proceedings - Motion in Games 2016 |
| Subtitle of host publication | 9th International Conference on Motion in Games, MIG 2016 |
| Editors | Stephen N. Spencer |
| Publisher | Association for Computing Machinery, Inc |
| Number of pages | 2 |
| State | Published - Oct 10 2016 |
| Event | 9th International Conference on Motion in Games, MIG 2016 - San Francisco, United States (Duration: Oct 10 2016 → Oct 12 2016) |
| Name | Proceedings - Motion in Games 2016: 9th International Conference on Motion in Games, MIG 2016 |
| Other | 9th International Conference on Motion in Games, MIG 2016 |
| Period | 10/10/16 → 10/12/16 |
Bibliographical note: Funding Information: This work has been supported in part by the National Science Foundation through grant #CHS-1526693.
© 2016 Copyright held by the owner/author(s).
- Computer graphics
- Data-driven facial animation
- Digital character emotion