User-Guided Lip Correction for Facial Performance Capture


Dimitar Dinev
University of Utah
Disney Research

Thabo Beeler
Disney Research

Derek Bradley
Disney Research

Moritz Bächer
Disney Research

Hongyi Xu
Disney Research

Ladislav Kavan
University of Utah


We present a user-guided method for correcting lips in facial performance capture. From left to right: State-of-the-art facial capture methods can achieve high-quality 3D face results but often struggle in the lip region. Our regression-based lip correction method is easy to use and can quickly improve the lip shapes for a whole performance, increasing fidelity with respect to the true motion.



Abstract

Facial performance capture is the primary method for generating facial animation in video games, feature films, and virtual environments, and recent advances have produced very compelling results. Still, one of the most challenging regions is the mouth, which often contains systematic errors due to the complex appearance and occlusion/dis-occlusion of the lips. We present a novel user-guided approach to correcting these common lip shape errors present in traditional capture systems. Our approach is to allow a user to manually correct a small number of problematic frames; our system then learns the types of corrections desired and automatically corrects the entire performance. As correcting even a single frame using traditional 3D sculpting tools can be time-consuming and require great skill, we also propose a simple and fast 2D sketch-based method for generating plausible lip corrections for the problematic key frames. We demonstrate our results on captured performances of three different subjects, and validate our method with an additional sequence that contains ground truth lip reconstructions.
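To make the correction idea concrete, the following is a minimal, purely illustrative sketch (not the paper's implementation): it fits a simple linear ridge regression from the captured lip geometry of a few user-corrected key frames to the artist's correction offsets, then applies the learned mapping to every frame of the performance. All array shapes, the choice of ridge regression, and the use of raw flattened vertex positions as features are assumptions made only for this example.

# Hypothetical sketch of keyframe-driven lip correction via ridge regression.
import numpy as np

rng = np.random.default_rng(0)

n_frames, n_lip_verts = 200, 150
# Captured lip vertices per frame, flattened to (n_frames, n_lip_verts * 3).
captured = rng.normal(size=(n_frames, n_lip_verts * 3))

# Suppose the user corrects a handful of problematic frames; these offsets
# would normally come from sculpting or the 2D sketch interface.
key_ids = [10, 55, 120, 180]
key_corrections = rng.normal(scale=0.05, size=(len(key_ids), n_lip_verts * 3))

# Closed-form ridge regression: map captured lip shape -> correction offsets.
X = captured[key_ids]          # features from the corrected key frames
Y = key_corrections            # target correction offsets
lam = 1e-2                     # regularization keeps the fit stable with few examples
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

# Apply the learned correction model to the whole performance.
corrected = captured + captured @ W
print(corrected.shape)         # (200, 450)

In the actual method the corrections for the key frames are produced with the 2D sketch-based tool described in the paper, and the regression model used there may differ from the ridge fit shown above.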






Publication

Dimitar Dinev, Thabo Beeler, Derek Bradley, Moritz Bächer, Hongyi Xu, Ladislav Kavan. User-Guided Lip Correction for Facial Performance Capture. Symposium on Computer Animation, 2018.


Links and Downloads

Paper

BibTeX



Acknowledgements

This material is based upon work supported by the National Science Foundation under Grant Numbers IIS-1617172 and IIS-1622360. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. We also gratefully acknowledge the support of Activision and hardware donation from NVIDIA Corporation.