Elasticity-Inspired Deformers for Character Articulation


Ladislav Kavan
ETH Zurich
 
Olga Sorkine
ETH Zurich
 


We present a new automatic skinning technique that mimics the quality of nonlinear elastic simulation. Our method achieves higher-quality results than both linear blend skinning (LBS) and dual quaternion skinning (DQS) at minimal additional cost.



Abstract

Current approaches to skeletally-controlled character articulation range from real-time, closed-form skinning methods to offline, physically-based simulation. In this paper, we seek a closed-form skinning method that approximates nonlinear elastic deformations well while remaining very fast. Our contribution is two-fold: (1) we optimize skinning weights for the standard linear and dual quaternion skinning techniques so that the resulting deformations minimize an elastic energy function. We observe that this alone is not sufficient to match the visual quality of the original elastic deformations; we therefore develop (2) a new skinning method based on the concept of joint-based deformers. We propose a specific deformer which is visually similar to nonlinear variational deformation methods. Our final algorithm is fully automatic and requires little or no input from the user other than a rest-pose mesh and a skeleton. At runtime, our method adds only minimal memory and computational overhead compared to linear blend skinning, while producing higher-quality deformations than both linear and dual quaternion skinning.
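
For context, the abstract uses standard linear blend skinning (LBS) as the baseline that our deformers improve upon. The following is a minimal sketch of that baseline only, not of the elasticity-inspired deformer or the weight optimization proposed in the paper; the function name, array shapes, and use of NumPy are illustrative assumptions.

    # Minimal sketch of standard linear blend skinning (LBS), the baseline
    # referenced in the abstract -- not the elasticity-inspired deformer
    # of this paper. Names and array layouts are illustrative only.
    import numpy as np

    def linear_blend_skinning(rest_vertices, bone_transforms, weights):
        """Deform rest-pose vertices by a weighted blend of bone transforms.

        rest_vertices:   (n, 3) rest-pose vertex positions
        bone_transforms: (m, 3, 4) affine bone matrices [R | t]
        weights:         (n, m) skinning weights, each row summing to 1
        """
        n = rest_vertices.shape[0]
        # Homogeneous rest-pose coordinates: (n, 4)
        rest_h = np.hstack([rest_vertices, np.ones((n, 1))])
        # Transform every vertex by every bone: (m, n, 3)
        per_bone = np.einsum('mij,nj->mni', bone_transforms, rest_h)
        # Blend the per-bone positions with the skinning weights: (n, 3)
        return np.einsum('nm,mni->ni', weights, per_bone)

The per-vertex blend above is the weighted sum of rigidly transformed positions that causes the familiar LBS artifacts (e.g., volume loss at bent joints); the paper's contribution is to choose the weights, and ultimately a different joint-based deformer, so that the result approximates a nonlinear elastic deformation instead.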



accompanying video





SIGGRAPH Asia fast forward





SIGGRAPH Asia talk





Publication

Ladislav Kavan, Olga Sorkine. Elasticity-Inspired Deformers for Character Articulation. ACM Transactions on Graphics 31(6) [Proceedings of SIGGRAPH Asia], 2012.  


Links and Downloads

Paper

 
BibTeX



Acknowledgements

We are grateful to Chris Evans and John Howe for art and rigging feedback. We thank Eftychios Sifakis for his open-source fast 3×3 SVD code and Alec Jacobson, Stelian Coros, and Bernhard Thomaszewski for many useful discussions. We also thank Emily Whiting for her narration of the accompanying video and Kenshi Takayama and Katie Bassett for proofreading. This work was supported in part by SNF award 200021-137879.