Computational Bodybuilding: Anatomically-based Modeling of Human Bodies


Shunsuke Saito
University of Pennsylvania
Waseda University
 
Zi-Ye Zhou
University of Pennsylvania
 
Ladislav Kavan
University of Pennsylvania
 


Given an input 3D anatomy template, we propose a system to simulate the effects of muscle, fat, and bone growth. This allows us to create a wide range of human body shapes.



Abstract

We propose a method to create a wide range of human body shapes from a single input 3D anatomy template. Our approach is inspired by the biological processes responsible for human body growth. In particular, we simulate the growth of skeletal muscles and subcutaneous fat using physics-based models that combine growth and elasticity. Together with a tool to edit the proportions of the bones, our method achieves a desired human body shape by directly controlling the hypertrophy (or atrophy) of every muscle and the enlargement of fat tissues. We achieve near-interactive run times by utilizing a specialized quasi-static solver (Projective Dynamics) and by crafting a volumetric discretization that yields accurate deformations without an excessive number of degrees of freedom. Our system is intuitive to use, and the resulting human body models are ready for simulation using existing physics-based animation methods, because we deform not only the surface but the entire volumetric model.
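To make the abstract concrete, below is a minimal sketch, assuming a toy 2D spring chain in plain NumPy, of the two ingredients named above: a growth model that scales each element's rest state by a per-element growth factor, and a quasi-static Projective Dynamics solve that alternates local constraint projections with a global linear solve. This is not the paper's implementation (which operates on volumetric tetrahedral meshes with per-muscle and per-fat-tissue growth); all identifiers here (quasistatic_pd, grow, w_pin, ...) are hypothetical.

    # Illustrative sketch only, not the authors' code: quasi-static
    # Projective Dynamics on a 2D spring chain with per-spring growth.
    import numpy as np

    def assemble(n, edges, pins, w_edge=1.0, w_pin=1e4):
        # Constant global-step matrix (Laplacian-like plus soft pins).
        A = np.zeros((n, n))
        for i, j in edges:
            A[i, i] += w_edge; A[j, j] += w_edge
            A[i, j] -= w_edge; A[j, i] -= w_edge
        for i in pins:
            A[i, i] += w_pin
        return A

    def quasistatic_pd(verts, edges, rest_len, grow, pins, pin_pos,
                       w_edge=1.0, w_pin=1e4, iters=100):
        q = verts.astype(float)
        A = assemble(len(q), edges, pins, w_edge, w_pin)
        for _ in range(iters):
            b = np.zeros_like(q)
            # Local step: project every edge onto its *grown* rest
            # length g * l0 (growth rescales the rest state).
            for (i, j), l0, g in zip(edges, rest_len, grow):
                d = q[i] - q[j]
                p = d * (g * l0 / max(np.linalg.norm(d), 1e-12))
                b[i] += w_edge * p
                b[j] -= w_edge * p
            # Soft positional anchors (stand-ins for bone attachments).
            for i, t in zip(pins, pin_pos):
                b[i] += w_pin * np.asarray(t, dtype=float)
            # Global step: one SPD linear solve (dense here for brevity).
            q = np.linalg.solve(A, b)
        return q

    # Toy example: a pinned chain whose middle springs "hypertrophy" by 40%.
    verts = np.stack([np.linspace(0.0, 4.0, 5), np.zeros(5)], axis=1)
    edges = [(i, i + 1) for i in range(4)]
    rest_len = [1.0] * 4
    grow = [1.0, 1.4, 1.4, 1.0]   # per-element growth factors
    q = quasistatic_pd(verts, edges, rest_len, grow,
                       pins=[0], pin_pos=[verts[0]])
    print(np.round(q, 3))         # chain lengthens where growth > 1

Note that the global-step matrix A is constant across iterations; in a real Projective Dynamics implementation it is prefactored once (e.g., with a sparse Cholesky factorization), which is what makes near-interactive run times possible. The dense np.linalg.solve above stands in for that prefactored solve.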


Publication

Shunsuke Saito, Zi-Ye Zhou, Ladislav Kavan. Computational Bodybuilding: Anatomically-based Modeling of Human Bodies. ACM Transactions on Graphics 34(4) [Proceedings of SIGGRAPH], 2015.


Links and Downloads

Paper

 
BibTeX



Acknowledgements

Our special thanks go to Sanchit Garg for designing the fat maps and for helping with rendering and video editing. We thank Marianne Augustine, Norm Badler, Benedict Brown, Scott Delp, Jiatong He, Xiaoyan Hu, Chuang Lan, Tiantian Liu, Shigeo Morishima, Saba Pascha, Eftychios Sifakis, Robin Tomcin, and Lifeng Zhu for many insightful discussions, and the anonymous reviewers for their valuable comments. We also thank Harmony Li for narrating the accompanying video. This research was supported by NSF CAREER Award IIS-1350330.