The human face is an anatomical system exhibiting heterogeneous and anisotropic mechanical behavior. Even in a neutral facial expression, external forces such as gravity induce complex deformations.
We start by building a volumetric model from magnetic resonance images of a neutral facial expression. To obtain data on facial deformations, we capture and register 3D scans of the face under different gravity directions and with various facial expressions.
Our main contribution is solving an inverse physics problem: we learn the mechanical properties of the face from our training data (3D scans). Specifically, we learn heterogeneous stiffness and prestrain (which introduces anisotropy).
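The inverse problem can be illustrated on a toy system. The following is a minimal sketch, not the paper's method: a hanging chain of point masses whose per-spring stiffness (heterogeneity) and rest length (a stand-in for prestrain) are recovered by least-squares fitting of the equilibrium shape observed under two different gravity settings, mirroring the use of scans with varying gravity directions. All names and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

m, n = 1.0, 5  # node mass and number of springs (illustrative values)

def equilibrium(params, g):
    """Forward model: equilibrium node positions of a hanging chain
    given per-spring stiffness k and rest length L (prestrain proxy)."""
    k, L = params[:n], params[n:]
    T = g * m * np.arange(n, 0, -1)   # tension carried by each spring
    return np.cumsum(L + T / k)       # positions of nodes below the anchor

# Synthetic "scans": a ground-truth heterogeneous chain observed under
# two gravity magnitudes (standing in for different gravity directions).
k_true = np.linspace(50.0, 150.0, n)
L_true = np.full(n, 0.1)
p_true = np.concatenate([k_true, L_true])
g_up, g_down = 9.81, 4.0
scan_up = equilibrium(p_true, g_up)
scan_down = equilibrium(p_true, g_down)

def residual(p):
    # Stack residuals from both gravity settings; a single setting
    # leaves the problem underdetermined (n equations, 2n unknowns).
    return np.concatenate([equilibrium(p, g_up) - scan_up,
                           equilibrium(p, g_down) - scan_down])

x0 = np.concatenate([np.full(n, 100.0), np.full(n, 0.12)])
lb = np.concatenate([np.full(n, 1.0), np.zeros(n)])  # keep stiffness positive
fit = least_squares(residual, x0, bounds=(lb, np.inf))
```

Observing the same material under two gravity conditions makes the toy problem well-posed: each spring yields two independent linear equations in its rest length and inverse stiffness. The paper's setting is analogous but far richer, with a volumetric model and full 3D scan registrations.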
The generalization capability of the resulting physics-based model is evaluated on held-out 3D scans. We demonstrate that our model predicts facial deformations more accurately than recent related physics-based techniques.