Topologically Consistent Multi-View Face Inference Using Volumetric Sampling

ICCV 2021 (oral)

Tianye Li1,2    Shichen Liu1,2    Timo Bolkart3    Jiayi Liu1,2    Hao Li1,2    Yajie Zhao1

1USC Institute for Creative Technologies
2University of Southern California
3Max Planck Institute for Intelligent Systems

[Paper]
[ArXiv]
[Sup. Mat.]
[Slides]
[Poster]
[Code]

[Teaser Figure]

Given (a) multi-view images, our face modeling framework ToFu uses volumetric sampling to predict (b) accurate base meshes in a consistent topology in only 0.385 seconds. ToFu also infers (c) high-resolution details and appearances. Our efficient pipeline enables (d) rapid creation of production-quality avatars for animation.

Abstract

High-fidelity face digitization solutions often combine multi-view stereo (MVS) techniques for 3D reconstruction and a non-rigid registration step to establish dense correspondence across identities and expressions. A common problem is the need for manual clean-up after the MVS step, as 3D scans are typically affected by noise and outliers and contain hairy surface regions that artists must remove. Furthermore, mesh registration tends to fail for extreme facial expressions. Most learning-based methods use an underlying 3D morphable model (3DMM) to ensure robustness, but this limits the output accuracy for extreme facial expressions. In addition, the global bottleneck of regression architectures cannot produce meshes that tightly fit the ground truth surfaces. We propose ToFu, Topologically consistent Face from multi-view, a geometry inference framework that can produce topologically consistent meshes across facial identities and expressions using a volumetric representation instead of an explicit underlying 3DMM. Our novel progressive mesh generation network embeds the topological structure of the face in a feature volume, sampled from geometry-aware local features. A coarse-to-fine architecture facilitates dense and accurate facial mesh predictions in a consistent mesh topology. ToFu further captures displacement maps for pore-level geometric details and facilitates high-quality rendering in the form of albedo and specular reflectance maps. These high-quality assets are readily usable by production studios for avatar creation, animation, and physically-based skin rendering. We demonstrate state-of-the-art geometric and correspondence accuracy, while taking only 0.385 seconds to compute a mesh with 10K vertices, three orders of magnitude faster than traditional techniques. The code and the model are available for research purposes at https://tianyeli.github.io/tofu.
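Method Sketch

The pipeline's central operation, volumetric feature sampling, can be illustrated compactly. The sketch below is our own illustration in PyTorch, not the released code: the function name, tensor shapes, and the mean/variance aggregation are all assumptions. It shows how 3D grid points can be projected into calibrated multi-view 2D feature maps, sampled bilinearly, and aggregated into per-point features that are independent of the number and order of cameras, from which a network can then predict vertex locations.

```python
# Minimal sketch of multi-view volumetric feature sampling (illustrative,
# not the authors' implementation). 3D grid points are projected into each
# view's 2D feature map, local features are sampled bilinearly, and the
# per-view features are aggregated into a view-order-invariant descriptor.
import torch
import torch.nn.functional as F

def build_feature_volume(feat_maps, cams, grid_points):
    """
    feat_maps:   (V, C, H, W)  2D feature maps from V calibrated views
    cams:        (V, 3, 4)     camera projection matrices
    grid_points: (N, 3)        3D sample points of the volume grid
    returns:     (N, 2*C)      per-point features aggregated over views
    """
    V, C, H, W = feat_maps.shape

    # Homogeneous coordinates (N, 4), projected into every view: (V, N, 3).
    pts_h = torch.cat([grid_points, torch.ones_like(grid_points[:, :1])], dim=1)
    proj = torch.matmul(pts_h, cams.transpose(1, 2))

    # Perspective division to pixel coordinates (V, N, 2).
    pix = proj[..., :2] / proj[..., 2:].clamp(min=1e-6)

    # Normalize pixel coordinates to [-1, 1] as grid_sample expects.
    pix_norm = torch.stack(
        [2.0 * pix[..., 0] / (W - 1) - 1.0,
         2.0 * pix[..., 1] / (H - 1) - 1.0], dim=-1)

    # Bilinearly sample local 2D features at the projections: (V, C, N).
    sampled = F.grid_sample(
        feat_maps, pix_norm.unsqueeze(2),   # grid shaped (V, N, 1, 2)
        mode="bilinear", align_corners=True).squeeze(-1)

    # Aggregate across views with mean and variance so the result does not
    # depend on the number or ordering of cameras (an assumed choice here).
    mean = sampled.mean(dim=0)                  # (C, N)
    var = sampled.var(dim=0, unbiased=False)    # (C, N)
    return torch.cat([mean, var], dim=0).t()    # (N, 2*C)
```

In the coarse-to-fine architecture described in the abstract, such a volume would first be built on a global grid around the head to predict a coarse base mesh, then on small local grids centered at each predicted vertex to refine its position; see the released code linked above for the actual architecture.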


Talk



Supplemental Video



Acknowledgements

We thank Marcel Ramos, Mingming He, and Jing Yang for their help with visualizations and rendering. We thank Ruilong Li, Junying Wang, Qianli Ma, and Ziqian Bai for their help with comparisons and their input in the discussions. We thank Michael J. Black for his advice. We thank Pratusha Prasad and Zhaoyang Lv for their help with proofreading. The website template is from here.