ToFu: Topologically Consistent Multi-View Face Inference Using Volumetric Sampling
ICCV 2021

Given (a) multi-view images, our face modeling framework ToFu uses volumetric sampling to (b) predict accurate base meshes in a consistent topology in only 0.385 seconds. ToFu also infers (c) high-resolution details and appearances. Our efficient pipeline enables (d) rapid creation of production-quality avatars for animation.

Abstract

High-fidelity face digitization solutions often combine multi-view stereo (MVS) techniques for 3D reconstruction and a non-rigid registration step to establish dense correspondence across identities and expressions. A common problem is the need for manual clean-up after the MVS step, as 3D scans are typically affected by noise and outliers and contain hairy surface regions that need to be cleaned up by artists. Furthermore, mesh registration tends to fail for extreme facial expressions. Most learning-based methods use an underlying 3D morphable model (3DMM) to ensure robustness, but this limits the output accuracy for extreme facial expressions. In addition, the global bottleneck of regression architectures cannot produce meshes that tightly fit the ground truth surfaces. We propose ToFu, Topologically consistent Face from multi-view, a geometry inference framework that can produce topologically consistent meshes across facial identities and expressions using a volumetric representation instead of an explicit underlying 3DMM. Our novel progressive mesh generation network embeds the topological structure of the face in a feature volume, sampled from geometry-aware local features. A coarse-to-fine architecture facilitates dense and accurate facial mesh predictions in a consistent mesh topology. ToFu further captures displacement maps for pore-level geometric details and facilitates high-quality rendering in the form of albedo and specular reflectance maps. These high-quality assets are readily usable by production studios for avatar creation, animation and physically-based skin rendering. We demonstrate state-of-the-art geometric and correspondence accuracy, while only taking 0.385 seconds to compute a mesh with 10K vertices, which is three orders of magnitude faster than traditional techniques. The code and the model are available for research purposes at https://tianyeli.github.io/tofu.
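The core idea, sampling geometry-aware local features into a volume, can be illustrated with a minimal PyTorch sketch. This is illustrative only, not the ToFu implementation: the function name, the pinhole camera convention, and the mean aggregation across views are all assumptions.

import torch
import torch.nn.functional as F

def sample_volume_features(feat_maps, cams, grid_pts):
    # feat_maps: (V, C, H, W) per-view 2D feature maps from a CNN backbone
    # cams:      (V, 3, 4) camera projection matrices (assumed pinhole)
    # grid_pts:  (N, 3) 3D sample points of the feature volume
    # returns:   (N, C) per-point features, averaged across the V views
    V, C, H, W = feat_maps.shape
    ones = grid_pts.new_ones(grid_pts.shape[0], 1)
    pts_h = torch.cat([grid_pts, ones], dim=1)          # (N, 4) homogeneous
    per_view = []
    for v in range(V):
        uvw = pts_h @ cams[v].T                         # (N, 3) projected coords
        uv = uvw[:, :2] / uvw[:, 2:3].clamp(min=1e-6)   # perspective divide -> pixels
        # normalize pixel coordinates to [-1, 1] as required by grid_sample
        uv_n = 2 * uv / uv.new_tensor([W - 1, H - 1]) - 1
        grid = uv_n.view(1, -1, 1, 2)                   # (1, N, 1, 2)
        feats = F.grid_sample(feat_maps[v:v + 1], grid, align_corners=True)
        per_view.append(feats.view(C, -1).T)            # (1, C, N, 1) -> (N, C)
    return torch.stack(per_view).mean(dim=0)

In the paper, features sampled this way populate a feature volume around the face, from which the network predicts vertex locations in a fixed topology; the sketch covers only the sampling step.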

Setup

Tested with Python 3.7. The ICCV 2021 version was developed with the setup below.

Create a conda environment (or another virtual environment of your choice):

conda create -n tofu python=3.7

Before running:

conda activate tofu
export PYTHONPATH=$PYTHONPATH:$(pwd)

Installation

pip install imageio
pip install pyyaml
pip install scikit-image
conda install -c menpo opencv

Then install PyTorch that fits your system.
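For example, with pip (the exact command depends on your CUDA version; use the selector at https://pytorch.org to get the right one for your system):

pip install torch torchvision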

You then need to manually install the MPI-IS mesh package (https://github.com/MPI-IS/mesh).
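A typical manual install looks roughly like this (check the mesh package's own README for current instructions; the boost include path below is an assumption and is system-dependent):

git clone https://github.com/MPI-IS/mesh.git
cd mesh
BOOST_INCLUDE_DIRS=/usr/include/boost make all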

Models and Data

Please request the dataset and the trained model by sending an email to Kathleen Haase (haase@ict.usc.edu) and Christina Trejo.

Download the trained model files tofu_models.zip and the LightStage demo data LightStageOpenTest.zip.

Unzip both files to ./data.
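After unzipping, ./data is expected to contain one folder per archive (folder names assumed to match the zip files):

data/
  tofu_models/          # trained model files
  LightStageOpenTest/   # LightStage demo data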

Demo

Note: The current code runs the global and local stages separately (and sequentially).

Test on LightStage demo data:

Run the global stage:

$ python tester/test_sparse_point.py -op configs/test_ls_sparse.json

Then run the local stage:

$ python tester/test_dense_point_known_sparse_input.py -op configs/test_ls_dense_known_sparse_input.json
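To run both stages back-to-back, the two commands above can be chained:

$ python tester/test_sparse_point.py -op configs/test_ls_sparse.json && \
  python tester/test_dense_point_known_sparse_input.py -op configs/test_ls_dense_known_sparse_input.json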

On CoMA data:

***TBA***

Citation

If you use code or data from this repository, please consider citing:

@inproceedings{li2021tofu,
  title={Topologically Consistent Multi-View Face Inference Using Volumetric Sampling},
  author={Li, Tianye and Liu, Shichen and Bolkart, Timo and Liu, Jiayi and Li, Hao and Zhao, Yajie},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={3824--3834},
  year={2021}
}

License

This software is Copyright © 2022 The University of Southern California. All Rights Reserved. Permission to use, copy, modify, and distribute this software and its documentation for educational, research and non-profit purposes, without fee, and without a written agreement is hereby granted, provided that the above copyright notice, this paragraph and the following three paragraphs appear in all copies.

Permission to make commercial use of this software may be obtained by contacting:

USC Stevens Center for Innovation
University of Southern California
1150 S. Olive Street, Suite 2300
Los Angeles, CA 90015, USA

Email: accounting@stevens.usc.edu

CC to:

Kathleen Haase (haase@ict.usc.edu)
Tianye Li (tianyeli@protonmail.com)
ICT Vision & Graphics Lab
University of Southern California

This software program and documentation are copyrighted by The University of Southern California. The software program and documentation are supplied "as is", without any accompanying services from USC. USC does not warrant that the operation of the program will be uninterrupted or error-free. The end-user understands that the program was developed for research purposes and is advised not to rely exclusively on the program for any reason.

IN NO EVENT SHALL THE UNIVERSITY OF SOUTHERN CALIFORNIA BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS DOCUMENTATION, EVEN IF THE UNIVERSITY OF SOUTHERN CALIFORNIA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. THE UNIVERSITY OF SOUTHERN CALIFORNIA SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS IS" BASIS, AND THE UNIVERSITY OF SOUTHERN CALIFORNIA HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
