by Oleg Alexander, Jay Busch, Paul Graham, Borom Tunwattanapong, Andrew Jones, Koki Nagano, Ryosuke Ichikari, Paul Debevec, and Graham Fyffe
Abstract:
In this collaboration between Activision and USC ICT, we set out to create a real-time, photoreal digital human character which could be seen from any viewpoint, under any lighting, and could perform realistically from video performance capture even in a tight closeup. In addition, we needed this to run in a game-ready production pipeline. To achieve this, we scanned the actor in thirty high-resolution expressions using the USC ICT's new Light Stage X system [Ghosh et al., SIGGRAPH Asia 2011] and chose eight expressions for the real-time performance rendering. To record the performance, we shot multi-view 30 fps video of the actor performing improvised lines using the same multi-camera rig. We used a new tool called Vuvuzela to interactively and precisely bring every expression scan into (u,v) correspondence with the neutral expression, which was retopologized to an artist mesh. Our new offline animation solver works by creating a performance graph representing dense GPU optical flow between the video frames and the eight expressions. This graph is pruned by analyzing the correlation between the video frames and the expression scans over twelve facial regions. The algorithm then computes dense optical flow and 3D triangulation, yielding per-frame spatially varying blendshape weights that approximate the performance.
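The final step of the solver described above, recovering non-negative blendshape weights per facial region from flow-derived vertex targets, can be sketched as a small constrained least-squares problem. The Python snippet below is our illustration, not the authors' code: the solve_region_weights helper, the region size, and the toy data are all hypothetical. It solves one facial region with non-negative least squares; solving each of the twelve regions independently is what makes the recovered weights spatially varying.

# Minimal sketch of a per-region blendshape weight solve (assumed setup,
# not the Digital Ira implementation).
import numpy as np
from scipy.optimize import nnls

def solve_region_weights(neutral, deltas, target):
    """Solve min_w || neutral + deltas @ w - target ||^2  subject to  w >= 0.

    neutral: (3V,)   stacked vertex positions of the neutral scan (one region)
    deltas:  (3V, E) per-expression offsets from neutral (E = 8 expression scans)
    target:  (3V,)   per-frame vertex positions for the region, as would be
                     recovered from dense optical flow and 3D triangulation
    """
    w, _residual = nnls(deltas, target - neutral)
    return w

# Toy data: one region with 100 vertices and 8 expression scans.
rng = np.random.default_rng(0)
V, E = 100, 8
neutral = rng.standard_normal(3 * V)
deltas = rng.standard_normal((3 * V, E))
true_w = np.array([0.6, 0.0, 0.3, 0.0, 0.0, 0.1, 0.0, 0.0])
target = neutral + deltas @ true_w        # a synthetic "triangulated" frame

w = solve_region_weights(neutral, deltas, target)
print(np.round(w, 3))                     # recovers approximately true_w

In this sketch the weights for a frame come out of a single dense solve; the paper's pipeline additionally prunes candidate expressions per region using the correlation analysis described above before any weights are computed.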
Reference:
Digital Ira: High-Resolution Facial Performance Playback (Oleg Alexander, Jay Busch, Paul Graham, Borom Tunwattanapong, Andrew Jones, Koki Nagano, Ryosuke Ichikari, Paul Debevec, and Graham Fyffe), In SIGGRAPH 2013 Real-Time Live! The 40th International Conference and Exhibition on Computer Graphics and Interactive Techniques, 2013.
Bibtex Entry:
@inproceedings{alexander_digital_2013-1,
address = {Anaheim, CA},
title = {Digital {Ira}: {High}-{Resolution} {Facial} {Performance} {Playback}},
url = {http://gl.ict.usc.edu/Research/DigitalIra/},
abstract = {In this collaboration between Activision and USC ICT, we set out to create a real-time, photoreal digital human character which could be seen from any viewpoint, under any lighting, and could perform realistically from video performance capture even in a tight closeup. In addition, we needed this to run in a game-ready production pipeline. To achieve this, we scanned the actor in thirty high-resolution expressions using the USC ICT's new Light Stage X system [Ghosh et al., SIGGRAPH Asia 2011] and chose eight expressions for the real-time performance rendering. To record the performance, we shot multi-view 30 fps video of the actor performing improvised lines using the same multi-camera rig. We used a new tool called Vuvuzela to interactively and precisely bring every expression scan into (u,v) correspondence with the neutral expression, which was retopologized to an artist mesh. Our new offline animation solver works by creating a performance graph representing dense GPU optical flow between the video frames and the eight expressions. This graph is pruned by analyzing the correlation between the video frames and the expression scans over twelve facial regions. The algorithm then computes dense optical flow and 3D triangulation, yielding per-frame spatially varying blendshape weights that approximate the performance.},
booktitle = {{SIGGRAPH} 2013 {Real}-{Time} {Live}! {The} 40th {International} {Conference} and {Exhibition} on {Computer} {Graphics} and {Interactive} {Techniques}},
author = {Alexander, Oleg and Busch, Jay and Graham, Paul and Tunwattanapong, Borom and Jones, Andrew and Nagano, Koki and Ichikari, Ryosuke and Debevec, Paul and Fyffe, Graham},
month = jul,
year = {2013},
keywords = {Graphics, UARC}
}