Creating a Photoreal Digital Actor: The Digital Emily Project
by Oleg Alexander, Mike Rogers, William Lambeth, Matt Chiang and Paul Debevec
Abstract:
The Digital Emily Project is a collaboration between facial animation company Image Metrics and the Graphics Laboratory at the University of Southern California's Institute for Creative Technologies to achieve one of the world's first photorealistic digital facial performances. The project leverages latest-generation techniques in high-resolution face scanning, character rigging, video-based facial animation, and compositing. An actress was first filmed on a studio set speaking emotive lines of dialog in high definition. The lighting on the set was captured as a high dynamic range light probe image. The actress's face was then three-dimensionally scanned in thirty-three facial expressions showing different emotions and mouth and eye movements, using a high-resolution facial scanning process accurate to the level of skin pores and fine wrinkles. Lighting-independent diffuse and specular reflectance maps were also acquired as part of the scanning process. Correspondences between the 3D expression scans were formed using a semi-automatic process, allowing a blendshape facial animation rig to be constructed whose expressions closely mirrored the shapes observed in the rich set of facial scans; animated eyes and teeth were also added to the model. Skin texture detail showing dynamic wrinkling was converted into multiresolution displacement maps, also driven by the blendshapes. A semi-automatic video-based facial animation system was then used to animate the 3D face rig to match the performance seen in the original video, and this performance was tracked onto the facial motion in the studio video. The final face was illuminated by the captured studio illumination and shaded using the acquired reflectance maps with a skin translucency shading algorithm. Using this process, the project was able to render a synthetic facial performance that was generally accepted as being a real face.
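At its core, the blendshape rig described in the abstract amounts to adding weighted per-expression vertex offsets, derived from the expression scans, to a neutral mesh. Below is a minimal NumPy sketch of that weighted-sum evaluation; the function name, array shapes, and toy data are illustrative assumptions, not the project's actual rig (which additionally drives multiresolution displacement maps for dynamic wrinkles).

	import numpy as np

	def evaluate_blendshapes(neutral, deltas, weights):
	    """Weighted-sum blendshape evaluation (illustrative sketch).

	    neutral : (V, 3) neutral-pose vertex positions.
	    deltas  : (K, V, 3) per-expression offsets, where
	              deltas[k] = expression_scan_k - neutral.
	    weights : (K,) blend weights, typically in [0, 1].
	    Returns the posed (V, 3) vertex positions:
	    pose = neutral + sum_k weights[k] * deltas[k].
	    """
	    neutral = np.asarray(neutral, dtype=float)
	    deltas = np.asarray(deltas, dtype=float)
	    weights = np.asarray(weights, dtype=float)
	    # Contract the expression axis: (K,) x (K, V, 3) -> (V, 3).
	    return neutral + np.tensordot(weights, deltas, axes=1)

	# Toy usage with hypothetical data: 4 vertices, 2 expression targets.
	neutral = np.zeros((4, 3))
	deltas = np.stack([np.full((4, 3), 0.10),    # e.g. a "smile" target
	                   np.full((4, 3), -0.05)])  # e.g. a "blink" target
	pose = evaluate_blendshapes(neutral, deltas, np.array([0.8, 0.3]))

In a production rig such as the one described, the weights would be produced per frame by the video-based facial animation system rather than set by hand.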
Reference:
Creating a Photoreal Digital Actor: The Digital Emily Project (Oleg Alexander, Mike Rogers, William Lambeth, Matt Chiang and Paul Debevec), In IEEE European Conference on Visual Media Production (CVMP), 2009.
Bibtex Entry:
@inproceedings{alexander_creating_2009,
	address = {London, UK},
	title = {Creating a {Photoreal} {Digital} {Actor}: {The} {Digital} {Emily} {Project}},
	url = {http://ict.usc.edu/pubs/CREATING%20A%20PHOTOREAL%20DIGITAL%20ACTOR-%20THE%20DIGITAL%20EMILY%20PROJECT.pdf},
	abstract = {The Digital Emily Project is a collaboration between facial animation company Image Metrics and the Graphics Laboratory at the University of Southern California's Institute for Creative Technologies to achieve one of the world's first photorealistic digital facial performances. The project leverages latest-generation techniques in high-resolution face scanning, character rigging, video-based facial animation, and compositing. An actress was first filmed on a studio set speaking emotive lines of dialog in high definition. The lighting on the set was captured as a high dynamic range light probe image. The actress's face was then three-dimensionally scanned in thirty-three facial expressions showing different emotions and mouth and eye movements, using a high-resolution facial scanning process accurate to the level of skin pores and fine wrinkles. Lighting-independent diffuse and specular reflectance maps were also acquired as part of the scanning process. Correspondences between the 3D expression scans were formed using a semi-automatic process, allowing a blendshape facial animation rig to be constructed whose expressions closely mirrored the shapes observed in the rich set of facial scans; animated eyes and teeth were also added to the model. Skin texture detail showing dynamic wrinkling was converted into multiresolution displacement maps, also driven by the blendshapes. A semi-automatic video-based facial animation system was then used to animate the 3D face rig to match the performance seen in the original video, and this performance was tracked onto the facial motion in the studio video. The final face was illuminated by the captured studio illumination and shaded using the acquired reflectance maps with a skin translucency shading algorithm. Using this process, the project was able to render a synthetic facial performance that was generally accepted as being a real face.},
	booktitle = {{IEEE} {European} {Conference} on {Visual} {Media} {Production} ({CVMP})},
	author = {Alexander, Oleg and Rogers, Mike and Lambeth, William and Chiang, Matt and Debevec, Paul},
	month = nov,
	year = {2009},
	keywords = {Graphics}
}