One of the "holy grails" of computer graphics is to create human faces which both look
and act realistically. The domains in which these photorealistic faces are of use are
quite numerous. The digital effects industry often requires the use of a virtual replica
of an actor. This allows the actor to appear as if they are in a situation too
dangerous to subject a real human to. In other words, the industry requires digital
stuntmen. In human/computer interaction, researchers strive to design interfaces that
allow interaction with computers similar to how an individual communicates with human
beings. In teleconferencing, the ability to render realistic faces is an essential
component of model-based coding.
The goal of realistic human facial representation has remained elusive for several reasons.
First, the mechanisms that underlie facial appearance and motion are extremely complicated.
The appearance of the face is determined by how light bounces between multiple layers of skin,
resulting in subsurface scattering. Furthermore, the face is deformed by the combined actions
of ten different muscle groups. Moreover, much of the difficulty in creating realistic
digital faces comes from our well-honed ability to observe and interpret the faces and expressions
of people around us. This ability makes us very proficient at noticing the slightest deviation
from reality when observing digital images of the human face.
Traditional attempts at creating realistic faces have either involved a great deal of effort
by talented artists or detailed mathematical simulations. The artistic approach is inherently
limited by the amount of effort it takes to recreate, by hand, the realistic three-dimensional
appearance of the human face. Although generations of talented artists have created very believable
portraits and sculptures of human faces, creating a realistic facial model that reflects light
and moves in a believable way remains a challenging task. The mathematical approach has its
limitations too. Although there are realistic mathematical models of the surface of the skin
and of the facial muscles, these simulations are unable to render the idiosyncrasies that are
part of the identity of a person. The mathematical simulations provide generic motions and
surfaces, but do not provide mechanisms by which to model variations across individuals.
These approaches have yet to produce results that could trick us into thinking we are looking
at a real person's face and not a computer-generated image.
We are working on a new approach to put this "holy grail" of computer animation within reach:
recording the appearance and motion of a real person to create a digital replica. While a
photograph or a video capture of the appearance of a subject may be realistic, it does not
give the freedom to change the viewpoint or the light falling on the person's face. Such
freedom would be expected of a digital actor existing in a synthetic environment. We have begun to
address this issue by recovering the three-dimensional surface of a person's face from a set
of images, its motion during a performance, and its appearance under different lights.
Our goal is to develop technologies that permit the capture of a performer and allow us to
digitally reanimate him or her in an arbitrary scenario.