We propose a variant of polarized gradient illumination facial scanning that uses monochrome instead of color cameras to achieve more efficient and higher-resolution results. In typical polarized gradient facial scanning, sub-millimeter geometric detail is acquired by photographing the subject in eight or more polarized spherical gradient lighting conditions produced by white LEDs, and RGB cameras are used to acquire color texture maps of the subject's appearance. In our approach, we replace the color cameras and white LEDs with monochrome cameras and multispectral colored LEDs, leveraging the fact that color images can be formed from successive monochrome images recorded under different illumination colors. While a naive extension of the scanning process to this setup would multiply the number of images by the number of color channels, we show that the surface detail maps can be estimated directly from monochrome imagery, so that only n additional photographs are required, where n is the number of added spectral channels. We also introduce a new multispectral optical flow approach to align images across spectral channels in the presence of slight subject motion. Lastly, for the case where a capture system's white light sources are polarized and its multispectral colored LEDs are not, we introduce the technique of multispectral polarization promotion, in which we estimate the cross- and parallel-polarized monochrome images for each spectral channel from the corresponding images recorded under a full sphere of even, unpolarized illumination. We demonstrate that this technique allows us to efficiently acquire a full-color (or even multispectral) facial scan using monochrome cameras, unpolarized multispectral colored LEDs, and polarized white LEDs.
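To illustrate the core idea that a color image can be assembled from successive monochrome captures under differently colored illumination, the sketch below stacks per-channel monochrome frames and maps them to RGB with a spectral mixing matrix. The function name, the 3×N mixing matrix, and the assumption that the frames are already aligned (e.g., by the multispectral optical flow step described above) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def assemble_color_image(mono_frames, mixing_matrix):
    """Form an RGB image from monochrome frames captured under
    different LED illumination spectra.

    mono_frames  : list of HxW float arrays, one per spectral channel,
                   assumed already aligned across captures.
    mixing_matrix: 3xN matrix mapping the N spectral channels to RGB;
                   in practice this would come from a color calibration
                   of the LED spectra and camera response (assumption).
    """
    stack = np.stack(mono_frames, axis=-1)   # H x W x N
    rgb = stack @ mixing_matrix.T            # H x W x 3
    return np.clip(rgb, 0.0, None)

# Example: three hypothetical captures under red, green, and blue LEDs,
# combined with an identity mixing matrix (no cross-channel calibration).
h, w = 4, 4
frames = [np.random.rand(h, w) for _ in range(3)]
rgb = assemble_color_image(frames, np.eye(3))
print(rgb.shape)  # (4, 4, 3)
```

With more than three spectral channels, the same mapping yields a multispectral reconstruction rather than plain RGB, which is consistent with the full-color or multispectral scans described above.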
Comparing a flash-lit photograph of a subject (left) with a rendering under a similar lighting condition (middle) and a textureless rendering showing the geometry (right). Skin microgeometry detail was added to the facial scan produced using our technique. The color scan was completed using only monochrome photography.
Comparing specular normals obtained with a color camera (top) with those obtained with a monochrome camera (bottom). Both cameras use the same underlying sensor; the only difference is the Bayer-pattern color filter array on the color camera. The color images were demosaiced with the Adaptive Homogeneity-Directed Demosaicing algorithm before the specular normals were computed. The monochrome scan pipeline produces normals of comparatively higher resolution.
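For reference, the sketch below shows one common way to estimate a per-pixel normal map from spherical gradient illumination images on a single channel (whether monochrome or demosaiced), using the ratio relation n_i ∝ 2·G_i/C − 1 between a gradient image G_i and the full-on image C. The function name and this particular formulation are assumptions for illustration and may differ from the pipeline used to generate the figure.

```python
import numpy as np

def gradient_illumination_normals(gx, gy, gz, full, eps=1e-6):
    """Estimate per-pixel normals from spherical gradient illumination
    images (gx, gy, gz) and a full-on (constant) illumination image,
    all HxW float arrays from a single channel."""
    c = np.maximum(full, eps)                     # avoid division by zero
    n = np.stack([2.0 * gx / c - 1.0,
                  2.0 * gy / c - 1.0,
                  2.0 * gz / c - 1.0], axis=-1)   # H x W x 3
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.maximum(norm, eps)              # unit-length normals
```

Applied to Bayer-pattern data, this estimate is limited by the demosaicing step, whereas the monochrome images provide a full-resolution sample at every pixel, which is the resolution difference the figure illustrates.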