Time-Offset Conversations on a Life-Sized Automultiscopic Projector Array (bibtex)
by Jones, Andrew, Nagano, Koki, Busch, Jay, Yu, Xueming, Peng, Hsuan-Yueh, Barreto, Joseph, Alexander, Oleg, Bolas, Mark, Debevec, Paul and Unger, Jonas
Abstract:
We present a system for creating and displaying interactive life-sized 3D digital humans based on pre-recorded interviews. We use 30 cameras and an extensive list of questions to record a large set of video responses. Users access videos through a natural conversation interface that mimics face-to-face interaction. Recordings of answers, listening and idle behaviors are linked together to create a persistent visual image of the person throughout the interaction. The interview subjects are rendered using flowed light fields and shown life-size on a special rear-projection screen with an array of 216 video projectors. The display allows multiple users to see different 3D perspectives of the subject in proper relation to their viewpoints, without the need for stereo glasses. The display is effective for interactive conversations since it provides 3D cues such as eye gaze and spatial hand gestures.
Reference:
Time-Offset Conversations on a Life-Sized Automultiscopic Projector Array (Jones, Andrew, Nagano, Koki, Busch, Jay, Yu, Xueming, Peng, Hsuan-Yueh, Barreto, Joseph, Alexander, Oleg, Bolas, Mark, Debevec, Paul and Unger, Jonas), In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2016.
Bibtex Entry:
@inproceedings{jones_time-offset_2016,
	address = {Las Vegas, NV},
	title = {Time-{Offset} {Conversations} on a {Life}-{Sized} {Automultiscopic} {Projector} {Array}},
	url = {http://www.cv-foundation.org//openaccess/content_cvpr_2016_workshops/w16/papers/Jones_Time-Offset_Conversations_on_CVPR_2016_paper.pdf},
	abstract = {We present a system for creating and displaying interactive life-sized 3D digital humans based on pre-recorded interviews. We use 30 cameras and an extensive list of questions to record a large set of video responses. Users access videos through a natural conversation interface that mimics face-to-face interaction. Recordings of answers, listening and idle behaviors are linked together to create a persistent visual image of the person throughout the interaction. The interview subjects are rendered using flowed light fields and shown life-size on a special rear-projection screen with an array of 216 video projectors. The display allows multiple users to see different 3D perspectives of the subject in proper relation to their viewpoints, without the need for stereo glasses. The display is effective for interactive conversations since it provides 3D cues such as eye gaze and spatial hand gestures.},
	booktitle = {Proceedings of the {IEEE} {Conference} on {Computer} {Vision} and {Pattern} {Recognition} {Workshops}},
	author = {Jones, Andrew and Nagano, Koki and Busch, Jay and Yu, Xueming and Peng, Hsuan-Yueh and Barreto, Joseph and Alexander, Oleg and Bolas, Mark and Debevec, Paul and Unger, Jonas},
	month = jul,
	year = {2016},
	keywords = {Graphics, MxR, UARC},
	pages = {18--26}
}