We run the demo on two GeForce RTX 2080 Ti GPUs; memory usage is roughly 3.4 GB on GPU 1 and 9.7 GB on GPU 2:
```bash
sh scripts/download_model.sh
pip install -r requirements.txt

# if you want to use the input from a webcam:
python RTL/main.py --use_server --ip YOUR_IP_ADDRESS --port 5555 --camera -- netG.ckpt_path ./data/PIFu/net_G netC.ckpt_path ./data/PIFu/net_C

# or if you want to use the input from an image folder:
python RTL/main.py --use_server --ip YOUR_IP_ADDRESS --port 5555 --image_folder YOUR_IMAGE_FOLDER -- netG.ckpt_path ./data/PIFu/net_G netC.ckpt_path ./data/PIFu/net_C

# or if you want to use the input from a video:
python RTL/main.py --use_server --ip YOUR_IP_ADDRESS --port 5555 --videos YOUR_VIDEO_PATH -- netG.ckpt_path ./data/PIFu/net_G netC.ckpt_path ./data/PIFu/net_C
```
If everything goes well, you should see logs like these after a few seconds:

```
loading networkG from ./data/PIFu/net_G ...
loading networkC from ./data/PIFu/net_C ...
initialize data streamer ...
Using cache found in /home/rui/.cache/torch/hub/NVIDIA_DeepLearningExamples_torchhub
Using cache found in /home/rui/.cache/torch/hub/NVIDIA_DeepLearningExamples_torchhub
 * Serving Flask app "main" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: on
 * Running on http://YOUR_IP_ADDRESS:5555/ (Press CTRL+C to quit)
```
Then you can access the server by opening http://YOUR_IP_ADDRESS:5555/ in a web browser from any device (desktop/iPad/iPhone).
You should be able to see the MonoPort VR Demo page on that device, and at the same time a window should pop up on your desktop showing the reconstructed normal and texture images.
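If the page does not load, you can first confirm that the Flask server is reachable from the other device. Below is a minimal sketch in Python; the `requests` package and the `YOUR_IP_ADDRESS` placeholder are assumptions matching the commands above, not part of the MonoPort codebase:

```python
# Quick reachability check for the demo server (illustrative, not part of MonoPort).
import requests

# Replace YOUR_IP_ADDRESS with the address passed to --ip above.
resp = requests.get("http://YOUR_IP_ADDRESS:5555/", timeout=5)
print(resp.status_code)  # 200 means the demo page is being served
```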
MonoPort is based on Monocular Real-Time Volumetric Performance Capture (ECCV'20), authored by Ruilong Li* (@liruilong940607), Yuliang Xiu* (@yuliangxiu), Shunsuke Saito (@shunsukesaito), Zeng Huang (@ImaginationZ), and Kyle Olszewski (@kyleolsz), with Hao Li as the corresponding author.
PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization (ICCV 2019)
Shunsuke Saito*, Zeng Huang*, Ryota Natsume*, Shigeo Morishima, Angjoo Kanazawa, Hao Li
The original work on pixel-aligned implicit functions for geometry and texture reconstruction, unifying single-view and multi-view methods (see the sketch after this list).
PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization (CVPR 2020)
Shunsuke Saito, Tomas Simon, Jason Saragih, Hanbyul Joo
They further improve the reconstruction quality by leveraging a multi-level approach!
ARCH: Animatable Reconstruction of Clothed Humans (CVPR 2020)
Zeng Huang, Yuanlu Xu, Christoph Lassner, Hao Li, Tony Tung
Learning PIFu in canonical space for animatable avatar generation!
Robust 3D Self-portraits in Seconds (CVPR 2020)
Zhe Li, Tao Yu, Chuanyu Pan, Zerong Zheng, Yebin Liu
They extend PIFu to RGB-D input and introduce "PIFusion", which uses PIFu reconstruction for non-rigid fusion.
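All four works above share the same pixel-aligned query at their core: sample 2D image features at the projection of a 3D point, and let an MLP predict occupancy from the sampled feature plus the point's depth. The sketch below illustrates that idea only; the module and tensor names are invented here and do not match the actual MonoPort/PIFu code:

```python
# Illustrative sketch of a pixel-aligned implicit query (not the real API).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelAlignedQuery(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        # MLP maps (per-pixel feature, point depth) -> occupancy in [0, 1].
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 1, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, feats, points):
        # feats:  (B, C, H, W) features from a 2D image encoder.
        # points: (B, N, 3) query points with x/y already in [-1, 1] image space.
        xy = points[:, :, :2].unsqueeze(2)                      # (B, N, 1, 2)
        sampled = F.grid_sample(feats, xy, align_corners=True)  # (B, C, N, 1)
        sampled = sampled.squeeze(-1).transpose(1, 2)           # (B, N, C)
        z = points[:, :, 2:3]                                   # (B, N, 1) depth
        return self.mlp(torch.cat([sampled, z], dim=-1))        # (B, N, 1)

# Smoke test with random inputs.
net = PixelAlignedQuery(feat_dim=256)
occ = net(torch.randn(1, 256, 128, 128), torch.rand(1, 4096, 3) * 2 - 1)
```

Texture reconstruction in PIFu follows the same pattern, with the MLP predicting an RGB color instead of occupancy.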
Dr. Zeng Huang defended his PhD virtually using our system. (Media in Chinese)
```bibtex
@inproceedings{li2020monocular,
  title={Monocular Real-Time Volumetric Performance Capture},
  author={Li, Ruilong and Xiu, Yuliang and Saito, Shunsuke and Huang, Zeng and Olszewski, Kyle and Li, Hao},
  booktitle={European Conference on Computer Vision},
  pages={49--67},
  year={2020},
  organization={Springer}
}

@inproceedings{10.1145/3407662.3407756,
  author = {Li, Ruilong and Olszewski, Kyle and Xiu, Yuliang and Saito, Shunsuke and Huang, Zeng and Li, Hao},
  title = {Volumetric Human Teleportation},
  year = {2020},
  isbn = {9781450380607},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3407662.3407756},
  doi = {10.1145/3407662.3407756},
  booktitle = {ACM SIGGRAPH 2020 Real-Time Live!},
  articleno = {9},
  numpages = {1},
  location = {Virtual Event, USA},
  series = {SIGGRAPH 2020}
}
```
This software is Copyright © 2021 Ruilong Li, The University of Southern California. All Rights Reserved.
Permission to use, copy, modify, and distribute this software and its documentation for educational, research
and non-profit purposes, without fee, and without a written agreement is hereby granted, provided that the above
copyright notice, this paragraph and the following three paragraphs appear in all copies.
Permission to make commercial use of this software may be obtained by contacting:
USC Stevens Center for Innovation
University of Southern California
1150 S. Olive Street, Suite 2300
Los Angeles, CA 90015, USA
This software program and documentation are copyrighted by The University of Southern California. The software program
and documentation are supplied "as is", without any accompanying services from USC. USC does not warrant that the
operation of the program will be uninterrupted or error-free. The end-user understands that the program was developed
for research purposes and is advised not to rely exclusively on the program for any reason.
IN NO EVENT SHALL THE UNIVERSITY OF SOUTHERN CALIFORNIA BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL,
OR CONSEQUENTIAL DAMAGES, INCLUDING LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS DOCUMENTATION, EVEN IF
THE UNIVERSITY OF SOUTHERN CALIFORNIA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. THE UNIVERSITY OF SOUTHERN
CALIFORNIA SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS IS" BASIS, AND THE UNIVERSITY OF
SOUTHERN CALIFORNIA HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.