GAMES Webinar 2019 – Session 122 (Frontiers in 3D Vision) | Tao Yu (Tsinghua University), Zhengqi Li (Cornell Tech, Cornell University)



Speaker 1: Tao Yu, Tsinghua University

Time: December 12, 2019, 8:00–8:45 PM (Beijing Time)

Title: Single-View and Real-time Dynamic 3D Reconstruction for Human Body


Dynamic 3D reconstruction, which aims at digitizing the holographic information of dynamic scenes from different types of input (e.g., visual signals), has been a hot research topic for decades, with applications in measurement, teleportation, medicine, culture, the military, and education. Compared with multi-view dynamic 3D reconstruction, single-view and real-time dynamic 3D reconstruction is much more convenient and efficient, which enables many more applications.

Current single-view, real-time methods for dynamic 3D human reconstruction face several main challenges: they cannot handle limb motions, cannot reconstruct body shapes, cannot recover from severe self-occlusions, and cannot produce dynamic cloth details. In this talk, I will introduce single-view, real-time dynamic 3D human reconstruction methods based on skeleton binding, parametric double-layer reconstruction, semantic bidirectional motion blending, and physics-driven multi-layer cloth generation. These methods leverage the multi-layer semantic prior of 3D human body representations and significantly improve the accuracy, robustness, and practicality of current single-view, real-time dynamic 3D human reconstruction.
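As background on the skeleton binding mentioned above, the standard formulation is linear blend skinning, in which each surface vertex is deformed by a weighted blend of per-bone rigid transforms. The sketch below is illustrative only (it is not the speaker's exact method, and all names in it are made up for this example):

```python
import numpy as np

def linear_blend_skinning(verts, weights, transforms):
    """Deform rest-pose vertices by blending per-bone rigid transforms.

    verts:      (V, 3) rest-pose vertex positions
    weights:    (V, B) skinning weights; each row sums to 1
    transforms: (B, 4, 4) homogeneous transform of each bone
    """
    homo = np.concatenate([verts, np.ones((len(verts), 1))], axis=1)  # (V, 4)
    blended = np.einsum("vb,bij->vij", weights, transforms)           # (V, 4, 4)
    posed = np.einsum("vij,vj->vi", blended, homo)                    # (V, 4)
    return posed[:, :3]

# Toy rig: two vertices, two bones.
verts = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0]])
weights = np.array([[1.0, 0.0],   # vertex 0 follows bone 0 only
                    [0.0, 1.0]])  # vertex 1 follows bone 1 only
bone0 = np.eye(4)                 # bone 0 stays put
bone1 = np.eye(4)
bone1[0, 3] = 0.5                 # bone 1 translates +0.5 along x
posed = linear_blend_skinning(verts, weights, np.stack([bone0, bone1]))
# posed[1] moves to [1.5, 0, 0]; posed[0] is unchanged.
```

In a real-time reconstruction pipeline, binding the reconstructed surface to a skeleton like this constrains the solution space of the non-rigid tracking, which is what makes fast limb motion tractable.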


Tao Yu received the B.S. degree in Measurement & Control Technology and Instrumentation from Hefei University of Technology, China, in 2012. He received his Ph.D. degree in Precision Instrument and Machinery from Beihang University, China, in 2019. He then joined Tsinghua University as a postdoc. His research lies at the intersection of computer vision, computer graphics, and machine learning; more specifically, he focuses on dynamic 3D human reconstruction for immersive communication and telepresence in virtual worlds. He has published 7 papers in IEEE TPAMI, ACM TOG, CVPR, ICCV, ECCV, etc.


Speaker 2: Zhengqi Li, Cornell Tech, Cornell University

Time: December 12, 2019, 8:45–9:30 PM (Beijing Time)
Title: Learning Inverse Rendering in the Wild


Inverse rendering can be thought of as the inverse process of a graphics rendering engine: we seek to take images and recover the intrinsic properties of a scene, including geometry, illumination, and materials. Inverse rendering plays an important role in many applications, including Virtual Reality (VR) and Augmented Reality (AR), and a number of companies are building products in these areas that require such vision techniques.
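As a toy illustration of why inverse rendering is ill-posed, consider the Lambertian intrinsic image model, in which an image factors per pixel into albedo times shading. The factorization is not unique, so priors or learned constraints are needed to pick one. This NumPy sketch is illustrative background, not content from the talk:

```python
import numpy as np

# Lambertian intrinsic image model: image = albedo * shading (per pixel).
rng = np.random.default_rng(0)
albedo = rng.uniform(0.2, 0.9, size=(4, 4))   # material reflectance
shading = rng.uniform(0.1, 1.0, size=(4, 4))  # illumination term
image = albedo * shading

# The inverse problem is ill-posed: rescaling albedo and shading in
# opposite directions reproduces exactly the same observed image.
k = 2.0
image_alt = (albedo * k) * (shading / k)
assert np.allclose(image, image_alt)
```

Depth from a single image has the same character: many scenes project to the same pixels, which is why the talk turns to learned priors trained on large photo collections.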
Although we have seen significant recent progress in these core computer vision problems, they still have fundamental limitations that must be addressed to make such applications fully work in practice. Towards this goal, deep learning methods are becoming a key tool for inverse rendering problems, especially for ill-posed problems such as predicting depth from a single image. However, such deep learning methods require large amounts of labeled data, and for inverse rendering problems such data is very difficult to collect, compared with other vision tasks such as object recognition.
In this talk, I will present my work making key advances on this problem, specifically depth estimation and intrinsic image decomposition, by (1) using large numbers of unlabeled Internet photos and videos as a ready source of training data, (2) leveraging classic methods in multi-view geometry as well as physics-based graphics/vision, and (3) combining these with the power of deep learning in principled ways.
This talk is based on joint work with Noah Snavely, Tali Dekel, Forrester Cole, Richard Tucker, Ce Liu, and William T. Freeman.


Zhengqi Li is a CS Ph.D. candidate at Cornell Tech, Cornell University, where he is advised by Prof. Noah Snavely. His research interests span 3D computer vision, computational photography, and inverse graphics. In particular, he is interested in using large corpora of unlabeled images and videos in the wild to solve inverse graphics and 3D computer vision problems. He is a recipient of the CVPR Best Paper Honorable Mention Award in 2019 and the Adobe Research Fellowship in 2020.




The "Tutorials" section of the GAMES homepage has information on "How to watch the GAMES Webinar live stream?" and "How to join the GAMES WeChat group?".




