GAMES Webinar 2019 – Session 122 (Frontiers in 3D Vision) | Tao Yu (Tsinghua University), Zhengqi Li (Cornell Tech, Cornell University)
Speaker 1: Tao Yu, Tsinghua University
Time: December 12, 2019, 8:00–8:45 PM (Beijing Time)
Title: Single-View and Real-time Dynamic 3D Reconstruction for Human Body
Abstract:
Dynamic 3D reconstruction, which aims to digitize the holographic information of dynamic scenes from different types of input (e.g., visual signals), has been an active research topic for decades, with applications in measurement, telepresence, medicine, culture, the military, and education. Compared with multi-view dynamic 3D reconstruction, single-view and real-time dynamic 3D reconstruction is far more convenient and efficient, which enables a much wider range of applications.
Current single-view, real-time methods for dynamic 3D human body reconstruction face several main challenges: they cannot handle limb motions, cannot reconstruct body shapes, cannot recover from severe self-occlusions, and cannot produce dynamic cloth details. In this talk, I will introduce single-view and real-time dynamic 3D human body reconstruction methods that address these challenges, based on skeleton binding, parametric double-layer reconstruction, semantic bidirectional motion blending, and physics-driven multi-layer cloth generation. These methods leverage the multi-layer semantic prior of 3D human body representations and significantly improve the accuracy, robustness, and practicality of single-view, real-time dynamic 3D human body reconstruction.
Speaker bio:
Tao Yu received the B.S. degree in Measurement & Control Technology and Instrumentation from Hefei University of Technology, China, in 2012. He received his Ph.D. degree in Precision Instrument and Machinery from Beihang University, China, in 2019. He then joined Tsinghua University as a postdoc. His research lies at the intersection of computer vision, computer graphics, and machine learning; more specifically, he focuses on dynamic 3D human reconstruction for immersive communication and telepresence in virtual worlds. He has published 7 papers in IEEE TPAMI, ACM TOG, CVPR, ICCV, ECCV, etc.
Speaker homepage: https://ytrock.com/
Speaker 2: Zhengqi Li, Cornell Tech, Cornell University
Time: December 12, 2019, 8:45–9:30 PM (Beijing Time)
Title: Learning Inverse Rendering in the Wild
Abstract:
Inverse rendering can be thought of as the inverse process of a graphics rendering engine: we seek to take images and recover the intrinsic properties of a scene, including geometry, illumination, and materials. Inverse rendering plays an important role in many applications, including Virtual Reality (VR) and Augmented Reality (AR), and a number of companies are building products in these areas that require such vision techniques.
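As background for readers unfamiliar with the formulation (this sketch is the conventional textbook view, not taken from the talk itself): rendering maps scene properties to an image, and inverse rendering inverts that map; in the simplest Lambertian case this reduces to the classic intrinsic image decomposition.

```latex
% Forward rendering: image I as a function of
% geometry G, illumination L, and materials M.
I = f(G, L, M)

% Inverse rendering recovers (G, L, M) from I alone,
% which is ill-posed: many scenes explain the same image.

% Lambertian special case (intrinsic image decomposition):
% the image factors pixel-wise into albedo A and shading S.
I(p) = A(p) \cdot S(p)
```

The pixel-wise factorization is underconstrained (one observation, two unknowns per pixel), which is why priors or learned models are needed to resolve it.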
Although we have seen significant recent progress in these core computer vision problems, they still have fundamental limitations that must be addressed to make such applications fully work in practice. Towards this goal, deep learning methods are becoming a key tool for inverse rendering problems, especially for ill-posed problems such as predicting depth from a single image. However, such deep learning methods require large amounts of labeled data, and for inverse rendering problems such data is very difficult to collect, compared with other vision tasks such as object recognition.
In this talk, I will present my work to make key advances on this problem, specifically depth estimation and intrinsic image decomposition, by (1) using a large number of unlabeled Internet photos and videos as a ready source of training data, (2) leveraging classic methods in multi-view geometry as well as physics-based graphics/vision, and (3) combining these with the power of deep learning in principled ways.
This talk is based on joint work with Noah Snavely, Tali Dekel, Forrester Cole, Richard Tucker, Ce Liu, and William T. Freeman.
Speaker bio:
Zhengqi Li is a CS Ph.D. Candidate at Cornell Tech, Cornell University, where he is advised by Prof. Noah Snavely. His research interests span 3D computer vision, computational photography, and inverse graphics. In particular, he is interested in using large corpora of unlabeled images and videos in the wild to solve inverse graphics and 3D computer vision problems. He is a recipient of the CVPR Best Paper Honorable Mention Award in 2019 and the Adobe Research Fellowship in 2020.
Speaker homepage: https://www.cs.cornell.edu/~zl548/
Host bio:
Kangkan Wang, Ph.D., is an associate professor in the School of Computer Science at Nanjing University of Science and Technology. He received his B.S. from Northwestern Polytechnical University in 2009, then pursued a direct doctorate at the State Key Lab of CAD&CG, Zhejiang University, receiving his Ph.D. in Computer Application Technology from Zhejiang University in 2015. From 2015 to 2017 he was an assistant researcher at the Institute of Computing Technology, Chinese Academy of Sciences, and in 2017 he joined the Key Laboratory of Intelligent Perception and Systems for High-Dimensional Information of the Ministry of Education at Nanjing University of Science and Technology. His research interests include computer vision, computer graphics, 3D vision, 3D reconstruction, and dynamic object reconstruction and motion tracking.
The "Tutorials" section of the GAMES homepage includes information on "How to watch GAMES Webinar live streams?" and "How to join the GAMES WeChat group?";
the "Resources" section of the GAMES homepage contains videos and slides from past live lectures.
Live stream link: http://webinar.games-cn.org