GAMES Webinar 2020 – Session 142 (Computer Vision) | Lan Xu (Hong Kong University of Science and Technology), Yinda Zhang (Google), Yifan Wang (ETH Zurich)
[GAMES Webinar 2020, Session 142] (Computer Vision)
Speaker 1: Lan Xu (Hong Kong University of Science and Technology)
Time: Thursday, June 11, 2020, 8:00–8:30 pm (Beijing time)
Title: High-Speed Human Motion Capture using an Event Camera
Abstract:
A high frame rate is a critical requirement for capturing fast human motions. In this setting, existing markerless image-based methods are constrained by lighting requirements, high data bandwidth, and the consequent high computational overhead. In this talk, we introduce our recent project EventCap, the first approach for 3D capture of high-speed human motion using a single event camera. EventCap combines model-based optimization with CNN-based human pose detection to capture high-frequency motion details and to reduce drift in tracking. As a result, we can capture fast motions at millisecond resolution with significantly higher data efficiency than high-frame-rate video. Experiments on our new event-based fast human motion dataset demonstrate the effectiveness and accuracy of our method, as well as its robustness under challenging lighting conditions.
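The asynchronous event stream at the heart of this abstract can be made concrete with a toy sketch. This is not the EventCap pipeline; the event layout `(timestamp, x, y, polarity)` and the 1 ms window size are illustrative assumptions. It only shows how sparse events can be binned into millisecond-scale frames:

```python
import numpy as np

# Hypothetical event stream: each event is (timestamp_us, x, y, polarity).
# An event camera emits such events asynchronously instead of full frames.
events = np.array([
    (100, 3, 4, 1),
    (450, 3, 5, -1),
    (900, 7, 2, 1),
    (1200, 7, 2, 1),
], dtype=[("t", "i8"), ("x", "i4"), ("y", "i4"), ("p", "i4")])

def accumulate(events, width, height, window_us=1000):
    """Bin events into polarity-sum frames, one per `window_us` window."""
    n_frames = int(events["t"].max() // window_us + 1)
    frames = np.zeros((n_frames, height, width), dtype=np.int32)
    idx = events["t"] // window_us
    # Scatter-add each event's polarity into its (frame, row, col) cell.
    np.add.at(frames, (idx, events["y"], events["x"]), events["p"])
    return frames

frames = accumulate(events, width=8, height=8)
print(frames.shape)     # (2, 8, 8): two 1 ms windows
print(frames[0, 4, 3])  # 1: positive event at (x=3, y=4) in the first window
```

The data efficiency claim is visible even here: four events describe two frames' worth of motion, while everything static contributes nothing.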
Speaker bio:
I am a final-year PhD candidate in the Robotics Institute, ECE Department, Hong Kong University of Science and Technology, supervised by Prof. Lu Fang. I have also worked closely with Prof. Yebin Liu at Tsinghua University since 2016. My research focuses on computer vision and computer graphics. My goal is to capture, perceive, and understand human-centric dynamic and static scenes in the complex real world, and eventually to enable realistic and immersive telepresence in virtual and augmented reality.
Homepage: https://www.xu-lan.com/
Speaker 2: Yinda Zhang (Google)
Time: Thursday, June 11, 2020, 8:30–9:00 pm (Beijing time)
Title: Virtual Content Generation via Deep Learning: 3D Shape Generation and Neural Rendering
Abstract:
The era of augmented reality is coming, driven by recent rapid developments in hardware and form factors. As more AR devices become accessible, the capability to generate virtual content on device will be in extremely high demand, which requires coordination between the vision and graphics systems. On one hand, the vision system needs to capture the real world into the virtual domain and manipulate it as desired; on the other hand, the graphics system must depict the virtual content in a realistic manner. In this talk, I will briefly introduce our recent work on 3D shape generation and neural rendering, two important components of the vision and graphics systems respectively. We propose novel deep-learning-based solutions that tackle new tasks, or that relax the constraints and improve the quality of existing ones. We hope these can facilitate and inspire future work empowering AR platforms to better understand and interact with real scenes.
Speaker bio:
I am a Research Scientist at Google. My research interests lie at the intersection of computer vision, computer graphics, and machine learning. Recently, I have focused on empowering 3D vision and perception via machine learning, including dense depth estimation, 3D shape analysis, and 3D scene understanding. I received my Ph.D. in Computer Science from Princeton University, advised by Professor Thomas Funkhouser. Before that, I received a Bachelor's degree from the Department of Automation at Tsinghua University and a Master's degree from the ECE Department at the National University of Singapore, co-supervised by Prof. Ping Tan and Prof. Shuicheng Yan.
Homepage: www.zhangyinda.com
Speaker 3: Yifan Wang (ETH Zurich)
Time: Thursday, June 11, 2020, 9:00–9:30 pm (Beijing time)
Title: Detail Preserving Shape Deformation
Abstract:
Geometric details in 3D shapes are a defining factor in many industries such as AR/VR, VFX, and design. However, creating high-fidelity shapes with fine-grained geometric details is a laborious process requiring skillful artistry. Recently, many works have proposed solutions for shape generation, but they all fall short in reconstructing high-frequency geometric details. In this talk, I'll introduce our CVPR paper "Neural Cages for Detail-Preserving 3D Deformations", which generates high-quality shapes via detail-preserving deformation. In particular, I'll show how we can achieve arbitrarily high geometric resolution without increasing network capacity, by incorporating a classic deformation technique, cage deformation, into the neural network.
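The cage idea mentioned above can be sketched in a few lines. This is a toy illustration, not the paper's implementation: a 2D unit-square cage with bilinear weights stands in for the mean-value-style coordinates typically used, and the "network-predicted" cage edit is just a uniform scale. The point it shows is that once per-vertex cage weights are precomputed, deforming arbitrarily dense geometry costs one matrix product, so detail resolution is decoupled from the deformer:

```python
import numpy as np

def bilinear_cage_weights(points):
    """Generalized barycentric weights for the unit-square cage
    [(0,0),(1,0),(1,1),(0,1)]; each row is nonnegative and sums to 1."""
    x, y = points[:, 0], points[:, 1]
    return np.stack([(1 - x) * (1 - y), x * (1 - y), x * y, (1 - x) * y], axis=1)

cage_rest = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
rng = np.random.default_rng(0)
verts = rng.uniform(0.2, 0.8, size=(1000, 2))  # dense "detailed" geometry in the cage

W = bilinear_cage_weights(verts)               # (V, 4), computed once per shape

# Deform the cage (in the paper this offset is predicted by a network);
# the dense mesh follows with a single matrix product, details riding along.
cage_new = cage_rest * 2.0                     # toy edit: uniform scale
verts_new = W @ cage_new                       # (V, 2): every vertex moves

print(np.allclose(verts_new, verts * 2.0))     # True: linear precision of the weights
```

Because only the 4-vertex cage is ever predicted, the mesh could have a thousand or a million vertices at identical network cost, which is the resolution-independence argument in the talk.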
Speaker bio:
Yifan Wang is a third-year PhD student in the Interactive Geometry Lab at ETH Zurich, supervised by Prof. Olga Sorkine-Hornung. Her research interest lies in applying machine learning techniques to challenging image and geometry processing problems. During her PhD, she has worked at the Advanced Innovation Center for Future Visual Entertainment (AICFVE) at Beijing Film Academy, the Imaging and Video group at Disney Research Zurich, and the Creative Intelligence Lab at Adobe Research in Seattle.
Homepage: https://yifita.github.io/
The "User Guide" section of the GAMES homepage explains "How to watch the GAMES Webinar live stream?" and "How to join the GAMES WeChat group?";
videos and slides of past webinar talks are available in the "Resource Sharing" section of the GAMES homepage.
Live stream link: http://webinar.games-cn.org