GAMES Webinar 2018, Session 72 (Human Performance Capture Tutorial 4) | Michael Zollhöfer (Stanford University), Kaiwen Guo (Google)


【GAMES Webinar 2018, Session 72 (Human Performance Capture Tutorial 4)】
Speaker 1: Michael Zollhöfer, Stanford University
Time: November 8, 2018, 2:00-2:45 pm (Beijing time)
Host: Yebin Liu, Tsinghua University (homepage: http://liuyebin.com/)
Title: Is It Real? Facial Motion Capture and Reenactment using Deep Neural Networks
Abstract:
A broad range of applications in visual effects, computer animation, autonomous driving, and man-machine interaction heavily depend on robust and fast algorithms to obtain high-quality reconstructions of our physical world in terms of geometry, motion, reflectance, and illumination. In particular, with the increasing popularity of virtual, augmented, and mixed reality devices, demand is rising for real-time, low-latency solutions.
This talk covers data-parallel optimization and state-of-the-art machine learning techniques to tackle the underlying 3D and 4D reconstruction problems based on novel mathematical models and fast algorithms. The particular focus of this talk is on self-supervised face reconstruction from a collection of unlabeled in-the-wild images. The proposed approach can be trained end-to-end without dense annotations by fusing a convolutional encoder with a differentiable expert-designed renderer and a self-supervised training loss.
The resulting reconstructions are the foundation for advanced video editing effects, such as photo-realistic re-animation of portrait videos. The core of the proposed approach is a generative rendering-to-video translation network that takes computer graphics renderings as input and generates photo-realistic modified target videos that mimic the source content. With the ability to freely control the underlying parametric face model, we are able to demonstrate a large variety of video rewrite applications. For instance, we can reenact the full head using interactive user-controlled editing and realize high-fidelity visual dubbing.
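The key ingredient described above — a self-supervised photometric loss that compares a differentiable rendering of the predicted model parameters against the unlabeled input image — can be sketched in a few lines. The sketch below is illustrative only and not the speaker's implementation: a Gaussian blob stands in for the parametric face model and expert-designed renderer, and numerical gradients stand in for the analytic ones an autodiff framework would supply.

```python
import numpy as np

def render(params, size=32):
    # Toy stand-in for the differentiable renderer: a Gaussian blob whose
    # center (cx, cy) and spread s play the role of the model parameters
    # (a real system would regress face-model parameters instead).
    cx, cy, s = params
    ys, xs = np.mgrid[0:size, 0:size]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * s ** 2))

def photometric_loss(params, target):
    # Self-supervised loss: no annotations, only a pixel-wise comparison
    # between the rendering and the observed image.
    return np.mean((render(params, target.shape[0]) - target) ** 2)

# "Observed" image, here synthesized from unknown ground-truth parameters.
target = render(np.array([20.0, 12.0, 4.0]))

# Fit by gradient descent on the photometric loss alone.
params = np.array([16.0, 16.0, 5.0])
eps, lr = 1e-4, 20.0
for _ in range(300):
    base = photometric_loss(params, target)
    grad = np.zeros_like(params)
    for i in range(len(params)):
        p = params.copy()
        p[i] += eps
        grad[i] = (photometric_loss(p, target) - base) / eps  # finite difference
    params -= lr * grad
```

Because the loss is defined purely by re-rendering, no dense labels are ever needed — exactly the property that lets the full pipeline train end-to-end on in-the-wild image collections.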
Speaker bio:
Michael Zollhöfer is a Visiting Assistant Professor at Stanford University. His stay at Stanford is funded by a postdoctoral fellowship of the Max Planck Center for Visual Computing and Communication (MPC-VCC), which he received for his work in the fields of computer vision, computer graphics, and machine learning. Before joining Stanford University, Michael was a Postdoctoral Researcher at the Max Planck Institute for Informatics working with Christian Theobalt. He received his PhD from the University of Erlangen-Nuremberg for his work on real-time reconstruction of static and dynamic scenes. During his PhD, he was an intern at Microsoft Research Cambridge working with Shahram Izadi on data-parallel optimization for real-time template-based surface reconstruction. The primary goal of his research is to teach computers to reconstruct and analyze our world at frame rate based on visual input. To this end, he develops key technology to invert the image formation models of computer graphics based on data-parallel optimization and state-of-the-art deep learning techniques. The reconstructed intrinsic scene properties, such as geometry, motion, reflectance, and illumination are the foundation for a broad range of applications not only in virtual and augmented reality, visual effects, computer animation, autonomous driving, and man-machine interaction, but also in other fields such as medicine and biomechanics.
Speaker homepage: https://web.stanford.edu/~zollhoef/


Speaker 2: Kaiwen Guo, Google
Time: November 8, 2018, 2:45-3:30 pm (Beijing time)
Host: Yebin Liu, Tsinghua University (homepage: http://liuyebin.com/)
Title: Dynamic Human Reconstruction Based on Depth Image Fusion
Abstract:
Dynamic human reconstruction is an important problem in 3D computer vision; its goal is to recover highly realistic human motion, surface geometry, and texture. Human body and motion modeling plays a key role in holographic communication, augmented reality, and film and game production. This talk surveys a series of advances we have made in real-time dynamic human reconstruction over the past few years, including joint estimation of material, geometry, and motion; the use of skeleton and body shape model priors; and improving reconstruction quality with inertial sensors and high-frame-rate cameras.
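Real-time systems of this kind typically accumulate depth images into a truncated signed distance function (TSDF) volume. As a minimal sketch — the single-ray discretization, truncation distance, and uniform weights are illustrative assumptions, not the system presented in the talk — the classic weighted-average TSDF update along one camera ray looks like:

```python
import numpy as np

voxels = np.linspace(0.0, 1.0, 101)   # voxel centers along one ray, 1 cm apart
tsdf = np.zeros_like(voxels)          # fused truncated signed distance
weight = np.zeros_like(voxels)        # accumulated per-voxel confidence
TRUNC = 0.1                           # truncation band, in meters

def integrate(depth, w_new=1.0):
    # Fuse one depth measurement via a running weighted average:
    # voxels in front of the surface get positive distances, voxels just
    # behind it negative ones; voxels far behind are left untouched.
    sdf = np.clip(depth - voxels, -TRUNC, TRUNC)
    mask = (depth - voxels) > -TRUNC
    tsdf[mask] = (tsdf[mask] * weight[mask] + sdf[mask] * w_new) / (weight[mask] + w_new)
    weight[mask] += w_new

# Fuse several noisy observations of a surface at roughly 0.5 m.
for d in [0.52, 0.49, 0.50, 0.51]:
    integrate(d)

# The zero crossing of the fused TSDF is the denoised surface estimate.
valid = weight > 0
surface = voxels[valid][np.argmin(np.abs(tsdf[valid]))]
```

Averaging many truncated measurements suppresses per-frame depth noise; dynamic (non-rigid) variants additionally warp the volume by the estimated body motion before integrating each new frame, which is where the skeleton and shape priors mentioned above come in.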
Speaker bio:
Kaiwen Guo is a researcher at Google. He received his B.Eng. from Northeastern University in 2011 and his Ph.D. from the Department of Automation at Tsinghua University in 2017, and joined Google in 2017 to continue his research in 3D computer vision. His research focuses on 3D computer vision, including 3D reconstruction and motion capture. He has published numerous papers in leading journals and conferences in the field, including ACM TOG, IEEE CVPR, ICCV, ECCV, and TVCG.
Speaker homepage: http://www.guokaiwen.com/


The "Tutorials" (使用教程) section of the GAMES homepage has information on "How to watch the GAMES Webinar live stream?" and "How to join the GAMES WeChat group?";
the "Resources" (资源分享) section has videos and slides from previous webinar lectures.


Live stream link: http://webinar.games-cn.org
