GAMES Webinar 2018, Issue 50 (CVPR 2018 Papers) | Yebin Liu (Tsinghua University), Yanping Fu (Wuhan University), Weipeng Xu (Max Planck Institute for Informatics), Xiu Li (Tsinghua University)
【GAMES Webinar 2018, Issue 50 (CVPR 2018 Papers)】
Speaker 1: Yebin Liu, Tsinghua University
Time: Thursday, June 14, 2018, 20:00–20:30 (Beijing time)
Host: Juyong Zhang, University of Science and Technology of China (homepage: http://staff.ustc.edu.cn/~juyong/)
Title: DoubleFusion: Real-time Capture of Human Performances with Inner Body Shapes from a Single Depth Sensor
Abstract:
We propose DoubleFusion, a new real-time system that combines volumetric dynamic reconstruction with data-driven template fitting to simultaneously reconstruct detailed geometry, non-rigid motion and the inner human body shape from a single depth camera. One of the key contributions of this method is a double-layer representation consisting of a complete parametric body shape inside, and a gradually fused outer surface layer. A pre-defined node graph on the body surface parameterizes the non-rigid deformations near the body, and a free-form dynamically changing graph parameterizes the outer surface layer far from the body, which allows more general reconstruction. We further propose a joint motion tracking method based on the double-layer representation to enable robust and fast motion tracking performance. Moreover, the inner body shape is optimized online and forced to fit inside the outer surface layer. Overall, our method enables increasingly denoised, detailed and complete surface reconstructions, fast motion tracking performance and plausible inner body shape reconstruction in real time. In particular, experiments show improved fast motion tracking and loop closure performance on more challenging scenarios.
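Node graphs in this family of fusion systems warp nearby geometry by blending the rigid transforms of neighboring nodes (embedded deformation). The following minimal Python sketch shows that warping step under stated assumptions: the Gaussian influence weights, the k-nearest-node selection, and the name warp_point are illustrative choices, not the authors' implementation.

import numpy as np

def warp_point(p, node_pos, node_R, node_t, sigma=0.05, k=4):
    """Warp point p by blending the rigid transforms of its k nearest nodes."""
    d2 = np.sum((node_pos - p) ** 2, axis=1)       # squared distance to every node
    idx = np.argsort(d2)[:k]                       # indices of the k nearest nodes
    w = np.exp(-d2[idx] / (2.0 * sigma ** 2))      # Gaussian influence weights (assumed)
    w /= w.sum()
    warped = np.zeros(3)
    for wi, j in zip(w, idx):                      # node j maps p to R_j (p - g_j) + g_j + t_j
        warped += wi * (node_R[j] @ (p - node_pos[j]) + node_pos[j] + node_t[j])
    return warped

# Toy usage: 32 random nodes with identity rotations and small translations.
rng = np.random.default_rng(0)
nodes = rng.uniform(-1.0, 1.0, size=(32, 3))
R = np.tile(np.eye(3), (32, 1, 1))
t = 0.01 * rng.standard_normal((32, 3))
print(warp_point(np.zeros(3), nodes, R, t))

In DoubleFusion's double-layer setup, one such graph is pre-defined on the inner body surface while a second, free-form graph grows with the fused outer layer; both can share this blending mechanism.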
Speaker bio:
Yebin Liu is an associate professor and Ph.D. advisor in the Department of Automation, Tsinghua University. He received his B.E. from Beijing University of Posts and Telecommunications in 2002 and his Ph.D. from the Department of Automation, Tsinghua University, in 2009, and has been on the faculty of the department since 2011. His research focuses on multi-dimensional visual information acquisition and reconstruction. He has published more than 20 journal papers in IEEE TPAMI, ACM TOG and other transactions, and nearly 20 papers at top venues such as SIGGRAPH, CVPR and ICCV. His honors include the National Technology Invention Award, First Prize, in 2012 (ranked third), the National Technology Invention Award, Second Prize, in 2008 (ranked third), the Tsinghua University Academic Rising Star Award in 2013, and the NSFC Excellent Young Scientists Fund in 2015. Homepage: http://www.liuyebin.com/
Speaker 2: Yanping Fu, Wuhan University
Time: Thursday, June 14, 2018, 20:30–21:00 (Beijing time)
Host: Juyong Zhang, University of Science and Technology of China (homepage: http://staff.ustc.edu.cn/~juyong/)
Title: Texture Mapping for 3D Reconstruction with RGB-D Sensor
Abstract:
Reconstructing highly realistic texture detail is essential in 3D reconstruction. However, the data captured by RGB-D sensors is noisy, so the reconstructed geometry deviates from the true model, and the camera trajectory estimated from the data inevitably drifts. These errors prevent the color images from aligning exactly with the reconstructed geometry, and the resulting texture mapping falls far short of what is needed. To obtain a better textured 3D model, we propose a global-to-local non-rigid correction strategy to optimize the texture mapping. First, we select an optimal texture image for each face of the model, which effectively eliminates the blurring and ghosting caused by weighted blending of multiple images. Then we apply a global-to-local non-rigid correction to stitch the textures of different faces so that they are fully aligned. These two non-rigid correction steps effectively compensate for the texture-to-geometry misalignment caused by camera drift and geometric reconstruction error.
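To illustrate the first step, here is a small Python sketch of per-face best-view selection: each candidate frame is scored by how frontally and closely its camera sees the face, and the top-scoring frame becomes the face's texture source. The scoring function (clipped cosine over squared distance) and the name select_view are assumptions for illustration, not the paper's exact criterion.

import numpy as np

def select_view(face_center, face_normal, cam_centers):
    """Return the index of the camera that sees this face best."""
    view_dirs = cam_centers - face_center                  # rays from the face to each camera
    dist2 = np.sum(view_dirs ** 2, axis=1)
    cosines = (view_dirs @ face_normal) / np.sqrt(dist2)   # cos of viewing angle (unit normal)
    score = np.clip(cosines, 0.0, None) / dist2            # frontal and near views win
    return int(np.argmax(score))

# Toy usage: a face at the origin facing +z; the head-on camera wins.
cams = np.array([[0.0, 0.0, 2.0], [1.5, 0.0, 1.0], [0.0, 2.0, 0.5]])
print(select_view(np.zeros(3), np.array([0.0, 0.0, 1.0]), cams))  # -> 0

Choosing a single source image per face avoids the blur of weighted blending, at the cost of visible seams between faces, which is exactly what the subsequent global-to-local non-rigid correction addresses.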
Speaker bio:
Yanping Fu is a Ph.D. student in the School of Computer Science, Wuhan University, advised by Prof. Chunxia Xiao. His research covers RGB-D based 3D reconstruction, SLAM and texture mapping. Homepage: http://graphvision.whu.edu.cn/
Speaker 3: Weipeng Xu, Max Planck Institute for Informatics
Time: Thursday, June 14, 2018, 21:00–21:30 (Beijing time)
Host: Juyong Zhang, University of Science and Technology of China (homepage: http://staff.ustc.edu.cn/~juyong/)
Title: Video Based Reconstruction of 3D People Models
Abstract:
This paper describes how to obtain accurate 3D body models and texture of arbitrary people from a single, monocular video in which a person is moving. Based on a parametric body model, we present a robust processing pipeline achieving 3D model fits with 5 mm accuracy, even for clothed people. Our main contribution is a method to non-rigidly deform the silhouette cones corresponding to the dynamic human silhouettes, resulting in a visual hull in a common reference frame that enables surface reconstruction. This enables efficient estimation of a consensus 3D shape, texture and implanted animation skeleton based on a large number of frames. We present evaluation results for a number of test subjects and analyze overall performance. Requiring only a smartphone or webcam, our method enables everyone to create their own fully animatable digital double, e.g., for social VR applications or virtual try-on for online fashion shopping.
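The visual hull at the heart of the pipeline admits a simple carving view: once the silhouette cones are brought into a common reference frame, a 3D point belongs to the hull only if it projects inside the silhouette in every frame. The toy Python sketch below implements that membership test; the camera, mask, and the name inside_hull are synthetic stand-ins, not the paper's pipeline.

import numpy as np

def inside_hull(points, cam_mats, silhouettes):
    """points: (N, 3); cam_mats: 3x4 projection matrices; silhouettes: binary H x W masks."""
    keep = np.ones(len(points), dtype=bool)
    homog = np.hstack([points, np.ones((len(points), 1))])
    for P, mask in zip(cam_mats, silhouettes):
        uvw = homog @ P.T                                  # project into this view
        uv = (uvw[:, :2] / uvw[:, 2:]).round().astype(int)
        h, w = mask.shape
        ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        ok[ok] &= mask[uv[ok, 1], uv[ok, 0]] > 0           # inside the silhouette?
        keep &= ok                                         # must hold in every view
    return keep

# Toy usage: one pinhole camera and a square silhouette; the on-axis point
# survives, the off-axis point is carved away.
f, cx, cy = 64.0, 32.0, 32.0
P = np.array([[f, 0, cx, 0], [0, f, cy, 0], [0, 0, 1, 0]])
mask = np.zeros((64, 64))
mask[16:48, 16:48] = 1
pts = np.array([[0.0, 0.0, 4.0], [2.0, 0.0, 4.0]])
print(inside_hull(pts, [P], [mask]))                       # [ True False]

The paper's contribution is what makes this carving valid for a moving person: the silhouette cones are non-rigidly unposed into the canonical frame before intersection, so many frames can vote on one consensus shape.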
Speaker bio:
Weipeng Xu is a postdoctoral researcher in the Graphics, Vision & Video group of the Max Planck Institute for Informatics in Saarbrücken, Germany. He received B.E. and Ph.D. degrees from Beijing Institute of Technology in 2009 and 2016, respectively. He studied as a long-term visiting student at NICTA and the Australian National University from 2013 to 2015. His research interests include human performance capture, human pose estimation and machine learning for vision/graphics. Homepage: http://people.mpi-inf.mpg.de/~wxu/
Speaker 4: Xiu Li, Tsinghua University
Time: Thursday, June 14, 2018, 21:30–22:00 (Beijing time)
Host: Juyong Zhang, University of Science and Technology of China (homepage: http://staff.ustc.edu.cn/~juyong/)
Title: Structure from Recurrent Motion: From Rigidity to Recurrency
Abstract:
This paper proposes a new method for Non-Rigid Structure-from-Motion (NRSfM) from a long monocular video sequence observing a non-rigid object performing recurrent and possibly repetitive dynamic action. Departing from the traditional idea of using a linear low-order or low-rank shape model for the task of NRSfM, our method exploits the property of shape recurrency (i.e., many deforming shapes tend to repeat themselves in time). We show that recurrency is, in fact, a generalized rigidity. Based on this, we reduce NRSfM problems to rigid ones provided that a certain recurrency condition is satisfied. Given such a reduction, standard rigid-SfM techniques are directly applicable (without any change) to the reconstruction of non-rigid dynamic shapes. To implement this idea as a practical approach, this paper develops efficient algorithms for automatic recurrency detection, as well as camera view clustering via a rigidity check. Experiments on both simulated sequences and real data demonstrate the effectiveness of the method. Since this paper offers a novel perspective on rethinking structure-from-motion, we hope it will inspire other new problems in the field.
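One way to make the rigidity check concrete: under an affine camera model, the mean-centered 2D tracks of two frames observing the same rigid shape stack into a rank-3 measurement matrix (the classical Tomasi-Kanade factorization constraint), so a collapsed fourth singular value signals a rigid, and hence possibly recurrent, frame pair. The Python sketch below is an assumed stand-in for such a test, not necessarily the paper's exact criterion.

import numpy as np

def rigidity_score(tracks_a, tracks_b):
    """tracks_*: (2, P) image tracks of the same P points; lower score = more rigid."""
    W = np.vstack([tracks_a - tracks_a.mean(axis=1, keepdims=True),
                   tracks_b - tracks_b.mean(axis=1, keepdims=True)])
    s = np.linalg.svd(W, compute_uv=False)       # four singular values of the 4 x P matrix
    return s[3] / s[2]                           # ~0 for a rigid pair, larger when deforming

# Toy usage: two orthographic views of one rigid random shape score near zero.
rng = np.random.default_rng(1)
X = rng.standard_normal((3, 50))                 # a rigid 3D point cloud
Ra = np.linalg.qr(rng.standard_normal((3, 3)))[0]
Rb = np.linalg.qr(rng.standard_normal((3, 3)))[0]
print(rigidity_score((Ra @ X)[:2], (Rb @ X)[:2]))

Clustering frames whose pairwise scores are low groups views of (approximately) the same recurring shape, after which each cluster can be handed unchanged to a standard rigid-SfM pipeline.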
Speaker bio:
Xiu Li is currently working toward his Ph.D. degree at Tsinghua University. He received his B.S. degree from the Department of Automation, Tsinghua University, in 2015. His research interests focus on SfM/SLAM, non-rigid reconstruction, human performance capture and social signal measurement. Homepage: http://media.au.tsinghua.edu.cn/~xiu/
The "Tutorials" section of the GAMES homepage explains how to watch the GAMES Webinar live stream and how to join the GAMES WeChat group;
the "Resources" section of the GAMES homepage hosts videos and slides of past webinar talks.
Live stream link: http://webinar.games-cn.org