GAMES Webinar 2018, Session 50 (CVPR 2018 Papers) | Yebin Liu (Tsinghua University), Yanping Fu (Wuhan University), Weipeng Xu (Max Planck Institute for Informatics), Xiu Li (Tsinghua University)

【GAMES Webinar 2018, Session 50 (CVPR 2018 Papers)】
Time: June 14, 2018 (Thursday), 8:00 PM – 8:30 PM (Beijing time)
Title: DoubleFusion: Real-time Capture of Human Performances with Inner Body Shapes from a Single Depth Sensor
We propose DoubleFusion, a new real-time system that combines volumetric dynamic reconstruction with data-driven template fitting to simultaneously reconstruct detailed geometry, non-rigid motion and the inner human body shape from a single depth camera. One of the key contributions of this method is a double-layer representation consisting of a complete parametric body shape inside, and a gradually fused outer surface layer. A pre-defined node graph on the body surface parameterizes the non-rigid deformations near the body, and a free-form dynamically changing graph parameterizes the outer surface layer far from the body, which allows more general reconstruction. We further propose a joint motion tracking method based on the double-layer representation to enable robust and fast motion tracking performance. Moreover, the inner body shape is optimized online and forced to fit inside the outer surface layer. Overall, our method enables increasingly denoised, detailed and complete surface reconstructions, fast motion tracking performance and plausible inner body shape reconstruction in real time. In particular, experiments show improved fast motion tracking and loop closure performance on more challenging scenarios.
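Node-graph parameterizations like the one described above build on embedded deformation: a vertex is warped by blending the rigid transforms of nearby graph nodes. Below is a minimal generic sketch of that blending step, not the authors' implementation; the Gaussian weighting and the `sigma` value are illustrative assumptions.

```python
import numpy as np

def warp_vertex(x, nodes, rotations, translations, sigma=0.05):
    """Warp a 3D vertex by blending rigid transforms of nearby graph nodes
    (standard embedded deformation; a sketch, not DoubleFusion itself)."""
    # Gaussian weights by squared distance to each node, normalized to sum to 1
    d2 = np.sum((nodes - x) ** 2, axis=1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    w /= w.sum()
    # x' = sum_k w_k * (R_k (x - g_k) + g_k + t_k)
    warped = np.zeros(3)
    for wk, gk, Rk, tk in zip(w, nodes, rotations, translations):
        warped += wk * (Rk @ (x - gk) + gk + tk)
    return warped

# Two nodes translating along +x carry a nearby vertex with them:
nodes = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]])
R = [np.eye(3), np.eye(3)]
t = [np.array([0.02, 0.0, 0.0])] * 2
print(warp_vertex(np.array([0.05, 0.0, 0.0]), nodes, R, t))  # moves by 0.02 in x
```

In the paper's double-layer setting, vertices near the body would use the pre-defined on-body node graph, while far-from-body geometry would be driven by the free-form graph.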
Yebin Liu is an associate professor and doctoral advisor in the Department of Automation, Tsinghua University. He received his B.E. from Beijing University of Posts and Telecommunications in 2002 and his Ph.D. from the Department of Automation, Tsinghua University, in 2009, and has been on the faculty of the Department of Automation since 2011. His research focuses on multi-dimensional visual information acquisition and reconstruction. He has published more than 20 journal papers in venues such as IEEE TPAMI and ACM TOG, and nearly 20 papers at top conferences including SIGGRAPH, CVPR and ICCV. His honors include the First Prize of the National Technology Invention Award in 2012 (ranked third), the Second Prize of the National Technology Invention Award in 2008 (ranked third), the Tsinghua University Academic Rising Star Award in 2013, and the NSFC Excellent Young Scientists Fund in 2015. Homepage:


Time: June 14, 2018 (Thursday), 8:30 PM – 9:00 PM (Beijing time)
Title: Texture Mapping for 3D Reconstruction with RGB-D Sensor


Time: June 14, 2018 (Thursday), 9:00 PM – 9:30 PM (Beijing time)
Title: Video Based Reconstruction of 3D People Models
This paper describes how to obtain accurate 3D body models and texture of arbitrary people from a single, monocular video in which a person is moving. Based on a parametric body model, we present a robust processing pipeline achieving 3D model fits with 5 mm accuracy even for clothed people. Our main contribution is a method to non-rigidly deform the silhouette cones corresponding to the dynamic human silhouettes, resulting in a visual hull in a common reference frame that enables surface reconstruction. This enables efficient estimation of a consensus 3D shape, texture and implanted animation skeleton based on a large number of frames. We present evaluation results for a number of test subjects and analyze overall performance. Requiring only a smartphone or webcam, our method enables everyone to create their own fully animatable digital double, e.g., for social VR applications or virtual try-on for online fashion shopping.
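Once the silhouette cones are aligned into a common reference frame, geometry can be recovered by visual-hull carving: only voxels that project inside every silhouette survive. The sketch below shows generic space carving under assumed 3×4 projection matrices, not the paper's non-rigid deformation step.

```python
import numpy as np

def visual_hull(silhouettes, projections, grid):
    """Keep voxels whose projection lands inside every binary silhouette.
    silhouettes: list of (H, W) binary images; projections: list of 3x4
    camera matrices; grid: (V, 3) voxel centers. Generic space carving."""
    inside = np.ones(len(grid), dtype=bool)
    for sil, P in zip(silhouettes, projections):
        # Project homogeneous voxel centers into the image plane
        homo = np.hstack([grid, np.ones((len(grid), 1))])
        uvw = homo @ P.T
        u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
        # Carve voxels that fall outside the image or outside the silhouette
        ok = (u >= 0) & (u < sil.shape[1]) & (v >= 0) & (v < sil.shape[0])
        ok[ok] &= sil[v[ok], u[ok]] > 0
        inside &= ok
    return grid[inside]
```

Intersecting many such silhouettes over a long video is what lets a single moving camera behave like a multi-view rig; the paper's contribution is making this intersection valid for a non-rigidly moving person.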
Weipeng Xu is a post-doctoral researcher in the Graphics, Vision & Video group of the Max Planck Institute for Informatics in Saarbrücken, Germany. He received his B.E. and Ph.D. degrees from Beijing Institute of Technology in 2009 and 2016, respectively, and studied as a long-term visiting student at NICTA and the Australian National University from 2013 to 2015. His research interests include human performance capture, human pose estimation and machine learning for vision/graphics. Homepage:


Time: June 14, 2018 (Thursday), 9:30 PM – 10:00 PM (Beijing time)
Title: Structure from Recurrent Motion: From Rigidity to Recurrency
This paper proposes a new method for Non-Rigid Structure-from-Motion (NRSfM) from a long monocular video sequence observing a non-rigid object performing recurrent and possibly repetitive dynamic action. Departing from the traditional idea of using a linear low-order or low-rank shape model for the task of NRSfM, our method exploits the property of shape recurrency (i.e., many deforming shapes tend to repeat themselves in time). We show that recurrency is, in fact, a generalized rigidity. Based on this, we reduce NRSfM problems to rigid ones provided that a certain recurrency condition is satisfied. Given such a reduction, standard rigid-SfM techniques are directly applicable (without any change) to the reconstruction of non-rigid dynamic shapes. To implement this idea as a practical approach, this paper develops efficient algorithms for automatic recurrency detection, as well as camera view clustering via a rigidity check. Experiments on both simulated sequences and real data demonstrate the effectiveness of the method. Since this paper offers a novel perspective on re-thinking structure-from-motion, we hope it will inspire other new problems in the field.
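The reduction can be illustrated with a toy recurrency grouping: frames in which the deforming shape has returned to a previous configuration behave like extra views of one rigid object, so they can be grouped and handed to an unmodified rigid SfM solver. The sketch below is a deliberate simplification with illustrative names and threshold: it clusters frames whose 2D observations nearly coincide, whereas the paper detects recurrency under viewpoint changes via a rigidity (epipolar) check.

```python
import numpy as np

def group_recurrent_frames(tracks, tol=1e-2):
    """Greedily cluster frames whose tracked 2D keypoints nearly coincide.
    tracks: (F, N, 2) array of N tracked points over F frames.
    Each returned cluster is a list of frame indices that can be treated
    as repeated observations of the same (momentarily rigid) shape."""
    clusters = []
    for f in range(len(tracks)):
        for c in clusters:
            # Mean point distance to the cluster's representative frame
            if np.mean(np.linalg.norm(tracks[f] - tracks[c[0]], axis=1)) < tol:
                c.append(f)
                break
        else:
            clusters.append([f])  # no match: start a new shape cluster
    return clusters
```

Each cluster would then be passed to a standard rigid SfM pipeline; the per-cluster reconstructions sample the object's deformation cycle.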
Xiu Li is currently working towards his Ph.D. degree at Tsinghua University. He received his B.S. degree from the Department of Automation, Tsinghua University, in 2015. His research interests focus on SfM/SLAM, non-rigid reconstruction, human performance capture and social signal measurement. Homepage:


The "Tutorials" section of the GAMES homepage has information on "How to watch the GAMES Webinar live stream?" and "How to join the GAMES WeChat group?".



Ligang Liu

Ligang Liu is a professor at the University of Science and Technology of China, working on computer graphics. He was selected for the Chinese Academy of Sciences "Hundred Talents Program" and received the NSFC Excellent Young Scientists Fund and the Distinguished Young Scholars Fund. He received his B.Sc. (1996) and Ph.D. (2001) in applied mathematics from Zhejiang University, and has worked at or visited Microsoft Research Asia, Zhejiang University, and Harvard University. His honors include the Microsoft Young Professorship Award, the First Prize of the Lu Zengyong CAD&CG High-Tech Award, and the Second Prize of the National Natural Science Award. He founded the USTC summer course "Frontiers of Computer Graphics" and GAMES, the online graphics forum of the CCF CAD&CG technical committee.
