GAMES Webinar 2017, Issue 27 (SIGGRAPH Asia 2017 paper presentations) | Tianye Li (University of Southern California), Liwen Hu (University of Southern California)


Speaker 1: Tianye Li, University of Southern California
Time: Thursday, December 21, 2017, 20:00 – 20:45 (Beijing time)
Host: Jun Wang, Nanjing University of Aeronautics and Astronautics (homepage: http://www.3dgp.net/people.html)
Title: Learning a Model of Facial Shape and Expression from 4D Scans
Abstract:
The field of 3D face modeling has a large gap between high-end and low-end methods. At the high end, the best facial animation is indistinguishable from real humans, but this comes at the cost of extensive manual labor. At the low end, face capture from consumer depth sensors relies on 3D face models that are not expressive enough to capture the variability in natural facial shape and expression. We seek a middle ground by learning a facial model from thousands of accurately aligned 3D scans. Our FLAME model (Faces Learned with an Articulated Model and Expressions) is designed to work with existing graphics software and be easy to fit to data. FLAME uses a linear shape space trained from 3800 scans of human heads. FLAME combines this linear shape space with an articulated jaw, neck, and eyeballs, pose-dependent corrective blendshapes, and additional global expression blendshapes. The pose- and expression-dependent articulations are learned from 4D face sequences in the D3DFACS dataset along with additional 4D sequences. We accurately register a template mesh to the scan sequences and make the D3DFACS registrations available for research purposes. In total the model is trained from over 33,000 scans. FLAME is low-dimensional but more expressive than the FaceWarehouse model and the Basel Face Model. We compare FLAME to these models by fitting them to static 3D scans and 4D sequences using the same optimization method. FLAME is significantly more accurate and is available for research purposes (http://flame.is.tue.mpg.de).
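As the abstract describes, FLAME composes a linear identity shape space with global expression blendshapes (with pose-dependent correctives and linear blend skinning applied afterward for the articulated jaw, neck, and eyeballs). A minimal sketch of the linear blendshape composition, assuming illustrative names and random placeholder bases rather than the released model's actual data or API:

```python
import numpy as np

# Illustrative dimensions; N_VERTS matches the FLAME template vertex count
# reported by the authors, but the bases here are random placeholders.
N_VERTS = 5023
N_SHAPE, N_EXPR = 300, 100

rng = np.random.default_rng(0)
template = rng.standard_normal((N_VERTS, 3))             # mean head mesh
shape_dirs = rng.standard_normal((N_VERTS, 3, N_SHAPE))  # identity basis
expr_dirs = rng.standard_normal((N_VERTS, 3, N_EXPR))    # expression basis

def blendshape_verts(beta, psi):
    """Linear composition: template + identity offsets + expression offsets.
    In the full model, pose-dependent correctives and linear blend skinning
    of the jaw/neck/eyeball joints would follow this step."""
    return (template
            + shape_dirs @ beta   # (N,3,S) @ (S,) -> (N,3)
            + expr_dirs @ psi)    # (N,3,E) @ (E,) -> (N,3)

# Zero coefficients recover the template exactly.
verts = blendshape_verts(np.zeros(N_SHAPE), np.zeros(N_EXPR))
assert verts.shape == (N_VERTS, 3)
```

Keeping identity and expression in separate linear bases is what lets the paper compare FLAME directly against other linear models (FaceWarehouse, Basel Face Model) under one optimization method.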
Speaker bio:
Tianye Li is a PhD student at the University of Southern California (USC), advised by Prof. Hao Li. During his PhD studies, he interned at the Max Planck Institute for Intelligent Systems (Tübingen, Germany). Previously, he obtained a Master's degree in Electrical Engineering from USC and a Bachelor's degree in Electronic and Information Engineering from Xidian University, Xi'an, China. His research interests include computer vision and computer graphics, especially performance capture, modeling, and understanding of the human face and body. Homepage: https://sites.google.com/site/tianyefocus/home

 

Speaker 2: Liwen Hu, University of Southern California
Time: Thursday, December 21, 2017, 20:45 – 21:30 (Beijing time)
Host: Jun Wang, Nanjing University of Aeronautics and Astronautics (homepage: http://www.3dgp.net/people.html)
Title: Avatar Digitization from a Single Image for Real-time Rendering
Abstract:
We present a fully automatic framework that digitizes a complete 3D head with hair from a single unconstrained image. Our system offers a practical and consumer-friendly end-to-end solution for avatar personalization in gaming and social VR applications. The reconstructed models include secondary components (eyes, teeth, tongue, and gums) and provide animation-friendly blendshapes and joint-based rigs. While the generated face is a high-quality textured mesh, we propose a versatile and efficient polygonal strips (polystrips) representation for the hair. Polystrips are suitable for an extremely wide range of hairstyles and textures and are compatible with existing game engines for real-time rendering. In addition to integrating state-of-the-art advances in facial shape modeling and appearance inference, we propose a novel single-view hair generation pipeline, based on 3D-model and texture retrieval, shape refinement, and polystrip patching optimization. The performance of our hairstyle retrieval is enhanced using a deep convolutional neural network for semantic hair attribute classification. Our generated models are visually comparable to state-of-the-art game characters designed by professional artists. For real-time settings, we demonstrate the flexibility of polystrips in handling hairstyle variations, as opposed to conventional strand-based representations. We further show the effectiveness of our approach on a large number of images taken in the wild, and how compelling avatars can be easily created by anyone.
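The polystrips mentioned above can be pictured as textured ribbons of quads swept along guide strands, which is what makes them cheap enough for game engines. A toy sketch of that geometric idea, not the paper's retrieval-and-patching pipeline; the `polystrip` helper and its parameters are hypothetical:

```python
import numpy as np

def polystrip(strand, width=0.01, normal=(0.0, 0.0, 1.0)):
    """Turn a 3D polyline (K,3) into a ribbon mesh: 2K vertices, K-1 quads.

    Toy construction: offset each point sideways along the cross product of
    the local tangent and a fixed reference normal (assumes the strand is
    never parallel to that normal). A real pipeline would also assign hair
    textures and optimize how patches cover the hairstyle.
    """
    strand = np.asarray(strand, float)
    normal = np.asarray(normal, float)
    tang = np.gradient(strand, axis=0)                     # per-point tangents
    tang /= np.linalg.norm(tang, axis=1, keepdims=True)
    side = np.cross(tang, normal)                          # sideways direction
    side /= np.linalg.norm(side, axis=1, keepdims=True)
    verts = np.concatenate([strand - 0.5 * width * side,
                            strand + 0.5 * width * side])
    k = len(strand)
    quads = np.array([[i, i + 1, k + i + 1, k + i] for i in range(k - 1)])
    return verts, quads

# A straight vertical guide strand becomes a flat ribbon of 3 quads.
guide = np.stack([np.zeros(4), np.linspace(0.0, 1.0, 4), np.zeros(4)], axis=1)
verts, quads = polystrip(guide, width=0.02)
```

A few dozen such ribbons can approximate a hairstyle with far fewer primitives than strand-based representations, which is the trade-off the abstract highlights for real-time settings.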
Speaker bio:
Liwen Hu is currently a fourth-year Ph.D. student in the Department of Computer Science at the University of Southern California, advised by Prof. Hao Li. Before that, he obtained his M.S. degree from the University of Southern California in 2014 and his B.S. degree from Zhejiang University in 2012. He works in the field of computer graphics. His research focuses on developing new data-driven techniques and modeling tools for the digitization of highly intricate geometric structures, such as human hair, as well as the acquisition of physical properties from captured data. Homepage: http://www-scf.usc.edu/~liwenhu/

 

The "Tutorials" section of the GAMES homepage explains how to watch GAMES Webinar live streams and how to join the GAMES WeChat group;
the "Resources" section offers videos and slides from past webinars.

 

Live stream link: http://webinar.games-cn.org

 
