GAMES Webinar 2021 – Session 206 (Rendering Track) | Mengtian Li (Kuaishou Technology), Shuichang Lai (Nanjing University)


Speaker 1: Mengtian Li (Kuaishou Technology)

Time: Thursday, November 4, 2021, 8:00–8:30 PM (Beijing time)

Title: Volumetric Appearance Stylization with Stylizing Kernel Prediction Network

Abstract:

This work aims to efficiently construct a volume of heterogeneous single-scattering albedo for a given medium that leads to a desired color appearance. We achieve this goal by formulating it as a volumetric style transfer problem in which an input 3D density volume is stylized using color features extracted from a reference 2D image. Unlike existing algorithms that require cumbersome iterative optimizations, our method leverages a feed-forward deep neural network with multiple well-designed modules. At the core of our network is a stylizing kernel predictor (SKP) that extracts multi-scale feature maps from a 2D style image and predicts a handful of stylizing kernels as a highly non-linear combination of the feature maps. Each group of stylizing kernels represents a specific style. A volume autoencoder (VolAE) is designed and jointly learned with the SKP to transform a density volume into an albedo volume based on these stylizing kernels. Since the autoencoder does not encode any style information, it can generate albedo volumes with a wide range of appearances once training is completed. Additionally, a hybrid multi-scale loss function is used to learn plausible color features and guarantee temporal coherence for time-evolving volumes. Through comprehensive experiments, we validate the effectiveness of our method and show its superiority by comparing against state-of-the-art methods. With our method, a novice user can easily create a diverse set of realistic translucent effects for 3D models (either static or dynamic) without any cumbersome parameter tuning.
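The abstract's exact network design is not given here, so the following is only a minimal sketch of the SKP idea: pooled style features are mapped through a non-linear head to a set of 3D kernels, and one predicted kernel is then convolved with a density volume. All names, sizes, and the tiny random-weight "head" are hypothetical, purely to illustrate the data flow.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_stylizing_kernels(style_features, n_kernels=4, channels=8, ksize=3):
    """Hypothetical SKP head: pool 2D style features, pass through a
    non-linearity, and reshape into a bank of 3D stylizing kernels."""
    pooled = style_features.mean(axis=(1, 2))        # global average pool (C,)
    # Toy random projection standing in for the learned prediction layers.
    w = rng.standard_normal((pooled.size, n_kernels * channels * ksize ** 3)) * 0.01
    flat = np.tanh(pooled @ w)                       # non-linear combination
    return flat.reshape(n_kernels, channels, ksize, ksize, ksize)

def apply_kernel_3d(volume, kernel):
    """Naive valid 3D convolution of a single-channel density volume with
    one predicted kernel slice (stand-in for the VolAE decoder stage)."""
    k = kernel.shape[0]
    out_shape = tuple(s - k + 1 for s in volume.shape)
    out = np.zeros(out_shape)
    for x in range(out_shape[0]):
        for y in range(out_shape[1]):
            for z in range(out_shape[2]):
                out[x, y, z] = np.sum(volume[x:x+k, y:y+k, z:z+k] * kernel)
    return out

style_features = rng.random((16, 8, 8))   # fake style feature map (C, H, W)
density = rng.random((10, 10, 10))        # fake density volume
kernels = predict_stylizing_kernels(style_features)
albedo_like = apply_kernel_3d(density, kernels[0, 0])
print(albedo_like.shape)                  # -> (8, 8, 8)
```

Because the style enters only through the predicted kernels, swapping in a different style image changes the kernels while the volume network itself stays fixed, which mirrors the paper's claim that one trained autoencoder can produce many styles.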

Speaker bio:

Mengtian Li is currently an algorithm engineer at Kuaishou Technology. He received his M.Sc. degree from the Department of Computer Science and Technology, Nanjing University, in 2020, supervised by Jie Guo. His research interests include image translation and deep learning for rendering.


Speaker 2: Shuichang Lai (Nanjing University)

Time: Thursday, November 4, 2021, 8:30–9:15 PM (Beijing time)

Title: Highlight-Aware Two-Stream Network for Single-Image SVBRDF Acquisition

Abstract:

Our work addresses the task of estimating spatially-varying reflectance (i.e., SVBRDF) from a single, casually captured image. Central to our method are a highlight-aware (HA) convolution operation and a two-stream neural network equipped with proper training losses. HA convolution reduces the impact of strong specular highlights on the diffuse components and, at the same time, hallucinates plausible content in saturated regions. Since we impose very few constraints on the capture process, even a non-expert user can create high-quality SVBRDFs that cater to many graphics applications. In this talk, we will introduce the structure of our neural network and demonstrate, through quantitative analysis and qualitative visualization, that the proposed method is effective at recovering clear SVBRDFs from a single casually captured image and performs favorably against state-of-the-art methods.
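The abstract does not specify the HA convolution operator itself, so here is only a hedged sketch of one plausible reading: treat near-saturated (highlight) pixels as unreliable, convolve over the remaining pixels, and renormalize by the valid weight mass, in the spirit of partial/masked convolutions. The threshold, fallback, and function name are all assumptions for illustration, not the paper's exact operator.

```python
import numpy as np

def highlight_aware_conv(image, kernel, sat_thresh=0.95):
    """Sketch of a highlight-aware convolution: pixels above sat_thresh
    (specular highlights) are masked out, and each output is computed
    from the remaining pixels with the kernel weights renormalized."""
    k = kernel.shape[0]
    pad = k // 2
    valid = (image < sat_thresh).astype(float)   # 1 = trustworthy pixel
    img_p = np.pad(image, pad)
    val_p = np.pad(valid, pad)
    out = np.zeros_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            patch = img_p[i:i + k, j:j + k]
            mask = val_p[i:i + k, j:j + k]
            mass = np.sum(kernel * mask)
            if mass > 1e-8:
                # Weight only non-highlight pixels, then renormalize.
                out[i, j] = np.sum(kernel * mask * patch) / mass
            else:
                # Entire neighborhood saturated: crude fallback.
                out[i, j] = patch.mean()
    return out

img = np.full((5, 5), 0.4)
img[2, 2] = 1.0                      # one saturated highlight pixel
box = np.ones((3, 3)) / 9.0          # box filter
smoothed = highlight_aware_conv(img, box)
print(round(smoothed[2, 2], 3))      # -> 0.4 (highlight filled from neighbors)
```

The key property this illustrates is that the saturated pixel no longer biases its neighborhood toward white; the diffuse value is reconstructed from the surrounding unsaturated pixels, which is the stated motivation for the HA operation.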

Speaker bio:

Shuichang Lai is a postgraduate student at Nanjing University. He received his bachelor's degree from the School of Computer Science and Engineering, Nanjing University of Science and Technology, in 2020. His research interests include physically-based rendering and material estimation.


Host bio:

Hongzhi Wu is an associate professor and doctoral supervisor at the College of Computer Science and Technology, Zhejiang University, and a recipient of the National Science Fund for Excellent Young Scholars. He received his bachelor's degree from Fudan University and his Ph.D. from Yale University. His main research interests are high-density acquisition devices and differentiable modeling. He has developed several high-density light-source-array acquisition devices with independent intellectual property rights, published more than ten papers in ACM TOG, co-published two translated books on computer graphics, and led multiple projects funded by the National Natural Science Foundation of China as well as a collaborative project with Microsoft Research Asia. He serves as Program Secretary-General of Chinagraph, Secretary-General of the International Cooperation and Exchange Committee of the China Society of Image and Graphics and a member of its Intelligent Graphics Technical Committee, and as a program committee member for international conferences including PG, EGSR, and CAD/Graphics. Homepage: http://hongzhiwu.com


The "Tutorials" section of the GAMES homepage explains how to watch the GAMES Webinar live stream and how to join the GAMES WeChat group.
The "Resources" section of the GAMES homepage provides videos and slides of past webinar talks.
Live stream link: http://webinar.games-cn.org
