GAMES Webinar 2021 – Session 179 (Rendering) | Yu Guo (University of California, Irvine), Zheng Zeng (Shandong University)

[GAMES Webinar 2021 – Session 179] (Rendering)

Speaker 1: Yu Guo (University of California, Irvine)

Time: Thursday, April 15, 2021, 10:00–10:45 AM (Beijing time)

Title: MaterialGAN: Reflectance Capture using a Generative SVBRDF Model

Abstract:

In this work, we present MaterialGAN, a deep generative convolutional network based on StyleGAN2, trained to synthesize realistic SVBRDF parameter maps. We show that MaterialGAN can be used as a powerful material prior in an inverse rendering framework: we optimize in its latent representation to generate material maps that match the appearance of the captured images when rendered. We demonstrate this framework on the task of reconstructing SVBRDFs from images captured under flash illumination using a hand-held mobile phone. Our method succeeds in producing plausible material maps that accurately reproduce the target images, and outperforms previous state-of-the-art material capture methods in evaluations on both synthetic and real data. Furthermore, our GAN-based latent space allows for high-level semantic material editing operations such as generating material variations and material morphing.
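The latent-space inverse-rendering loop the abstract describes can be sketched with a toy example. This is an illustration only, not the authors' code: a fixed linear map `A` is a hypothetical stand-in for the MaterialGAN generator plus differentiable renderer, and plain gradient descent fits a latent code `z` so the "rendered" output matches a "captured" target.

```python
import numpy as np

# Toy sketch of optimizing in a latent space to match captured images.
# In MaterialGAN the latent code decodes to SVBRDF maps that are then
# rendered; here a fixed linear map A is a hypothetical stand-in for
# that generator-plus-renderer pipeline so the loop is runnable.
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 4))        # stand-in "generator + renderer"
target = rng.normal(size=8)        # stand-in "captured photographs"

def render(z):
    return A @ z

def loss(z):
    r = render(z) - target
    return float(r @ r)            # squared appearance error

z = np.zeros(4)                    # initial latent code
lr = 0.01
for _ in range(2000):
    grad = 2.0 * A.T @ (render(z) - target)  # analytic gradient of loss
    z -= lr * grad                 # gradient step in latent space
```

In the actual method, the gradient would come from automatic differentiation through the neural generator and the renderer rather than from this closed-form expression.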

Speaker bio:

Yu Guo is a fifth-year Ph.D. student in the Department of Computer Science at the University of California, Irvine, advised by Shuang Zhao. His interests include computer graphics and vision, especially material appearance modeling and physically based rendering.

Speaker homepage: https://www.ics.uci.edu/~yug10/


Speaker 2: Zheng Zeng (Shandong University)

Time: Thursday, April 15, 2021, 10:45–11:30 AM (Beijing time)

Title: Temporally Reliable Motion Vectors for Real-time Ray Tracing

Abstract:

Real-time ray tracing (RTRT) is being adopted pervasively. The key to RTRT is a reliable denoising scheme that reconstructs clean images from significantly undersampled noisy inputs, usually at 1 sample per pixel as limited by current hardware's computing power. State-of-the-art reconstruction methods all rely on temporal filtering to find correspondences of current pixels in the previous frame, described using per-pixel screen-space motion vectors. While these approaches are demonstrably powerful, they share a common issue: the temporal information cannot be used when the motion vectors are not valid, i.e., when temporal correspondences are not obviously available or do not exist in theory. We introduce temporally reliable motion vectors that aim at a deeper exploration of temporal coherence, especially for the generally believed difficult applications of shadows, glossy reflections, and occlusions, with the key idea being to detect and track the cause of each effect. We show that our temporally reliable motion vectors produce significantly better temporal results on a variety of dynamic scenes than state-of-the-art methods, with negligible performance overhead.
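The motion-vector-based temporal filtering that the abstract builds on can be sketched in one dimension. All names here are illustrative, not from the paper: each pixel blends its noisy current sample with history fetched through a per-pixel motion vector, and falls back to the current sample alone when the vector has no valid correspondence, which is exactly the failure case the talk's temporally reliable motion vectors address.

```python
import numpy as np

# Minimal 1D sketch of temporal accumulation with motion vectors
# (assumed names, not the paper's implementation). Each pixel blends
# its noisy current sample with the previous frame's value fetched
# through a per-pixel motion vector; pixels whose vector points
# outside the screen have no history and keep the current sample.
def temporal_accumulate(curr, prev, motion, alpha=0.2):
    out = np.empty_like(curr)
    n = len(curr)
    for x in range(n):
        src = x + motion[x]          # where this pixel was last frame
        if 0 <= src < n:             # motion vector is valid
            out[x] = alpha * curr[x] + (1.0 - alpha) * prev[src]
        else:                        # invalid: no temporal reuse
            out[x] = curr[x]
    return out

curr = np.array([1.0, 1.0, 1.0, 1.0])    # noisy current frame
prev = np.array([0.0, 0.0, 0.0, 0.0])    # accumulated history
motion = np.array([0, 0, -1, 10])        # last pixel: no correspondence
result = temporal_accumulate(curr, prev, motion)
```

Production denoisers additionally clamp or reject history based on color and geometry consistency; the talk's contribution is computing motion vectors that stay valid for shadows, glossy reflections, and occlusions, where naive screen-space vectors fail.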

Speaker bio:

Zheng Zeng received his B.S. degree in digital media technology from Shandong University, Jinan, in 2018. He is currently a Master's student in the School of Software at Shandong University, Jinan, supervised by Prof. Lu Wang. His research interests include photorealistic rendering and high-performance rendering.

Speaker homepage: https://zheng95z.github.io/


Host bio:

Beibei Wang is an Associate Professor and master's supervisor at Nanjing University of Science and Technology, and a member of the CAD&CG Technical Committee of the China Computer Federation. Her research focuses on rendering in computer graphics, including global illumination algorithms, light transport in participating media, and complex material models. She received her bachelor's degree in 2009 and her Ph.D. in 2014, both from Shandong University, including two years of joint training at ParisTech. In 2015 she worked at the UK game studio Studio Gobo on the development of Disney's game Infinity 3. From late 2015 to early 2017 she was a postdoctoral researcher at INRIA (the French National Institute for Research in Computer Science and Automation), after which she joined Nanjing University of Science and Technology. She has published more than 30 papers, including over ten first-author papers in ACM TOG, IEEE TVCG, and CGF. She serves on the program committees of EGSR 2021 and HPG 2021 and as a reviewer for ACM TOG, SIGGRAPH Asia, EG, and other journals and conferences. Homepage: https://wangningbei.github.io/


The "Tutorials" section of the GAMES homepage explains how to watch GAMES Webinar live streams and how to join the GAMES WeChat group;
the "Resources" section of the GAMES homepage offers videos and slides of past webinar talks.
Live stream link: http://webinar.games-cn.org
