GAMES Webinar 2021 – Session 180 (Rendering) | Liang Shi (Massachusetts Institute of Technology), Tiancheng Sun (University of California, San Diego)
Speaker 1: Liang Shi (Massachusetts Institute of Technology)
Time: Thursday, April 22, 2021, 10:00–10:45 AM (Beijing time)
Title: Towards Real-time Photorealistic 3D Holography with Deep Neural Networks
Abstract:
We present a deep-learning-based method to synthesize photorealistic 3D holograms in real time on a consumer-grade GPU and interactively on an iPhone. Computer-generated holography (CGH) is fundamental to applications such as biosensing, volumetric displays, optical/acoustic tweezers, and many others that require spatial control of intricate optical or acoustic fields. For near-eye displays, CGH provides the opportunity to support true 3D projections in a sunglasses-like headset. Yet, the conventional approach of computing a true 3D hologram via physical simulation of diffraction and interference is slow and unaware of occlusion. These computational challenges limit the interactivity and realism of the ultimate immersive experience. In this talk, I will describe techniques to mitigate these challenges, including how to augment the physical simulation algorithm to handle occlusion for RGB-D input, methods to create a large-scale 3D hologram dataset, and the design and training of CNNs to speed up 3D hologram synthesis. I will demonstrate high-quality, experimentally captured 3D holograms generated by the proposed system and discuss possible applications and extensions.
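To make the "physical simulation of diffraction" concrete, the following is a minimal NumPy sketch of the standard angular spectrum method for scalar wave propagation, a common building block of conventional CGH pipelines. All parameter values (wavelength, pixel pitch, propagation distance, grid size) are illustrative, not taken from the talk; a real RGB-D pipeline would propagate many depth layers and handle occlusion between them, which is exactly the costly step the speaker's networks accelerate.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, distance):
    """Propagate a complex wavefront by `distance` (meters) using the
    angular spectrum method of scalar diffraction."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)          # spatial frequencies (1/m)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Free-space transfer function; evanescent components
    # (negative argument under the square root) are zeroed out.
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.exp(2j * np.pi * distance * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Toy example: propagate a point emitter from one depth plane to the
# hologram plane, then interfere it with a unit reference wave to form
# a (greatly simplified) amplitude hologram.
n = 256
field = np.zeros((n, n), dtype=complex)
field[n // 2, n // 2] = 1.0                   # point source
prop = angular_spectrum_propagate(field, 520e-9, 8e-6, 1e-3)
hologram = np.abs(prop + 1.0) ** 2            # interference pattern
```

Since the transfer function has unit magnitude for propagating components, the propagated field conserves energy; with these toy parameters no frequency exceeds the evanescent cutoff.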
Speaker bio:
Liang Shi is a PhD candidate at MIT CSAIL, advised by Prof. Wojciech Matusik. His current research focuses on computational displays for virtual/augmented reality, appearance modeling, and computational fabrication. His research has been recognized by the Sony Focus Research Award (2018), the MIT.nano/NCSOFT Seed Grant Award (2019), and a Facebook Fellowship (2021). He received his master's degree from Stanford University, supervised by Prof. Gordon Wetzstein as a member of the Stanford Computational Imaging Lab, and his bachelor's degree from Beihang University. He has interned at NVIDIA Research, Adobe Research, and Facebook Reality Labs.
Homepage: http://people.csail.mit.edu/liangs/
Speaker 2: Tiancheng Sun (University of California, San Diego)
Time: Thursday, April 22, 2021, 10:45–11:30 AM (Beijing time)
Title: Light Stage Super-Resolution: Continuous High-Frequency Relighting
Abstract:
The light stage has been widely used in computer graphics for the past two decades, primarily to enable the relighting of human faces. By capturing the appearance of a human subject under different light sources, one obtains the light transport matrix of that subject, which enables image-based relighting in novel environments. However, due to the finite number of lights in the stage, the light transport matrix represents only a sparse sampling of the entire sphere. As a consequence, relighting the subject with a point light or a directional source that does not coincide exactly with one of the lights in the stage requires interpolating and resampling the images corresponding to nearby lights, which leads to ghosting shadows, aliased specularities, and other artifacts.
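The light-transport view of relighting described above reduces to a linear operation: stack one captured one-light-at-a-time image per stage light as the columns of a matrix, and a novel environment becomes a weighted sum of those columns. The sketch below illustrates this with random toy data; the dimensions and weights are purely illustrative, not from the talk.

```python
import numpy as np

# Toy light stage: L lights, images flattened to P pixels.
# The light transport matrix T (P x L) holds one captured
# image (column) per stage light.
rng = np.random.default_rng(0)
num_lights, num_pixels = 32, 64 * 64
T = rng.random((num_pixels, num_lights))   # one-light-at-a-time captures

# Relighting under a novel environment is a linear combination of the
# captured images, weighted by the environment's intensity toward
# each stage light direction.
env_weights = rng.random(num_lights)       # environment sampled at the lights
relit = T @ env_weights                    # image under the novel environment
```

Because relighting is linear in the lighting, doubling the environment intensity exactly doubles the rendered image; the artifacts the abstract describes arise only when the desired light direction falls between the sparse columns of T.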
To ameliorate these artifacts and produce better results under arbitrary high-frequency lighting, this paper proposes a learning-based solution for the “super-resolution” of scans of human faces taken from a light stage. Given an arbitrary “query” light direction, our method aggregates the captured images corresponding to neighboring lights in the stage, and uses a neural network to synthesize a rendering of the face that appears to be illuminated by a “virtual” light source at the query location.
This neural network must circumvent the inherent aliasing and regularity of the light stage data that was used for training, which we accomplish through the use of regularized traditional interpolation methods within our network.
Our learned model is able to produce renderings for arbitrary light directions that exhibit realistic shadows and specular highlights, and is able to generalize across a wide variety of subjects.
Our super-resolution approach enables more accurate renderings of human subjects under detailed environment maps, as well as the construction of simpler light stages that contain fewer light sources while still yielding renderings of quality comparable to light stages with more densely sampled lights.
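The classical interpolation step that the learned model replaces can be sketched as choosing the few stage lights nearest to the query direction and blending their images with normalized weights. The weighting scheme below (cosine similarity over the k nearest lights) is a hypothetical illustration, not the specific scheme used in the paper; blending images this way is precisely what produces the ghosted shadows the abstract describes.

```python
import numpy as np

def neighbor_weights(query_dir, light_dirs, k=3):
    """Normalized blend weights for the k stage lights nearest to a
    query direction (all directions are unit vectors)."""
    cos = light_dirs @ query_dir        # cosine similarity to each light
    idx = np.argsort(-cos)[:k]          # indices of the k nearest lights
    w = np.clip(cos[idx], 0.0, None)    # ignore back-facing neighbors
    return idx, w / w.sum()

# Toy stage: six lights along the coordinate axes.
light_dirs = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                       [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
query = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)   # between +x and +y
idx, w = neighbor_weights(query, light_dirs)
# Blending the captured images with weights w is the classical
# resampling that the paper's network is designed to improve upon.
```

For this query, the +x and +y lights share the weight equally, so hard shadows from the two source images would be cross-faded rather than moved, which is the ghosting artifact in question.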
Speaker bio:
Tiancheng (Kevin) Sun is a fourth-year PhD student at UC San Diego, advised by Prof. Ravi Ramamoorthi. His research interests lie at the intersection of inverse rendering and rendering, with a particular interest in portrait appearance. He is a recipient of the 2019 Google PhD Fellowship and won first place in the 2018 ACM Student Research Competition.
Homepage: www.kevinkingo.com
Host bio:
Hongzhi Wu is an associate professor and PhD advisor at the College of Computer Science and Technology, Zhejiang University, and a recipient of the National Science Fund for Excellent Young Scholars. He received his bachelor's degree from Fudan University and his PhD from Yale University. His main research interests are high-density acquisition devices and differentiable modeling; he has developed several high-density light-array acquisition devices with independent intellectual property rights, published more than ten papers in ACM TOG, co-translated two computer graphics books, and led multiple National Natural Science Foundation of China projects as well as collaborative projects with Microsoft Research Asia. He serves as program secretary-general of Chinagraph, secretary-general of the International Cooperation and Exchange Committee of the China Society of Image and Graphics, a member of its Intelligent Graphics Technical Committee, and a program committee member of international conferences including PG, EGSR, and CAD/Graphics. Homepage: http://www.cad.zju.edu.cn/home/hwu/
The "Tutorials" section of the GAMES homepage explains how to watch the GAMES Webinar livestream and how to join the GAMES WeChat group;
the "Resources" section of the GAMES homepage offers videos and slides from past webinars.
Livestream link: http://webinar.games-cn.org