GAMES Webinar 2020 – Session 167 (ZJU-Stanford Frontiers in Computational Imaging Seminar (I)) | Suyeon Choi, Julien Martel, David Lindell, Evan Peng (Stanford University)

[GAMES Webinar 2020 – Session 167]

(ZJU-Stanford Frontiers in Computational Imaging Seminar (I))

Speaker 1: Suyeon Choi & Evan Peng (Stanford University)

Time: Thursday, December 17, 2020, 10:00–11:00 AM Beijing Time (UTC+8)

Title: Neural Holography: Next-generation High Contrast, Real-time Computer-generated Holographic Displays

Abstract:

Holographic displays promise unprecedented capabilities for direct-view displays as well as virtual and augmented reality applications. However, one of the biggest challenges for computer-generated holography (CGH) is the fundamental tradeoff between algorithm runtime and achieved image quality. Moreover, the image quality achieved by most holographic displays is low, due to the mismatch between the display's actual optical wave propagation and the model used to simulate it. Here, we develop an algorithmic CGH framework that achieves unprecedented image fidelity and real-time framerates. Our framework comprises several parts: a novel camera-in-the-loop optimization strategy that allows us to either optimize a hologram directly or train an interpretable model of the optical wave propagation, and a neural network architecture that represents the first CGH algorithm capable of generating full-color, high-quality holographic images at FHD resolution in real time. We further propose a holographic display architecture using two SLMs, to which camera-in-the-loop optimization with an automated calibration procedure is applied. Both diffracted and undiffracted light at the target plane are captured and used to simultaneously update the hologram patterns on the SLMs with a stochastic gradient descent algorithm. The experimental results demonstrate that the proposed architecture, compared to conventional single-SLM systems, delivers higher-contrast, less noisy holographic images without the need for extra filtering.
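As a rough illustration of the gradient-based hologram optimization the abstract describes, the sketch below runs plain gradient descent on a phase-only hologram against an idealized simulated propagation model. Everything here is an assumption for illustration: a single unitary FFT stands in for the real optics, and the target pattern, resolution, and step size are arbitrary. In the talk's camera-in-the-loop setting, this simulated forward model would be replaced or calibrated by images captured from the physical display.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target image: a bright square (amplitude), scaled so its total
# energy matches that of a unit-amplitude phase-only SLM field.
N = 64
target = np.zeros((N, N))
target[24:40, 24:40] = 1.0
target *= N / np.linalg.norm(target)

def loss_and_grad(phase):
    """Amplitude-mismatch loss at the image plane and its gradient
    w.r.t. the SLM phase (derived with Wirtinger calculus)."""
    u = np.exp(1j * phase)               # phase-only SLM field
    A = np.fft.fft2(u, norm="ortho")     # idealized propagation: one unitary FFT
    a = np.abs(A)
    loss = np.sum((a - target) ** 2)
    r = (a - target) * A / np.maximum(a, 1e-12)
    g = np.fft.ifft2(r, norm="ortho")    # adjoint of the propagation model
    grad = 2.0 * np.imag(np.conj(u) * g)
    return loss, grad

phase = rng.uniform(0, 2 * np.pi, (N, N))   # random initial phase pattern
loss0, _ = loss_and_grad(phase)
for _ in range(300):                        # plain (full-batch) gradient descent
    loss, grad = loss_and_grad(phase)
    phase -= 0.05 * grad                    # illustrative step size
```

After the loop, `loss` is well below `loss0`: the optimized phase pattern reproduces the target amplitude much more faithfully than the random initialization.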

Speaker bios:

Suyeon Choi (https://choisuyeon.github.io/) is a PhD student in the Department of Electrical Engineering at Stanford University. His research interests include computational displays, holography, algorithmic frameworks for optical systems, and light transport. Most recently, he has been interested in developing 3D display hardware systems with novel algorithmic frameworks. He received his bachelor’s degree at Seoul National University, where he was a recipient of The Presidential Science Scholarship.

Yifan (Evan) Peng (http://stanford.edu/~evanpeng/) is a Postdoc Research Fellow in the Computational Imaging Lab at Stanford University. His research interests span the interdisciplinary fields of optics, computer graphics, and computer vision. Much of his work concerns developing computational imaging modalities combining optics and algorithms, for both cameras and displays. He completed his PhD in CS at the University of British Columbia, and his MSc and BE in Optical Science and Engineering at Zhejiang University.


Speaker 2: Julien Martel & David B. Lindell (Stanford University)

Time: Thursday, December 17, 2020, 11:00 AM–12:00 noon Beijing Time (UTC+8)

Title: Implicit Neural Representation Networks for Fitting Signals, Derivatives, and Integrals

Abstract:

Implicitly defined, continuous, differentiable signal representations parameterized by neural networks have emerged as a powerful paradigm, offering many possible benefits over conventional representations. However, current network architectures for such implicit neural representations are incapable of modeling signals with fine detail, and fail to represent a signal’s spatial and temporal derivatives, despite the fact that these are essential to many physical signals defined implicitly as solutions to partial differential equations. In this talk, we describe how sinusoidal representation networks, or SIRENs, are ideally suited for representing complex natural signals and their derivatives. Using SIRENs, we demonstrate the representation of images, wavefields, video, sound, and their derivatives. Further, we show how SIRENs can be leveraged to solve challenging boundary value problems, such as particular Eikonal equations (yielding signed distance functions), the Poisson equation, and the Helmholtz and wave equations. While SIRENs can be used to fit signals and their derivatives, we also introduce a new framework for solving integral equations with implicit neural representation networks. Our automatic integration framework, AutoInt, enables the calculation of any definite integral with just two evaluations of a neural network. We apply this efficient integration approach to neural volume rendering and demonstrate a greater than ten-fold improvement in rendering time over previous approaches.
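The two ideas in the abstract can be sketched with a hand-written one-hidden-layer sinusoidal network. All weights, sizes, and the frequency scale below are illustrative assumptions (real SIRENs are deeper and trained by backpropagation); the sketch only shows that (a) differentiating a sine-activated network yields another sinusoidal network, which is why derivatives are well represented, and (b) AutoInt's trick of treating the network as an antiderivative makes a definite integral cost just two evaluations.

```python
import numpy as np

rng = np.random.default_rng(42)

# A tiny one-hidden-layer network with sine activations, in the spirit of SIREN.
w0 = 3.0                                # frequency scale (omega_0), illustrative
H = 16                                  # hidden units
w = rng.normal(size=H)                  # input weights
b = rng.uniform(-np.pi, np.pi, size=H)  # biases (phases)
v = rng.normal(size=H) / H              # output weights

def F(x):
    """The network itself -- treated as AutoInt's 'integral network' (antiderivative)."""
    return np.sin(w0 * (np.outer(x, w) + b)) @ v

def f(x):
    """Closed-form derivative dF/dx: a phase-shifted sinusoidal network.
    Differentiating a SIREN-style layer just turns sin into cos."""
    return (w0 * np.cos(w0 * (np.outer(x, w) + b)) * w) @ v

# AutoInt's trick: a definite integral of f needs only two evaluations of F.
lo, hi = -1.0, 1.0
two_evals = F(np.array([hi]))[0] - F(np.array([lo]))[0]

# Sanity check against dense trapezoidal quadrature of f over [lo, hi].
xs = np.linspace(lo, hi, 20001)
ys = f(xs)
quad = np.sum((ys[1:] + ys[:-1]) * 0.5 * np.diff(xs))
```

Here `two_evals` and `quad` agree up to quadrature error, while the former costs two network evaluations instead of tens of thousands; this is the speedup AutoInt exploits for volume rendering.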

Speaker bios:

Julien Martel (http://web.stanford.edu/~jnmartel/) is a Postdoctoral Research Fellow at Stanford University in the Computational Imaging Lab led by Gordon Wetzstein. His research interests are in unconventional visual sensing and processing. More specifically, his current topics of research include the co-design of hardware and algorithms for visual sensing, the design of methods for vision sensors with in-pixel computing capabilities, and the use of novel representations for visual data such as neural implicit representations.

David B. Lindell (https://davidlindell.com) is a PhD candidate in the Department of Electrical Engineering at Stanford University. His research interests are in the areas of computational imaging, machine learning, and remote sensing. Most recently, he has worked on developing neural representations for applications in vision and rendering. He has also developed advanced 3D imaging systems to capture objects hidden around corners or through scattering media.


Host bios:

Rui Wang (http://www.cad.zju.edu.cn/home/rwang) is a professor at the State Key Laboratory of CAD&CG, Zhejiang University. His research interests are mainly in real-time rendering, realistic rendering, GPU-based computation, and 3D display techniques. He currently leads a group working on next-generation rendering techniques and real-time engines.

Yifan (Evan) Peng (http://stanford.edu/~evanpeng/) is a Postdoc Research Fellow in the Computational Imaging Lab at Stanford University; see his speaker bio above.

The "使用教程" (Tutorials) section of the GAMES homepage explains how to watch the GAMES Webinar livestream and how to join the GAMES WeChat group;
videos and slides of past webinar talks are available in the "资源分享" (Resources) section of the GAMES homepage.
Livestream link: http://webinar.games-cn.org
