GAMES Webinar 2022 – 235期(浙大-斯坦福-港大 前沿技术交流之计算成像专题 (III)) | Cindy M. Nguyen, Eric Chan (Stanford University)

【GAMES Webinar 2022-235期】

(浙大-斯坦福-港大 前沿技术交流之计算成像专题 (III),ZJU-Stanford-HKU Frontiers in Computational Imaging Seminar (III))

报告嘉宾1(Speaker):Cindy M. Nguyen (Stanford University)

报告时间(Time):2022年6月30号 早上10:00-10:45(北京时间)(Thu 6/30/2022 10:00-10:45 AM (UTC+8))

报告题目(Title):Learning Spatially Varying Pixel Exposures for Motion Deblurring

报告方式(Broadcast Link):


Computationally removing the motion blur introduced by camera shake or object motion in a captured image remains a challenging task in computational photography. Deblurring methods are often limited by the fixed global exposure time of the image capture process. The post-processing algorithm either must deblur a longer exposure that contains relatively little noise or denoise a short exposure that intentionally removes the opportunity for blur at the cost of increased noise.
We present a novel approach of leveraging spatially varying pixel exposures for motion deblurring using next-generation focal-plane sensor–processors along with an end-to-end design of these exposures and a machine learning–based motion-deblurring framework. We demonstrate in simulation and a physical prototype that learned spatially varying pixel exposures (L-SVPE) can successfully deblur scenes while recovering high frequency detail.
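The trade-off the abstract describes — long exposures accumulate motion blur while short exposures accumulate noise, and a spatially varying exposure pattern interleaves both on one sensor — can be illustrated with a toy simulation. This is only a minimal sketch of the idea with a fixed tiled pattern and made-up noise levels; the talk's L-SVPE method *learns* the per-pixel exposure pattern end to end together with a deblurring network, which is not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D "video": a step edge that shifts one pixel per frame (object motion).
n_frames, n_pix = 8, 32
frames = np.zeros((n_frames, n_pix))
for t in range(n_frames):
    frames[t, 16 + t:] = 1.0  # edge moves right over time

read_noise = 0.05  # hypothetical per-readout noise level

# Long exposure: average over all frames -> motion blur, but low noise.
long_exp = frames.mean(axis=0) + rng.normal(0, read_noise / np.sqrt(n_frames), n_pix)

# Short exposure: a single frame -> sharp, but noisier.
short_exp = frames[0] + rng.normal(0, read_noise, n_pix)

# Spatially varying pixel exposure: interleave short- and long-exposure
# pixels on the same sensor (fixed checkerboard-like pattern here).
pattern = (np.arange(n_pix) % 2) == 0   # True -> short-exposure pixel
svpe = np.where(pattern, short_exp, long_exp)
```

In the simulated long exposure, pixels swept by the moving edge take intermediate values (blur), while the short exposure keeps the edge sharp at the cost of noise; the interleaved capture retains samples of both, which a reconstruction network can then fuse.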

讲者简介(Speaker bio):

Cindy M. Nguyen is a PhD candidate in the Department of Electrical Engineering at Stanford University. Her research interests include denoising, deblurring, and depth estimation, among other challenges in computational photography. More recently, she has been interested in monocular depth estimation and tools for reconstructing 3D scenes. She received her bachelor's and master's degrees from Stanford University.

报告嘉宾2(Speaker):Eric Chan(Stanford University)

报告时间(Time):2022年6月30号 早上10:45-11:25(北京时间)(Thu 6/30/2022 10:45-11:25 AM (UTC+8))

报告题目(Title):Efficient 3D Generative Models

报告方式(Broadcast Link):


In this talk, we’ll discuss 3D-aware generative models, a type of 3D GAN that trains from 2D images. We’ll build an intuition for the problem of computational efficiency in neural rendering, and I’ll describe some recent techniques that have led to significant improvements in both the quality and efficiency of 3D-aware generative models. These breakthroughs have enabled new approaches that generate photorealistic renderings in real time, produce detailed geometry without requiring 3D ground truth or multi-view training data, and support a plethora of interesting applications, including photorealistic 3D avatar generation and single-image 3D reconstruction.

讲者简介(Speaker bio):

Eric Chan is a Ph.D. student at Stanford University, where he is advised by Gordon Wetzstein and Jiajun Wu. After studying mechanical engineering and computer science at Yale, he began learning the basics of computer vision in the hope of teaching robots and algorithms to better understand the world around them. Over the last couple of years, his focus has shifted to the intersection of 3D graphics and vision, including generalization across 3D representations and 3D generative models.

主持人简介(Host bio):

Evan Y. Peng is an Assistant Professor at the University of Hong Kong. Before joining HKU, he was a Postdoctoral Research Scholar in the Stanford Computational Imaging Laboratory. He received his PhD in Computer Science from the Imager Lab at the University of British Columbia, and both his MSc and BS in Optical Science and Engineering from the State Key Lab of Modern Optical Instrumentation, Zhejiang University. His research lies in the interdisciplinary field of optics, graphics, vision, and artificial intelligence, with a particular focus on: computational optics, sensing, and display; holographic imaging/display and VR/AR/MR; computational microscope imaging; low-level computer vision; inverse rendering; and human-centered visual and sensory systems.


The “Tutorials” (使用教程) section of the GAMES homepage explains how to watch GAMES Webinar live streams and how to join the GAMES WeChat group.
观看直播的链接(link to the livestream):
