GAMES Webinar 2022 – 235期(浙大-斯坦福-港大 前沿技术交流之计算成像专题 (III)) | Cindy M. Nguyen, Eric Chan (Stanford University)

【GAMES Webinar 2022-235期】

(浙大-斯坦福-港大 前沿技术交流之计算成像专题 (III),ZJU-Stanford-HKU Frontiers in Computational Imaging Seminar (III))

报告嘉宾1(Speaker):Cindy M. Nguyen (Stanford University)

报告时间(Time):2022年6月30号 早上10:00-10:45(北京时间)(Thur 6/30/2022 10:00-10:45 AM (UTC+8))

报告题目(Title):Learning Spatially Varying Pixel Exposures for Motion Deblurring

报告方式(Broadcast Link):http://webinar.games-cn.org

报告摘要(Abstract):

Computationally removing the motion blur introduced by camera shake or object motion in a captured image remains a challenging task in computational photography. Deblurring methods are often limited by the fixed global exposure time of the image capture process. The post-processing algorithm must either deblur a long exposure, which contains relatively little noise, or denoise a short exposure, which avoids blur at the cost of increased noise.
We present a novel approach of leveraging spatially varying pixel exposures for motion deblurring using next-generation focal-plane sensor–processors along with an end-to-end design of these exposures and a machine learning–based motion-deblurring framework. We demonstrate in simulation and a physical prototype that learned spatially varying pixel exposures (L-SVPE) can successfully deblur scenes while recovering high frequency detail.
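The exposure trade-off described in the abstract can be illustrated with a toy 1D simulation (this is only an illustration of the underlying imaging physics, not the paper's method; the capture model, noise levels, and the `capture` helper below are all assumptions): a long exposure collects more light and so has a better signal-to-noise ratio, but smears motion across pixels, while a short exposure freezes motion at the cost of noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 1D "scene" with a sharp edge, standing in for one image row.
scene = np.zeros(64)
scene[32:] = 1.0

def capture(scene, exposure, blur_len, read_noise=2.0):
    """Toy capture model: box motion blur over `blur_len` pixels,
    Gaussian-approximated shot noise (std ~ sqrt(signal)), and a fixed
    read noise in electrons. Returns a scene-normalized measurement."""
    kernel = np.ones(blur_len) / blur_len
    blurred = np.convolve(scene, kernel, mode="same")
    electrons = exposure * blurred                    # collected photo-electrons
    shot = rng.normal(0.0, np.sqrt(np.maximum(electrons, 1e-12)))
    read = rng.normal(0.0, read_noise, scene.shape)
    return (electrons + shot + read) / exposure

long_exp = capture(scene, exposure=100.0, blur_len=9)   # blurry, low noise
short_exp = capture(scene, exposure=4.0, blur_len=1)    # sharp, noisy
```

Spatially varying pixel exposures sidestep this global trade-off by letting different pixels on the sensor use different exposure times within one capture.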

讲者简介(Speaker bio):

Cindy M. Nguyen (https://ccnguyen.github.io/) is a PhD candidate in the Department of Electrical Engineering at Stanford University. Her research interests include denoising, deblurring, and depth estimation, among other challenges in computational photography. More recently, she has been interested in monocular depth estimation and tools for reconstructing 3D scenes. She received her bachelor's and master's degrees from Stanford University.


报告嘉宾2(Speaker):Eric Chan(Stanford University)

报告时间(Time):2022年6月30号 早上10:45-11:25(北京时间)(Thur 6/30/2022 10:45-11:25 AM (UTC+8))

报告题目(Title):Efficient 3D Generative Models

报告方式(Broadcast Link):http://webinar.games-cn.org

报告摘要(Abstract):

In this talk, we'll discuss 3D-aware generative models, a type of 3D GAN that trains from 2D images. We'll build intuition for the problem of computational efficiency in neural rendering, and I'll describe some recent techniques that have led to significant improvements in both quality and efficiency for 3D-aware generative models. These breakthroughs have led to new approaches that can generate photorealistic renderings in real time, produce detailed geometry without requiring 3D ground truth or multi-view training data, and enable a plethora of interesting applications, including photorealistic 3D avatar generation and single-image 3D reconstruction.
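To see why computational efficiency is a central problem in neural rendering, consider a minimal NeRF-style volume-rendering sketch (an illustration only, not the speaker's method; the analytic `field` below is a hypothetical stand-in for a neural network): each pixel requires many field queries along its ray, so rendering cost scales with image resolution times samples per ray.

```python
import numpy as np

def field(xyz):
    """Hypothetical radiance field: density and grayscale color per 3D point.
    An analytic Gaussian blob stands in for an MLP here."""
    density = 5.0 * np.exp(-np.sum(xyz**2, axis=-1) / 0.1)
    color = np.full(xyz.shape[:-1], 0.8)
    return density, color

def render_ray(origin, direction, n_samples=64, near=0.0, far=2.0):
    """Alpha-composite `n_samples` field queries along one camera ray."""
    t = np.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction              # (n_samples, 3)
    density, color = field(pts)
    delta = (far - near) / n_samples
    alpha = 1.0 - np.exp(-density * delta)             # per-segment opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance
    weights = trans * alpha
    return np.sum(weights * color)                     # composited pixel value

# One 128x128 image already needs 128*128*64 (about one million) field
# queries; with a real MLP in place of `field`, this per-sample cost is what
# motivates the hybrid explicit/implicit representations behind efficient
# 3D-aware generative models.
pixel = render_ray(np.array([0.0, 0.0, -1.0]), np.array([0.0, 0.0, 1.0]))
```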

讲者简介(Speaker bio):

Eric Chan is a Ph.D. student at Stanford, where he is advised by Gordon Wetzstein and Jiajun Wu. After studying mechanical engineering and computer science at Yale, he began learning the basics of computer vision in the hope of teaching robots and algorithms how to better understand the world around them. Over the last couple of years, his focus has shifted to the intersection of 3D graphics and vision, particularly generalization across 3D representations and 3D generative models. Find out more at ericryanchan.github.io.


主持人简介(Host bio):

Evan Y. Peng (www.eee.hku.hk/~evanpeng/) is an Assistant Professor at the University of Hong Kong. Before joining HKU, he was a Postdoctoral Research Scholar in the Stanford University Computational Imaging Laboratory. He received his PhD in Computer Science from the Imager Lab at the University of British Columbia, and both his MSc and BS in Optical Science and Engineering from the State Key Lab of Modern Optical Instrumentation, Zhejiang University. His research lies in the interdisciplinary field of optics, graphics, vision, and artificial intelligence, with a particular focus on computational optics, sensing, and display; holographic imaging/display and VR/AR/MR; computational microscope imaging; low-level computer vision; inverse rendering; and human-centered visual and sensory systems.

Rui Wang (王锐) is a professor and PhD advisor at Zhejiang University. His long-standing research focus is rendering, centered on the theory, algorithms, and frameworks of graphics rendering for virtual reality and 3D games. He has made important breakthroughs in efficient sampling of complex light fields, real-time rendering algorithms, and automatic optimization of rendering architectures, producing internationally leading research results. He has published more than 60 academic papers (including over ten in top journals), led multiple national and provincial projects, and holds more than ten granted patents; his research results have been successfully applied at companies including Huawei, Siemens, NetEase Games, and Zhongnan Cartoon. In 2011 he received the First Prize of the Science and Technology Progress Award (Outstanding Scientific Research Achievement Awards in Higher Education, Ministry of Education) and the First Prize of the Zhejiang Province Technology Invention Award. 个人主页(Homepage):http://www.cad.zju.edu.cn/home/rwang

The "Tutorials" (使用教程) section of the GAMES homepage explains how to watch the GAMES Webinar livestream and how to join the GAMES WeChat group;
the "Resources" (资源分享) section provides videos and slides of past webinar talks.
观看直播的链接(link to the livestream):http://webinar.games-cn.org
