GAMES Webinar 2024 – Session 342 (Stanford-HKU Frontiers in Computational Imaging Seminar (V)) | Manu Gopakumar & Gun-Yeal Lee (Stanford University), Haley So (Stanford University)

[GAMES Webinar 2024, Session 342] Stanford-HKU Frontiers in Computational Imaging Seminar (V)

Time: Saturday, September 28, 2024, 10:30-11:30 AM (Beijing Time, UTC+8)


Speakers: Manu Gopakumar, Gun-Yeal Lee (Stanford University)

Title: Full-color 3D holographic augmented-reality displays with metasurface waveguides

Abstract:

Emerging spatial computing systems seamlessly superimpose digital information on the physical environment observed by a user, enabling transformative experiences across various domains, such as entertainment, education, communication and training. However, the widespread adoption of augmented-reality (AR) displays has been limited due to the bulky projection optics of their light engines and their inability to accurately portray three-dimensional (3D) depth cues for virtual content, among other factors. We will discuss a holographic AR system that overcomes these challenges using a unique combination of inverse-designed full-color metasurface gratings, a compact dispersion-compensating waveguide geometry, and artificial-intelligence-driven holography algorithms. These elements are co-designed to eliminate the need for bulky collimation optics between the spatial light modulator and the waveguide and to present vibrant, full-color, 3D AR content in a compact device form factor. To deliver unprecedented visual quality with our prototype, we developed an innovative image formation model that combines a physically accurate waveguide model with learned components that are automatically calibrated using camera feedback. Our unique co-design of a nanophotonic metasurface waveguide and artificial-intelligence-driven holographic algorithms represents a significant advancement in creating visually compelling 3D AR experiences in a compact wearable device.
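The camera-feedback calibration mentioned above can be illustrated with a toy sketch. Everything below is an illustrative assumption rather than the authors' implementation: the single-FFT propagation stands in for the physically accurate waveguide model, and a per-pixel gain stands in for the learned components fitted from camera captures.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32

def physical_model(phase):
    """Idealized propagation model: far-field intensity of a phase-only
    SLM pattern (a single FFT, a crude stand-in for the real physics)."""
    field = np.fft.fft2(np.exp(1j * phase)) / N
    return np.abs(field) ** 2

# The "real" display deviates from the ideal model by an unknown
# per-pixel gain (a stand-in for whatever the learned components capture).
true_gain = 1.0 + 0.2 * rng.standard_normal((N, N))

def camera_capture(phase):
    return true_gain * physical_model(phase)

# Camera-in-the-loop calibration: display random phase patterns, compare
# captures against the ideal model, and fit the gain by least squares.
num = np.zeros((N, N))
den = np.zeros((N, N))
for _ in range(64):
    phase = rng.uniform(0, 2 * np.pi, (N, N))
    pred = physical_model(phase)
    cap = camera_capture(phase)
    num += cap * pred
    den += pred ** 2
learned_gain = num / den

def calibrated_model(phase):
    return learned_gain * physical_model(phase)

# The calibrated model predicts captures far better than the ideal one.
phase = rng.uniform(0, 2 * np.pi, (N, N))
err_ideal = np.mean((camera_capture(phase) - physical_model(phase)) ** 2)
err_cal = np.mean((camera_capture(phase) - calibrated_model(phase)) ** 2)
print(err_cal < err_ideal)
```

In the toy setting the least-squares fit recovers the gain exactly; the point is only the workflow: predictions from a physical model are compared against camera feedback, and the residual mismatch is absorbed by automatically calibrated learned terms.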

Speaker bio:

Manu is a final-year PhD candidate in the Electrical Engineering Department at Stanford University, working with Professor Gordon Wetzstein. His research interests center on the co-design of optical systems and computational algorithms. More specifically, he is currently focused on using novel computational algorithms to unlock high-quality 3D and 4D holography and more compact form factors for holographic displays. Prior to coming to Stanford, Manu received Bachelor's and Master's degrees in Electrical and Computer Engineering from Carnegie Mellon University, during which he worked with Pulkit Grover and Aswin Sankaranarayanan.

Gun-Yeal is a Postdoctoral Researcher at Stanford University, working with Professor Gordon Wetzstein. He is broadly interested in optics and photonics, with a particular focus on nanophotonics and metasurfaces. His recent research at the intersection of optics and computer vision focuses on developing next-generation optical imaging, display, and computing systems using advanced photonic devices and AI-driven algorithms. Gun-Yeal completed his PhD at Seoul National University under the guidance of Prof. Byoungho Lee. For his undergraduate studies, he double-majored in Electrical and Computer Engineering and Physics, also at Seoul National University. He is a recipient of the OSA Incubic/Milton Chang Award and the SPIE Optics and Photonics Education Scholarship, a finalist for the OSA Emil Wolf Award, and a recipient of an NRF postdoctoral fellowship.

Speaker: Haley So (Stanford University)

Title: In-pixel Recurrent Neural Networks for End-to-end-optimized Perception with Neural Sensors

Abstract:

Conventional image sensors digitize high-resolution images at fast frame rates, producing a large amount of data that needs to be transmitted off the sensor for further processing. This is challenging for perception systems operating on edge devices, because communication is power inefficient and induces latency. Fueled by innovations in stacked image sensor fabrication, emerging sensor-processors offer programmability and minimal processing capabilities directly on the sensor. We exploit these capabilities by developing an efficient recurrent neural network architecture, PixelRNN, that encodes spatio-temporal features on the sensor using purely binary operations. PixelRNN reduces the amount of data to be transmitted off the sensor by factors up to 256 compared to the raw sensor data while offering competitive accuracy for hand gesture recognition and lip reading tasks. In addition, we experimentally validate PixelRNN using a prototype implementation on the SCAMP-5 sensor-processor platform.
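As a rough illustration of the kind of binarized recurrent update such an architecture performs on-sensor, here is a toy sketch. The dimensions, the random ±1 weights, and the sign nonlinearity are assumptions for illustration only, not the published PixelRNN architecture or its trained parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dimensions: a 16x16 binary frame flattened to 256 inputs, 32 hidden units.
IN, HID = 256, 32

# Binary (+1/-1) weights, fixed at random here; in practice they would be
# trained end to end and then binarized.
w_x = rng.choice([-1, 1], size=(HID, IN)).astype(np.int8)
w_h = rng.choice([-1, 1], size=(HID, HID)).astype(np.int8)

def binary_rnn_step(x, h):
    """One recurrent step using only binary-friendly operations:
    +1/-1 multiplies (an XNOR analog), integer accumulation (a popcount
    analog), and a sign threshold back to a binary hidden state."""
    pre = w_x @ x + w_h @ h
    return np.where(pre >= 0, 1, -1).astype(np.int8)

# Run over a short clip of binary frames.
h = np.ones(HID, dtype=np.int8)
for _ in range(8):
    frame = rng.choice([-1, 1], size=IN).astype(np.int8)
    h = binary_rnn_step(frame, h)

# Only the 32-unit binary state leaves the "sensor" each frame instead of
# the 256-pixel frame: an 8x reduction in this toy (up to 256x in the paper).
print(h.shape)
```

The design point is that every operation in the step maps onto hardware a sensor-processor can afford: no floating point, only bitwise products, counting, and thresholding, so the full-resolution frames never need to leave the sensor.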

Speaker bio:

Haley So is an electrical engineering PhD student in the Stanford Computational Imaging Lab led by Professor Gordon Wetzstein. Her research focuses on using emerging sensor-processors to rethink imaging algorithms and computer vision pipelines. She couples lightweight deep learning algorithms with novel sensing modalities enabled by in-pixel compute for tasks such as snapshot high-dynamic-range imaging, bandwidth-efficient perception, lightweight stereo depth, and real-time super-resolution. During her undergraduate studies at Columbia University, Haley had the privilege of working with Professor Keren Bergman, Professor Michal Bajcsy, and Dr. Darwin Serkland.

Host bio:

Evan Y. Peng is an Assistant Professor at the University of Hong Kong. Before joining HKU, he was a Postdoctoral Research Scholar in the Stanford University Computational Imaging Laboratory. He received his PhD in Computer Science from the Imager Lab at the University of British Columbia, and both his MSc and BSc in Optical Science and Engineering from the State Key Lab of Modern Optical Instrumentation, Zhejiang University. His research interests lie in the interdisciplinary field of optics, graphics, vision, and artificial intelligence, with a particular focus on: computational optics, sensing, and display; holographic imaging/display and VR/AR/MR; computational microscope imaging; low-level computer vision; inverse rendering; and human-centered visual and sensory systems.

Homepage: https://hku.welight.fun/


The "Tutorials" section of the GAMES homepage explains how to watch GAMES Webinar live streams and how to join the GAMES WeChat group;
the "Resources" section of the GAMES homepage hosts videos and slides from past webinar sessions.
Livestream link: https://live.bilibili.com/h5/24617282
