GAMES Webinar 2025 – Session 371 (Realistic Graphics Generation and Display for VR/AR) | Bin Chen (University of Melbourne), Fangcheng Zhong (University of Cambridge), Xiaodan Hu (Graz University of Technology)

[GAMES Webinar 2025 – Session 371] (Mixed Reality Special Topic: Realistic Graphics Generation and Display for VR/AR)

Speaker: Bin Chen (University of Melbourne)

Time: Thursday, June 26, 2025, 8:00–8:30 PM (Beijing Time)

Title: Advancing Display Realism: Data-Driven Signal Reconstruction, Innovations in Hardware, and Perception-Based Optimization

Abstract:

In our modern digital age, display devices have become integral to daily life, significantly influencing our visual experiences. Ensuring these devices faithfully replicate reality is critical. Yet achieving this fidelity poses considerable challenges, from the limitations of current display technology to the complexity of human visual perception and the potential loss of information during visual signal acquisition. In this talk, I will present my research on advancing display realism, covering various display hardware systems, perception-based optimization of material appearance, and data-driven approaches to image signal reconstruction.

Speaker Bio:

Dr. Bin Chen is a Lecturer in computer graphics and visualization at the School of Computing and Information Systems, University of Melbourne. Prior to this, he was a postdoctoral researcher at the Max Planck Institute for Informatics and a visiting scholar at the University of Cambridge. He received his PhD from City University of Hong Kong. His research aims to push the boundaries of realism in display systems, covering various aspects of the visual signal lifecycle, including acquisition (computational photography), visualization (computational display), and perception (material appearance perception). His work has been published in top graphics, vision, and robotics venues, including SIGGRAPH/SIGGRAPH Asia, CVPR, ICCV, Eurographics, and ICRA. One of his works was selected as a Best Paper Award Finalist at CVPR 2022.

Homepage: https://binchen.me


Speaker: Fangcheng Zhong (University of Cambridge)

Time: Thursday, June 26, 2025, 8:30–9:00 PM (Beijing Time)

Title: The Path from Photorealism to Perceptual Realism

Abstract:

Realism is a fundamental pursuit in computer graphics and extended reality (XR). Photorealistic rendering, synthesizing images that appear as realistic as photographs, has matured to the point of widespread adoption across industries. With the rise of emerging 3D XR display technologies, the next frontier in graphics is perceptual realism: creating virtual 3D scenes that are visually and perceptually indistinguishable from real-world 3D scenes. Achieving perceptual realism is critical to elevating XR experiences and realizing the immersive visions of the "metaverse." Consider a black box containing either a physical object or a rendered counterpart shown through an XR device: if a naive observer cannot distinguish between the two scenarios, the system is said to have passed a visual Turing test. This talk explores key advances and open challenges in pursuing perceptually realistic XR, including the world's first mixed-reality system to pass a visual Turing test, aiming toward the ultimate goal of digitizing and faithfully reproducing arbitrarily complex physical 3D/4D environments in XR.

Speaker Bio:

Dr. Fangcheng Zhong specializes in computer graphics and vision, machine intelligence, and extended reality. He obtained his PhD in Computer Science at the University of Cambridge, where he currently serves as a senior researcher and lecturer. Dr. Zhong's publications span top-tier venues across machine learning (e.g., NeurIPS, ICLR, Nature Machine Intelligence), computer graphics (e.g., SIGGRAPH, SIGGRAPH Asia), and computer vision (e.g., CVPR, ECCV). He is also the recipient of multiple prestigious awards and honours, including the European Horizon 2020 Marie Skłodowska-Curie Fellowship, and honourable mentions for the ACM SIGGRAPH Outstanding Dissertation Award, the IEEE VGTC Virtual Reality Best Dissertation Award, and the Eurographics PhD Award.

Homepage: https://www.cl.cam.ac.uk/~fz261/


Speaker: Xiaodan Hu (Graz University of Technology)

Time: Thursday, June 26, 2025, 9:00–9:30 PM (Beijing Time)

Title: Toward Occlusion-Capable Optical See-Through Head-Mounted Displays: Display Design and Perceptual Challenges

Abstract:

Optical see-through head-mounted displays (OST-HMDs) allow users to view the real world directly while overlaying virtual content, but they are inherently unable to present correct mutual occlusion between real and virtual environments. Because virtual images are optically overlaid onto the user's direct view, synthetic objects always appear semi-transparent, breaking visual realism and depth cues. While occlusion is essential for convincing and spatially coherent AR, bringing practical occlusion capability to OST-HMDs remains an open challenge. Although significant progress has been made since the first demonstration of hard-edge occlusion in 2000, transitioning these approaches into deployable, user-ready systems remains difficult. In this talk, I will introduce our recent efforts to address key limitations: the bulky form factor and limited field of view of hard-edge occlusion systems, as well as the incomplete blocking and perceptual blurriness commonly observed in soft-edge occlusion systems.

Speaker Bio:

Dr. Xiaodan Hu is a Postdoctoral Researcher at Graz University of Technology (TU Graz), Austria, and a Commissioned Instructor at Nara Institute of Science and Technology (NAIST), Japan. She received her Ph.D. in Information Science from NAIST in 2024 under the supervision of Prof. Kiyoshi Kiyokawa, where she also completed her M.Sc. studies. Her research focuses on optical see-through head-mounted displays (OST-HMDs), particularly occlusion-capable OST-HMDs, vision augmentation, and human visual perception in AR environments. She is especially interested in designing display systems closely aligned with the characteristics of the human visual system, aiming to support perceptually optimized and comfortable visual experiences. During her doctoral studies, she was awarded a fellowship from the Support for Pioneering Research Initiated by the Next Generation program (次世代研究者挑戦的研究プログラム), funded by the Japan Science and Technology Agency. Her work has been published in high-impact journals and conferences, including TVCG, IEEE VR, Optics Express, and Optics Letters, and includes one granted JP patent and one US patent application.

Homepage: https://xiaodanhu14.github.io/


Host Bio:

Yan Zhang is an Assistant Research Fellow at Shanghai Jiao Tong University, researching near-eye displays, foveated rendering, and lighting reconstruction in virtual and augmented reality. Zhang has published more than ten papers in leading journals and conferences in the field, including IEEE VR, IEEE ISMAR, IEEE TVCG, and Optics Letters, has filed or been granted three Chinese patents and one international patent, and was selected for the Shanghai Pujiang Talent Program. Zhang serves as an executive member of the Technical Committee on Virtual Reality and Visualization of the China Computer Federation (CCF), a member of the Technical Committee on Computer Graphics of the China Graphics Society (CGS), and Publications Chair of IEEE VR 2023.


The "Tutorials" section of the GAMES homepage explains "How to watch GAMES Webinar live streams" and "How to join the GAMES WeChat group";
the "Resources" section of the GAMES homepage provides videos and slides from previous live lectures.
Live stream link: https://live.bilibili.com/h5/24617282
