GAMES Webinar 2021 – Session 198 (VR Special) | Zhiming Hu (Peking University), Difeng Yu (University of Melbourne), Xiaoxu Meng (Tencent Game AI Research Center)

【GAMES Webinar 2021, Session 198】(VR Special)

Speaker 1: Zhiming Hu (胡志明), Peking University


Zhiming Hu is a Ph.D. student at Peking University (enrolled in 2017). He received his bachelor's degree from Beijing Institute of Technology in 2017 and then entered the School of Electronics Engineering and Computer Science at Peking University to pursue his Ph.D. His research interests include human-computer interaction, virtual reality, and eye tracking. He has received the National Scholarship and the Peking University Presidential Scholarship multiple times. As first author, he has published several papers in TVCG, a top journal in computer graphics and visualization, and at IEEE VR, a top conference in virtual reality, and has received a TVCG best journal paper nominee award.


Speaker 2: Difeng Yu (University of Melbourne)


Talk title: Gaze-Supported 3D Object Manipulation in Virtual Reality


In this talk, I will present a paper that investigates integration, coordination, and transition strategies of gaze and hand input for 3D object manipulation in VR. Specifically, our work aims to understand whether incorporating gaze input can benefit VR object manipulation tasks, and how it should be combined with hand input for improved usability and efficiency. I will demonstrate four gaze-supported techniques that leverage different combination strategies for object manipulation. For example, I will show a technique called ImplicitGaze which allows the transition between gaze and hand input to happen without any trigger mechanism like button pressing. Next, I will introduce the results from two user evaluation studies of those techniques. I will further offer insights regarding combination strategies of gaze and hand input, and present implications that can help guide the design of future VR systems that incorporate gaze input for 3D object manipulation.
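The idea behind a trigger-free technique like ImplicitGaze can be illustrated with a simple mode-selection heuristic: deliberate hand motion keeps manual control, while a stable fixation hands control over to gaze, with no button press in between. The function name and threshold values below are illustrative assumptions for the sketch, not the paper's actual implementation.

```python
def choose_modality(hand_speed, gaze_dwell_ms,
                    speed_threshold=0.05, dwell_threshold=150):
    """Pick the active input channel without an explicit trigger.

    Hand motion above a speed threshold (m/s) implies deliberate manual
    manipulation; otherwise a sufficiently long gaze dwell (ms) lets
    gaze take over. All thresholds are illustrative placeholders.
    """
    if hand_speed > speed_threshold:       # user is actively moving the hand
        return "hand"
    if gaze_dwell_ms >= dwell_threshold:   # stable fixation -> gaze takes over
        return "gaze"
    return "hand"                          # default: keep manual control
```

In practice the transition would be smoothed (e.g. with hysteresis) so control does not flicker between channels near the thresholds.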


Difeng is a third-year Ph.D. student in the Human-Computer Interaction Group at The University of Melbourne, advised by Dr. Jorge Goncalves (primary), Dr. Tilman Dingler, and Dr. Eduardo Velloso. He received his BSc degree in Computer Science from Xi’an Jiaotong-Liverpool University in 2018 and was a research assistant at the X-CHI Lab directed by Dr. Hai-Ning Liang. His recent research in Human-Computer Interaction (HCI) focuses on 1) designing novel interaction techniques for augmented and virtual reality systems and 2) investigating, analyzing, and modeling user behavior in 3D virtual environments.


Speaker 3: Xiaoxu Meng (孟晓旭), Tencent Game AI Research Center


Talk title: 3D-Kernel Foveated Rendering for Light Fields


Light fields capture both the spatial and angular rays, thus enabling free-viewpoint rendering and custom selection of the focal plane. Scientists can interactively explore pre-recorded microscopic light fields of organs, microbes, and neurons using virtual reality headsets. However, rendering high-resolution light fields at interactive frame rates requires a very high rate of texture sampling, which is challenging as the resolutions of light fields and displays continue to increase. In this article, we present an efficient algorithm to visualize 4D light fields with 3D-kernel foveated rendering (3D-KFR). The 3D-KFR scheme coupled with eye-tracking has the potential to accelerate the rendering of 4D depth-cued light fields dramatically. We have developed a perceptual model for foveated light fields by extending the KFR for the rendering of 3D meshes. On datasets of high-resolution microscopic light fields, we observe a 3.47× to 7.28× speedup in light field rendering with minimal perceptual loss of detail. We envision that 3D-KFR will reconcile the mutually conflicting goals of visual fidelity and rendering speed for interactive visualization of light fields.
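Kernel foveated rendering cuts shading cost by rendering into a smaller log-polar buffer whose kernel function allocates buffer resolution according to angular distance from the gaze point, mimicking the falloff of visual acuity. Below is a minimal sketch of such a kernel log-polar mapping; the kernel exponent, buffer size, and other constants are illustrative assumptions, not the paper's calibrated values.

```python
import math

def to_kernel_log_polar(x, y, gaze=(0.0, 0.0), buf_w=512, buf_h=512,
                        max_radius=1.0, sigma=1.8):
    """Map a screen-space point to kernel log-polar buffer coordinates.

    Pixels near the gaze point occupy proportionally more of the
    reduced-resolution buffer than peripheral ones, so shading effort
    follows visual acuity. sigma > 1 steepens the foveal emphasis.
    All constants are illustrative placeholders.
    """
    dx, dy = x - gaze[0], y - gaze[1]
    r = math.hypot(dx, dy)                       # eccentricity (radius)
    theta = math.atan2(dy, dx)                   # polar angle
    # normalized log radius in [0, 1], then kernel t -> t**(1/sigma)
    t = math.log1p(r) / math.log1p(max_radius)
    u = (t ** (1.0 / sigma)) * buf_w             # kernel expands the foveal region
    v = (theta + math.pi) / (2 * math.pi) * buf_h
    return u, v
```

The rendered buffer is then inverse-mapped back to full screen resolution; because the buffer is much smaller than the screen, the per-frame shading work drops accordingly.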





Host: Xubo Yang (杨旭波), Shanghai Jiao Tong University

Xubo Yang is a professor at Shanghai Jiao Tong University. His research areas are virtual/augmented/mixed reality (VR/AR/MR) and computer graphics. He received his Ph.D. from the State Key Laboratory of CAD&CG at Zhejiang University, and has held research positions in the Virtual Environments department of Fraunhofer IMK in Germany, the Mixed Reality Lab at the National University of Singapore, and the Department of Computer Science at the University of North Carolina at Chapel Hill. He currently serves as vice chair of the Computational Graphics Committee of the China Graphics Society, a council member of the China Graphics Society, a standing committee member of the Virtual Reality Committee of the China Computer Federation, and a standing committee member of the Virtual Reality Committee of the China Society of Image and Graphics. He has served as papers co-chair of international conferences including IEEE VR, and is on the editorial boards of the international journals Frontiers in Virtual Reality and Presence: Virtual and Augmented Reality.


The "Tutorials" (使用教程) section of the GAMES homepage has information on "How to watch GAMES Webinar live streams?" and "How to join the GAMES WeChat group?".
