GAMES Webinar 2021 – Session 200 (VR Special Topic, Talk + Panel format) | Qi Sun (New York University)
Speaker: Qi Sun (New York University)
Talk title: Learning, Leveraging, and Optimizing Human Experience in Metaverse
The world is becoming unprecedentedly connected thanks to emerging media and cloud-based technologies. The holy grail of the metaverse is recreating a remotely shared world as a digital twin of the physical planet. In this world, humans are probably the most complex mechanical, physical, and biological systems. Unlike computers, humans are remarkably challenging to model, and engineering how they react in a virtual environment is equally difficult. By leveraging computational advancements such as machine learning and biometric sensors, this talk will share some recent research on altering and optimizing the human experience toward creating the ultimate metaverse.
Qi Sun (http://qisun.me/) is an assistant professor at New York University, Tandon School of Engineering (jointly with the Dept. of Computer Science and Engineering and the Center for Urban Science and Progress). Before joining NYU, he was a research scientist at Adobe Research and a research intern at NVIDIA Research. He received his Ph.D. from Stony Brook University. His research interests lie in computer graphics, VR/AR, vision, machine learning, and human-computer interaction. He is a recipient of the IEEE Virtual Reality Best Dissertation Award.
Xubo Yang is a professor at Shanghai Jiao Tong University, whose research areas are virtual reality (VR/AR/MR) and computer graphics. He received his Ph.D. from the State Key Laboratory of CAD&CG at Zhejiang University, and has conducted research at the Virtual Reality Department of Fraunhofer IMK in Germany, the Mixed Reality Lab at the National University of Singapore, and the Department of Computer Science at the University of North Carolina at Chapel Hill. He currently serves as vice chair of the Computational Graphics Committee of the China Graphics Society, council member of the China Graphics Society, standing committee member of the CCF Technical Committee on Virtual Reality and Visualization, and standing committee member of the Virtual Reality Committee of the China Society of Image and Graphics. He has served as papers co-chair of international conferences including IEEE VR, and as an editorial board member of international journals including Frontiers in Virtual Reality and Presence: VR & AR.
Ruofei Du is a Senior Research Scientist at Google and works on creating novel interactive technologies for virtual and augmented reality. Du's research covers a wide range of topics in VR and AR, including depth-based interaction (DepthLab), mixed-reality social platforms (Geollery), 4D video-based rendering (Montage4D), gaze-based interaction (GazeChat, Kernel Foveated Rendering), and deep learning in graphics (HumanGPS and Sketch Colorization). He serves as an Associate Editor for Frontiers in Virtual Reality, and has served as a committee member for CHI 2021-2022, SIGGRAPH Asia 2020 XR, ICMI 2020-2021, etc. He holds 3 US patents and has published over 30 peer-reviewed publications in top venues of HCI, computer graphics, and computer vision, including CHI, SIGGRAPH Asia, UIST, TVCG, CVPR, ICCV, ECCV, ISMAR, VR, and I3D. Du holds a Ph.D. and an M.S. in Computer Science from the University of Maryland, College Park, and a B.S. from the ACM Honored Class at Shanghai Jiao Tong University. Website: https://duruofei.com.
Dingzeyu Li is a Research Scientist at Adobe Research, Seattle. Ding received his PhD from Columbia University and his BEng from HKUST. He is interested in audiovisual cross-modal analysis and synthesis for accessibility. Leveraging tools from computer vision, graphics, deep learning, and human-computer interaction, he focuses on novel creative authoring/editing experiences for everyone. In addition to publishing at top-tier venues like SIGGRAPH, CVPR, and CHI, his recent research and product impacts have been recognized with an Emmy Award 🏆 for Technology and Engineering (2020), two Adobe MAX Sneaks Demos ✨ (2019, 2020), an ACM UIST Best Paper Award 🏅 (2017), an Adobe Research Fellowship 🎓 (2017), an NVIDIA PhD Fellowship Finalist (2017), a Shapeways Educational Grant 🧧 (2016), and an HKUST academic achievement medal 🥇.
The "User Guide" section of the GAMES homepage contains information on "How to watch the GAMES Webinar live stream?" and "How to join the GAMES WeChat group?".