GAMES Webinar 2021 – Session 199 (VR Special) | Hui Ye (City University of Hong Kong), Taizhou Chen (City University of Hong Kong), Xiaolong Liu (Beihang University)


Speaker 1: Hui Ye (City University of Hong Kong)


Talk title: ARAnimator: In-situ Character Animation in Mobile AR with User-defined Motion Gestures


Creating animated virtual Augmented Reality (AR) characters that closely interact with real environments is interesting but difficult. Existing systems adopt video see-through approaches to indirectly control a virtual character in mobile AR, making close interaction with real environments unintuitive. In this work, we use an AR-enabled mobile device to directly control the position and motion of a virtual character situated in a real environment. We conduct two guessability studies to elicit user-defined motions of a virtual character interacting with real environments, and a set of user-defined motion gestures describing specific character motions. We found that an SVM-based learning approach achieves reasonably high accuracy in classifying gestures from the motion data of a mobile device. We present ARAnimator, which allows novice and casual animation users to directly represent a virtual character with an AR-enabled mobile phone and control its animation in AR scenes using motion gestures of the device, followed by animation preview and interactive editing through a video see-through interface. Our experimental results show that with ARAnimator, users are able to easily create in-situ character animations that closely interact with different real environments.
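The SVM-based gesture classification mentioned in the abstract can be sketched roughly as follows. This is a minimal illustration using scikit-learn; the statistical features (per-axis mean/std/min/max over a motion window) and the synthetic accelerometer/gyroscope data are our own assumptions, not the paper's actual features or dataset.

```python
# Illustrative sketch: SVM classification of motion gestures from a
# mobile device's IMU data. Features and data are assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def extract_features(window):
    # window: (T, 6) array of accelerometer + gyroscope samples;
    # summarize each axis with simple statistics
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           window.min(axis=0), window.max(axis=0)])

rng = np.random.default_rng(0)
n_gestures, n_samples_per = 5, 40
X, y = [], []
for g in range(n_gestures):
    for _ in range(n_samples_per):
        # synthetic motion window: each gesture class gets a distinct bias
        window = rng.normal(loc=g * 0.5, scale=0.3, size=(50, 6))
        X.append(extract_features(window))
        y.append(g)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
print(f"test accuracy: {accuracy:.2f}")
```

In practice the device would stream real sensor windows into `extract_features`, and the trained classifier would map each window to one of the user-defined gesture classes.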


Hui Ye is a Ph.D. student at the School of Creative Media, City University of Hong Kong, under the supervision of Prof. Hongbo Fu. Her research interests lie at the intersection of Human-Computer Interaction and Computer Graphics. Specifically, her main research focus is on designing and developing novel mobile AR prototyping tools for 3D content and interactions. Her work has been published in the SIGGRAPH 2020 Technical Papers program and in IEEE Transactions on Visualization and Computer Graphics (TVCG). She received her Bachelor's degree from the University of Science and Technology of China.


Speaker 2: Taizhou Chen (City University of Hong Kong)


Talk title: GestOnHMD: Enabling Gesture-based Interaction on Low-cost VR Head-Mounted Display


Low-cost virtual-reality (VR) head-mounted displays (HMDs) that integrate smartphones have brought immersive VR to the masses and increased the ubiquity of VR. However, these systems are often limited by their poor interactivity. In this paper, we present GestOnHMD, a gesture-based interaction technique and a gesture-classification pipeline that leverages the stereo microphones in a commodity smartphone to detect tapping and scratching gestures on the front, left, and right surfaces of a mobile VR headset. Taking Google Cardboard as our target headset, we first conducted a gesture-elicitation study to generate 150 user-defined gestures, 50 on each surface. We then selected 15, 9, and 9 gestures for the front, left, and right surfaces respectively, based on user preferences and signal detectability. We constructed a data set containing the acoustic signals of 18 users performing these on-surface gestures, and trained a deep-learning pipeline for gesture detection and recognition. Lastly, with a real-time demonstration of GestOnHMD, we conducted a series of online participatory-design sessions to collect a set of user-defined gesture-referent mappings for applications that could potentially benefit from GestOnHMD.
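A pipeline of this kind typically converts the stereo microphone signal into a time-frequency representation before feeding it to a classifier. The sketch below shows only that preprocessing step with NumPy; the window/hop sizes, the 44.1 kHz sample rate, and the toy sine-wave "gesture" are illustrative assumptions, not GestOnHMD's actual parameters or data.

```python
# Illustrative sketch: turn a stereo recording into a 2-channel
# log-spectrogram "image" that a CNN classifier could consume.
import numpy as np

def log_spectrogram(signal, win=512, hop=256):
    # frame the signal, apply a Hann window, take the magnitude FFT
    n_frames = 1 + (len(signal) - win) // hop
    frames = np.stack([signal[i*hop : i*hop + win] for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames * np.hanning(win), axis=1))
    return np.log1p(spec)            # shape: (n_frames, win // 2 + 1)

sr = 44100
t = np.linspace(0, 0.5, int(sr * 0.5), endpoint=False)
left  = np.sin(2 * np.pi * 1000 * t)        # toy tone, louder on left mic
right = 0.3 * np.sin(2 * np.pi * 1000 * t)  # quieter on right mic

# stack both channels: (2, time_frames, freq_bins), ready for a CNN
features = np.stack([log_spectrogram(left), log_spectrogram(right)])
print(features.shape)
```

The left/right level difference visible in such features is one cue a classifier could use to tell which surface of the headset a gesture was performed on.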


Taizhou is a Ph.D. candidate at the School of Creative Media, City University of Hong Kong, supervised by Dr. Kening Zhu and Prof. Hongbo Fu. His research interests lie at the intersection of Human-Computer Interaction and applied machine learning; he currently focuses on sensing technologies that leverage deep-learning algorithms. Specifically, Taizhou invents and builds new sensing techniques to extract meaningful insight from common sensor data for two purposes: on one hand, he creates novel input techniques on mobile devices to provide enhanced and intuitive user experiences; on the other hand, he designs and evaluates sensing technologies that implicitly understand users' behavior and surroundings. His ultimate research goal is to make ubiquitous devices context-aware and user-aware. Taizhou also has research experience in VR/AR haptic feedback, tangible interface design, smart wearable devices, and multi-modal interface design.


Speaker 3: Xiaolong Liu (Beihang University)


Talk title: VR Collaborative Object Manipulation Based on Viewpoint Quality


We introduce a collaborative manipulation method to improve the efficiency and accuracy of object manipulation in multi-user virtual reality applications. When multiple users manipulate an object in collaboration, one user may, at a given moment, have a better perspective than the others: he or she can clearly observe both the manipulated object and the target position, and can therefore manipulate the object more efficiently and accurately. We construct a viewpoint quality function and evaluate each user's viewpoint by computing its three components: the visibility of the object to be manipulated, the visibility of the target, and a combined depth-and-distance term for the target. By comparing viewpoint quality across users, the user with the highest viewpoint quality is determined as the dominant manipulator, who manipulates the object at that moment. A temporal filter is proposed to smooth the dominant-manipulator sequence generated over the previous frames and the current frame, which keeps the dominant role from jumping back and forth between users within a short time slice and makes the determination more stable. We designed a user study and tested our method on three multi-user collaborative manipulation tasks. Compared to previous methods, our method showed significant improvements in task completion time, rotation accuracy, user participation, and task load.
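The selection scheme described above can be sketched as a weighted sum of the three viewpoint components plus a simple temporal filter. The weights, the normalization of each component, and the majority vote over a sliding window are our own illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch: dominant-manipulator selection from viewpoint
# quality, with a temporal filter to stabilize the choice across frames.
from collections import Counter, deque

def viewpoint_quality(obj_visibility, target_visibility, depth_distance,
                      weights=(0.4, 0.4, 0.2)):
    # each component is assumed normalized to [0, 1]; higher is better
    w1, w2, w3 = weights
    return w1 * obj_visibility + w2 * target_visibility + w3 * depth_distance

def dominant_manipulator(history, qualities):
    # pick the user with the best viewpoint in this frame...
    best = max(range(len(qualities)), key=lambda u: qualities[u])
    history.append(best)
    # ...then take a majority vote over recent frames so the dominant
    # user does not jump back and forth within a short time slice
    return Counter(history).most_common(1)[0][0]

history = deque(maxlen=5)  # sliding window of recent per-frame winners
frames = [
    [(0.9, 0.8, 0.7), (0.5, 0.6, 0.4)],  # user 0 clearly better
    [(0.9, 0.8, 0.7), (0.5, 0.6, 0.4)],
    [(0.4, 0.5, 0.3), (0.6, 0.5, 0.5)],  # brief flip toward user 1
    [(0.9, 0.8, 0.7), (0.5, 0.6, 0.4)],
]
for frame in frames:
    qualities = [viewpoint_quality(*components) for components in frame]
    dom = dominant_manipulator(history, qualities)
print(dom)
```

Because the vote spans several frames, the one-frame flip toward user 1 does not change the dominant manipulator, which is the stabilizing behavior the temporal filter is meant to provide.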


Xiaolong Liu is a Ph.D. student at the State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, supervised by Prof. Lili Wang. His research interests lie in Human-Computer Interaction in virtual reality and Computer Graphics. Specifically, his research focuses on collaboration in VR.


Yang Gao received his Ph.D. from the School of Computer Science and Engineering, Beihang University, and is currently an assistant researcher at the State Key Laboratory of Virtual Reality Technology and Systems. He is a member of the Medical Simulation Committee of the China Simulation Federation and of the Intelligent Interaction Committee of the Chinese Association for Artificial Intelligence. His main research directions are physically based simulation in graphics, realistic rendering, and VR technologies for medicine and rehabilitation. He has published more than ten papers in international journals and conferences in graphics and virtual reality, including TVCG, ISMAR, and CGF. He received the First Prize for Scientific and Technological Progress of the Chinese Institute of Electronics in 2020 (ranked 13/15), and advised a team that won the Grand Prize in the university innovation track of the 2nd China Virtual Reality Technology and Application Innovation Competition.


The "User Guide" section of the GAMES homepage includes information on "How to watch the GAMES Webinar live stream?" and "How to join the GAMES WeChat group?".
