GAMES Webinar 2023 – Session 300 (Learning and Generation of Complex Motion Control Policies) | Pei Xu (Clemson University), Heyuan Yao (Peking University)
【GAMES Webinar 2023 – Session 300】(Simulation & Animation Series: Learning and Generation of Complex Motion Control Policies)
Talk title: Physics-based Character Control using Generative Adversarial Models and Reinforcement Learning
Physics-based simulation serves as a foundational pillar in the creation of lifelike animation. Controlling characters under physical constraints to imitate human motions has received a great deal of attention recently and has found wide-ranging applications, from immersive video games to digital humans and robotics. In this talk, we will introduce our recent work on physics-based character control using a GAN-like approach within a reinforcement learning framework. Besides generating high-fidelity full-body motions using human motions as references, our approach also supports composite motion learning: it allows a simulated character to imitate motions from multiple reference sources across distinct body parts while simultaneously performing goal-directed tasks. Motivated by humans' ability to leverage existing skills when learning new ones, we further propose a method to quickly adapt a pre-trained control policy to similar new tasks through latent space manipulation. Additionally, we show that our approach can be easily integrated into interactive applications, enabling real-time character control in response to user input.
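The core idea of the GAN-like setup described above can be sketched in a few lines: a discriminator is trained to tell reference-motion transitions apart from the simulated character's transitions, and the policy is rewarded for producing transitions the discriminator mistakes for real ones. The following is a hypothetical, minimal NumPy sketch, not the authors' implementation; the logistic-regression discriminator, the toy transition data, and the `-log(1 - D)` reward shaping are all illustrative stand-ins for the networks and data used in the actual work.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TransitionDiscriminator:
    """Logistic-regression stand-in for a discriminator network."""
    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)
        self.b = 0.0
        self.lr = lr

    def score(self, x):
        # Probability that a transition came from the reference motion.
        return sigmoid(x @ self.w + self.b)

    def update(self, ref_batch, sim_batch):
        # One gradient step on binary cross-entropy:
        # reference transitions labeled 1, simulated transitions labeled 0.
        for x, y in [(ref_batch, 1.0), (sim_batch, 0.0)]:
            grad = self.score(x) - y  # dBCE/dlogit
            self.w -= self.lr * (grad[:, None] * x).mean(axis=0)
            self.b -= self.lr * grad.mean()

def style_reward(disc, transition):
    # GAN-style reward shaping: -log(1 - D), larger when the
    # discriminator believes the transition is from the reference.
    p = disc.score(transition[None, :])[0]
    return -np.log(np.clip(1.0 - p, 1e-6, 1.0))

# Toy data: reference transitions centered at +1, simulated at -1.
ref = rng.normal(+1.0, 0.3, size=(256, 4))
sim = rng.normal(-1.0, 0.3, size=(256, 4))

disc = TransitionDiscriminator(dim=4)
for _ in range(200):
    disc.update(ref, sim)

# A reference-like transition now earns a higher style reward.
r_ref = style_reward(disc, np.full(4, +1.0))
r_sim = style_reward(disc, np.full(4, -1.0))
print(r_ref > r_sim)  # True
```

In the actual RL loop, this style reward would be combined with task rewards, and both the discriminator and the policy would be updated continually as training proceeds.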
Pei Xu is a research assistant professor at Clemson University. His research interests include artificial intelligence, computer graphics, and computer vision, with a focus on motion planning and reinforcement learning for physics-based character control and autonomous agent navigation. Pei received his Ph.D. in computer science from Clemson University under the supervision of Prof. Ioannis Karamouzas and Prof. Victor Zordan.
Talk title: Physics-based Character Control with Model-based RL and Unified Motion Representations
Physics-based motion control allows a character to interact with a physically simulated environment. It has the potential to generate realistic character animation and provide natural responses to environmental changes and perturbations. Such abilities are crucial for digital humans and embodied intelligent agents. In this talk, we will introduce our recent work on physics-based character control using vector quantized variational autoencoders (VQ-VAE) and model-based reinforcement learning. Our approach can effectively learn motion embeddings from a large, unstructured dataset spanning tens of hours of motion examples. The resulting motion representation not only captures diverse motion skills but also offers a robust and intuitive interface for various applications. We demonstrate the versatility of our approach through several applications: universal tracking control from various motion sources, interactive character control with latent motion representations using supervised learning, physics-based motion generation from natural language descriptions using the GPT framework, and, most interestingly, seamless integration with large language models (LLMs) via in-context learning to tackle complex and abstract tasks.
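The vector-quantization step at the heart of a VQ-VAE is what turns continuous motion latents into discrete tokens that a GPT-style prior or an LLM can operate on: the encoder output is snapped to its nearest codebook entry. The following is a hypothetical, minimal NumPy sketch of that step, not the authors' implementation; the codebook size, latent dimension, and random initialization are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

# A small codebook of 8 learnable code vectors, latent dimension 3.
# In a real VQ-VAE these are trained jointly with the encoder/decoder.
codebook = rng.normal(size=(8, 3))

def quantize(z):
    """Snap encoder output z to its nearest codebook entry.

    Returns (token index, quantized vector); the index is the discrete
    motion token consumed by downstream sequence models.
    """
    dists = np.linalg.norm(codebook - z, axis=1)
    idx = int(np.argmin(dists))
    return idx, codebook[idx]

# Encode a latent that coincides with code 5 and check it round-trips.
z = codebook[5].copy()
idx, z_q = quantize(z)
print(idx)                            # 5
print(np.allclose(z_q, codebook[5]))  # True
```

During training, the non-differentiable `argmin` is typically handled with a straight-through gradient estimator plus codebook and commitment losses; at inference time, sequences of these token indices form the unified motion representation described in the talk.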
Libin Liu is an assistant professor at the School of Intelligence Science and Technology, Peking University. He received his Ph.D. from Tsinghua University, conducted postdoctoral research at The University of British Columbia and Disney Research, and served as Chief Scientist at the Silicon Valley startup DeepMotion Inc. His research focuses on computer graphics, in particular digital human modeling and animation, physics simulation, and motion control, as well as related areas such as optimal control, machine learning, and reinforcement learning. He received the SIGGRAPH Asia 2022 Best Paper Award and a SIGGRAPH 2023 Honorable Mention Award, has repeatedly served on the program committees of major graphics conferences including SIGGRAPH (North America/Asia), EG, PG, and SCA, and is a reviewer for the field's leading conferences and journals.
The "Tutorials" section on the GAMES homepage explains how to watch GAMES Webinar live streams and how to join the GAMES WeChat group.