GAMES Webinar 2022 – Session 243 (Simulation and Animation Series: Physics-based Character Animation and Cross-modal Character Generation and Animation) | Jungdam Won (Meta AI, formerly Facebook), Fangzhou Hong (Nanyang Technological University)
Speaker: Jungdam Won (Meta AI)
Title: Physics-based Character Controllers Using Conditional VAEs
High-quality motion capture datasets are now publicly available, and researchers have used them to create kinematics-based controllers that can generate plausible and diverse human motions without conditioning on specific goals (i.e., task-agnostic generative models). In this talk, we'll introduce an algorithm to build such controllers for physically simulated characters with many degrees of freedom. Our physics-based controllers are learned using conditional VAEs and can perform a variety of behaviors similar to the motions in the training dataset. The controllers are robust enough to generate more than a few minutes of motion without conditioning on specific goals, and they allow many complex downstream tasks to be solved efficiently. To show the effectiveness of our method, we'll demonstrate controllers learned from several different motion capture databases and use them to solve a number of downstream tasks for which it is challenging to learn, from scratch, controllers that generate natural-looking motions. If time allows, we'll also show a few ablation studies demonstrating the importance of each element of the algorithm.
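To make the abstract's core mechanism concrete, the following is a minimal, illustrative sketch of a conditional VAE for motion control: an encoder maps a transition (s_t, s_{t+1}) to a latent Gaussian, the reparameterization trick samples z, and a decoder/policy maps (s_t, z) to an action. All dimensions, weight matrices, and function names here are hypothetical stand-ins (linear maps on random data), not the speaker's implementation, which trains neural networks together with a physics simulator.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM, LATENT_DIM = 8, 4, 3

# Illustrative linear "networks" (a real system would train neural nets).
W_mu     = rng.normal(scale=0.1, size=(2 * STATE_DIM, LATENT_DIM))
W_logvar = rng.normal(scale=0.1, size=(2 * STATE_DIM, LATENT_DIM))
W_dec    = rng.normal(scale=0.1, size=(STATE_DIM + LATENT_DIM, ACTION_DIM))

def encode(state, next_state):
    """Encoder q(z | s_t, s_{t+1}): maps a transition to a latent Gaussian."""
    x = np.concatenate([state, next_state])
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps, so gradients could flow through mu, sigma."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(state, z):
    """Decoder/policy pi(a | s_t, z): current state plus latent -> action."""
    return np.concatenate([state, z]) @ W_dec

def elbo_terms(state, next_state, action_target):
    """The two ELBO terms: action reconstruction error and KL to N(0, I)."""
    mu, logvar = encode(state, next_state)
    z = reparameterize(mu, logvar)
    action = decode(state, z)
    recon = np.mean((action - action_target) ** 2)
    kl = -0.5 * np.mean(1 + logvar - mu**2 - np.exp(logvar))
    return recon, kl

s, s_next = rng.standard_normal(STATE_DIM), rng.standard_normal(STATE_DIM)
a_ref = rng.standard_normal(ACTION_DIM)
recon, kl = elbo_terms(s, s_next, a_ref)
print(f"reconstruction={recon:.3f}  kl={kl:.3f}")
```

At control time, only the decoder is kept: sampling or optimizing over z selects among the behaviors the model has absorbed from the motion capture data, which is what makes the downstream tasks mentioned above tractable.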
Jungdam Won is a research scientist at Meta AI (formerly Facebook) in Pittsburgh, in the lab led by Prof. Jessica K. Hodgins. Before joining Meta, he was a postdoctoral researcher in the Movement Research Lab at Seoul National University under the supervision of Prof. Jehee Lee. He received his Ph.D. and B.S. in Computer Science and Engineering from Seoul National University, Korea, in 2017 and 2011, respectively, and was awarded a Google Ph.D. Fellowship in Robotics in 2016. In academic service, he has served on committees for several prestigious international conferences, e.g., the technical papers and course committees for ACM SIGGRAPH Asia and the technical papers committees for Eurographics, Pacific Graphics, Motion, Interaction and Games, CASA, and IEEE AIVR. He has also reviewed for various conferences and journals such as SIGGRAPH / SIGGRAPH Asia (ACM TOG), Eurographics (CGF), and IEEE TVCG. His current research includes designing controllers for virtual/physical agents, understanding interactions between multiple agents, and developing easy-to-use, effective methods that bridge the gap between users and their virtual personas, drawing on motion capture, optimization, and various machine learning approaches.
Speaker: Fangzhou Hong (Nanyang Technological University)
Title: AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars
3D avatar creation plays a crucial role in the digital age. However, the production process is prohibitively time-consuming and labour-intensive. To democratize this technology for a larger audience, we propose AvatarCLIP, a zero-shot text-driven framework for 3D avatar generation and animation. Unlike professional software that requires expert knowledge, AvatarCLIP empowers layman users to customize a 3D avatar with the desired shape and texture, and to drive the avatar with described motions, using natural language alone. Our key insight is to take advantage of the powerful vision-language model CLIP to supervise neural human generation, in terms of 3D geometry, texture, and animation. Remarkably, AvatarCLIP can generate unseen 3D avatars with novel animations, achieving superior zero-shot capability.
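The supervision signal described in the abstract is, at its core, an image-text similarity score that guides optimization of avatar parameters. The sketch below illustrates only the shape of that idea: `embed_text` and `render_and_embed` are hypothetical stand-ins (fixed random projections) for CLIP's encoders and a differentiable renderer, and finite-difference ascent replaces backpropagation; none of this reflects AvatarCLIP's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
EMBED_DIM = 16

# Stand-in for an image encoder: real CLIP maps renders and text into a
# shared embedding space; here it is just a fixed random projection.
W_img = rng.normal(size=(3, EMBED_DIM))

def embed_text(prompt):
    # Hypothetical "text embedding": a vector seeded by the prompt's bytes.
    seed = sum(ord(c) for c in prompt)
    return np.random.default_rng(seed).normal(size=EMBED_DIM)

def render_and_embed(avatar_params):
    # Stand-in for rendering the avatar and embedding the resulting image.
    return avatar_params @ W_img

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def optimize(prompt, steps=200, lr=0.05, h=1e-4):
    """Nudge avatar parameters to raise the image-text similarity score."""
    target = embed_text(prompt)
    params = rng.normal(size=3)          # e.g. a tiny appearance code
    for _ in range(steps):
        grad = np.zeros_like(params)
        for i in range(len(params)):     # finite-difference gradient
            d = np.zeros_like(params); d[i] = h
            grad[i] = (cosine(render_and_embed(params + d), target)
                       - cosine(render_and_embed(params - d), target)) / (2 * h)
        params += lr * grad
    return params, cosine(render_and_embed(params), target)

params, sim = optimize("a tall muscular avatar")
print(f"final similarity: {sim:.3f}")
```

The zero-shot property follows from this structure: because the text prompt only enters through the similarity score, any describable shape, texture, or motion can steer the optimization without task-specific training data.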
Fangzhou Hong is currently a second-year Ph.D. student in the School of Computer Science and Engineering at Nanyang Technological University (MMLab@NTU), supervised by Prof. Ziwei Liu. He received his B.Eng. degree in Software Engineering from Tsinghua University in 2020 and was awarded a Google PhD Fellowship in 2021. His research interests lie in 3D representation learning and its intersection with computer graphics.
Host: Libin Liu is an assistant professor at Peking University. He received his Ph.D. from Tsinghua University in 2014, and from 2015 to 2017 conducted postdoctoral research at the University of British Columbia in Canada and at Disney Research in the US. From 2017 to 2020 he was Chief Scientist at the US startup DeepMotion Inc. His research focuses on computer character animation, physics-based motion simulation, motion control, and reinforcement learning. He has published multiple SIGGRAPH/TOG papers and has repeatedly served on the paper/short-paper program committees of major graphics conferences, including SIGGRAPH, Eurographics, Pacific Graphics, and SCA.
The "Tutorials" section of the GAMES homepage has information on "How to watch the GAMES Webinar live stream?" and "How to join the GAMES WeChat group?".