GAMES Webinar 2024 – Session 321 (Human Motion Modeling and Simulation for Physical Interaction Scenarios) | Jiaman Li (Stanford University), Yifeng Jiang (Stanford University)

[GAMES Webinar 2024, Session 321] (Simulation and Animation Series: Human Motion Modeling and Simulation for Physical Interaction Scenarios)



Talk Title: Real-time Height-field Simulation of Sand and Water Mixtures


Modeling human behaviors in contextual environments has a wide range of applications in character animation, embodied AI, VR/AR, and robotics. Humans naturally interact with their surroundings, manipulating objects to accomplish everyday tasks. Our recent research focuses on modeling these human manipulation behaviors. We collected a large-scale dataset with high-quality human and object motions. Based on this dataset, we study the task of synthesizing full-body human motion from object motion. The proposed approach allows full-body human manipulation motions to be captured by simply attaching a smartphone to the object being manipulated. Furthermore, we address the challenging problem of generating synchronized object motion and human motion guided by language descriptions in 3D environments. By integrating a high-level planning module and a low-level interaction synthesis module, our method can generate long-term human-object interactions conditioned on text within contextual environments.


Jiaman Li is a third-year PhD student at Stanford University, advised by Prof. C. Karen Liu and Prof. Jiajun Wu. Her research interest lies at the intersection of computer vision and computer graphics, with a current focus on 3D human motion modeling. Her recent work includes human motion estimation from egocentric video, scene-aware human motion synthesis, and human-object interaction synthesis. For more information about her research, please see her webpage.




Talk Title: Physical Digital Humans in the Era of GenAI


Modeling human dynamics during movement holds significant potential across Computer Graphics, Vision, and Computational Healthcare. The core question is: How can we generate Digital Twins that are aware of physical laws and act in accordance with them? Despite the remarkable advancements in generative AI, the development of dynamically interactive digital humans is still lagging, constrained by the intricate nature of human anatomy and behavior, as well as the challenges inherent in data collection.

In my talk, I will discuss three projects that aim to elevate data-driven physical digital humans to the next level of scale. The first project explores scalable simulation: how to generate human simulations that are biologically accurate for real-world biomechanics applications, yet free of the computational burden associated with highly detailed anatomical simulations.

For purely digital realms like content creation and gaming, the focus shifts from biomechanical precision to adherence to “intuitive physics.” I will introduce innovative tools designed to enhance a modern generative model of kinematic human motion with the ability to reason about and respond to intuitive physics, while fully maintaining the scalability of a large generative framework.

Finally, I will discuss the practical hurdles of expanding human capture efforts and our research on integrating prior knowledge and biomechanics simulations to enrich large motion datasets.


Yifeng Jiang is a fifth-year PhD candidate in Computer Science at Stanford University, where he is advised by Prof. C. Karen Liu. His research interests include digital humans, physics simulation, and computational biomechanics, with a focus on bridging modern machine learning with simulation techniques built on physical first principles. He is a recipient of a Stanford School of Engineering Fellowship and a finalist for the Meta Research PhD Fellowship. He is also a core member of the Wu Tsai Human Performance Alliance at Stanford.



Li Mengtian holds a PhD in engineering and is a postdoctoral researcher. She is currently a lecturer and master's supervisor at the Shanghai Film Academy of Shanghai University and the Shanghai Film Special Effects Engineering Technology Research Center. She serves as an executive committee member of the CAD & Graphics Technical Committee of the China Computer Federation (CCF) and a committee member of the Digital Entertainment and Intelligent Generation Technical Committee of the China Society of Image and Graphics (CSIG). Her main research interests are computer vision and computer graphics. She has participated in major and general projects of the National Natural Science Foundation of China, a major project of the National Social Science Fund of China, and projects under the National Key R&D Program. She received the Best Paper Award at the CAD/Graphics 2023 international conference. As first or corresponding author, she has published multiple papers in top international computer science journals and conferences, and she serves as a reviewer for leading computer vision journals and conferences including TPAMI, TIP, CVPR, ICCV, ECCV, ICLR, ICML, and NeurIPS.

The "User Guide" section of the GAMES homepage provides information on "How to watch the GAMES Webinar live stream?" and "How to join the GAMES WeChat group?".
