GAMES Webinar 2018, Session 70 (Human Performance Capture Tutorial 2) | Yangang Wang (Southeast University), Hanbyul Joo (Carnegie Mellon University)
Talk title: Motion Capture with Linear Blend Skinning
Linear blend skinning is a standard technique in 3D animation. In this talk, I will introduce the basics of linear blend skinning and its applications to human performance capture, and discuss global and local optimization strategies for solving the associated skinning problem. I will also demonstrate some recent motion capture results obtained with linear blend skinning, and conclude by presenting its drawbacks for human performance capture.
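For readers unfamiliar with the basics covered in the talk, linear blend skinning deforms each vertex by a convex combination of bone transformations: v' = Σ_b w_b · T_b · v. A minimal NumPy sketch follows; the function name, array layout, and toy two-bone skeleton in the usage example are illustrative assumptions, not material from the talk:

```python
import numpy as np

def linear_blend_skinning(rest_vertices, weights, bone_transforms):
    """Deform rest-pose vertices by a weighted blend of bone transforms.

    rest_vertices:   (V, 3) rest-pose vertex positions
    weights:         (V, B) skinning weights; each row sums to 1
    bone_transforms: (B, 4, 4) homogeneous bone transformation matrices
    """
    V = rest_vertices.shape[0]
    # Lift vertices to homogeneous coordinates: (V, 4)
    vh = np.hstack([rest_vertices, np.ones((V, 1))])
    # Transform every vertex by every bone: per_bone[b, v] = T_b * v
    per_bone = np.einsum('bij,vj->bvi', bone_transforms, vh)
    # Blend per-bone results with the skinning weights: v' = sum_b w_vb * T_b v
    blended = np.einsum('vb,bvi->vi', weights, per_bone)
    return blended[:, :3]

# Toy example: two bones, bone 1 translates by +2 along x.
rest = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0]])
T = np.tile(np.eye(4), (2, 1, 1))
T[1, 0, 3] = 2.0
w = np.array([[1.0, 0.0],   # vertex 0 follows bone 0 only
              [0.5, 0.5]])  # vertex 1 blends both bones equally
posed = linear_blend_skinning(rest, w, T)
```

With these weights, vertex 0 stays at (1, 0, 0) while vertex 1 lands halfway between its two per-bone positions, at (1, 1, 0); this averaging of rigid transforms is also the root of the well-known "candy wrapper" collapse artifacts the talk's closing discussion of drawbacks refers to.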
Yangang Wang is an associate professor in the School of Automation at Southeast University (SEU). Before joining SEU, he worked as a research scientist at Microsoft Research Asia (MSRA) from 2014 to 2017. He received his Ph.D. in 2014 from the Department of Automation at Tsinghua University, advised by Prof. Qionghai Dai. His primary research areas are computer graphics, computer vision, and computational photography. His most recent research interest is 3D/4D reconstruction and human motion capture with single or sparse cameras (a.k.a. markerless motion capture), including 3D/4D capture, modeling, and reconstruction; hand motion acquisition, modeling, and simulation; facial acquisition, modeling, and animation; and body motion modeling, simulation, and control. He was named a 2018 Innovative and Entrepreneurial Talent (Ph.D.) of Jiangsu Province, China. He serves as a reviewer for SIGGRAPH, TVCG, TPAMI, CVPR, ICCV, AAAI, and other venues.
Talk title: A Computational Approach to Sensing, Reconstructing and Understanding Human Motion and Social Interaction
Humans convey their thoughts, emotions, and intentions through a concert of social displays: voice, facial expressions, hand gestures, and body posture. Despite advances in machine perception technology, machines are unable to discern the subtle and momentary nuances that carry so much of the information and context of human communication. The encoding of conveyed information by human body movements is still poorly understood, and a major obstacle to scientific progress in understanding human behavior is the inability to measure the full spectrum of social signals in groups of interacting individuals.
In this talk, I will describe my early exploration in building sensors that can capture the full spectrum of human social signaling—from voice, to facial expressions, to hand gestures, to body posture—among groups of multiple people. Leveraging more than 500 synchronized cameras, our method enables us to markerlessly measure subtle 3D movements of interacting people, providing a new opportunity to computationally study social interaction. I will also talk about my ongoing efforts to understand social interaction in a predictive way, based on our novel dataset containing 3D social signals from hundreds of participants.
Hanbyul Joo is a Ph.D. candidate at the Robotics Institute, Carnegie Mellon University. His research focuses on measuring social signals in interpersonal communication to computationally model social behavior, using tools from computer vision, computer graphics, and machine learning. Hanbyul's research has been covered by various media outlets, including Discovery, Reuters, IEEE Spectrum, NBC News, Voice of America, The Verge, and WIRED. He is a recipient of the Samsung Scholarship and the Best Student Paper Award at CVPR 2018.
The "Tutorials" section of the GAMES homepage has information on "How to watch the GAMES Webinar live stream?" and "How to join the GAMES WeChat group?".