GAMES Webinar 2019 – Session 107 (SIGGRAPH 2019 Special, Overseas) | Kui Wu (MIT), Lingjie Liu (University of Hong Kong)

[GAMES Webinar 2019 – Session 107]

Speaker: Kui Wu, MIT

Time: August 22, 2019, 8:00–8:45 PM (Beijing time)
Host: Yang Liu, Microsoft Research (homepage: https://www.microsoft.com/en-us/research/people/yangliu/)
Title: Stitch Meshes for Knitting

Abstract:

In this talk, I will present two of our SIGGRAPH 2019 papers.

Knittable Stitch Meshes:

We introduce knittable stitch meshes for modeling complex 3D knit structures that can be fabricated via knitting. We extend the concept of stitch mesh modeling, which provides a powerful 3D design interface for knit structures but lacks the ability to produce actually knittable models. Knittable stitch meshes ensure that the final model can be knitted. Moreover, they include novel representations for handling important shaping techniques that allow modeling more complex knit structures than prior methods. In particular, we introduce shift-paths that connect the yarn for neighboring rows, general solutions for properly connecting pieces of knit fabric with mismatched knitting directions without introducing seams, and a new structure for representing short rows, a shaping technique for knitting that is crucial for creating various 3D forms, within the stitch mesh modeling framework. Our new 3D modeling interface allows for designing knittable structures with complex surface shapes and topologies, and our knittable stitch mesh structure contains all information needed for fabricating these shapes via knitting. Furthermore, we present a scheduling algorithm for providing step-by-step hand-knitting instructions to a knitter, so that anyone who knows how to knit can reproduce the complex models that can be designed using our approach. We show a variety of 3D knit shapes and garment examples designed and knitted using our system.
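For readers new to the representation, the following is a minimal sketch of what one face of a stitch mesh might carry. The class names, fields, and stitch types here are hypothetical illustrations of the idea, not the authors' actual data structure:

```python
from dataclasses import dataclass, field
from enum import Enum

class StitchType(Enum):
    # Hypothetical subset of the face types a stitch mesh might distinguish.
    KNIT = "knit"
    PURL = "purl"
    INCREASE = "increase"              # shaping: one loop in, two loops out
    DECREASE = "decrease"              # shaping: two loops in, one loop out
    SHORT_ROW_TURN = "short_row_turn"  # partial row, used to shape 3D forms

@dataclass
class StitchFace:
    """One face of a stitch mesh: a stitch plus its yarn/loop connectivity."""
    stitch: StitchType
    row: int                                     # course (row) index
    next_in_row: "StitchFace | None" = None      # yarn order within the row
    loops_below: "list[StitchFace]" = field(default_factory=list)  # loops pulled through

# A shift-path, in the paper's terminology, connects the yarn of neighboring
# rows; in this toy model it could be a pointer from the last face of row i
# to the first face of row i + 1.
cast_on = StitchFace(StitchType.KNIT, row=0)
next_stitch = StitchFace(StitchType.KNIT, row=1, loops_below=[cast_on])
```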

Visual Knitting Machine Programming:

Industrial knitting machines are commonly used to manufacture complicated shapes from yarns; however, designing patterns for these machines requires extensive training. We present the first general visual programming interface for creating 3D objects with complex surface finishes on industrial knitting machines. At the core of our interface is a new, augmented version of the stitch mesh data structure. The augmented stitch mesh stores low-level knitting operations per-face and encodes the dependencies between faces using directed edge labels. Our system can generate knittable augmented stitch meshes from 3D models, allows users to edit these meshes in a way that preserves their knittability, and can schedule the execution order and location of each face for production on a knitting machine. Our system is general, in that its knittability-preserving editing operations are sufficient to transform between any two machine-knittable stitch patterns with the same orientation on the same surface. We demonstrate the power and flexibility of our pipeline by using it to create and knit objects featuring a wide range of patterns and textures, including intarsia and Fair Isle colorwork; knit and purl textures; cable patterns; and laces.
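The scheduling idea can be illustrated in miniature: if each face carries its list of low-level operations and directed edges record which faces must be knit first, a valid execution order is a topological sort of the faces. A minimal sketch follows; the face names and operation strings are made up for illustration, and the real system additionally assigns needle locations and respects machine constraints:

```python
from collections import defaultdict, deque

# Per-face operation lists (made-up operation syntax) and directed
# dependency edges: (src, dst) means src must be knit before dst.
faces = {
    "A": ["knit f1", "xfer f1 b1"],
    "B": ["knit b1"],
    "C": ["knit f2"],
}
deps = [("A", "B"), ("A", "C")]

def schedule(faces, deps):
    """Order faces so every dependency edge is respected (topological sort)."""
    indeg = {f: 0 for f in faces}
    succ = defaultdict(list)
    for src, dst in deps:
        succ[src].append(dst)
        indeg[dst] += 1
    ready = deque(f for f, d in indeg.items() if d == 0)
    order = []
    while ready:
        f = ready.popleft()
        order.append(f)
        for g in succ[f]:
            indeg[g] -= 1
            if indeg[g] == 0:
                ready.append(g)
    if len(order) != len(faces):
        raise ValueError("cyclic dependencies: not machine-knittable as-is")
    return [op for f in order for op in faces[f]]

print(schedule(faces, deps))  # ['knit f1', 'xfer f1 b1', 'knit b1', 'knit f2']
```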

Speaker Bio:

I am a postdoctoral associate in the Computational Fabrication Group under the guidance of Prof. Wojciech Matusik at MIT CSAIL. I received my Ph.D. degree in Computer Science from the University of Utah, advised by Prof. Cem Yuksel. I graduated with a B.A. in Mathematics and a B.S. in Software Engineering from Qingdao University in China. My research interests are in computer graphics, especially mesh processing, real-time rendering, and physically based simulation.

Speaker homepage: http://www.cs.utah.edu/~kwu/

Speaker: Lingjie Liu, University of Hong Kong

Time: August 22, 2019, 8:45–9:30 PM (Beijing time)
Host: Yang Liu, Microsoft Research (homepage: https://www.microsoft.com/en-us/research/people/yangliu/)
Title: Neural Rendering and Reenactment of Human Actor Videos

Abstract:

We propose a method for generating video-realistic animations of real humans under user control. In contrast to conventional human character rendering, we do not require the availability of a production-quality photo-realistic 3D model of the human, but instead rely on a video sequence in conjunction with a (medium-quality) controllable 3D template model of the person. With that, our approach significantly reduces production cost compared to conventional rendering approaches based on production-quality 3D models, and can also be used to realistically edit existing videos. Technically, this is achieved by training a neural network that translates simple synthetic images of a human character into realistic imagery. For training our networks, we first track the 3D motion of the person in the video using the template model, and subsequently generate a synthetically rendered version of the video. These images are then used to train a conditional generative adversarial network that translates synthetic images of the 3D model into realistic imagery of the human. We evaluate our method for the reenactment of another person who is tracked to obtain the motion data, and show video results generated from artist-designed skeleton motion. Our results outperform the state-of-the-art in learning-based human image synthesis.
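As a rough sketch of the training step described above, here is a pix2pix-style conditional GAN update in PyTorch. The tiny stand-in networks, layer sizes, and loss weights are placeholders for illustration, not the paper's actual architecture or losses:

```python
import torch
import torch.nn as nn

# G maps a synthetic render of the tracked 3D template to a realistic frame;
# D judges (synthetic, real-or-fake) image pairs. Both are toy stand-ins.
G = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())
D = nn.Sequential(nn.Conv2d(6, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

def train_step(synthetic, real):
    """One conditional-GAN update on a (synthetic render, video frame) pair."""
    fake = G(synthetic)
    # Discriminator: push real pairs toward 1, fake pairs toward 0.
    d_real = D(torch.cat([synthetic, real], dim=1))
    d_fake = D(torch.cat([synthetic, fake.detach()], dim=1))
    loss_d = (bce(d_real, torch.ones_like(d_real)) +
              bce(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: fool D while staying close to the real frame (L1 term).
    d_fake = D(torch.cat([synthetic, fake], dim=1))
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, real)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Smoke test with random images scaled to [-1, 1].
train_step(torch.rand(1, 3, 64, 64) * 2 - 1, torch.rand(1, 3, 64, 64) * 2 - 1)
```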

Speaker Bio:

Ms. Lingjie Liu is a fifth-year Ph.D. candidate in the Department of Computer Science at the University of Hong Kong, supervised by Prof. Wenping Wang. Her research interests include 3D reconstruction, video synthesis, and human performance capture. This work was done during her internship at the Max Planck Institute for Informatics.

Speaker homepage: https://lingjie0206.github.io/

The "User Guide" section of the GAMES homepage explains how to watch the GAMES Webinar live stream and how to join the GAMES WeChat group;
the "Resources" section of the GAMES homepage provides videos and slides of past webinar lectures.
Live stream link: http://webinar.games-cn.org