
【GAMES Webinar 2019 – Session 120】(Frontiers of 3D Vision)

Speaker 1: Xintong Han, Huya Inc

Time: November 28, 2019, 8:00–8:45 PM (Beijing time)

Title: Generative Models for Clothed Humans


This talk will cover two topics on using generative models to synthesize clothing images.

  1. FiNet: Compatible and Diverse Fashion Image Inpainting:

FiNet is a two-stage generation network for synthesizing compatible and diverse fashion images. By decomposing shape generation and appearance generation, FiNet can inpaint garments in a target region with diverse shapes and appearances. Moreover, we integrate a compatibility module that encodes compatibility information into the network, constraining the generated shapes and appearances to stay close to the existing clothing pieces in a learned latent style space.
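The compatibility constraint above can be illustrated with a toy sketch. This is not FiNet's actual loss, just a hypothetical formulation of the idea: pull the generated garment's latent style vector toward the style vectors of the existing clothing pieces in the learned latent space.

```python
import numpy as np

def compatibility_loss(gen_style, context_styles):
    """Mean squared distance between a generated garment's latent style
    vector and the style vectors of the existing clothing pieces.
    Minimizing this keeps the generated garment compatible with the
    rest of the outfit (hypothetical loss form, for illustration)."""
    context = np.stack(context_styles)   # (K, D) for K context garments
    diffs = context - gen_style          # broadcast over the K pieces
    return float(np.mean(np.sum(diffs ** 2, axis=1)))

# Toy latent vectors: a generated top vs. two existing garments.
gen = np.array([0.2, 0.8, -0.1])
ctx = [np.array([0.2, 0.8, -0.1]), np.array([0.2, 0.8, -0.1])]
print(compatibility_loss(gen, ctx))  # identical styles -> 0.0
```

In the paper the encoders that produce these style vectors are learned jointly with the inpainting network; here they are assumed given.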

  2. ClothFlow: A Flow-Based Model for Clothed Person Generation:

ClothFlow models the appearance flow between the source and target clothing regions for pose-guided image generation and virtual try-on. At the core of ClothFlow is a cascaded appearance-flow estimation network with a two-stream architecture that progressively warps source image features and refines the flow prediction. The estimated flow properly handles geometric deformation as well as occlusion/invisibility between the source and target images.
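The core warping operation is bilinear sampling along a per-pixel flow field. The sketch below shows that operation on a single-channel image (ClothFlow warps multi-channel feature maps with a learned, cascaded flow; the flow here is hand-specified and the function name is ours):

```python
import numpy as np

def warp_with_flow(src, flow):
    """Warp a source image by an appearance flow field.
    src:  (H, W) grayscale image (a sketch; real models warp feature maps).
    flow: (H, W, 2) per-pixel (dy, dx) offsets pointing into the source.
    Uses bilinear sampling, as in spatial-transformer-style warping."""
    H, W = src.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    sy = np.clip(ys + flow[..., 0], 0, H - 1)   # source sample coordinates
    sx = np.clip(xs + flow[..., 1], 0, W - 1)
    y0, x0 = np.floor(sy).astype(int), np.floor(sx).astype(int)
    y1, x1 = np.minimum(y0 + 1, H - 1), np.minimum(x0 + 1, W - 1)
    wy, wx = sy - y0, sx - x0                   # bilinear weights
    top = src[y0, x0] * (1 - wx) + src[y0, x1] * wx
    bot = src[y1, x0] * (1 - wx) + src[y1, x1] * wx
    return top * (1 - wy) + bot * wy

# A zero flow leaves the image unchanged; a constant (0, 1) flow
# samples one pixel to the right (clamped at the border).
img = np.arange(16, dtype=float).reshape(4, 4)
zero_flow = np.zeros((4, 4, 2))
assert np.allclose(warp_with_flow(img, zero_flow), img)
```

Because the flow is real-valued and the sampling is differentiable, gradients can propagate through the warp, which is what lets a cascaded flow estimator be trained end to end.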


Xintong Han has been a Tech Lead at Huya Inc since August 2019, focusing on generative modeling of the human face and body. Before that, he spent one year at Malong Technologies as a research scientist. He received his B.S. degree from Shanghai Jiao Tong University, advised by Prof. Weiyao Lin, and his Ph.D. degree from the Department of Electrical and Computer Engineering at the University of Maryland, College Park, under the supervision of Prof. Larry S. Davis.

Speaker 2: LI Xianzhi, The Chinese University of Hong Kong

Time: November 28, 2019, 8:45–9:30 PM (Beijing time)

Title: Deep Point Cloud Upsampling


Point clouds are the standard output of 3D scanning. In recent years, they have gained popularity both as a compact representation for 3D data and as an effective means of processing 3D geometry. However, raw point clouds produced by depth cameras and LiDAR sensors are often sparse, noisy, and non-uniform, as evidenced in public benchmark datasets such as KITTI, SUN RGB-D, and ScanNet. Clearly, the raw data must be amended before it can be effectively used for rendering, analysis, or general processing.

In this talk, I will introduce three upsampling techniques whose goal is to generate a dense and uniform point cloud from a sparse, noisy, and non-uniform one. In the first work, we present the first point cloud upsampling network, named PU-Net. The key idea is to learn multi-level features per point and then expand them via a multi-branch convolution unit, implicitly expanding the point set in feature space; the expanded features are then reconstructed into an upsampled point set. Considering that the sparseness problem is typically more severe near edges and corners, we further present an edge-aware consolidation network, namely EC-Net, for point cloud upsampling that arranges more points along sharp edges. Lastly, we propose PU-GAN, which is formulated as a generative adversarial network (GAN), to upsample large-scale range scans. Experimental results demonstrate the effectiveness and superiority of our methods over the state of the art.
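The PU-Net idea of expanding a point set in feature space can be sketched in a few lines. This is only a shape-level illustration with random weights standing in for learned 1x1 convolutions; the function names are ours, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def expand_features(feats, r):
    """PU-Net-style feature expansion (minimal sketch).
    feats: (N, C) per-point features. Each of the r branches applies its
    own linear map (standing in for a learned 1x1 convolution), and the
    branch outputs are concatenated, giving r*N feature vectors -- the
    point set is implicitly expanded in feature space."""
    N, C = feats.shape
    branch_weights = rng.standard_normal((r, C, C))
    branches = [feats @ W for W in branch_weights]  # r arrays of (N, C)
    return np.concatenate(branches, axis=0)         # (r*N, C)

def reconstruct_points(expanded, out_dim=3):
    """Regress 3D coordinates from the expanded features
    (a random regression head stands in for the learned one)."""
    head = rng.standard_normal((expanded.shape[1], out_dim))
    return expanded @ head                          # (r*N, 3)

feats = rng.standard_normal((128, 16))  # features for a 128-point patch
dense = reconstruct_points(expand_features(feats, r=4))
print(dense.shape)  # (512, 3): a 4x upsampled point set
```

In the actual network the multi-level features come from a PointNet++-style hierarchy and the branches are trained with reconstruction and uniformity losses; here everything is random, so only the tensor shapes are meaningful.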



Li Xianzhi is a Ph.D. candidate in the CSE department of CUHK, co-supervised by Prof. Heng Pheng-Ann and Prof. Fu Chi-Wing. Her research interests focus on geometry processing, computer graphics, and deep learning. She received her Master's degree from CUHK in 2015 and her Bachelor's degree from Sichuan University in 2014.




The "Tutorials" section of the GAMES homepage has information on "How to watch the GAMES Webinar live?" and "How to join the GAMES WeChat group?".




