GAMES Webinar 2019 – Session 108 (SIGGRAPH 2019 Special, Overseas) | Tiancheng Sun (University of California, San Diego), Zexiang Xu (University of California, San Diego)

[GAMES Webinar 2019 – Session 108] (SIGGRAPH 2019 Special, Overseas)

Host: Xiaowei Zhou, Zhejiang University (homepage: http://www.cad.zju.edu.cn/home/xzhou)

Speaker 1: Tiancheng Sun, University of California, San Diego

Time: August 29, 2019, 1:30–2:15 PM (Beijing time)

Title: Single Image Portrait Relighting

Abstract:

Lighting plays a central role in conveying the essence and depth of the subject in a 2D portrait photograph. Professional photographers will carefully control the lighting in their studio to manipulate the appearance of their subject, while consumer photographers are usually constrained to the illumination of their environment. Though prior works have explored techniques for relighting an image, their utility is usually limited due to requirements of specialized hardware, multiple images of the subject under controlled or known illuminations, or accurate models of geometry and reflectance. To this end, we present a system for portrait relighting: a neural network that takes as input a single RGB image of a portrait taken with a standard cellphone camera in an unconstrained environment, and from that image produces a relit image of that subject as though it were illuminated according to any provided environment map. Our method is trained on a small database of 18 individuals captured under different directional light sources in a controlled light stage setup consisting of a densely sampled sphere of lights. Our proposed technique produces quantitatively superior results on our dataset's validation set compared to prior work, and produces convincing qualitative relighting results on a dataset of hundreds of real-world cellphone portraits. Because our technique can produce a 640×640 image in only 160 milliseconds, it may enable interactive user-facing photographic applications in the future.
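
As a rough illustration of the pipeline sketched in the abstract, here is a minimal PyTorch sketch of such a relighting network, assuming a toy encoder-decoder in which the source lighting is estimated at the bottleneck and the target environment map is injected in its place. The class name, layer sizes, and the 16×32 lighting resolution are hypothetical placeholders, not the authors' actual architecture.

import torch
import torch.nn as nn

class RelightNet(nn.Module):
    # Hypothetical sketch: encoder -> (estimate source light, inject target
    # light) -> decoder. Not the published architecture.
    def __init__(self, light_dim=16 * 32 * 3):  # assumed 16x32 RGB env map
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Predict the portrait's source lighting from bottleneck features.
        self.light_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, light_dim)
        )
        # Map the *target* environment map into bottleneck-shaped features.
        self.light_embed = nn.Linear(light_dim, 128)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(256, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, portrait, target_light):
        feat = self.encoder(portrait)                 # B x 128 x H/8 x W/8
        est_light = self.light_head(feat)             # estimated source lighting
        lit = self.light_embed(target_light)          # B x 128
        lit = lit[:, :, None, None].expand_as(feat)   # broadcast over pixels
        relit = self.decoder(torch.cat([feat, lit], dim=1))
        return relit, est_light

# Usage on a 640x640 portrait with a random target environment map:
# relit, src = RelightNet()(torch.rand(1, 3, 640, 640), torch.rand(1, 16 * 32 * 3))

The light-stage data described in the abstract would supply ground-truth relit images for supervising the output; also supervising the estimated source lighting against the known capture illumination is a natural auxiliary loss, though that detail is an assumption here.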

Speaker bio:

Tiancheng Sun is a second-year PhD student at UC San Diego, advised by Professor Ravi Ramamoorthi. Before UCSD, he was an undergraduate student in the Yao Class at Tsinghua University. His main research interests lie in computational photography and image-based rendering. He won first place in the 2018 ACM Student Research Competition (undergraduate category) and is a recipient of a 2019 Google PhD Fellowship.

Speaker homepage: http://www.kevinkingo.com/


Speaker 2: Zexiang Xu, University of California, San Diego

Time: August 29, 2019, 2:15–3:00 PM (Beijing time)

Title: Deep View Synthesis from Sparse Photometric Images

Abstract:

The goal of light transport acquisition is to take images from a sparse set of lighting and viewing directions, and combine them to enable arbitrary relighting with changing view. While relighting from sparse images has received significant attention, there has been relatively less progress on view synthesis from a sparse set of "photometric" images (images captured under controlled conditions, lit by a single directional source). In this paper, we synthesize novel viewpoints across a wide range of viewing directions (covering a 60° cone) from a sparse set of just six viewing directions. While our approach relates to previous view synthesis and image-based rendering techniques, those methods are usually restricted to much smaller baselines, and are captured under environment illumination. At our baselines, input images have few correspondences and large occlusions; however, we benefit from structured photometric images. Our method is based on a deep convolutional network trained to directly synthesize new views from the six input views. This network combines 3D convolutions on a plane sweep volume with a novel per-view per-depth-plane attention map prediction network to effectively aggregate multi-view appearance. We train our network on a large-scale synthetic dataset of 1000 scenes with complex geometry and material properties. In practice, it is able to synthesize novel viewpoints for captured real data and reproduces complex appearance effects like occlusions, view-dependent specularities, and hard shadows. Moreover, the method can also be combined with previous relighting techniques to enable changing both lighting and view, and applied to computer vision problems like multi-view stereo from sparse image sets.
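
As a rough illustration of the aggregation step described above, here is a minimal PyTorch sketch, assuming the six input views have already been warped onto D fronto-parallel depth planes of the target camera (the plane sweep volume). The class name, channel counts, and layer depths are hypothetical placeholders, not the paper's actual network.

import torch
import torch.nn as nn

class AttentionAggregator(nn.Module):
    # Hypothetical sketch: shared 3D convs over each view's plane sweep
    # volume, per-view per-depth-plane attention, weighted aggregation.
    def __init__(self, n_planes=32, feat_ch=8):
        super().__init__()
        self.conv3d = nn.Sequential(  # runs over (depth, height, width)
            nn.Conv3d(3, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
        )
        # One attention logit per view, per depth plane, per pixel.
        self.attn_head = nn.Conv3d(feat_ch, 1, 3, padding=1)
        # Collapse the depth axis of the aggregated volume into an RGB image.
        self.render = nn.Conv2d(3 * n_planes, 3, 3, padding=1)

    def forward(self, psv):
        # psv: B x V x 3 x D x H x W -- each of V views warped to D planes.
        B, V, C, D, H, W = psv.shape
        feat = self.conv3d(psv.reshape(B * V, C, D, H, W))
        attn = self.attn_head(feat).reshape(B, V, 1, D, H, W)
        attn = torch.softmax(attn, dim=1)   # weights across the V views
        agg = (attn * psv).sum(dim=1)       # B x 3 x D x H x W
        return self.render(agg.reshape(B, C * D, H, W))

# Usage with V=6 views and D=32 depth planes:
# novel_view = AttentionAggregator()(torch.rand(1, 6, 3, 32, 64, 64))

The softmax over the view axis is the key idea in this sketch: at each pixel and depth plane, the network decides which of the six widely spaced views to trust, which is what lets the approach cope with the large occlusions the abstract mentions.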

Speaker bio:

Zexiang Xu is currently a fourth-year Ph.D. student in computer science at the University of California, San Diego, advised by Prof. Ravi Ramamoorthi. His research interests lie at the intersection of computer graphics and computer vision, including relighting, view synthesis, appearance acquisition, 3D reconstruction, hair modeling, and light fields.

Speaker homepage: http://cseweb.ucsd.edu/~zex014/


The "使用教程" (User Guide) section of the GAMES homepage has information on "How to watch the GAMES Webinar live stream?" and "How to join the GAMES WeChat group?";
The "资源分享" (Resources) section of the GAMES homepage provides videos and slides of past webinar talks.
Live stream link: http://webinar.games-cn.org

