GAMES Webinar 2022 – Session 217 (Neural Modeling: Generalization and Analysis) | Qianqian Wang (Cornell University), Yu Deng (Institute for Advanced Study, Tsinghua University)

[GAMES Webinar 2022, Session 217] (Vision Track: Neural Modeling - Generalization and Analysis)

Speaker: Qianqian Wang (Cornell University)

Time: Thursday, January 20, 2022, 10:00-10:30 AM (Beijing Time)

Title: Generalizable Neural Rendering for Novel View Synthesis

Abstract:

Synthesizing photo-realistic images and videos is a long-standing problem in computer vision and graphics. Recently, neural rendering (neural scene representations) has gained great popularity, showing state-of-the-art performance for view synthesis. However, most existing methods are scene-specific (i.e., they must be optimized for each new scene), which limits their real-world applicability. In this talk, I will first give a high-level introduction to several recent neural rendering methods for view synthesis of static scenes, and then introduce our work on generalizable neural rendering, IBRNet. IBRNet combines ideas from classical image-based rendering with recent progress in neural rendering, and can be applied to new scenes for high-quality view synthesis without per-scene optimization.
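The abstract's core idea is that a novel view is synthesized by borrowing information from nearby source views rather than memorizing a single scene. Below is a minimal NumPy sketch of that image-based-rendering style pipeline: project ray samples into source views, gather colors, aggregate them, and composite with volume rendering. The function names, the mean aggregation, and the constant density are illustrative assumptions only, not IBRNet's actual implementation (which learns the aggregation and density with networks).

```python
# Minimal sketch of image-based-rendering style novel view synthesis:
# ray samples are projected into nearby source views, their colors are
# gathered and aggregated, and the result is composited along the ray.
import numpy as np

def project(points, K, w2c):
    """Project 3D points (N,3) into a source view with intrinsics K (3,3)
    and world-to-camera extrinsics w2c (4,4). Returns pixel coords (N,2)."""
    homog = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    cam = (w2c @ homog.T).T[:, :3]
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]

def render_ray(origin, direction, source_views, n_samples=64):
    """Composite a single query ray from aggregated source-view colors."""
    t = np.linspace(0.5, 3.0, n_samples)          # assumed near/far bounds
    points = origin + t[:, None] * direction       # (n_samples, 3)

    # Gather a color for each sample from every source view by projection
    # and nearest-pixel lookup, then aggregate with a simple mean.
    gathered = []
    for img, K, w2c in source_views:
        uv = np.round(project(points, K, w2c)).astype(int)
        uv[:, 0] = np.clip(uv[:, 0], 0, img.shape[1] - 1)
        uv[:, 1] = np.clip(uv[:, 1], 0, img.shape[0] - 1)
        gathered.append(img[uv[:, 1], uv[:, 0]])   # (n_samples, 3)
    colors = np.mean(gathered, axis=0)

    # Toy density: a learned network would predict this from the
    # aggregated features; a constant stands in for it here.
    sigma = np.full(n_samples, 2.0)
    delta = np.diff(t, append=t[-1] + (t[1] - t[0]))
    alpha = 1.0 - np.exp(-sigma * delta)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = alpha * trans
    return (weights[:, None] * colors).sum(axis=0)

if __name__ == "__main__":
    K = np.array([[100.0, 0, 32], [0, 100.0, 32], [0, 0, 1]])
    rng = np.random.default_rng(0)
    views = [(rng.random((64, 64, 3)), K, np.eye(4)) for _ in range(3)]
    print(render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]), views))
```

Because the per-ray inputs come from the source views rather than from weights fit to one scene, this style of pipeline can, in principle, be applied to new scenes without per-scene optimization, which is the property the talk highlights.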

Speaker Bio:

Qianqian Wang is a fourth-year PhD student at Cornell University advised by Prof. Noah Snavely. Her research interests lie at the intersection of 3D computer vision, computer graphics, and machine learning. She is particularly interested in image-based rendering, neural rendering, and 3D reconstruction. Before starting her PhD in 2018, she received her bachelor's degree in Information Engineering from Zhejiang University.

Speaker Homepage: https://www.cs.cornell.edu/~qqw/


Speaker: Yu Deng (Institute for Advanced Study, Tsinghua University)

Time: Thursday, January 20, 2022, 10:30-11:10 AM (Beijing Time)

Title: Introducing Dense Correspondence to Implicit 3D Shape Representation

Abstract:

Implicit neural representations have shown promising results for reconstructing complex 3D shapes with topology variations. However, they often lack the ability to build dense correspondences between shapes, which are essential for shape modeling and manipulation. We aim to find a representation that achieves both accurate surface reconstruction and dense correspondence reasoning for a shape category. In this talk, I will present our recent work that addresses this problem.
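As a rough illustration of the problem setup only (hypothetical names, not the speaker's method), the sketch below models each instance's signed-distance function as a shared category template evaluated at deformed coordinates; dense correspondence then comes from mapping points through the shared template space. The analytic sphere template and translation-only deformation are stand-ins for learned networks conditioned on latent codes.

```python
# Minimal sketch: an implicit shape representation with built-in dense
# correspondence. Each instance = shared template SDF composed with a
# per-instance deformation; points of two instances correspond when they
# map to the same template location.
import numpy as np

def template_sdf(points):
    """Shared category template: a unit sphere SDF (learned in practice)."""
    return np.linalg.norm(points, axis=-1) - 1.0

def deform(points, instance_code):
    """Per-instance deformation into template space. Here a toy translation
    parameterized by the instance code; in practice a learned field."""
    return points + instance_code

def instance_sdf(points, instance_code):
    """Instance SDF = template SDF evaluated at deformed coordinates."""
    return template_sdf(deform(points, instance_code))

def correspond(points_a, code_a, code_b):
    """Map points on instance A to instance B through template space
    (assumes the toy deformation is invertible)."""
    template_pts = deform(points_a, code_a)    # A -> template
    return template_pts - code_b               # template -> B

if __name__ == "__main__":
    code_a = np.array([0.3, 0.0, 0.0])         # toy latent "codes"
    code_b = np.array([0.0, 0.2, 0.0])
    p_a = np.array([[0.7, 0.0, 0.0]])          # a point on A's surface
    print("SDF on A:", instance_sdf(p_a, code_a))        # ~0 on the surface
    p_b = correspond(p_a, code_a, code_b)
    print("corresponding point on B:", p_b)
    print("SDF on B:", instance_sdf(p_b, code_b))        # ~0 on the surface
```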

Speaker Bio:

Yu Deng is a fifth-year joint Ph.D. student of Tsinghua University and Microsoft Research Asia, under the supervision of Prof. Harry Shum. He is currently a research intern in the Visual Computing Group at MSRA. His research interests include 3D reconstruction, 3D representation learning, and neural rendering. He has published several papers at CVPR during his Ph.D. study. He received his B.S. degree from Tsinghua University.

Speaker Homepage: https://yudeng.github.io/


Host Bio:

Xiaowei Zhou is a "Hundred Talents Program" researcher and doctoral advisor at Zhejiang University. He received his bachelor's degree from Zhejiang University in 2008 and his Ph.D. from the Hong Kong University of Science and Technology in 2013, and conducted postdoctoral research at the GRASP Laboratory of the University of Pennsylvania from 2014 to 2017. In 2017, he was selected for a national-level young talent program and joined Zhejiang University. His research focuses on computer vision and its applications in mixed reality, robotics, and related areas. His work has been selected for oral presentations (<5% of submissions) at top computer vision conferences more than ten times, and has been among the CVPR Best Paper Candidates/Finalists multiple times. He serves on the editorial board of IJCV, a top journal in computer vision, and as an area chair for the top conferences CVPR and ICCV. For more information, see his homepage: xzhou.me.

The "Tutorials" section of the GAMES homepage includes information on "How to watch the GAMES Webinar live stream?" and "How to join the GAMES WeChat group?";
the "Resources" section of the GAMES homepage provides videos and slides of past webinar talks.
Live stream link: http://webinar.games-cn.org
