GAMES Webinar 2019 – Session 80 (CVPR 2018 3D Vision Paper Presentations) | Xiaojuan Qi (University of Oxford), Shuran Song (Princeton University)
Talk title: GeoNet: Geometric Neural Network for Joint Depth and Surface Normal Estimation
In this paper, we propose Geometric Neural Network (GeoNet) to jointly predict depth and surface normal maps from a single image. Building on top of two-stream CNNs, our GeoNet incorporates the geometric relation between depth and surface normals via new depth-to-normal and normal-to-depth networks. The depth-to-normal network exploits the least-squares solution of surface normals from depth and improves its quality with a residual module. The normal-to-depth network, conversely, refines the depth map based on constraints from the surface normals through a kernel regression module, which has no parameters to learn. Together, these two networks enforce the underlying model to efficiently predict depth and surface normal maps that are geometrically consistent and correspondingly accurate. Our experiments on the NYU v2 dataset verify that GeoNet is able to predict geometrically consistent depth and normal maps. It achieves top performance on surface normal estimation and is on par with state-of-the-art depth estimation methods.
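The depth-to-normal step described in the abstract can be illustrated with a small sketch: back-project the depth map to 3D points with pinhole intrinsics, then fit a plane to each pixel's local neighborhood by least squares and take the plane normal as the surface normal. This is only an illustrative reconstruction of the least-squares idea under assumed intrinsics and function names, not the authors' released implementation (which adds a residual refinement module on top).

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Back-project a depth map (h, w) to a 3D point map (h, w, 3)
    using pinhole camera intrinsics."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return np.stack([x, y, depth], axis=-1)

def normals_from_depth(depth, fx, fy, cx, cy, k=3):
    """Estimate a unit surface normal per pixel via a least-squares
    plane fit over a k x k neighborhood of back-projected points.

    For neighborhood points P (m x 3) we solve P @ n ~= 1 in the
    least-squares sense (a plane not through the origin), then
    normalize n to unit length. Border pixels are left as zeros.
    """
    pts = backproject(depth, fx, fy, cx, cy)
    h, w, _ = pts.shape
    r = k // 2
    normals = np.zeros_like(pts)
    ones = np.ones(k * k)
    for i in range(r, h - r):
        for j in range(r, w - r):
            P = pts[i - r:i + r + 1, j - r:j + r + 1].reshape(-1, 3)
            n, *_ = np.linalg.lstsq(P, ones, rcond=None)
            norm = np.linalg.norm(n)
            if norm > 0:
                normals[i, j] = n / norm
    return normals
```

For a fronto-parallel plane (constant depth), the fit recovers a normal pointing along the camera axis, as expected; a learned residual module, as in the paper, would then clean up noise that a pure local fit cannot handle.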
Xiaojuan Qi is currently a postdoctoral researcher at the University of Oxford. Before that, she obtained her Ph.D. degree from the Department of Computer Science and Engineering at the Chinese University of Hong Kong (CUHK) in 2018, supervised by Prof. Jiaya Jia. She received her B.Sc. degree in Electronic Science and Technology from Shanghai Jiao Tong University (SJTU) in 2014. Her research interests include computer vision and deep learning. Recently she has been focusing on understanding semantics and geometry from images.
Talk title: Seeing the Unseen: Comprehensive 3D Scene Understanding
Intelligent robots require advanced vision capabilities to perceive and interact with the real physical world. While computer vision has made great strides in recent years, its predominant paradigm still focuses on analyzing image pixels to infer 2D output representations (bounding boxes, segmentations, etc.), which remain far from sufficient for real-world robotics applications.
In this talk, I will advocate the use of complete 3D scene representations that enable intelligent systems to not only recognize what is seen (e.g. Am I looking at a chair?), but also predict contextual information about the complete 3D environment beyond visible surfaces (e.g. What could be behind the table? Where should I look to find an exit?). As examples, I will present a line of my recent works that demonstrate the power of these representations through amodal 3D object detection (Sliding Shapes and Deep Sliding Shapes), analyzing and synthesizing 3D scenes (Semantic Scene Completion), and predicting semantic and 3D structure outside the image field of view (Im2Pano3D). Finally, I will discuss some ongoing efforts on how these 3D scene representations can further enable and benefit from real-world robotic interactions, shifting the way we view computer vision problems from the perspective of a passive observer to that of an active explorer.
Shuran Song is a visiting researcher at Google and will be joining the Computer Science Department at Columbia University in New York as an Assistant Professor in 2019.
She earned her Ph.D. degree in Computer Science at Princeton University in 2018, advised by Thomas Funkhouser. Before that, she received a B.Eng. degree in Computer Science from HKUST in 2013 with the highest honors. During her Ph.D., she spent time working at Microsoft Research and Google Daydream. Her research interests lie at the intersection of computer vision, computer graphics, and robotics. She was awarded the Facebook Fellowship in 2014, the Siebel Scholarship in 2016, the Wallace Fellowship in 2017, and the Princeton SEAS Award for Excellence in 2017. She was part of the MIT-Princeton team for the Amazon Robotics Challenge, winning 3rd place in 2016 and 1st place (stow task) in 2017.
The "Tutorials" section of the GAMES homepage provides information on "How to watch the GAMES Webinar live stream?" and "How to join the GAMES WeChat group?".