GAMES Webinar 2022 – Episode 233 (Urban-Scale 3D Scene Understanding and Reconstruction) | Liangliang Nan (Delft University of Technology), Qingyong Hu (University of Oxford)

[GAMES Webinar 2022 – Episode 233] (Geometry Track: Urban-Scale 3D Scene Understanding and Reconstruction)

Speaker: Liangliang Nan (Delft University of Technology)

Time: Thursday, June 16, 2022, 8:00–8:40 pm (Beijing time)

Title: Piecewise Planar Object Reconstruction from Point Clouds

Abstract:

Piecewise planar objects are widespread in urban environments and industry. Although advances in laser scanning and 3D computer vision have enabled efficient and effective data acquisition of such objects, obtaining a faithful 3D surface representation of them remains an open problem. In this talk, I will share my experiences over the past years in the 3D reconstruction of piecewise planar objects. I will present an algorithm for reconstructing simple polygonal surface models of piecewise planar objects, along with several extensions of the algorithm (including one based on deep learning) for reconstructing large-scale urban buildings. Finally, we will discuss trends and some ideas for further development on this topic.

Speaker bio:

Liangliang Nan received his B.S. degree in material science and engineering from NUAA in 2003 and his Ph.D. degree in mechatronics engineering from the Graduate University of the Chinese Academy of Sciences in 2009. He was an assistant and then associate researcher at SIAT, Chinese Academy of Sciences, from 2009 to 2013. From 2013 to 2018, he worked at the Visual Computing Center at KAUST as a research scientist. He is currently an assistant professor at TU Delft, where he leads the AI lab on 3D Urban Understanding (3DUU). His research interests include computer graphics, computer vision, 3D geoinformation, and machine learning.

Speaker homepage: https://3d.bk.tudelft.nl/liangliang/


Speaker: Qingyong Hu (University of Oxford)

Time: Thursday, June 16, 2022, 8:40–9:20 pm (Beijing time)

Title: Towards Semantic Understanding of Urban-Scale 3D Point Clouds: Dataset, Benchmarks and Challenges

Abstract:

An essential prerequisite for unleashing the potential of supervised deep learning algorithms in 3D scene understanding is the availability of large-scale, richly annotated datasets. However, publicly available datasets either cover relatively small spatial scales or have limited semantic annotations due to the high cost of data acquisition and annotation, which severely limits the development of fine-grained semantic understanding of 3D point clouds. In this presentation, we will first introduce SensatUrban, an urban-scale photogrammetric point cloud dataset, together with several key challenges toward urban-scale point cloud understanding. Next, we will introduce STPLS3D, a richly annotated synthetic 3D aerial photogrammetry point cloud dataset covering more than 16 km² of landscapes with up to 18 fine-grained semantic categories. We aim to highlight the challenges of 3D semantic learning on large, dense point clouds of urban environments, sparking innovation in applications such as smart cities, digital twins, autonomous vehicles, automated asset management of large national infrastructures, and intelligent construction sites.

Speaker bio:

Qingyong Hu is currently a DPhil candidate in the Department of Computer Science at the University of Oxford. He received his M.Eng. degree in information and communication engineering from the National University of Defense Technology (NUDT) in 2018. His research interests lie in 3D computer vision, large-scale point cloud modeling, and semantic understanding.

Speaker homepage: https://qingyonghu.github.io/


Host bio:

Jianwei Guo is an associate researcher at the National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, a master's supervisor, and a member of the Youth Innovation Promotion Association of the Chinese Academy of Sciences. His research focuses on computer graphics and 3D vision. He has published more than 50 papers in leading journals and conferences, including journal papers in ACM TOG, IEEE TVCG, IEEE TIP, and CAD, and conference papers at SIGGRAPH, SIGGRAPH Asia, CVPR, and ECCV. His honors include the Best Paper Honorable Mention at Shape Modeling International (SMI 2014), the Outstanding Doctoral Dissertation Award of the China Simulation Federation, the Best Paper Award of ChinaGraph, the Annual Best Paper Award of the journal Computational Visual Media (CVMJ), and the Second Prize for Scientific and Technological Progress of the China Graphics Society. He has served on the program committees of conferences including AAAI 2022, CASA 2022, GMP 2021, and IEEE CAD/Graphics 2019/2021.


The "Tutorials" section of the GAMES homepage explains how to watch GAMES Webinar live streams and how to join the GAMES WeChat group;
the "Resources" section of the GAMES homepage provides videos and slides of past webinars.
Live stream link: http://webinar.games-cn.org
