GAMES Webinar 2023 – Session 274 (How AIGC Illuminates the 3D World) | Jun Gao (University of Toronto / NVIDIA), Tengfei Wang (Hong Kong University of Science and Technology), Ruoshi Liu (Columbia University), Zhen Liu (University of Montreal / Mila / Max Planck Institute)

[GAMES Webinar 2023, Session 274] (Vision Track: How AIGC Illuminates the 3D World; Talk + Panel format)

Schedule:

April 27, 2023, 10:00-11:40 AM (Beijing Time)

10:00-10:10   Jun Gao (University of Toronto / NVIDIA)

10:10-10:20   Tengfei Wang (Hong Kong University of Science and Technology)

10:20-10:30   Ruoshi Liu (Columbia University)

10:30-10:40   Zhen Liu (University of Montreal / Mila / Max Planck Institute)

10:40-11:40   Panel discussion (moderators: Guanying Chen, Xiaoguang Han)


Speaker: Jun Gao (University of Toronto / NVIDIA)

Time: Thursday, April 27, 2023, 10:00-10:10 AM (Beijing Time)

Title: Machine Learning for 3D Content Creation

Abstract:

With the increasing demand for creating large-scale 3D virtual worlds in many industries, there is an immense need for diverse and high-quality 3D content. Machine learning is essential to this quest. In this talk, I will discuss how combining differentiable iso-surfacing with differentiable rendering can enable 3D content creation at scale and make real-world impact. Toward this end, we first introduce a differentiable 3D representation based on a tetrahedral grid that enables high-quality recovery of 3D meshes with arbitrary topology. By incorporating differentiable rendering, we further design a generative model capable of producing 3D meshes with complex textures and materials. Our framework also paves the way for high-quality 3D mesh creation from text prompts by leveraging 2D diffusion models, which democratizes 3D content creation for novice users.
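To make the differentiable iso-surfacing idea concrete, here is a minimal sketch of the key step in a marching-tetrahedra-style extraction: a mesh vertex is placed on a grid edge whose endpoint signed-distance values straddle zero, via a linear interpolation that is differentiable with respect to those values. This is an illustrative toy, not the speaker's implementation; all names are hypothetical.

```python
import torch

def interpolate_surface_vertex(x_a, x_b, s_a, s_b):
    """Place a mesh vertex on a tetrahedral-grid edge where the SDF crosses zero.

    x_a, x_b: (3,) endpoint positions of one grid edge
    s_a, s_b: signed-distance values at the endpoints (assumed opposite signs)
    The interpolation weight is differentiable w.r.t. the SDF values, which is
    what lets gradients from a rendering loss update the shape.
    """
    t = s_a / (s_a - s_b)            # parameter of the zero crossing on the edge
    return (1.0 - t) * x_a + t * x_b

# Toy example: one edge whose endpoints straddle the surface.
x_a = torch.tensor([0.0, 0.0, 0.0])
x_b = torch.tensor([1.0, 0.0, 0.0])
s_a = torch.tensor(-0.25, requires_grad=True)   # inside the shape
s_b = torch.tensor(0.75, requires_grad=True)    # outside the shape
v = interpolate_surface_vertex(x_a, x_b, s_a, s_b)
v.sum().backward()                              # gradients reach the SDF values
```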

Speaker bio:

Jun Gao is a PhD student at the University of Toronto, advised by Prof. Sanja Fidler. He is also a Research Scientist at the NVIDIA Toronto AI Lab. His research focuses on the intersection of 3D computer vision and computer graphics, particularly on developing machine learning tools that facilitate large-scale 3D content creation and drive real-world applications. His work has been presented at conferences such as NeurIPS, CVPR, ICCV, ECCV, ICLR, and SIGGRAPH. Many of his contributions have been incorporated into products, including NVIDIA Picasso, GANVerse3D, Neural DriveSim, and the Toronto Annotation Suite. He will serve as an Area Chair at NeurIPS 2023.

Homepage: http://www.cs.toronto.edu/~jungao/


Speaker: Tengfei Wang (Hong Kong University of Science and Technology)

Time: Thursday, April 27, 2023, 10:10-10:20 AM (Beijing Time)

报告题目:RODIN: A Generative Model for Sculpting 3D Digital Avatars Using Diffusion

Abstract:

Deep generative models have revolutionized 2D visual design; however, developing high-quality 3D generative models remains a challenge. In this talk, we will present RODIN, a 3D diffusion model that generates subjects represented by neural radiance fields. RODIN efficiently generates 360-degree, freely viewable 3D avatars and supports multi-modal inputs, such as images and text, to produce personalized results. This approach can improve the efficiency of the traditional digital avatar modeling process and has the potential to extend to general 3D object generation.
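RODIN accepts images or text as conditioning; a standard mechanism for applying such conditioning when sampling from a diffusion model is classifier-free guidance, sketched below. This is a generic illustration, not RODIN's actual code, and the `model` interface is an assumption.

```python
def guided_noise_prediction(model, x_t, t, cond_emb, guidance_scale=3.0):
    """Classifier-free guidance: steer a conditional diffusion model with a
    multi-modal (image or text) embedding at sampling time. `model` is assumed
    to predict noise and to accept an optional conditioning embedding."""
    eps_uncond = model(x_t, t, cond=None)      # unconditional prediction
    eps_cond = model(x_t, t, cond=cond_emb)    # conditional prediction
    # Extrapolate away from the unconditional prediction toward the condition.
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```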

Speaker bio:

Tengfei Wang is a PhD student at the Hong Kong University of Science and Technology, supervised by Prof. Qifeng Chen. His research focuses on generative modeling and 3D rendering, particularly 3D generative models. His work has been published at computer vision venues such as CVPR and ICCV, including Highlight and Oral presentations.

Homepage: https://tengfei-wang.github.io/


Speaker: Ruoshi Liu (Columbia University)

Time: Thursday, April 27, 2023, 10:20-10:30 AM (Beijing Time)

报告题目:Zero-1-to-3: Zero-shot One Image to 3D Object

Abstract:

We introduce Zero-1-to-3, a framework for changing the camera viewpoint of an object given just a single RGB image. To perform novel view synthesis in this under-constrained setting, we capitalize on the geometric priors that large-scale diffusion models learn about natural images. Our conditional diffusion model uses a synthetic dataset to learn controls of the relative camera viewpoint, which allow new images to be generated of the same object under a specified camera transformation. Even though it is trained on a synthetic dataset, our model retains a strong zero-shot generalization ability to out-of-distribution datasets as well as in-the-wild images, including impressionist paintings. Our viewpoint-conditioned diffusion approach can further be used for the task of 3D reconstruction from a single image. Qualitative and quantitative experiments show that our method significantly outperforms state-of-the-art single-view 3D reconstruction and novel view synthesis models by leveraging Internet-scale pre-training.
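The viewpoint conditioning in Zero-1-to-3 is parameterized as a relative camera transformation in spherical coordinates; below is a minimal sketch of such an encoding. The exact feature layout here is my assumption, loosely following the paper's use of sin/cos for the azimuth change to avoid the wrap-around discontinuity.

```python
import torch

def relative_viewpoint_embedding(polar1, azim1, radius1, polar2, azim2, radius2):
    """Encode the camera change from an input view (1) to a target view (2)
    as a small conditioning vector for a viewpoint-conditioned diffusion model."""
    d_polar = polar2 - polar1
    d_azim = azim2 - azim1
    d_radius = radius2 - radius1
    # sin/cos of the azimuth delta keeps the encoding continuous at 0 / 2*pi.
    return torch.stack([d_polar, torch.sin(d_azim), torch.cos(d_azim), d_radius])

# Example: same elevation and distance, rotated 90 degrees in azimuth.
emb = relative_viewpoint_embedding(
    torch.tensor(1.2), torch.tensor(0.0), torch.tensor(1.5),
    torch.tensor(1.2), torch.tensor(torch.pi / 2), torch.tensor(1.5),
)
print(emb)  # approximately [0.0, 1.0, 0.0, 0.0]
```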

Speaker bio:

Ruoshi Liu is a second-year PhD student at Columbia University, advised by Carl Vondrick. He has broad interests in computer vision and deep learning, including video representation learning, 3D reconstruction, differentiable rendering, and, more recently, large-scale generative models. He has worked in various industry and academic labs, including Snap Research, Sony R&D, CERN, and MRSEC. He loves movies, hiking, and cats.

Homepage: https://ruoshiliu.github.io/


Speaker: Zhen Liu (University of Montreal)

Time: Thursday, April 27, 2023, 10:30-10:40 AM (Beijing Time)

报告题目:MeshDiffusion: Score-based Generative 3D Mesh Modeling

Abstract:

Our visual world is made of numerous and diverse 3D shapes, and creating a large-scale 3D virtual world requires an efficient approach to synthesizing such shapes. Among the possible representations for generation, 3D meshes are favored because they are well optimized for efficient, controllable rendering and editing in modern graphics pipelines. It is tempting to generate high-quality meshes with diffusion models, which have recently proven powerful for 2D image and video generation, but such models cannot directly handle topology-varying meshes. In this talk, I will share our efforts in building the first diffusion model that directly generates 3D meshes. Specifically, our method, dubbed MeshDiffusion, performs unconditional and conditional generation of topology-varying 3D meshes with sharp geometric details by leveraging a structured parametrization of meshes. Our work sheds light on how to apply diffusion models to general 3D representations.
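The structured parametrization stores a mesh as fixed-size grid tensors (signed distances and vertex deformations on a deformable tetrahedral grid), which lets a standard denoising diffusion model operate on topology-varying meshes. Below is a generic DDPM training step over such a tensor; the denoiser interface and shapes are my assumptions, not the paper's code.

```python
import torch
import torch.nn.functional as F

def ddpm_training_step(denoiser, x0, alphas_cumprod):
    """One generic denoising-diffusion training step on a batch of structured
    mesh parametrizations x0 (e.g. per-vertex SDF/deformation grids with a
    fixed shape, so changes of mesh topology pose no problem).
    """
    b = x0.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (b,), device=x0.device)
    a_bar = alphas_cumprod[t].view(b, *([1] * (x0.dim() - 1)))
    noise = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise   # forward noising
    return F.mse_loss(denoiser(x_t, t), noise)               # epsilon-prediction loss
```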

Speaker bio:

Zhen Liu is a PhD candidate at Mila and the University of Montreal, advised by Yoshua Bengio and Liam Paull. His research interests include novel representations and probabilistic modeling methods for 3D reconstruction and generation, as well as other general domains. He has published papers at top venues including NeurIPS, ICLR, ICML, and CVPR. He is currently a visiting student at the Max Planck Institute for Intelligent Systems, working with Bernhard Schölkopf and Michael J. Black.

Homepage: http://itszhen.com


Moderator bios:

Dr. Guanying Chen is a Research Assistant Professor at the Future Network of Intelligence Institute (FNii) and the School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen. Before that, he held research positions at Baidu's Vision Technology Department, Alibaba DAMO Academy, and Osaka University. He received his bachelor's degree from Sun Yat-sen University and his PhD from the University of Hong Kong. His research interests include computer vision and computer graphics, with a current focus on 3D vision, neural rendering, and low-level vision. In recent years, he has published more than 10 papers as first or corresponding author at top conferences (CVPR/ICCV/ECCV/NeurIPS) and in top journals (TPAMI/IJCV) in the field. He serves as a reviewer for a number of international AI journals and conferences, and was selected for Baidu's 2021 list of the Top 100 Chinese Rising Stars in AI. For more details, see: https://guanyingc.github.io/

Dr. Xiaoguang Han is an Assistant Professor at the School of Science and Engineering and the Future Network of Intelligence Institute, The Chinese University of Hong Kong, Shenzhen, a Presidential Young Fellow, and a recipient of the Guangdong Outstanding Young Scholar fund. He received his PhD in Computer Science from the University of Hong Kong in 2017. His research interests include computer vision and computer graphics; he has published more than 60 papers in prominent international journals and at conferences in these areas, including SIGGRAPH (Asia), CVPR, ICCV, ECCV, NeurIPS, ACM TOG, and IEEE TPAMI. He serves on the editorial boards of IEEE Transactions on Mobile Computing and Computers & Graphics, and as an Area Chair for CVPR 2023 and NeurIPS 2023. He received the Wu Wenjun Outstanding Youth Award in Artificial Intelligence, and his work has also received the CCF Graphics Open-Source Dataset Award (DeepFashion3D), the Best Demo Award in Emerging Technologies at SIGGRAPH Asia 2013, selection to the CVPR Best Paper candidate list in both 2019 and 2020 (selection rates of 0.8% and 0.4%, respectively), and a Best Paper Honorable Mention at IEEE VR 2021. For more details, see https://gaplab.cuhk.edu.cn

The "Tutorials" section of the GAMES homepage has information on "How to watch the GAMES Webinar live stream?" and "How to join the GAMES WeChat group?";
the "Resources" section of the GAMES homepage has videos and slides from past webinar talks.
Live stream link: http://webinar.games-cn.org
