GAMES Webinar 2023 – Session 301 (SVBRDF vs BTF) | Chengzhi Tao (Nanjing University), Jiahui Fan (Nanjing University of Science and Technology)
[GAMES Webinar 2023 – Session 301] (Rendering Track: SVBRDF vs BTF)
Speaker: Chengzhi Tao (Nanjing University)
Title: Ultra-High Resolution SVBRDF Recovery from a Single Image
This work focuses on estimating the spatially-varying surface reflectance from a single ultra-high resolution (UHR) input image taken by a commercial camera. While existing convolutional neural network-based material estimation methods have proven effective on low-resolution (e.g., 256×256) inputs, they struggle with ultra-high resolution material maps. To address this, we propose an implicit neural reflectance model and a divide-and-conquer solution: we crop the UHR image into low-resolution patches, which are processed by a local feature extractor. To ensure global coherence, our pipeline incorporates a global feature extractor and several coordinate-aware feature assembly modules. In this presentation, we will introduce the structure of our pipeline and demonstrate the fidelity and efficiency of our method through qualitative and quantitative comparisons. Our results show that our method generates ultra-high resolution SVBRDF maps with fine spatial details and consistent global structures.
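The divide-and-conquer flow described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' networks: the patch size, the placeholder "extractors" (simple pooling), and the concatenation-based assembly are all assumptions standing in for the learned local extractor, global extractor, and coordinate-aware assembly modules.

```python
import numpy as np

PATCH = 256  # low-resolution patch size handled by the local extractor

def crop_patches(img, patch=PATCH):
    """Split an (H, W, 3) image into non-overlapping patches, keeping each
    patch's normalized top-left coordinate for coordinate-aware assembly."""
    h, w, _ = img.shape
    patches, coords = [], []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            patches.append(img[y:y + patch, x:x + patch])
            coords.append((y / h, x / w))
    return patches, coords

def local_features(patch):
    # stand-in for the CNN local feature extractor
    return patch.mean(axis=(0, 1))

def global_feature(img, size=64):
    # stand-in for the global extractor run on a downsampled copy
    stride = img.shape[0] // size
    return img[::stride, ::stride].mean(axis=(0, 1))

def assemble(local, coord, glob):
    # coordinate-aware assembly: fuse local, global, and positional info
    return np.concatenate([local, glob, np.asarray(coord)])

img = np.random.rand(1024, 1024, 3)           # toy "UHR" input
patches, coords = crop_patches(img)
g = global_feature(img)
features = [assemble(local_features(p), c, g) for p, c in zip(patches, coords)]
print(len(features), features[0].shape)
```

In the actual method these per-patch features would condition an implicit neural reflectance model that decodes the SVBRDF maps; here they are only concatenated vectors to show the data flow.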
Chengzhi Tao is a Ph.D. student in the Meta Graphics & 3D Vision Lab at Nanjing University. He received his master’s degree from Southeast University. His research interests include physically-based rendering and material estimation.
Speaker: Jiahui Fan (Nanjing University of Science and Technology)
Title: Neural Biplane Representation for BTF Rendering and Acquisition
Bidirectional Texture Functions (BTFs) are able to represent complex materials with greater generality than traditional analytical models. This holds true for both measured real materials and synthetic ones. Recent advancements in neural BTF representations have significantly reduced storage costs, making them more practical for use in rendering. These representations typically combine spatial feature (latent) textures with neural decoders that handle angular dimensions per spatial location. However, these models have yet to combine fast compression and inference, accuracy, and generality. In this paper, we propose a biplane representation for BTFs, which uses a feature texture in the half-vector domain as well as the spatial domain. This allows the learned representation to encode high-frequency details in both the spatial and angular domains. Our decoder is small yet general, meaning it is trained once and fixed. Additionally, we optionally combine this representation with a neural offset module for parallax and masking effects. Our model can represent a broad range of BTFs and has fast compression and inference due to its lightweight architecture. Furthermore, it enables a simple way to capture BTF data. By taking about 20 cell phone photos with a collocated camera and flash, our model can plausibly recover the entire BTF, despite never observing function values with differing view and light directions. We demonstrate the effectiveness of our model in the acquisition of many measured materials, including challenging materials such as fabrics.
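The core idea above, a feature texture over the spatial domain plus one over the half-vector domain, feeding a small fixed decoder, can be sketched as below. Everything concrete here is a placeholder assumption: the plane resolutions, feature width, the random stand-in weights, and the nearest-neighbor lookup (the real model uses learned planes, interpolation, and a trained MLP).

```python
import numpy as np

rng = np.random.default_rng(0)

# Two learned feature planes (random stand-ins here)
spatial_plane = rng.random((256, 256, 8))   # indexed by surface (u, v)
halfvec_plane = rng.random((64, 64, 8))     # indexed by projected half-vector

# Small fixed decoder: one hidden ReLU layer, random stand-in weights
W1 = rng.standard_normal((16, 16))
W2 = rng.standard_normal((3, 16))

def lookup(plane, u, v):
    """Nearest-neighbor fetch from a feature plane, with u, v in [0, 1]."""
    h, w, _ = plane.shape
    return plane[int(u * (h - 1)), int(v * (w - 1))]

def half_vector(wi, wo):
    h = wi + wo
    return h / np.linalg.norm(h)

def btf(u, v, wi, wo):
    """Evaluate the biplane sketch at surface point (u, v) for unit
    light/view directions wi, wo (z-up shading frame)."""
    h = half_vector(wi, wo)
    # project the unit half-vector onto the plane: map hx, hy to [0, 1]
    hu, hv = (h[0] + 1) / 2, (h[1] + 1) / 2
    f = np.concatenate([lookup(spatial_plane, u, v),
                        lookup(halfvec_plane, hu, hv)])
    return W2 @ np.maximum(W1 @ f, 0.0)  # decoder output: RGB reflectance

wi = np.array([0.0, 0.0, 1.0])
wo = np.array([0.3, 0.1, 0.95]); wo /= np.linalg.norm(wo)
print(btf(0.25, 0.6, wi, wo).shape)  # a 3-channel value
```

Indexing the second plane by the half-vector is what lets the representation capture sharp angular (specular) behavior with a small decoder, mirroring the paper's claim of high-frequency detail in both domains.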
Hongzhi Wu is a professor and Ph.D. advisor at the State Key Laboratory of CAD & CG, Zhejiang University, and a recipient of the NSFC Excellent Young Scientists Fund. He received his Ph.D. in Computer Science from Yale University. His research focuses on differentiable acquisition and reconstruction; he has built multiple high-density-light-source acquisition devices, with related papers published in top international venues such as ACM TOG and IEEE TVCG, and has co-published two translated books on computer graphics. He has led several NSFC projects (Key Program, Excellent Young Scientists Fund, and General Program) and participated in a National Major Scientific Research Instrument Development project; his results have been applied to the digitization of artifacts in the collection of the National Museum of China. He serves as Program Secretary-General of Chinagraph; Secretary-General of the International Cooperation and Exchange Committee and member of the Intelligent Graphics Technical Committee of the China Society of Image and Graphics (CSIG); member of the CCF CAD&CG Technical Committee; young editorial board member of the Journal of Image and Graphics and VCIBA; and program committee member of international conferences including EG, PG, EGSR, and CAD/Graphics.
The "User Guide" section of the GAMES homepage includes information on "How to watch GAMES Webinar live streams?" and "How to join the GAMES WeChat group?".