GAMES Webinar 2019 – Session 86 | Qifeng Chen (HKUST), Jia Xu (Tencent AI Lab)
Talk title: New Perspectives for Processing and Synthesizing Images and Videos
In this talk, I will present new perspectives from my recent research on processing and synthesizing images and videos: new approaches for extreme low-light image reconstruction and digital zoom based on raw Bayer mosaic images; a deep-learning-based scheme for accelerating image and video processing; parametric and semi-parametric methods for synthesizing images from semantic layouts; and recent video synthesis work on future video prediction and video colorization.
Dr. Qifeng Chen is an assistant professor of CSE and ECE at HKUST. He received his Ph.D. in computer science from Stanford University in 2017, and a bachelor's degree in computer science and mathematics from HKUST in 2012. His research interests are computer vision, machine learning, optimization, and computer graphics. Four of his papers were selected for full oral presentations at ICCV'15, CVPR'16, ICCV'17, and CVPR'18. He was named one of MIT Technology Review's 35 Innovators Under 35 in China in 2018, and received a Google Faculty Research Award in the same year. In 2011, he placed 2nd worldwide at the ACM-ICPC World Finals, and he earned a gold medal at IOI 2007. He co-founded the startup Lino in 2017.
Talk title: Learning Optical Flow with Limited Data
In this talk, I will present our recent efforts on learning optical flow from unlabeled data. I will first present DDFlow, which distills reliable predictions from a teacher network and uses them as annotations to guide a student network in learning optical flow. Unlike existing work that relies on handcrafted energy terms to handle occlusion, our approach is data-driven and learns optical flow for occluded pixels. This enables us to train our model with a much simpler loss function and achieve much higher accuracy. We conduct a rigorous evaluation on the challenging Flying Chairs, MPI Sintel, KITTI 2012, and KITTI 2015 benchmarks, and show that our approach significantly outperforms all existing unsupervised learning methods. Second, I will introduce our latest work, SelFlow, a self-supervised optical flow learning method. SelFlow enhances DDFlow with more advanced occlusion hallucination and utilizes temporal information from multiple frames for better flow estimation. This self-training approach provides a strong initialization for supervised training: our fine-tuned models achieve state-of-the-art results on all leading benchmarks. Most notably, SelFlow achieves EPE = 4.26 on the Sintel benchmark, outperforming all submitted methods.
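The distillation idea described in the abstract can be sketched in a few lines. The following is a minimal, simplified illustration and not the DDFlow implementation itself: it assumes dense forward and backward flow fields are given as NumPy arrays, marks pixels as occluded with a standard forward-backward consistency check (a common heuristic in unsupervised flow methods; the thresholds `alpha` and `beta` are illustrative values), and computes a distillation loss that treats the teacher's predictions on non-occluded pixels as pseudo-labels for the student.

```python
import numpy as np

def occlusion_mask(flow_fw, flow_bw, alpha=0.01, beta=0.5):
    """Forward-backward consistency check: a pixel is considered
    non-occluded if applying the forward flow and then sampling the
    backward flow at the target location roughly cancels it out.
    flow_fw, flow_bw: (H, W, 2) arrays of (dx, dy) displacements."""
    h, w, _ = flow_fw.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Integer target coordinates after applying the forward flow
    # (nearest-neighbor sampling for simplicity; real methods warp bilinearly).
    xt = np.clip(xs + flow_fw[..., 0], 0, w - 1).astype(int)
    yt = np.clip(ys + flow_fw[..., 1], 0, h - 1).astype(int)
    flow_bw_warped = flow_bw[yt, xt]       # backward flow at the target pixel
    diff = flow_fw + flow_bw_warped        # ~0 where flows are consistent
    sq_diff = np.sum(diff ** 2, axis=-1)
    mag = np.sum(flow_fw ** 2, axis=-1) + np.sum(flow_bw_warped ** 2, axis=-1)
    return sq_diff < alpha * mag + beta    # True = non-occluded (reliable)

def distillation_loss(student_flow, teacher_flow, valid_mask):
    """Average endpoint error between student and teacher predictions,
    computed only where the teacher is deemed reliable."""
    epe = np.sqrt(np.sum((student_flow - teacher_flow) ** 2, axis=-1))
    return float(np.mean(epe[valid_mask])) if valid_mask.any() else 0.0
```

In a training loop, the mask would gate which teacher predictions become pseudo-labels; the occluded regions are instead handled by the learned, data-driven component that the talk describes.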
Dr. Jia Xu is a principal researcher at Tencent AI Lab. He obtained his Ph.D. in Computer Sciences from the University of Wisconsin-Madison. Before returning to China, he was a senior research scientist at the Intel Visual Computing Lab in the USA. He works on computer vision and deep reinforcement learning, with applications in game AI. He topped the MPI Sintel Optical Flow Benchmark in 2016 and 2018, and won the Visual Doom Game AI Competition in 2018.
The "Tutorials" section of the GAMES homepage has information on "How to watch the GAMES Webinar live stream?" and "How to join the GAMES WeChat group?".