GAMES Webinar 2023 – Session 278 (Human-AI Collaboration) | Qianwen Wang (Harvard), Zhutian Chen (Harvard)

[GAMES Webinar 2023 – Session 278] (Visualization Series – Human-AI Collaboration)

Speaker: Qianwen Wang (Harvard)

Time: Thursday, May 25, 2023, 20:00-20:40 (Beijing Time)

Title: Interpreting and Steering AI Explanations with Interactive Visualizations

Abstract:

Artificial Intelligence (AI) has advanced at a rapid pace and is expected to revolutionize many biomedical applications. However, current AI methods are usually developed via a data-centric approach, with little regard for the usage context and the end users, posing challenges for domain users in interpreting AI, obtaining actionable insights, and collaborating with AI in decision-making and knowledge discovery. As a visualization researcher, I aim to address this challenge by combining interactive visualizations with interpretable AI. In this talk, I discuss and demonstrate the prospects of interactive visual explanations in biomedical AI applications via real-world case studies. I present two methodologies for achieving this goal: 1) visualizations that explain AI models and predictions, and 2) interaction mechanisms that integrate user feedback into AI models. Despite some challenges, I will conclude on an optimistic note: interactive visual explanations should be indispensable for human-AI collaboration. The methodologies discussed can be applied broadly to other applications that involve human-AI collaboration, assisting domain experts in data exploration and insight generation with the help of AI.

Speaker Bio:

Qianwen Wang is a Postdoctoral Fellow at Harvard University. Her research strives to facilitate communication and collaboration between domain users and AI through interactive visualizations, with a special interest in applying them to biomedical challenges.

Her research has made contributions to visualization, human-computer interaction, and bioinformatics, as demonstrated by 18 publications in top-tier venues (IEEE VIS, TVCG, ACM CHI, Bioinformatics, ISMB). She has received two Best Abstract Awards from BioVis ISMB, an Honorable Mention from IEEE VIS, and a Best Paper Award from IMLH@ICML. She is an awardee of the HDSI Postdoctoral Research Fund. Her research has been covered by MIT News and Nature Technology Features. She serves in a variety of roles within the visualization, HCI, and bioinformatics communities, including as Abstract Chair for the ISMB BioVis COSI, Poster Chair for IEEE PacificVis, a Program Committee member for IEEE VIS and ACM IUI, and an organizer of the Visualization in Biomedical AI workshop held in conjunction with IEEE VIS.

Homepage: https://qianwen.info


Speaker: Zhutian Chen (Harvard)

Time: Thursday, May 25, 2023, 20:40-21:20 (Beijing Time)

Title: When Data Meets Reality: Augmenting Dynamic Scenes with Visualizations

Abstract:

We live in a dynamic world that produces a growing volume of accessible data. Visualizing this data within its physical context can aid situational awareness, improve decision-making, enhance daily activities like driving and watching sports, and even save lives in tasks such as performing surgery or navigating hazardous environments. Augmented Reality (AR) offers a unique opportunity to achieve this contextualization of data by overlaying digital content onto the physical world. However, visualizing data in its physical context using AR devices (e.g., headsets or smartphones) is challenging for users due to the complexities involved in creating and accurately placing the visualizations within the physical world. These challenges become even more pronounced in dynamic scenarios with temporal constraints.

In this talk, I will introduce a novel approach, which uses sports video streams as a testbed and proxy for dynamic scenes, to explore the design, implementation, and evaluation of AR visualization systems that enable users to efficiently visualize data in dynamic scenes. I will first present three systems that allow users to visualize data in sports videos through touch, natural language, and gaze interactions, and then discuss how these interaction techniques can be generalized to other AR scenarios. The designs of these systems collectively form a unified framework that serves as a preliminary solution for helping users visualize data in dynamic scenes using AR. I will next share my latest progress in using Virtual Reality (VR) simulations as a more advanced testbed than videos for AR visualization research. Finally, building on my framework and testbeds, I will describe my long-term vision and roadmap for using AR visualizations to make our world more connected, accessible, and efficient.

Speaker Bio:

Zhutian Chen is a Postdoctoral Fellow in the Visual Computing Group at Harvard University. His research is at the intersection of Data Visualization, Human-Computer Interaction, and Augmented Reality, with a focus on advancing human-data interaction in everyday activities. His research has been published as full papers in top venues such as IEEE VIS, ACM CHI, and TVCG, and has received one Best Paper Award at ACM CHI and three Best Paper nominations at IEEE VIS, the premier venue in data visualization. Before joining Harvard, he was a postdoctoral researcher in the Design Lab at UC San Diego. Zhutian received his Ph.D. in Computer Science and Engineering from the Hong Kong University of Science and Technology.

Homepage: https://chenzhutian.org


Host Bio:

Dr. Wei Zeng is an Assistant Professor jointly appointed by the Computational Media and Arts (CMA) and Data Science and Analytics (DSA) thrusts of the Information Hub at the Hong Kong University of Science and Technology (Guangzhou), and a doctoral supervisor. He received his bachelor's and Ph.D. degrees from Nanyang Technological University, Singapore. His research focuses on the interplay among humans, machines, and big data, with applications in areas such as smart cities and AIGC. He has led projects funded by the National Natural Science Foundation of China as well as provincial and municipal programs, and has participated in more than ten projects including the "Virtual Singapore" project and a Guangzhou key laboratory. He has published over 40 papers, including more than 20 first-author or corresponding-author papers in high-quality CCF A/B journals and conferences, and has received Best Paper or Honorable Mention awards from ICIV, VINCI, and ChinaVis. He serves as Program Chair of VINCI'23, Publicity Chair of VINCI'22, and Poster Chair of PacificVis'19, and as a program committee member and reviewer for conferences and journals including IEEE VIS, EuroVis STARs, and ChinaVis.

 

The "User Guide" section of the GAMES homepage explains "How to watch the GAMES Webinar live stream?" and "How to join the GAMES WeChat group?";
The "Resource Sharing" section of the GAMES homepage provides videos and slides of past webinar talks.
Live stream link: http://webinar.games-cn.org
