GAMES Webinar 2024 – Session 323 (Graduate Student Growth Forum: Research Sharing on Intelligent Visualization) | Talk + Panel format


Schedule: May 16, 2024, 20:00-21:30 (Beijing Time)

20:00-20:12   Data-Centric Visual Analytics for Machine Learning (Weikai Yang, Tsinghua University)

20:12-20:24   KG-PRE-view: Democratizing a TVCG Knowledge Graph through Visual Explorations (Yamei Tu, The Ohio State University)

20:24-20:36   LEVA: Using Large Language Models to Enhance Visual Analytics (Yuheng Zhao, Fudan University)

20:36-20:48   SalienTime: User-driven Selection of Salient Time Steps for Large-Scale Geospatial Data Visualization (Juntong Chen, East China Normal University)

20:48-21:30   Panel Discussion

Panelists: Weikai Yang, Yamei Tu, Yuheng Zhao, Juntong Chen


Speaker: Weikai Yang (Tsinghua University)

Time: Thursday, May 16, 2024, 20:00-20:12 (Beijing Time)

Title: Data-Centric Visual Analytics for Machine Learning

Speaker Bio:

Weikai Yang is a 5th-year Ph.D. candidate at Tsinghua University, supervised by Prof. Shixia Liu. His research interests lie in integrating machine learning into visual analytics. In particular, he focuses on lowering the barriers that keep practitioners from exploring large-scale data and building high-performance machine learning models in real-world applications. To date, he has published 12 papers (9 at CCF-A venues) and 1 book (Springer, second author). Among these, he is the first author of 5 CCF-A papers published in top-tier journals and conferences, such as IEEE TVCG and IEEE VIS.

Abstract:

While artificial intelligence has achieved great success across various applications, it remains challenging for users to build high-performance models, owing to the need to analyze large-scale training samples and understand the relationships between samples and models. This process is usually time-consuming and expertise-demanding. Visual analytics offers a promising solution by tightly integrating machine learning with interactive visualization techniques, looping humans directly into the analysis process. Specifically, I will first introduce FSLDiagnotor, which helps users interactively select high-quality samples and base learners in few-shot learning scenarios. I will then briefly introduce other work that helps users address annotation quality issues and concept drift.

Homepage: https://vicayang.cc


Speaker: Yamei Tu (The Ohio State University)

Time: Thursday, May 16, 2024, 20:12-20:24 (Beijing Time)

Title: KG-PRE-view: Democratizing a TVCG Knowledge Graph through Visual Explorations

Speaker Bio:

Yamei Tu received a BS degree in Software Engineering from East China Normal University. She has been a Ph.D. student under Professor Han-Wei Shen at The Ohio State University since spring 2019. Her research focuses on visual analytics and knowledge discovery for complex datasets. Specifically, she extracts insights from complex and unstructured data, such as scientific literature, social science survey data, and knowledge graphs of food research networks. These datasets originate from diverse domains, but they are all multidimensional, multirelational, and contain many entities with rich semantics. Over the past four years, Yamei has played a major role in many research projects funded by NSF and has developed many innovative ideas in interactive data querying, knowledge discovery, and visual analytics to support human-centric, data-driven decision making. Yamei's research has been published in the top journal in data visualization, IEEE Transactions on Visualization and Computer Graphics (TVCG).

Abstract:

IEEE Transactions on Visualization and Computer Graphics (TVCG) publishes cutting-edge research in visualization, computer graphics, and virtual/augmented realities. Different TVCG stakeholders make daily decisions involving research ideas, peer reviewer invitations, editorial board selections, etc. Well-informed, data-driven decisions are necessary, but the IEEE digital library only provides access to individual papers. To facilitate efficient, transparent decision-making, we construct and publicly release a TVCG knowledge graph (TVCG-KG) – a structured representation of heterogeneous information like publication metadata, methods, tasks, and data. While knowledge graphs (KGs) are widely used, a gap exists in visualization literature regarding exploiting KGs’ rich semantics. We propose that knowledge discovery over KGs benefits from multiple visualization techniques and designs. We evaluated TVCG-KG’s quality and demonstrated its utility through real-world cases. Data and code: https://github.com/yasmineTYM/TVCG-KG.git.
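To make the idea of querying a knowledge graph such as TVCG-KG concrete, here is a minimal sketch: facts are stored as subject-predicate-object triples, and pattern matching with wildcards retrieves them. The schema, predicates, and entity names below are purely illustrative assumptions, not the actual TVCG-KG structure; see the linked repository for the real data and code.

```python
# Toy subject-predicate-object triples (schema and names are illustrative,
# NOT the actual TVCG-KG schema)
triples = [
    ("paper:LEVA", "uses_method", "method:large_language_model"),
    ("paper:LEVA", "addresses_task", "task:insight_recommendation"),
    ("paper:SalienTime", "uses_method", "method:autoencoder"),
    ("paper:SalienTime", "uses_method", "method:dynamic_programming"),
]

def query(triples, s=None, p=None, o=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Example: which methods does the SalienTime paper use?
methods = [o for _, _, o in query(triples, s="paper:SalienTime", p="uses_method")]
```

A visual exploration interface can then be driven by exactly such pattern queries, with each wildcard bound interactively by the user.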

Homepage: https://sites.google.com/view/yameitu


Speaker: Yuheng Zhao (Fudan University)

Time: Thursday, May 16, 2024, 20:24-20:36 (Beijing Time)

Title: LEVA: Using Large Language Models to Enhance Visual Analytics

Speaker Bio:

Yuheng Zhao is a second-year Ph.D. student at the School of Data Science, Fudan University, and a member of FDUVIS, supervised by Prof. Siming Chen. Her research interest lies in harnessing the confluence of visualization, human-computer interaction, and artificial intelligence to enhance data expression and comprehension. Lately, her primary focus has been on intelligent visual analytics enhanced by large language models. She has published papers in top-tier journals and conferences, e.g., IEEE TVCG, IEEE VIS, and ACM CSCW.

Abstract:

Visual analytics supports data analysis tasks within complex domain problems. However, due to the richness of data types, visual designs, and interaction designs, users need to recall and process a significant amount of information when they visually analyze data. These challenges emphasize the need for more intelligent visual analytics methods. Large language models have demonstrated the ability to interpret various forms of textual data, offering the potential to facilitate intelligent support for visual analytics. We propose LEVA, a framework that uses large language models to enhance users’ VA workflows at multiple stages: onboarding, exploration, and summarization. To support onboarding, we use large language models to interpret visualization designs and view relationships based on system specifications. For exploration, we use large language models to recommend insights based on the analysis of system status and data to facilitate mixed-initiative exploration. For summarization, we present a selective reporting strategy to retrace analysis history through a stream visualization and generate insight reports with the help of large language models. We demonstrate how LEVA can be integrated into existing visual analytics systems. Two usage scenarios and a user study suggest that LEVA effectively aids users in conducting visual analytics.
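As a rough illustration of the exploration stage described above, the sketch below assembles an insight-recommendation prompt from a system specification and the current analysis status before it would be sent to a language model. The prompt wording and field names are hypothetical assumptions for this sketch, not LEVA's actual templates.

```python
def build_insight_prompt(system_spec, current_view, data_summary):
    """Assemble an insight-recommendation prompt from system status.
    (Wording and fields are a hypothetical sketch, not LEVA's real template.)"""
    return (
        "You are assisting a visual analytics session.\n"
        f"Available views: {', '.join(system_spec['views'])}\n"
        f"Current view: {current_view}\n"
        f"Data summary: {data_summary}\n"
        "Recommend three insights worth exploring next, each tied to a view."
    )

# Example call with made-up system status
prompt = build_insight_prompt(
    {"views": ["scatterplot", "timeline", "map"]},
    "scatterplot",
    "5,120 rows; strong correlation between price and area",
)
```

Grounding the prompt in the system specification is what lets the model's recommendations refer back to concrete views rather than free-floating observations.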

Homepage: https://yuhengzhao.me/


Speaker: Juntong Chen (East China Normal University)

Time: Thursday, May 16, 2024, 20:36-20:48 (Beijing Time)

Title: SalienTime: User-driven Selection of Salient Time Steps for Large-Scale Geospatial Data Visualization

Speaker Bio:

Juntong Chen (陈俊潼) is currently a second-year master's student in the School of Computer Science and Technology, East China Normal University, supervised by Prof. Changbo Wang and Assoc. Prof. Chenhui Li. He received his bachelor's degree from the Software Engineering Institute, East China Normal University. His research interests include spatiotemporal data analytics, large-scale data visualization, and human-computer interaction. Specifically, he is interested in crafting visualization systems and algorithms that enhance the way people comprehend and interpret large-scale spatiotemporal data, and in applying machine learning techniques to aid visualization design, interaction, and analysis. During his master's study, he has published 5 papers at TVCG and CHI, and was awarded an Apple WWDC scholarship in 2023.

Abstract:

The voluminous nature of geospatial temporal data from physical monitors and simulation models poses challenges to efficient data access, often resulting in cumbersome temporal selection experiences in web-based data portals. Thus, selecting a subset of time steps for prioritized visualization and pre-loading is highly desirable. Addressing this issue, this paper establishes a multifaceted definition of salient time steps via extensive need-finding studies with domain experts to understand their workflows. Building on this, we propose a novel approach that leverages autoencoders and dynamic programming to facilitate user-driven temporal selections. Structural features, statistical variations, and distance penalties are incorporated to make more flexible selections. User-specified priorities, spatial regions, and aggregations are used to combine different perspectives. We design and implement a web-based interface to enable efficient and context-aware selection of time steps and evaluate its efficacy and usability through case studies, quantitative evaluations, and expert interviews. The code is available at: https://github.com/billchen2k/salientime.
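The autoencoder-plus-dynamic-programming idea can be sketched in miniature: given per-step latent features (assumed here to be precomputed by an autoencoder), a DP picks k representative time steps so that every step is cheap to substitute by the representative that precedes it. The cost model below is a deliberate simplification for illustration and omits the paper's statistical variations, distance penalties, and user-specified priorities; consult the linked repository for the actual method.

```python
import math

def select_salient_steps(features, k):
    """Pick k representative time steps via dynamic programming.

    features: list of equal-length feature vectors, one per time step
              (e.g. autoencoder latents, assumed precomputed).
    Segment cost for [i..j] = sum of Euclidean distances from each step
    in the segment to its representative, step i.
    """
    n = len(features)

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # cost[i][j]: cost of letting step i stand in for steps i..j
    cost = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            cost[i][j] = cost[i][j - 1] + dist(features[j], features[i])

    INF = float("inf")
    dp = [[INF] * n for _ in range(k + 1)]    # dp[m][j]: best cost for steps 0..j with m reps
    back = [[-1] * n for _ in range(k + 1)]   # position of the last representative
    dp[1] = cost[0][:]                        # a single representative: always step 0
    for m in range(2, k + 1):
        for j in range(n):
            for r in range(1, j + 1):
                c = dp[m - 1][r - 1] + cost[r][j]
                if c < dp[m][j]:
                    dp[m][j] = c
                    back[m][j] = r

    # Walk back through the stored boundaries to recover the k positions
    reps, j, m = [], n - 1, k
    while m > 1:
        r = back[m][j]
        reps.append(r)
        j, m = r - 1, m - 1
    reps.append(0)
    return sorted(reps)
```

On a toy sequence whose latents jump between two plateaus, the DP places one representative at the start of each plateau, which is exactly the behavior a salient-step selector should exhibit.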

Homepage: https://billc.io


Moderator Bios:

Chenhui Li (李晨辉) is an associate professor and assistant dean at the School of Computer Science and Technology, East China Normal University, a member of the CSIG Visualization and Visual Analytics Technical Committee, and an executive member of the CCF CAD & Computer Graphics Technical Committee. He received his Ph.D. from The Hong Kong Polytechnic University. His research focuses on data visualization, spatiotemporal big data analytics, and computer graphics. He has led 7 national and industry-collaboration research projects and published more than 30 papers at international conferences and in journals such as IEEE VIS, IEEE TVCG, ACM CHI, and IEEE VR. He served on the organizing committee of ChinaVis 2018, as local chair of VINCI 2019, as organization chair of CAD/CG 2023 and CAD/Graphics 2023, and as papers chair of ChinaVis 2024. He is a long-term reviewer for international journals and conferences including VIS, TVCG, and CHI. He received the Shanghai Science and Technology Progress Special Prize in 2020 and the Shanghai Higher Education Teaching Achievement Second Prize in 2022, and his course "The Beauty of Data" (《数据之美》) was selected as a national outstanding case of aesthetic education in higher education in 2023. More information: http://chenhui.li

Min Lu (陆旻) is an assistant professor at Shenzhen University. Her research focuses on data visualization and human-computer interaction. In recent years she has published multiple papers at international visualization and HCI conferences such as IEEE VIS and ACM CHI, and her awards include the ChinaVis 2016 Best Poster Award, the ICUI 2017 Best Paper Award, the IEEE PacificVis 2018 Best Poster Award, a Computational Visual Media 2020 Best Paper Honorable Mention, and an IEEE PacificVis 2024 Best Paper Honorable Mention. She serves on the program committees of IEEE PacificVis and ChinaVis, and is a long-term reviewer for IEEE VIS, IEEE TVCG, and ACM TIST. More information: https://deardeer.github.io/


The "Tutorials" section of the GAMES homepage explains "How to watch GAMES Webinar live streams?" and "How to join the GAMES WeChat group?";
the "Resources" section hosts videos and slides from past live lectures.
Live stream link: https://live.bilibili.com/h5/24617282
