Title: Democratizing Human-Centered AI with Visual Explanation and Interactive Guidance
Date: Friday, July 26, 2024
Time: 1–3 PM Eastern Time (US)
Location: Coda 114 (first floor conference room; just walk in, no special access needed)
Virtual Meeting: Zoom
Zijie Jay Wang
Machine Learning PhD Student
School of Computational Science and Engineering
Georgia Institute of Technology
Committee:
Dr. Duen Horng (Polo) Chau - Advisor, Georgia Tech, Computational Science & Engineering
Dr. Judy Hoffman - Georgia Tech, School of Interactive Computing
Dr. Munmun De Choudhury - Georgia Tech, School of Interactive Computing
Dr. Lauren Wilcox - eBay & Georgia Tech, School of Interactive Computing
Dr. Jenn Wortman Vaughan - Microsoft Research
Dr. Rich Caruana - Microsoft Research
Abstract:
Artificial intelligence (AI) systems are increasingly integrated into our everyday lives, yet how they make predictions often remains obscure to both their developers and the people they impact. This opacity renders AI models "mysterious," leaving developers and affected individuals alike powerless to align these models with their values. To address these challenges, my research applies a human-centered approach to explain AI models and empower different stakeholders to align them with their knowledge and values. This thesis focuses on three complementary thrusts.
(1) Explain AI to Everyone. We pioneer easy-to-access interactive visualization systems that help AI novices and experts understand AI models (e.g., WizMap and CNN Explainer, used by 360k+ novices worldwide). We also present first-of-their-kind resources (e.g., DiffusionDB, a 6.5TB dataset of 14 million prompt-image pairs) to help AI developers and policymakers understand the impacts of large generative AI models.
(2) Guide AI with Human Values. We introduce GAM Changer (deployed by Microsoft) to empower AI developers to vet and fix problematic model behaviors, and GAM Coach to enable those impacted by AI to receive customizable suggestions to alter unfavorable AI decisions.
(3) Democratize Human-Centered AI. We show how researchers can lower the barrier to adopting human-centered AI practices by integrating them into AI practitioners' workflows. We highlight Farsight as an example: it leverages in-situ interfaces to foster responsible AI awareness during AI prototyping.
Our work is making a significant impact on academia, industry, and society. CNN Explainer has been integrated into deep learning courses at top universities, including CMU, Georgia Tech, Duke University, and the University of Tokyo, and has received over 7k stars on GitHub. DiffusionDB has received over 2M data requests through the HuggingFace APIs. Furthermore, our work has been recognized with four best-paper-type awards at top conferences, including ACL, CHI, and FAccT. Additionally, this thesis has been supported by Apple and J.P. Morgan AI PhD Fellowships.