Centering User Needs at the Intersection of AI and Law
Gennie Mansi
Ph.D. Student in Human-Centered Computing
School of Interactive Computing
Georgia Institute of Technology
Date: 21 August 2024
Time: 1-3 pm ET
Location: virtual link
Committee:
Dr. Mark Riedl (advisor), School of Interactive Computing, Georgia Institute of Technology
Prof. Benjamin Sundholm, School of Law, St. John’s University
Dr. Naveena Karusala, School of Interactive Computing, Georgia Institute of Technology
Dr. Agata Rozga, School of Interactive Computing, Georgia Institute of Technology
Abstract:
There are mounting calls for explainable AI (XAI) systems, which provide human-understandable explanations for their responses, to be used in high-stakes decision-making environments to support human oversight of AI decisions. These calls rest on the assumption that explanations change what users know, enabling them to act in response to the system. However, much of XAI research has focused on algorithmic development rather than on understanding the connection between users’ context, XAI systems, and users’ actions. This gap is a barrier to realizing the potential benefits of XAI systems because the resulting interfaces do not sufficiently account for and satisfy users’ information needs in their decision-making contexts, and thus fail to enable user action. Consequently, we need to study what kinds of information users need from XAI systems, what kinds of actions they take in response, and how to design XAI systems in light of under-explored aspects of users’ context.
My work aims to shed light on how to better design XAI systems in light of users’ context, information needs, and action needs. Building on my prior work developing a user-centered mapping between the information XAI systems can provide and the actions users take in response, I propose to continue this thread by examining laws and regulations as an under-explored aspect of context that current AI systems do not account for. Specifically, I propose a user-centered analysis of the role laws and regulations play in shaping users’ explanation needs from an AI system. In addition, I propose a novel scenario-based design (SBD) approach for use with lawyers to distill the legal requirements that should be communicated to users. We will create a set of artifacts that can then be used to triangulate among and communicate about users’ needs, legal requirements, and the design of explanations, supporting the improved design of XAI systems.