Title: Interactive and Explainable Machine Learning Methods With Humans

Date: June 12

Time: 1:00 PM (Eastern)

Location:

- In Person at Klaus 2456 (Classroom Wing) or 

- Virtual at https://gatech.zoom.us/j/99019110918?pwd=Vi9EMUM4bDBZWnNCczRQamNNcmtUQT09


Andrew Silva

Computer Science Ph.D. Candidate

School of Interactive Computing

Georgia Institute of Technology


Committee:

Dr. Matthew Gombolay (Advisor) – School of Interactive Computing, Georgia Institute of Technology

Dr. Sonia Chernova – School of Interactive Computing, Georgia Institute of Technology

Dr. Mark Riedl – School of Interactive Computing, Georgia Institute of Technology

Dr. Diyi Yang – Computer Science Department, Stanford University

Dr. Barry Theobald – Machine Learning Research, Apple, Inc.


Abstract:


This dissertation introduces and evaluates new mechanisms for interactivity and explainability within machine learning, specifically targeting human-in-the-loop learning systems. The contributions of this dissertation substantiate the thesis statement: Interactive and explainable machine learning yields improved experiences for human users of intelligent systems.


The dissertation will show that machine learning with human expertise improves performance in both task success rates and reward, introducing a novel neural network architecture and an approach to goal specification using language commands. I will then discuss how explainability in machine learning improves human perceptions of intelligent agents and increases user compliance with agent suggestions, detailing technical contributions and a large-scale user study on perceptions of explainability mechanisms. Finally, I will give an overview of my work on personalization for machine learning and the ways in which personalized machine learning improves performance across a large, heterogeneous population of users. I offer both novel technical methods for interactivity and explainability within machine learning and user studies that empirically validate these technical contributions. My dissertation will conclude with a presentation of recent work on personalizing explainability mechanisms to users in the task-oriented setting of guiding a simulated self-driving car through an unseen environment, navigating a tradeoff between participant preference and task performance.