Title: Bridging Methodologies to Address Ethical Challenges of Algorithmic Systems

 

Camille Harris

Ph.D. Candidate in Computer Science 

School of Interactive Computing

Georgia Institute of Technology

 

Date: Thursday, September 11, 2025

Time: 10 AM - 12 PM ET

Location: Virtual


 

Committee

Dr. Diyi Yang (co-advisor), School of Computer Science, Stanford University

Dr. Neha Kumar (co-advisor), School of Interactive Computing, Georgia Institute of Technology

Dr. Andre Brock, School of Literature, Media, and Communication, Georgia Institute of Technology

Dr. Karthik Goyal, School of Interactive Computing, Georgia Institute of Technology

Dr. Naveena Karusala, School of Interactive Computing, Georgia Institute of Technology

Dr. Allison Koenecke, School of Information, Cornell Tech

 

Abstract

 

As AI systems are adopted in new domains and become a greater part of everyday life, examining, understanding, and ultimately preventing these systems from replicating existing societal harms of sexism, racism, and other systems of oppression is an increasing concern. Within a United States context, these issues pose significant potential for harm to historically marginalized groups such as Black/African American people, Indigenous people, and other People of Color (BIPOC). Within Natural Language Processing (NLP), dialects of English spoken by minority populations, such as African American English and Chicano English, have distinct grammar, vocabulary, and other linguistic differences from White Mainstream English. While most work exploring such bias concerns in NLP and other machine learning systems focuses on the model level, auditing technologies for their performance across dialects or across groups, most marginalized users who experience the impacts of such systems encounter them at the level of the downstream application. At this level, many other components, including the context of the application, the party creating and implementing the model, the user interface, and other users, can combine with model performance issues to shape the overall user experience. Algorithmic audits, while a powerful methodology for understanding the risks and errors to which models may expose some populations, do little to explore how users are impacted by these systems. On the other hand, studies within human-computer interaction that explore user experience can better capture the broader landscape of stakeholders and technological systems shaping users' experiences, but give little guidance on technical improvements. In this work, I demonstrate how both sets of methodologies reveal unique and complementary insights within two topic areas: social media content moderation and large language models. Ultimately, this dissertation argues for greater efforts to bridge the gap between subfields of computing, particularly NLP, algorithmic fairness, and human-computer interaction, to adequately explore, understand, and prevent the ethical challenges of algorithmic systems.