PhD Proposal by Shang-Tse Chen

Ph.D. Thesis Proposal Announcement


Title: AI-infused Security: Robust Defense by Bridging Theory and Practice


Shang-Tse Chen

Computer Science PhD Student

School of Computational Science and Engineering

College of Computing

Georgia Institute of Technology


Date: Friday, November 16th, 2018

Time: 11:30am to 1:30pm (EST)

Location: KACB 1123




Committee:

Dr. Polo Chau (Advisor, School of Computational Science and Engineering, Georgia Institute of Technology)

Dr. Maria-Florina Balcan (Co-advisor, School of Computer Science, Carnegie Mellon University)

Dr. Wenke Lee (School of Computer Science, Georgia Institute of Technology)

Dr. Le Song (School of Computational Science and Engineering, Georgia Institute of Technology)

Dr. Kevin A. Roundy (Symantec Research Labs)

Dr. Cory Cornelius (Intel Labs)




Advances in Artificial Intelligence (AI) have had a far-reaching impact on almost every industry, and cybersecurity is one of the fields AI has most profoundly transformed. However, while AI has tremendous potential as a defense against real-world cybersecurity threats, understanding the capabilities and robustness of AI remains a fundamental challenge. The goal of this proposed thesis is to develop next-generation, strong cybersecurity defenses by uniquely combining techniques from AI, cybersecurity, and algorithmic game theory. Our multi-faceted contributions push the frontiers of each area and their intersection.

These contributions can be categorized into the following four inter-related research thrusts:


(1) Theory-guided Decision Making

We develop new theories that guide defense resource allocation to guard against unexpected attacks and catastrophic events, using a novel online decision-making framework that compels players to employ "diversified" mixed strategies.
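To give a flavor of the online decision-making setting above, the following is a minimal Python sketch of a multiplicative-weights (Hedge-style) update that maintains a mixed strategy over defense actions, down-weighting actions that incur loss. This is an illustrative textbook routine, not the thesis's actual framework; the function name, loss values, and learning rate are hypothetical.

```python
import math

def hedge(loss_sequence, eta=0.5):
    """Hedge / multiplicative-weights update: maintain a mixed strategy
    over n actions, exponentially down-weighting actions that incur loss."""
    n = len(loss_sequence[0])
    w = [1.0] * n
    for losses in loss_sequence:
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, losses)]
    total = sum(w)
    return [wi / total for wi in w]  # probability assigned to each action

# Two defense actions; action 0 is penalized every round, action 1 never is.
strategy = hedge([[1.0, 0.0]] * 5)
```

After a few rounds, the strategy shifts most of its probability mass to the action that avoided loss, while still keeping the distribution mixed; a "diversified" variant would additionally constrain how concentrated this distribution may become.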


(2) Robust Distributed Machine Learning

We develop a communication-efficient distributed boosting algorithm with strong theoretical guarantees in the agnostic learning setting where the data can contain arbitrary noise.
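For readers unfamiliar with boosting, the core reweighting step that distributed boosting builds on can be sketched in a few lines of Python. This is a generic AdaBoost-style round for illustration only, not the thesis's communication-efficient algorithm; the toy data and the fixed decision stump are hypothetical stand-ins.

```python
import math

def boost_round(X, y, weights, weak_learner):
    """One AdaBoost-style round: fit a weak learner on the weighted sample,
    then up-weight the examples it misclassifies."""
    h = weak_learner(X, y, weights)
    err = sum(w for x, yi, w in zip(X, y, weights) if h(x) != yi)
    err = min(max(err, 1e-10), 1.0 - 1e-10)   # clip to avoid log(0)
    alpha = 0.5 * math.log((1.0 - err) / err)  # vote weight of this learner
    new_w = [w * math.exp(-alpha if h(x) == yi else alpha)
             for x, yi, w in zip(X, y, weights)]
    z = sum(new_w)
    return h, alpha, [w / z for w in new_w]

# Toy data and a fixed decision stump standing in for a real weak learner.
X, y = [-2.0, -1.0, 1.0, 2.0], [-1, -1, 1, 1]
stump = lambda X, y, w: (lambda x: 1 if x > 0 else -1)
h, alpha, new_weights = boost_round(X, y, [0.25] * 4, stump)
```

In the distributed setting, the key difficulty is performing this reweighting across machines without shipping all the data around, and in the agnostic setting the analysis must hold even when no weak learner is much better than random on the noisy data.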


(3) Adversarial Attack and Defense

We develop ShapeShifter, a physical adversarial attack that fools state-of-the-art deep-learning-based object detectors.

We propose to design efficient methods to protect deep neural networks from such adversarial attacks.
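To illustrate the basic mechanism behind adversarial attacks, here is a minimal Python sketch of a fast-gradient-sign style perturbation on a toy linear classifier. This is a generic pedagogical example, not ShapeShifter itself (which attacks object detectors in the physical world); the model, weights, and epsilon are hypothetical.

```python
def fgsm_perturb(x, grad, eps=0.1):
    """Fast-gradient-sign style perturbation: nudge every input feature
    by eps in the direction that increases the loss."""
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Toy linear classifier: score(v) = w . v; the example belongs to class +1.
w = [1.0, -2.0]
x = [0.5, -0.25]
score = lambda v: sum(wi * vi for wi, vi in zip(w, v))

# For a +1 example, the loss falls as the score rises, so the loss
# gradient with respect to the input is -w.
x_adv = fgsm_perturb(x, [-wi for wi in w], eps=0.1)
```

Even though each feature moves by at most eps, the perturbation is chosen adversarially per coordinate, so the classifier's confidence in the correct class drops; physical attacks like ShapeShifter must additionally survive camera distance, angle, and lighting changes.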


(4) Enterprise Cyber Threat Detection

We show how AI can be used in a real enterprise environment by designing a novel framework called "Virtual Product" that predicts potential enterprise cyber threats from telemetry data.


This thesis research will make multiple important contributions. First, it improves our understanding of the capabilities and limitations of AI under adversarial conditions. Second, it offers guiding principles for future research on empowering security-critical applications by effectively combining AI, cybersecurity, and algorithmic game theory. Third, our machine learning algorithms are scalable and general, and can therefore be applied in a wide range of domains.
