Title: Reliable Decision-Making Under Uncertainty Through the Lens of Statistics and Optimization

Date: April 7th, 2025

Time: 2:00pm – 4:00pm

Location: Groseclose 303

Zoom Meeting Link: https://gatech.zoom.us/j/7377156804?pwd=FaalZkHqEWyzRJVayUUNVeBpOFtOWq.1&omn=98775438085


Jie Wang

Industrial Engineering PhD Candidate

H. Milton Stewart School of Industrial and Systems Engineering

Georgia Institute of Technology


Committee: 

Dr. Yao Xie (Advisor), H. Milton Stewart School of Industrial and Systems Engineering 

Dr. Xin Chen, H. Milton Stewart School of Industrial and Systems Engineering 

Dr. George Lan, H. Milton Stewart School of Industrial and Systems Engineering 

Dr. Alexander Shapiro, H. Milton Stewart School of Industrial and Systems Engineering 

Dr. Rui Gao, Department of Information, Risk, and Operations Management at the McCombs School of Business at the University of Texas at Austin


Abstract: In this thesis, we develop computationally efficient algorithms with statistical guarantees for decision-making under uncertainty, particularly in the presence of large-scale, noisy, and high-dimensional data. In Chapter 2, we propose a kernelized projected Wasserstein distance for high-dimensional hypothesis testing, which finds the nonlinear mapping that maximizes the discrepancy between the projected distributions. In Chapter 3, we provide an in-depth analysis of the computational and statistical guarantees of the kernelized projected Wasserstein distance. In Chapter 4, we study the variable selection problem in two-sample testing, aiming to select the most informative variables for determining whether two datasets follow the same distribution. In Chapter 5, we present a novel framework for distributionally robust stochastic optimization (DRO), which seeks an optimal decision that minimizes the expected loss under the worst-case distribution within a specified ambiguity set; this set is defined using a variant of the Wasserstein distance based on entropic regularization. In Chapter 6, we incorporate phi-divergence regularization into infinity-type Wasserstein DRO, a formulation particularly useful for adversarial machine learning tasks. Chapter 7 concludes with an overview of future research directions.
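
For orientation, the two formulations recurring in the abstract can be written generically as follows. This is a minimal sketch in standard notation, not notation taken from the thesis itself: T ranges over a class of (kernelized) nonlinear mappings, T_{\#}P denotes the pushforward of P under T, and the ambiguity set is a ball of radius rho around the empirical distribution, measured by a (regularized) Wasserstein-type discrepancy.

% Projected Wasserstein two-sample statistic (Chapters 2-3, generic form):
% the maximization searches over a class \mathcal{T} of nonlinear mappings.
\mathcal{PW}(P, Q) = \max_{T \in \mathcal{T}} \mathcal{W}\bigl(T_{\#}P,\, T_{\#}Q\bigr)

% Distributionally robust optimization (Chapters 5-6, generic form):
% \mathcal{B}_{\rho}(\widehat{P}) is an ambiguity ball around the empirical
% distribution \widehat{P}; \ell(x, \xi) is the loss of decision x under
% uncertain outcome \xi.
\min_{x \in \mathcal{X}} \; \sup_{Q \in \mathcal{B}_{\rho}(\widehat{P})} \mathbb{E}_{\xi \sim Q}\bigl[\ell(x, \xi)\bigr]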