Andrew Sedler
BME PhD Defense Presentation
Date: 2023-04-12
Time: 11:00 AM - 1:00 PM
Location / Meeting Link: Coda C1115 Druid Hills (https://gatech.zoom.us/j/99196508491?pwd=VVhMRS9zaTVqMitGTjZYbTh2OWh5UT09)
Committee Members:
Chethan Pandarinath, PhD (Advisor, Department of Biomedical Engineering)
Eva Dyer, PhD (Department of Biomedical Engineering)
Hannah Choi, PhD (School of Mathematics)
Mark Davenport, PhD (School of Electrical and Computer Engineering)
Carlos Brody, PhD (Princeton Neuroscience Institute, Princeton University)
Title: General and interpretable models for inferring dynamical computation in biological neural networks
Abstract:
The sequential autoencoder (SAE) has proven to be a powerful deep architecture for learning the latent dynamics of large-scale electrophysiological recordings, with a variety of applications in neuroscience and neural engineering. In particular, latent factor analysis via dynamical systems (LFADS) is an RNN-based architecture that explicitly models latent dynamics and inputs to infer firing rates with state-of-the-art accuracy. In practice, effectively fitting such models requires time-consuming and resource-intensive hyperparameter tuning with reference to supervisory information (e.g., behavioral data) for each new dataset. Additionally, the conditions under which the dynamics learned by the SAE faithfully reflect those of the underlying biological system are unclear and under-explored, limiting the interpretability of the learned dynamics. The objectives of this dissertation are to simplify training of deep, dynamics-based neural population models on binned spiking activity and to improve the interpretability of the latent dynamics they learn. The first aim of this research was to develop a framework for robust training of SAEs on neural data. Accordingly, we present a novel regularization strategy and an efficient, unsupervised hyperparameter tuning approach that allowed us to reliably obtain high-performing models on a wide variety of datasets. The second aim was to evaluate and improve the interpretability of the latent dynamics learned by SAEs. To address this aim, we show that widely used recurrent neural networks (RNNs) struggle to accurately recover dynamics from synthetic datasets, and that neural ordinary differential equations solve many of these issues. The last aim was to address several of the practical challenges of training and applying SAEs in neuroscience. We describe two open-source projects that address this aim: a simple, modular, and extensible implementation of LFADS and a deployment framework that makes it easier to leverage managed infrastructure for large-scale training.