Title: Information Extraction on Scientific Literature under Limited Supervision

  

Date: Monday, October 30, 2023

Time: 2:30 PM – 4:30 PM ET

Location: Zoom Link

 

Fan Bai

Ph.D. Candidate in Computer Science

School of Interactive Computing

College of Computing

Georgia Institute of Technology

  

Committee

Dr. Alan Ritter (Advisor) – School of Interactive Computing, Georgia Institute of Technology

Dr. Wei Xu – School of Interactive Computing, Georgia Institute of Technology 

Dr. Zsolt Kira – School of Interactive Computing, Georgia Institute of Technology 

Dr. Gabriel Stanovsky – School of Computer Science and Engineering, Hebrew University of Jerusalem

Dr. Hoifung Poon – Microsoft Health Futures

  

Abstract

The exponential growth of scientific literature presents both challenges and opportunities for researchers across disciplines. Effectively extracting pertinent information from this extensive corpus is essential for advancing knowledge, enhancing collaboration, and driving innovation. However, manual extraction is laborious and time-consuming, underscoring the demand for automated solutions. Information extraction (IE), a sub-field of natural language processing (NLP) focused on automatically extracting structured information from unstructured data sources, plays a crucial role in addressing this challenge. Despite their success, many IE methods require substantial human-annotated data, which may not be readily available, particularly in specialized scientific domains. This highlights the need for adaptable and robust techniques capable of operating with limited supervision.

 

In this thesis, we study the task of information extraction from scientific literature, particularly under the challenge of limited human supervision. Our work addresses three key dimensions of this problem. First, we explore the potential of harnessing easily accessible resources, such as knowledge bases, to develop information extraction systems without direct human supervision. Next, we investigate the trade-off between the labor cost of human annotation and the computational cost of domain-specific pre-training, to achieve optimal performance under budget constraints. Lastly, we capitalize on the emergent capabilities of large pre-trained language models by showing how information extraction can be performed with minimal demonstrations or solely from a human-crafted data schema. Through these explorations, this thesis aims to lay a solid foundation for the continued advancement of scientific information extraction under limited supervision.