VILA: Improving Structured Content Extraction from Scientific PDFs Using Visual Layout Groups
Published 2022-04-06
Zejiang Shen (Allen Institute for Artificial Intelligence), Kyle Lo (Allen Institute for Artificial Intelligence), Lucy Lu Wang (Allen Institute for Artificial Intelligence), Bailey Kuehl (Allen Institute for Artificial Intelligence), Daniel Sabey Weld (Allen Institute for Artificial Intelligence; University of Washington), Doug Downey (Allen Institute for Artificial Intelligence)
Abstract
Accurately extracting structured content from PDFs is a critical first step for NLP over scientific papers. Recent work has improved extraction accuracy by incorporating elementary layout information, e.g., each token's 2D position on the page, into language model pretraining. We introduce new methods that explicitly model VIsual LAyout (VILA) groups, i.e., text lines or text blocks, to further improve performance. In our I-VILA approach, we show that simply inserting special tokens denoting layout group boundaries into model inputs can lead to a 1.9% Macro F1 improvement in token classification. In the H-VILA approach, we show that hierarchical encoding of layout groups can reduce inference time by up to 47% with less than a 0.8% Macro F1 loss. Unlike prior layout-aware approaches, our methods do not require expensive additional pretraining, only fine-tuning, which we show can reduce training cost by up to 95%. Experiments are conducted on a newly curated evaluation suite, S2-VLUE, that unifies existing automatically labeled datasets and includes a new dataset of manual annotations covering diverse papers from 19 scientific disciplines. Pretrained weights, benchmark datasets, and source code are available at https://github.com/allenai/VILA.
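To make the I-VILA idea concrete, the following is a minimal sketch of the input construction it describes: flattening layout-detected groups into one token sequence with a special boundary token inserted between consecutive groups. The marker name "[BLK]", the helper insert_vila_markers, and the label-masking convention are illustrative assumptions, not the paper's exact implementation; the group structure is assumed to come from an external PDF layout detector.

```python
# Hypothetical sketch of I-VILA-style input construction.
# Assumptions: groups come from an external layout detector;
# "[BLK]" is an assumed boundary-marker token name; -100 is the
# conventional "ignore" index for token-classification losses.

from typing import List, Tuple

BLOCK_MARKER = "[BLK]"   # special token denoting a layout group boundary
IGNORE_LABEL = -100      # marker tokens are excluded from the loss

def insert_vila_markers(
    groups: List[List[Tuple[str, int]]],
) -> Tuple[List[str], List[int]]:
    """Flatten (token, label) groups into a single sequence, inserting
    a boundary marker between consecutive visual layout groups."""
    tokens: List[str] = []
    labels: List[int] = []
    for i, group in enumerate(groups):
        if i > 0:  # mark the boundary between adjacent groups
            tokens.append(BLOCK_MARKER)
            labels.append(IGNORE_LABEL)
        for tok, lab in group:
            tokens.append(tok)
            labels.append(lab)
    return tokens, labels

# Example: two text blocks, e.g., a title block and an author block.
blocks = [
    [("VILA:", 0), ("Improving", 0), ("Structured", 0)],
    [("Zejiang", 1), ("Shen", 1)],
]
toks, labs = insert_vila_markers(blocks)
print(toks)  # ['VILA:', 'Improving', 'Structured', '[BLK]', 'Zejiang', 'Shen']
```

The resulting sequence can be fed to any standard token classifier, which matches the abstract's point that the method needs only fine-tuning, not layout-aware pretraining.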
Presented at ACL 2022
Article at MIT Press