Transformer Grammars: Augmenting Transformer Language Models with Syntactic Inductive Biases at Scale
Published
2022-12-22
Laurent Sartran (DeepMind), Samuel Barrett (University of Oxford), Adhiguna Kuncoro (DeepMind, University of Oxford), Miloš Stanojević (DeepMind), Phil Blunsom (University of Oxford), Chris Dyer (DeepMind)
Abstract
We introduce Transformer Grammars (TGs), a novel class of Transformer language models that combine (i) the expressive power, scalability, and strong performance of Transformers and (ii) recursive syntactic compositions, which here are implemented through a special attention mask and deterministic transformation of the linearized tree. We find that TGs outperform various strong baselines on sentence-level language modeling perplexity, as well as on multiple syntax-sensitive language modeling evaluation metrics. Additionally, we find that the recursive syntactic composition bottleneck which represents each sentence as a single vector harms perplexity on document-level language modeling, providing evidence that a different kind of memory mechanism (one that is independent of composed syntactic representations) plays an important role in current successful models of long text.
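To make the "deterministic transformation of the linearized tree" mentioned in the abstract concrete, here is a minimal, hedged sketch in Python. It assumes a phrase-structure tree linearized as a sequence of opening nonterminals (e.g. `(NP`), terminals, and closing nonterminals (e.g. `)NP`), and duplicates each closing nonterminal so that one copy can serve a compose-style attention pattern and the other a stack-style pattern; the function name and token conventions are illustrative, not taken from the paper's actual codebase.

```python
def duplicate_closing_nonterminals(tokens):
    """Duplicate every closing-nonterminal token in a linearized tree.

    In a TG-style setup, the two copies of each closing nonterminal can be
    given different attention masks: one restricted to the constituent being
    closed (composition), the other to the resulting stack context.
    """
    out = []
    for tok in tokens:
        out.append(tok)
        if tok.startswith(")"):
            out.append(tok)  # second copy, for the stack-attention position
    return out


# Toy linearization of "(S (NP the dog) (VP barks))"
linearized = ["(S", "(NP", "the", "dog", ")NP", "(VP", "barks", ")VP", ")S"]
transformed = duplicate_closing_nonterminals(linearized)
# Every closing token now appears twice, in place:
# ["(S", "(NP", "the", "dog", ")NP", ")NP", "(VP", "barks", ")VP", ")VP", ")S", ")S"]
```

The transformation is purely deterministic and applied before the model sees the sequence, so standard Transformer training machinery is unchanged; the syntactic inductive bias comes entirely from the attention masks attached to the duplicated positions.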
Presented at EMNLP 2022
Article at MIT Press