Exploring Compositional Architectures and Word Vector Representations for Prepositional Phrase Attachment
Published: 2014-12-23
Yonatan Belinkov, Massachusetts Institute of Technology
Tao Lei, Massachusetts Institute of Technology
Regina Barzilay, Massachusetts Institute of Technology
Amir Globerson, The Hebrew University of Jerusalem
Abstract
Prepositional phrase (PP) attachment disambiguation is a known challenge in syntactic parsing. The lexical sparsity associated with PP attachments motivates research in word representations that can capture the pertinent syntactic and semantic features of words. One promising solution is to use word vectors induced from large amounts of raw text. However, state-of-the-art systems that employ such representations yield only modest gains in PP attachment accuracy.
In this paper, we show that word vector representations can yield significant PP attachment performance gains. This is achieved via a non-linear architecture that is discriminatively trained to maximize PP attachment accuracy. The architecture is initialized with word vectors trained on unlabeled data, and relearns those vectors during training to maximize attachment accuracy. We obtain additional performance gains with alternative representations such as dependency-based word vectors. When tested on both English and Arabic datasets, our method outperforms both a strong SVM classifier and state-of-the-art parsers. For instance, we achieve 82.6% PP attachment accuracy on Arabic, while the Turbo and Charniak self-trained parsers obtain 76.7% and 80.8%, respectively.
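The abstract names the architecture only at a high level: a non-linear composition of word vectors, scored per candidate attachment site. The following is a minimal NumPy sketch of one compositional scorer in that spirit; the tanh composition, the dimensionality D, and the names compose and score_attachment are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

# Minimal sketch of a compositional PP-attachment scorer in the spirit of
# the paper. Composition function, dimensionality, and initialization are
# illustrative assumptions, not the authors' implementation.
D = 100  # word vector dimensionality (assumed)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(D, 2 * D))  # learned composition matrix
w = rng.normal(scale=0.1, size=D)           # learned scoring vector

def compose(u: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Non-linearly compose two D-dim vectors into one D-dim vector."""
    return np.tanh(W @ np.concatenate([u, v]))

def score_attachment(head: np.ndarray, prep: np.ndarray,
                     child: np.ndarray) -> float:
    """Score one candidate head for attaching the PP (prep, child)."""
    pp = compose(prep, child)             # represent the PP itself
    return float(w @ compose(head, pp))   # compose head with PP, then score

# Toy usage: random vectors stand in for pretrained word vectors. Training
# would apply a ranking loss over candidate heads, updating W, w, and,
# crucially for the reported gains, the word vectors themselves.
head, prep, child = (rng.normal(size=D) for _ in range(3))
print(score_attachment(head, prep, child))
```

At test time, the model scores every candidate head in the sentence and attaches the PP to the highest-scoring one; discriminative training pushes the gold head's score above its competitors.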
Presented at NAACL 2015.