From Paraphrase Database to Compositional Paraphrase Model and Back
Published: 2015-06-12
John Wieting (University of Illinois at Urbana-Champaign), Mohit Bansal (TTI-Chicago), Kevin Gimpel (TTI-Chicago), Karen Livescu (TTI-Chicago), Dan Roth (University of Illinois at Urbana-Champaign)
Abstract
The Paraphrase Database (PPDB; Ganitkevitch et al., 2013) is an extensive semantic resource, consisting of a list of phrase pairs with (heuristic) confidence estimates. However, it is still unclear how it can best be used, due to the heuristic nature of the confidences and its necessarily incomplete coverage. We propose models to leverage the phrase pairs from the PPDB to build parametric paraphrase models that score paraphrase pairs more accurately than the PPDB’s internal scores while simultaneously improving its coverage. They allow for learning phrase embeddings as well as improved word embeddings. Moreover, we introduce two new, manually annotated datasets to evaluate short-phrase paraphrasing models. Using our paraphrase model trained using PPDB, we achieve state-of-the-art results on standard word and bigram similarity tasks and beat strong baselines on our new short phrase paraphrase tasks.
PDF (presented at EMNLP 2015) · Erratum (prior PDF)
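To make the abstract's idea of a parametric, compositional paraphrase model concrete, here is a minimal sketch of scoring a PPDB-style phrase pair. The details are illustrative assumptions, not the paper's configuration: a toy vocabulary with random 25-dimensional vectors (the paper learns embeddings from PPDB), word-averaging composition, cosine scoring, and a hinge-style margin of 0.4.

```python
import numpy as np

# Toy vocabulary with random word embeddings (assumption: the real model
# learns these vectors from PPDB phrase pairs rather than sampling them).
rng = np.random.default_rng(0)
vocab = {w: rng.normal(size=25) for w in
         ["can", "not", "be", "separated", "from", "is", "inseparable"]}

def phrase_embedding(phrase):
    """Compose a phrase vector by averaging its word vectors."""
    vecs = [vocab[w] for w in phrase.split()]
    return np.mean(vecs, axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def paraphrase_score(p1, p2):
    """Score a candidate paraphrase pair by cosine similarity of composed vectors."""
    return cosine(phrase_embedding(p1), phrase_embedding(p2))

# A PPDB-style phrase pair and a non-paraphrase negative example.
pos = paraphrase_score("can not be separated from", "is inseparable from")
neg = paraphrase_score("can not be separated from", "can be separated from")
print(f"positive pair: {pos:.3f}  negative pair: {neg:.3f}")

# Hinge-style margin objective that would push paraphrase pairs above negatives
# during training (an assumed simplification of a margin-based ranking loss).
margin = 0.4
loss = max(0.0, margin - pos + neg)
print(f"margin loss: {loss:.3f}")
```

With learned embeddings, the paraphrase pair would score higher than the negative and the loss would approach zero; with the random vectors above, the numbers only demonstrate the mechanics of composition and scoring.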