Learning to Prune: Exploring the Frontier of Fast and Accurate Parsing

Tim Vieira, Jason Eisner


Pruning hypotheses during dynamic programming is commonly used to speed up inference in settings such as parsing.  Unlike prior work, we train a pruning policy under an objective that measures end-to-end performance: we search for a fast and accurate policy. This poses a difficult machine learning problem, which we tackle with the LOLS algorithm.  LOLS training must continually compute the effects of changing pruning decisions: we show how to make this efficient in the constituency parsing setting, via dynamic programming and change propagation algorithms.  We find that optimizing end-to-end performance in this way leads to a better Pareto frontier---i.e., parsers which are more accurate for a given runtime.
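The abstract's notion of a Pareto frontier over speed and accuracy can be made concrete with a small, generic sketch (an illustration, not the paper's actual method or data): given hypothetical (runtime, accuracy) pairs for candidate parsers, a parser is Pareto-optimal if no other parser is both faster and more accurate.

```python
# Hypothetical illustration (not from the paper): computing the Pareto
# frontier over (runtime, accuracy) pairs. A configuration is dominated
# if another one has lower runtime and higher accuracy.

def pareto_frontier(points):
    """Return the (runtime, accuracy) points not dominated by any other
    point, sorted by increasing runtime."""
    frontier = []
    # Sort by runtime ascending; break ties by accuracy descending so
    # the best point at each runtime is considered first.
    for runtime, acc in sorted(points, key=lambda p: (p[0], -p[1])):
        # A point survives only if it beats the best accuracy achieved
        # by any faster (or equally fast) configuration so far.
        if not frontier or acc > frontier[-1][1]:
            frontier.append((runtime, acc))
    return frontier

# Made-up example points: a better training objective would shift
# such a frontier up and to the left.
points = [(1.0, 0.80), (2.0, 0.85), (1.5, 0.79), (3.0, 0.84), (2.5, 0.88)]
print(pareto_frontier(points))  # -> [(1.0, 0.8), (2.0, 0.85), (2.5, 0.88)]
```

Under the paper's framing, a pruning policy trained for end-to-end performance would yield a frontier that dominates heuristic baselines at every runtime budget.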



Copyright (c) 2017 Association for Computational Linguistics

This work is licensed under a Creative Commons Attribution 4.0 International License.