Dependency Parsing with Backtracking using Deep Reinforcement Learning
Published 2022-09-07
Franck Dary, Maxime Petit, Alexis Nasr
Aix Marseille Univ, Université de Toulon, CNRS, LIS, Marseille, France
Abstract
Greedy algorithms for NLP tasks such as transition-based parsing are prone to error propagation. One way to overcome this problem is to allow the algorithm to backtrack and explore an alternative solution when new evidence contradicts the solution explored so far. To implement such behavior, we use reinforcement learning and let the algorithm backtrack whenever that action yields a better reward than continuing to process the sentence. We test this idea on both PoS tagging and dependency parsing and show that backtracking is an effective means of fighting error propagation.
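To make the idea concrete, below is a minimal sketch (not the paper's implementation) of how a BACK action can be grafted onto a greedy arc-standard transition parser: at each step the parser takes the action with the highest predicted value, and if undoing the last transition scores higher than any way of moving forward, it restores an earlier state. The `q_values` stub stands in for the action-value function that the paper trains with deep reinforcement learning; the action names, `State` class, and random scores are all illustrative assumptions.

```python
import random
from copy import deepcopy

# Sketch: an arc-standard transition parser whose action set is
# extended with a BACK action that undoes the most recent transition.
ACTIONS = ["SHIFT", "LEFT-ARC", "RIGHT-ARC", "BACK"]

class State:
    def __init__(self, words):
        self.stack = [0]                           # 0 = artificial root
        self.buffer = list(range(1, len(words) + 1))
        self.arcs = []                             # (head, dependent) pairs

    def terminal(self):
        return not self.buffer and len(self.stack) == 1

def legal(state, action):
    if action == "SHIFT":
        return bool(state.buffer)
    if action == "LEFT-ARC":                       # root cannot be a dependent
        return len(state.stack) >= 2 and state.stack[-2] != 0
    if action == "RIGHT-ARC":
        return len(state.stack) >= 2
    return False

def apply_action(state, action):
    s = deepcopy(state)
    if action == "SHIFT":
        s.stack.append(s.buffer.pop(0))
    elif action == "LEFT-ARC":                     # head = top, dep = second
        dep = s.stack.pop(-2)
        s.arcs.append((s.stack[-1], dep))
    elif action == "RIGHT-ARC":                    # head = second, dep = top
        dep = s.stack.pop()
        s.arcs.append((s.stack[-1], dep))
    return s

def q_values(state):
    # Placeholder for the learned Q-function Q(s, a); random scores
    # keep the control flow runnable.
    return {a: random.random() for a in ACTIONS}

def parse(words, max_steps=200):
    state, history = State(words), []
    for _ in range(max_steps):
        if state.terminal():
            break
        q = q_values(state)
        forward = [a for a in ACTIONS if a != "BACK" and legal(state, a)]
        if not forward:
            break
        best = max(forward, key=q.get)
        # Backtrack when undoing is predicted to pay off more than any
        # way of continuing to process the sentence.
        if history and q["BACK"] > q[best]:
            state = history.pop()
        else:
            history.append(state)
            state = apply_action(state, best)
    return state.arcs

print(parse(["the", "cat", "sleeps"]))
```

With a trained value function in place of the random stub, the BACK branch fires exactly when the model expects a higher cumulative reward from revising an earlier decision than from extending the current one, which is the behavior the abstract describes.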
Article at MIT Press
Presented at EMNLP 2022