Hiroki Ouchi
Nara Institute of Science and Technology
Jun Suzuki
Tohoku University
RIKEN
Sosuke Kobayashi
Preferred Networks, Inc.
Tohoku University
Sho Yokoi
Tohoku University
RIKEN
Tatsuki Kuribayashi
Tohoku University
Langsmith, Inc.
Masashi Yoshikawa
Tohoku University
RIKEN
Kentaro Inui
Tohoku University
RIKEN
Abstract
Interpretable rationales for model predictions are crucial in practical applications. We develop neural models that possess an interpretable inference process for dependency parsing. Our models adopt instance-based inference, where dependency edges are extracted and labeled by comparing them to edges in a training set. The training edges are explicitly used for the predictions; thus, it is easy to grasp the contribution of each edge to the predictions. Our experiments show that our instance-based models achieve accuracy competitive with standard neural models and offer reasonably plausible instance-based explanations.
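To make the instance-based inference described above concrete, the following is a minimal sketch of how a candidate dependency edge might be scored and labeled by similarity to training-set edges. All names here (score_edge, the dot-product similarity, the nearest-neighbor labeling rule) are illustrative assumptions, not the paper's actual model, which uses learned neural edge representations.

```python
import numpy as np

def score_edge(candidate_vec, train_edge_vecs, train_edge_labels):
    """Score a candidate dependency edge by similarity to training edges.

    candidate_vec: (d,) representation of the candidate head-dependent pair.
    train_edge_vecs: (n, d) representations of edges from the training set.
    train_edge_labels: length-n list of dependency labels for those edges.
    """
    # Similarity of the candidate to every training edge (plain dot product
    # here; a real model would use a learned similarity function).
    sims = train_edge_vecs @ candidate_vec  # shape: (n,)
    # The edge score aggregates the similarities, so each training edge's
    # contribution to the prediction is directly inspectable.
    edge_score = sims.sum()
    # Label the edge by its most similar training edge (nearest neighbor).
    predicted_label = train_edge_labels[int(np.argmax(sims))]
    return edge_score, predicted_label, sims

# Toy usage: three training edges in a 4-dimensional representation space.
rng = np.random.default_rng(0)
train_vecs = rng.normal(size=(3, 4))
labels = ["nsubj", "obj", "amod"]
cand = rng.normal(size=4)
score, label, contributions = score_edge(cand, train_vecs, labels)
print(score, label, contributions)  # per-edge similarities act as rationales
```

Returning the per-edge similarities alongside the prediction is what makes the rationale interpretable: each training edge's contribution can be read off directly.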