QED: A Framework and Dataset for Explanations in Question Answering

Matthew Lamm, Jennimaria Palomaki, Chris Alberti, Daniel Andor, Eunsol Choi, Livio Baldini Soares, Michael Collins


A question answering system that, in addition to providing an answer, provides an explanation of the reasoning that leads to that answer has potential advantages in terms of debuggability, extensibility, and trust. To this end, we propose QED, a linguistically informed, extensible framework for explanations in question answering. A QED explanation specifies the relationship between a question and answer according to formal semantic notions such as referential equality, sentencehood, and entailment. We describe and publicly release an expert-annotated dataset of QED explanations built upon a subset of the Google Natural Questions dataset, and report baseline models on two tasks: post-hoc explanation generation given an answer, and joint question answering and explanation generation. In the joint setting, a promising result suggests that training on a relatively small amount of QED data can improve question answering. In addition to describing the formal, language-theoretic motivations for the QED approach, we describe a large user study showing that the presence of QED explanations significantly improves the ability of untrained raters to spot errors made by a strong neural QA baseline.


Copyright (c) 2021 Association for Computational Linguistics

This work is licensed under a Creative Commons Attribution 4.0 International License.