Reducing Conversational Agents' Overconfidence through Linguistic Calibration

Sabrina J Mielke, Arthur Szlam, Emily Dinan, Y-Lan Boureau

Abstract


While improving neural dialogue agents' factual accuracy is the object of much research, another important aspect of communication, less studied in the setting of neural dialogue, is transparency about ignorance.

In this work, we analyze to what extent state-of-the-art chit-chat models are linguistically calibrated, in the sense that their verbalized expression of doubt (or confidence) matches the likelihood that the model's responses are factually incorrect (or correct). We find that these models are poorly calibrated, yet we show that the likelihood of correctness can be predicted accurately.
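
As a rough illustration of what measuring linguistic calibration could look like, the Python sketch below compares annotated factual correctness against the level of confidence verbalized in each response; the bucket labels and the (confidence_bucket, is_correct) input format are assumptions for illustration, not the paper's actual evaluation setup.

from collections import defaultdict

def linguistic_calibration_report(examples):
    # examples: iterable of (confidence_bucket, is_correct) pairs, where
    # confidence_bucket is a coarse label ("low", "medium", "high") derived
    # from the doubt/confidence verbalized in the response, and is_correct
    # is a boolean factual-correctness annotation.
    totals, correct = defaultdict(int), defaultdict(int)
    for bucket, is_correct in examples:
        totals[bucket] += 1
        correct[bucket] += int(is_correct)
    # A linguistically calibrated model would show accuracy rising from the
    # "low" to the "high" confidence bucket.
    return {bucket: correct[bucket] / totals[bucket] for bucket in totals}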

By incorporating such metacognitive features into the training of a controllable generation model, we obtain a dialogue agent with greatly improved linguistic calibration.
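
A minimal sketch of how a predicted likelihood of correctness might be exposed to a controllable generator as a control token prepended to the dialogue context; the threshold values and token names here are illustrative assumptions rather than the paper's exact control features.

def add_confidence_control_token(context, predicted_correctness):
    # Map the predicted probability of being correct to a coarse control
    # token, so the generator can learn to verbalize doubt when correctness
    # is unlikely and confidence when it is likely.
    if predicted_correctness < 0.3:
        token = "<low-confidence>"
    elif predicted_correctness < 0.7:
        token = "<medium-confidence>"
    else:
        token = "<high-confidence>"
    return f"{token} {context}"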





