
Locally Typical Sampling

Abstract

Today's probabilistic language generators fall short when it comes to producing coherent and fluent text, despite the fact that the underlying models perform remarkably well in terms of standard metrics such as perplexity. This dichotomy has puzzled the language generation community for the last few years. In this work, we posit that the abstraction of natural language generation as a discrete stochastic process can provide new insights into the behavior of probabilistic language generators, e.g., why high-probability texts can be dull or repetitive. Humans use language as a means of communicating information, aiming to do so in a simultaneously efficient and error-minimizing manner; in fact, psycholinguistics research suggests humans choose each word in a string with this subconscious goal in mind. We formally define the set of strings that meet this criterion: those for which each word has an information content close to the expected information content, i.e., the conditional entropy of our model. We then propose a simple and efficient procedure for enforcing this criterion when generating from probabilistic models, which we call locally typical sampling. Automatic and human evaluations show that, in comparison to nucleus and top-k sampling, typical sampling offers competitive performance (in both abstractive summarization and story generation) in terms of quality while consistently reducing degenerate repetitions.
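To make the criterion above concrete, here is a minimal PyTorch sketch of one decoding step in the spirit of locally typical sampling: compute the conditional entropy of the next-token distribution, keep the tokens whose information content deviates least from it until a probability-mass threshold is reached, renormalize, and sample. The function name, the threshold parameter tau, and its default value are our own illustrative choices, not the paper's reference implementation.

import torch

def locally_typical_sample(logits: torch.Tensor, tau: float = 0.95) -> int:
    """Sample one token whose information content is close to the
    conditional entropy of the model's next-token distribution.

    logits: unnormalized scores over the vocabulary for the next token.
    tau: cumulative probability mass to retain (illustrative default).
    """
    log_probs = torch.log_softmax(logits, dim=-1)
    probs = log_probs.exp()

    # Conditional entropy H = -sum_y p(y) log p(y): the *expected*
    # information content of the next token under the model.
    entropy = -(probs * log_probs).sum()

    # Deviation of each token's information content (-log p) from H.
    deviation = ((-log_probs) - entropy).abs()

    # Keep the smallest-deviation tokens until their cumulative
    # probability mass reaches tau.
    sorted_dev, sorted_idx = torch.sort(deviation)
    cumulative = torch.cumsum(probs[sorted_idx], dim=-1)
    cutoff = int(torch.searchsorted(cumulative, tau).item()) + 1
    kept_idx = sorted_idx[:cutoff]

    # Renormalize over the retained "locally typical" set and sample.
    kept_probs = probs[kept_idx] / probs[kept_idx].sum()
    choice = torch.multinomial(kept_probs, num_samples=1)
    return int(kept_idx[choice].item())

Contrast this with nucleus (top-p) sampling, which keeps the highest-probability tokens: locally typical sampling may also exclude a token whose probability is so high that its information content falls well below the entropy, which is what curbs dull, repetitive continuations.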

Presented at EMNLP 2022 · Article at MIT Press

Author Biography

Clara Isabel Meister

PhD student advised by Ryan Cotterell at ETH Zurich