Sparse, Dense, and Attentional Representations for Text Retrieval

Yi Luan, Jacob Eisenstein, Kristina Toutanova, Michael Collins


Dual encoders perform retrieval by encoding documents and queries into dense low-dimensional vectors, scoring each document by its inner product with the query. We investigate the capacity of this architecture relative to sparse bag-of-words models and attentional neural networks. Using both theoretical and empirical analysis, we establish connections between the encoding dimension, the margin between gold and lower-ranked documents, and the document length, suggesting limitations in the capacity of fixed-length encodings to support precise retrieval of long documents. Building on these insights, we propose a simple neural model that combines the efficiency of dual encoders with some of the expressiveness of more costly attentional architectures, and explore sparse-dense hybrids to capitalize on the precision of sparse retrieval. These models outperform strong alternatives in large-scale retrieval.
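To make the scoring model concrete, here is a minimal sketch of dual-encoder retrieval and a sparse-dense hybrid of the kind described above. The encoder below is a random-projection placeholder (in practice a trained neural encoder such as a fine-tuned BERT would be used), and the names `encode`, `dense_scores`, `hybrid_scores`, and the mixing weight `lam` are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def encode(texts, dim=128, seed=0):
    """Placeholder encoder: one dense, unit-norm dim-dimensional vector
    per text. A real dual encoder would run a trained neural network."""
    rng = np.random.default_rng(seed)
    vecs = rng.standard_normal((len(texts), dim))
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

def dense_scores(query_vec, doc_matrix):
    """Dual-encoder scoring: the inner product of the query vector with
    every document vector, computed as one matrix-vector product."""
    return doc_matrix @ query_vec

def hybrid_scores(dense, sparse, lam=0.5):
    """Sparse-dense hybrid: a weighted combination of the dense
    inner-product score and a sparse bag-of-words score (e.g., BM25)."""
    return lam * dense + (1.0 - lam) * sparse

# Documents are encoded once, offline; only the query is encoded online.
docs = ["a long document about retrieval ...", "another document ..."]
doc_matrix = encode(docs, seed=1)
query_vec = encode(["example query"], seed=2)[0]

dense = dense_scores(query_vec, doc_matrix)
sparse = np.array([2.1, 0.3])  # stand-in for BM25 scores
ranking = np.argsort(-hybrid_scores(dense, sparse))
print(ranking)
```

Because documents are encoded independently of the query, the document matrix can be precomputed offline and scored with a single matrix-vector product, or with approximate maximum inner product search at scale.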


Copyright (c) 2021 Association for Computational Linguistics

This work is licensed under a Creative Commons Attribution 4.0 International License.