Improving Topic Models with Latent Feature Word Representations
Published: 2015-06-02
Dat Quoc Nguyen, Richard Billingsley, Lan Du, Mark Johnson
Macquarie University, Australia
Abstract
Probabilistic topic models are widely used to discover latent topics in document collections, while latent feature vector representations of words have been used to obtain high performance in many NLP tasks. In this paper, we extend two different Dirichlet multinomial topic models by incorporating latent feature vector representations of words trained on very large corpora to improve the word-topic mapping learnt on a smaller corpus. Experimental results show that by using information from the external corpora, our new models produce significant improvements on topic coherence, document clustering and document classification tasks, especially on datasets with few or short documents.
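The abstract describes mixing a Dirichlet multinomial topic-word distribution with a latent-feature component defined over pretrained word vectors. The sketch below illustrates that idea only at the level the abstract states it: each topic draws words from a convex combination of a multinomial distribution and a softmax over dot products between a topic vector and word vectors. The names `lam`, `tau`, and `word_vecs`, the mixture weight value, and the uniform multinomial are all illustrative assumptions, not the paper's notation or estimation procedure.

```python
import numpy as np

def latent_feature_component(tau, word_vecs):
    """Softmax distribution over the vocabulary from dot products between
    a topic vector and pretrained word vectors (illustrative form)."""
    scores = word_vecs @ tau
    scores -= scores.max()          # subtract max for numerical stability
    probs = np.exp(scores)
    return probs / probs.sum()

def mixed_topic_word_dist(multinomial, tau, word_vecs, lam=0.6):
    """Convex combination of a Dirichlet-multinomial topic-word
    distribution and the latent-feature component."""
    lf = latent_feature_component(tau, word_vecs)
    return lam * lf + (1.0 - lam) * multinomial

# Toy setup: 5-word vocabulary, 4-dimensional word vectors.
rng = np.random.default_rng(0)
word_vecs = rng.normal(size=(5, 4))   # stand-in for vectors trained on a large corpus
tau = rng.normal(size=4)              # stand-in topic vector
multinomial = np.full(5, 0.2)         # uniform multinomial, for illustration only

dist = mixed_topic_word_dist(multinomial, tau, word_vecs)
print(dist.sum())                     # a valid probability distribution sums to 1
```

In the actual models the multinomial part and the latent-feature topic vectors are learnt jointly during inference; the fixed toy values here only show how the two components combine into one topic-word distribution.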
PDF (presented at EMNLP 2015) · erratum · prior PDF