Encoding Prior Knowledge with Eigenword Embeddings

Dominique Osborne, Shashi Narayan, Shay B. Cohen

Abstract


Canonical correlation analysis (CCA) is a method for reducing the dimension of data represented using two views. It has been previously used to derive word embeddings, where one view indicates a word, and the other view indicates its context. We describe a way to incorporate prior knowledge into CCA, give a theoretical justification for it, and test it by deriving word embeddings and evaluating them on a myriad of datasets.
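
As a rough illustration of the two-view setup the abstract describes, the sketch below derives CCA-style (eigenword) embeddings from a word-context co-occurrence matrix via a scaled SVD. The function name, window size, and embedding dimension are illustrative assumptions, and the snippet does not implement the paper's prior-knowledge extension.

```python
# Minimal sketch (not the paper's method): CCA-style "eigenword" embeddings
# from word-context co-occurrence counts, via SVD of a correlation-like matrix.
import numpy as np
from collections import Counter

def eigenword_embeddings(sentences, window=2, k=50):
    """Return {word: k-dim vector} from a scaled SVD of word-context counts."""
    vocab = sorted({w for s in sentences for w in s})
    idx = {w: i for i, w in enumerate(vocab)}
    V = len(vocab)

    # View 1: the word itself; view 2: words in a symmetric context window.
    counts = Counter()
    for s in sentences:
        for i, w in enumerate(s):
            lo, hi = max(0, i - window), min(len(s), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[(idx[w], idx[s[j]])] += 1

    C = np.zeros((V, V))
    for (r, c), n in counts.items():
        C[r, c] = n

    # Scale by inverse square roots of the marginal counts; the top-k left
    # singular vectors of this matrix give the word embeddings.
    dw = np.maximum(C.sum(axis=1), 1.0)
    dc = np.maximum(C.sum(axis=0), 1.0)
    M = C / np.sqrt(dw)[:, None] / np.sqrt(dc)[None, :]
    U, S, _ = np.linalg.svd(M, full_matrices=False)
    E = U[:, :k] * S[:k]
    return {w: E[idx[w]] for w in vocab}

# Toy usage on a two-sentence corpus.
sents = [["the", "cat", "sat"], ["the", "dog", "sat"]]
emb = eigenword_embeddings(sents, window=1, k=2)
```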



Copyright (c) 2016 Association for Computational Linguistics

This work is licensed under a Creative Commons Attribution 4.0 International License.