The Impact of Word Splitting on the Semantic Content of Contextualized Word Representations
Abstract
When deriving contextualized word representations from language models, a decision needs to be made on how to obtain a single representation for out-of-vocabulary (OOV) words that are segmented into subwords. What is the best way to represent these words with a single vector, and are these representations of worse quality than those of in-vocabulary words? We carry out an intrinsic evaluation of embeddings from different models on semantic similarity tasks involving OOV words. Our analysis reveals, among other interesting findings, that the quality of representations of words that are split is often, but not always, worse than that of the embeddings of known words. Their similarity values, however, must be interpreted with caution.
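To make the problem concrete: when a tokenizer splits an OOV word into subword pieces, each piece receives its own contextualized vector, and these must somehow be combined into one word vector. A common strategy (one among several; the paper itself compares options) is element-wise averaging, with taking the first subword's vector as an alternative. The sketch below uses made-up toy vectors rather than output from an actual language model:

```python
def pool_subword_vectors(subword_vectors, strategy="mean"):
    """Combine the vectors of a word's subword pieces into one vector.

    Toy illustration only: in practice the vectors would come from a
    contextualized model such as BERT, and "mean" vs. "first" is a
    design choice whose effect is exactly what the paper evaluates.
    """
    if strategy == "first":
        return list(subword_vectors[0])
    # "mean": element-wise average over all subword vectors
    dim = len(subword_vectors[0])
    n = len(subword_vectors)
    return [sum(vec[i] for vec in subword_vectors) / n for i in range(dim)]


# An OOV word like "uninstallable" might be segmented into
# ["un", "##install", "##able"] (WordPiece-style), each with its own
# contextualized vector; here we use tiny 3-dimensional toy vectors.
toy_vectors = [
    [1.0, 0.0, 2.0],  # "un"
    [0.0, 3.0, 1.0],  # "##install"
    [2.0, 0.0, 0.0],  # "##able"
]
word_vector = pool_subword_vectors(toy_vectors)          # averaged
first_piece = pool_subword_vectors(toy_vectors, "first")  # first subword only
```

The choice of pooling strategy matters because, as the abstract notes, similarity scores computed from such reconstructed vectors can behave differently from those of in-vocabulary words and should be interpreted with caution.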
Author Biography
Aina Garí Soler
I’m a Postdoctoral Researcher at Télécom-Paris, France, working with Chloé Clavel and Matthieu Labeau. Before that, I did my PhD at the LISN lab in Orsay, France, under the supervision of Marianna Apidianaki and Alexandre Allauzen.
My broad research area is Natural Language Processing, and more concretely I am working on Computational Lexical Semantics. My interests include representations of words and meaning in context, paraphrasing, lexical style and semantic ambiguity.