Adapting to All Domains at Once: Rewarding Domain Invariance in SMT
Published: 2016-04-16
Hoang Cuong, Khalil Sima'an, Ivan Titov
ILLC, University of Amsterdam
Abstract
Existing work on domain adaptation for statistical machine translation has consistently assumed access to a small sample from the test distribution (target domain) at training time. In practice, however, the target domain may not be known at training time, or it may change to match user needs. In such situations, it is natural to push the system to make safer choices, giving higher preference to domain-invariant translations, which work well across domains, over risky domain-specific alternatives. We encode this intuition by (1) inducing latent subdomains from the training data only; (2) introducing features which measure how specialized phrases are to individual induced subdomains; (3) estimating feature weights on out-of-domain data (rather than on the target domain). We conduct experiments on three language pairs and a number of different domains. We observe consistent improvements over a baseline which does not explicitly reward domain invariance.
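To make step (2) of the abstract more concrete, the sketch below scores a phrase pair by the normalized entropy of its posterior distribution over induced subdomains: a flat posterior means the phrase is used evenly across subdomains (domain-invariant), while a peaked one means it is specialized. This is a minimal illustration of one plausible instantiation, not the paper's actual feature set; the function name, the posterior inputs (assumed to come from some latent-domain induction step such as EM, not reproduced here), and the example numbers are all hypothetical.

```python
import math

def invariance_score(posterior):
    """Normalized entropy of P(z | phrase pair) over K induced subdomains.

    Returns a value in [0, 1]: 1.0 means the phrase pair occurs evenly
    across all subdomains (domain-invariant); values near 0 mean it is
    concentrated in a single subdomain (domain-specific, hence riskier
    when the target domain is unknown at training time).
    """
    k = len(posterior)
    if k <= 1:
        return 1.0
    entropy = -sum(p * math.log(p) for p in posterior if p > 0.0)
    return entropy / math.log(k)  # normalize by the maximum entropy log(K)

# Hypothetical usage: posteriors for two phrase pairs over K=4 subdomains.
balanced = [0.25, 0.25, 0.25, 0.25]   # equally plausible in every subdomain
peaked   = [0.91, 0.03, 0.03, 0.03]   # almost exclusively one subdomain

print(invariance_score(balanced))  # 1.0  -> candidate to reward
print(invariance_score(peaked))    # ~0.29 -> domain-specific, riskier
```

Under step (3), a score like this would enter the log-linear model as one feature among others, with its weight tuned on out-of-domain data so the tuner learns how strongly to prefer invariant translations.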
Presented at ACL 2016.