
Reproducible and Efficient Benchmarks for Hyperparameter Optimization of Neural Machine Translation Systems

Abstract

Hyperparameter selection is a crucial part of building neural machine translation (NMT) systems across both academia and industry. Fine-grained adjustments to a model's architecture or training recipe can mean the difference between a positive and negative research result or between a state-of-the-art and under-performing system. While recent literature has proposed methods for automatic hyperparameter optimization (HPO), there has been limited work on applying these methods to NMT, due in part to the high costs associated with experiments that train large numbers of model variants. To facilitate research in this space, we introduce a lookup-based approach that uses a library of pre-trained models for fast, low-cost HPO experimentation. Our contributions include (1) the release of a large collection of trained NMT models covering a wide range of hyperparameters, (2) the proposal of targeted metrics for evaluating HPO methods on NMT, and (3) a reproducible benchmark of several HPO methods against our model library, including novel graph-based and multi-objective methods.
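The lookup-based idea can be illustrated with a minimal sketch: instead of training a new model for every candidate configuration, an HPO method queries a table of results from models that were already trained. The file name, record fields, and random-search loop below are illustrative assumptions for the sketch, not the released library's actual format or API.

```python
"""Minimal sketch of lookup-based HPO benchmarking, assuming a pre-computed
table of trained-model results (file name and fields are hypothetical)."""
import json
import random

# Hypothetical table: each entry maps a hyperparameter configuration to
# metrics recorded when that model was actually trained, e.g.
# {"config": {"num_layers": 6, "embed_dim": 512, "lr": 3e-4},
#  "bleu": 27.4, "gpu_hours": 38.2}
with open("nmt_model_library.json") as f:
    library = json.load(f)

def lookup(config):
    """Return pre-computed metrics for a configuration instead of training it."""
    for entry in library:
        if entry["config"] == config:
            return entry["bleu"], entry["gpu_hours"]
    raise KeyError(f"Configuration not in library: {config}")

def random_search(num_trials, seed=0):
    """Toy HPO baseline: sample configurations from the library at random,
    tracking the best BLEU found and the simulated cumulative training cost."""
    rng = random.Random(seed)
    best_bleu, total_cost = 0.0, 0.0
    for _ in range(num_trials):
        config = rng.choice(library)["config"]
        bleu, gpu_hours = lookup(config)
        total_cost += gpu_hours  # cost is accounted for, not re-incurred
        best_bleu = max(best_bleu, bleu)
    return best_bleu, total_cost

if __name__ == "__main__":
    print(random_search(num_trials=20))
```

Because every "evaluation" is a table lookup, an entire HPO run finishes in seconds while still reflecting the real accuracy/cost trade-offs of the underlying trained models, which is what makes the benchmark reproducible and cheap to iterate on.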

Article at MIT Press. Presented at EMNLP 2020.