Samanantar: The Largest Publicly Available Parallel Corpora Collection For 11 Indic Languages
Published: 2022-02-09
Gowtham Ramesh (Robert Bosch Center for Data Science and Artificial Intelligence)
Sumanth Doddapaneni (Robert Bosch Center for Data Science and Artificial Intelligence)
Aravinth Bheemaraj (Tarento)
Mayank Jobanputra (IIT Madras)
Raghavan AK (AI4Bharat)
Ajitesh Sharma (Tarento)
Sujit Sahoo (Tarento)
Harshita Diddee (AI4Bharat)
Mahalakshmi J (AI4Bharat)
Divyanshu Kakwani (IIT Madras)
Navneet Kumar (Tarento)
Aswin Pradeep (Tarento)
Srihari Nagaraj (Tarento)
Kumar Deepak (Tarento)
Vivek Raghavan (EkStep Foundation)
Anoop Kunchukuttan (Microsoft)
Pratyush Kumar (IIT Madras, AI4Bharat)
Mitesh Shantadevi Khapra (IIT Madras, AI4Bharat)
Abstract
We present Samanantar, the largest publicly available parallel corpora collection for Indic languages. The collection contains a total of 49.7 million sentence pairs between English and 11 Indic languages (from two language families). Specifically, we compile 12.4 million sentence pairs from existing, publicly available parallel corpora, and additionally mine 37.4 million sentence pairs from the web, resulting in a 4x increase. We mine the parallel sentences from the web by combining many corpora, tools, and methods: (a) web-crawled monolingual corpora, (b) document OCR for extracting sentences from scanned documents, (c) multilingual representation models for aligning sentences, and (d) approximate nearest neighbor search for searching in a large collection of sentences. Human evaluation of samples from the newly mined corpora validates the high quality of the parallel sentences across 11 languages. Further, we extract 83.4 million sentence pairs between all 55 Indic language pairs from the English-centric parallel corpus using English as the pivot language. We train multilingual NMT models spanning all these languages on Samanantar; these models outperform existing models and baselines on publicly available benchmarks, such as FLORES, establishing the utility of Samanantar. Our data and models are publicly available at Samanantar, and we hope they will help advance research in NMT and multilingual NLP for Indic languages.
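The abstract describes the mining pipeline only at a high level: sentences from web-crawled monolingual corpora are embedded with a multilingual representation model and matched via approximate nearest neighbor search. The snippet below is a minimal sketch of that idea, assuming a LaBSE-style encoder from sentence-transformers and FAISS for the nearest-neighbor search; the model name, the one-directional margin score, and the threshold are illustrative assumptions, not the paper's exact configuration.

# Minimal sketch (not the paper's exact pipeline): embed sentences with a
# multilingual encoder and mine candidate pairs via FAISS nearest-neighbor search.
# Model name, k, and the margin threshold are illustrative assumptions.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("sentence-transformers/LaBSE")

def embed(sentences):
    # L2-normalized embeddings, so inner product equals cosine similarity.
    return np.asarray(
        encoder.encode(sentences, normalize_embeddings=True), dtype="float32"
    )

def mine_pairs(en_sentences, indic_sentences, k=4, margin_threshold=1.05):
    en_emb = embed(en_sentences)
    xx_emb = embed(indic_sentences)

    index = faiss.IndexFlatIP(xx_emb.shape[1])  # exact search; swap in an IVF/PQ index at web scale
    index.add(xx_emb)
    sims, ids = index.search(en_emb, k)         # k nearest Indic neighbors per English sentence

    pairs = []
    for i in range(len(en_sentences)):
        # One-directional ratio-margin score: best neighbor vs. mean similarity of its
        # k-neighborhood (a simplification of bidirectional margin scoring).
        margin = sims[i, 0] / (sims[i].mean() + 1e-8)
        if margin >= margin_threshold:
            pairs.append((en_sentences[i], indic_sentences[ids[i, 0]], float(margin)))
    return pairs

# Example usage:
# mined = mine_pairs(["The weather is nice today."],
#                    ["आज मौसम अच्छा है।", "मुझे चाय पसंद है।"], k=2)

At the scale reported in the paper (tens of millions of sentences per language), an exact inner-product index would be replaced by an approximate one, which is what "approximate nearest neighbor search" in the abstract refers to.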
Article at MIT Press
Presented at ACL 2022