Poster
XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalisation
Junjie Hu · Sebastian Ruder · Aditya Siddhant · Graham Neubig · Orhan Firat · Melvin Johnson

Tue Jul 14 10:00 AM -- 10:45 AM & Tue Jul 14 09:00 PM -- 09:45 PM (PDT)

Much recent progress in applications of machine learning models to NLP has been driven by benchmarks that evaluate models across a wide variety of tasks. However, these broad-coverage benchmarks have been mostly limited to English, and despite an increasing interest in multilingual models, a benchmark that enables the comprehensive evaluation of such methods on a diverse range of languages and tasks is still missing. To this end, we introduce the Cross-lingual TRansfer Evaluation of Multilingual Encoders (XTREME) benchmark, a multi-task benchmark for evaluating the cross-lingual generalization capabilities of multilingual representations across 40 languages and 9 tasks. We demonstrate that while models tested on English reach human performance on many tasks, there is still a sizable gap in the performance of cross-lingually transferred models, particularly on syntactic and sentence retrieval tasks. There is also a wide spread of results across languages. We will release the benchmark to encourage research on cross-lingual learning methods that transfer linguistic knowledge across a diverse and representative set of languages and tasks.
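The evaluation protocol behind these numbers is zero-shot cross-lingual transfer: a multilingual model is fine-tuned on English task data and then applied, unchanged, to every target language, so the per-language spread of scores can be measured directly. Below is a minimal sketch of that evaluation loop for one of the benchmark's tasks (XNLI). The Hub dataset name "xtreme", the "XNLI" config, and the column names are assumptions about the released mirror of the benchmark, and the model object is a hypothetical placeholder, not the authors' implementation.

# Sketch of XTREME's zero-shot transfer evaluation: bucket test examples
# by language, score a single English-fine-tuned checkpoint on each bucket,
# and inspect the cross-lingual spread.
from collections import defaultdict
from datasets import load_dataset

# Assumed Hub mirror of the benchmark; "XNLI" is one of the 9 tasks.
xnli = load_dataset("xtreme", "XNLI", split="test")

# Group examples by language (column names are assumptions).
by_lang = defaultdict(list)
for ex in xnli:
    by_lang[ex["language"]].append(
        (ex["sentence1"], ex["sentence2"], ex["gold_label"])
    )

def accuracy(model, examples):
    # "model" is a placeholder for a checkpoint fine-tuned on English
    # NLI data; predict() is a hypothetical interface.
    correct = sum(
        model.predict(premise, hypothesis) == label
        for premise, hypothesis, label in examples
    )
    return correct / len(examples)

# Per-language scores expose the transfer gap the abstract describes:
# for lang, examples in sorted(by_lang.items()):
#     print(lang, accuracy(english_finetuned_model, examples))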

Author Information

Junjie Hu (Carnegie Mellon University)
Sebastian Ruder (DeepMind)
Aditya Siddhant (Google Research)
Graham Neubig (Carnegie Mellon University)
Orhan Firat (Google)
Melvin Johnson (Google)
