Recent language models have shown impressive multilingual performance, even when not explicitly trained for it. Despite this, there are concerns about the quality of their outputs across different languages. In this paper, we show how disparity in the treatment of languages arises at the tokenization stage, well before a model is even invoked. The same text translated into different languages can have drastically different tokenization lengths, with differences of up to 15 times in some cases. These disparities persist across the 17 tokenizers we evaluate, even when they are intentionally trained for multilingual support. Character-level and byte-level models also exhibit over 4 times the difference in encoding length for some language pairs. This induces unfair treatment for some language communities with regard to the cost of accessing commercial language services, processing time and latency, as well as the amount of content that can be provided as context to the models. Therefore, we make the case that future language models should be trained using multilingually fair subword tokenizers.
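The byte-level and character-level disparity the abstract describes can be illustrated with a minimal sketch: the same greeting encoded in UTF-8 occupies several times more bytes in a Devanagari-script language than in English, because Latin-script characters take one byte each while Devanagari characters take three. The specific strings below are illustrative examples, not drawn from the paper's evaluation.

```python
# Minimal sketch: character-level and byte-level sequence lengths
# for the same greeting in two languages. A byte-level model would
# see a much longer input for the Hindi text, even though both
# strings carry the same message.
texts = {
    "English": "Hello",
    "Hindi": "नमस्ते",  # "namaste", a common Hindi greeting
}

for language, text in texts.items():
    n_chars = len(text)                   # character-level length
    n_bytes = len(text.encode("utf-8"))   # byte-level length
    print(f"{language}: {n_chars} characters, {n_bytes} bytes")
```

Here the byte-length ratio is already 18/5 = 3.6×; with a subword tokenizer whose vocabulary was trained mostly on English-heavy data, the token-count gap for the same pair of texts can be larger still.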
Author Information
Aleksandar Petrov (University of Oxford)
Emanuele La Malfa (Department of Computer Science, University of Oxford)
Phil Torr (University of Oxford)
Adel Bibi (University of Oxford)
More from the Same Authors
-
2021 : Combating Adversaries with Anti-Adversaries »
Motasem Alfarra · Juan C Perez · Ali Thabet · Adel Bibi · Phil Torr · Bernard Ghanem -
2021 : Detecting and Quantifying Malicious Activity with Simulation-based Inference »
Andrew Gambardella · Naeemullah Khan · Phil Torr · Atilim Gunes Baydin -
2022 : Make Some Noise: Reliable and Efficient Single-Step Adversarial Training »
Pau de Jorge Aranda · Adel Bibi · Riccardo Volpi · Amartya Sanyal · Phil Torr · Gregory Rogez · Puneet Dokania -
2022 : Catastrophic overfitting is a bug but also a feature »
Guillermo Ortiz Jimenez · Pau de Jorge Aranda · Amartya Sanyal · Adel Bibi · Puneet Dokania · Pascal Frossard · Gregory Rogez · Phil Torr -
2022 : Illusionary Attacks on Sequential Decision Makers and Countermeasures »
Tim Franzmeyer · Joao Henriques · Jakob Foerster · Phil Torr · Adel Bibi · Christian Schroeder -
2022 : How robust are pre-trained models to distribution shift? »
Yuge Shi · Imant Daunhawer · Julia Vogt · Phil Torr · Amartya Sanyal -
2023 : Illusory Attacks: Detectability Matters in Adversarial Attacks on Sequential Decision-Makers »
Tim Franzmeyer · Stephen Mcaleer · Joao Henriques · Jakob Foerster · Phil Torr · Adel Bibi · Christian Schroeder -
2023 : Certified Calibration: Bounding Worst-Case Calibration under Adversarial Attacks »
Cornelius Emde · Francesco Pinto · Thomas Lukasiewicz · Phil Torr · Adel Bibi -
2023 : Certifying Ensembles: A General Certification Theory with S-Lipschitzness »
Aleksandar Petrov · Francisco Eiras · Amartya Sanyal · Phil Torr · Adel Bibi -
2023 : Who to imitate: Imitating desired behavior from diverse multi-agent datasets »
Tim Franzmeyer · Jakob Foerster · Edith Elkind · Phil Torr · Joao Henriques -
2023 : Provably Correct Physics-Informed Neural Networks »
Francisco Girbal Eiras · Adel Bibi · Rudy Bunel · Krishnamurthy Dvijotham · Phil Torr · M. Pawan Kumar -
2023 Poster: Graph Inductive Biases in Transformers without Message Passing »
Liheng Ma · Chen Lin · Derek Lim · Adriana Romero Soriano · Puneet Dokania · Mark Coates · Phil Torr · Ser Nam Lim -
2023 Poster: Certifying Ensembles: A General Certification Theory with S-Lipschitzness »
Aleksandar Petrov · Francisco Eiras · Amartya Sanyal · Phil Torr · Adel Bibi -
2022 : Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS »
Christian Schroeder · Yongchao Huang · Phil Torr · Martin Strohmeier -
2022 Poster: Adversarial Masking for Self-Supervised Learning »
Yuge Shi · Siddharth N · Phil Torr · Adam Kosiorek -
2022 Spotlight: Adversarial Masking for Self-Supervised Learning »
Yuge Shi · Siddharth N · Phil Torr · Adam Kosiorek -
2022 Poster: Communicating via Markov Decision Processes »
Samuel Sokota · Christian Schroeder · Maximilian Igl · Luisa Zintgraf · Phil Torr · Martin Strohmeier · Zico Kolter · Shimon Whiteson · Jakob Foerster -
2022 Spotlight: Communicating via Markov Decision Processes »
Samuel Sokota · Christian Schroeder · Maximilian Igl · Luisa Zintgraf · Phil Torr · Martin Strohmeier · Zico Kolter · Shimon Whiteson · Jakob Foerster -
2017 Poster: Stabilising Experience Replay for Deep Multi-Agent Reinforcement Learning »
Jakob Foerster · Nantas Nardelli · Gregory Farquhar · Triantafyllos Afouras · Phil Torr · Pushmeet Kohli · Shimon Whiteson -
2017 Talk: Stabilising Experience Replay for Deep Multi-Agent Reinforcement Learning »
Jakob Foerster · Nantas Nardelli · Gregory Farquhar · Triantafyllos Afouras · Phil Torr · Pushmeet Kohli · Shimon Whiteson