

Poster in Workshop: DMLR: Data-centric Machine Learning Research

MultiLegalPile: A 689GB Multilingual Legal Corpus

Joel Niklaus · Veton Matoshi · Matthias Stürmer · Ilias Chalkidis · Daniel Ho


Abstract:

Large, high-quality datasets are crucial for training large language models (LLMs). However, few datasets are available for specialized, critical domains such as law, and those that exist are often English-only. We curate and release MultiLegalPile, a 689GB corpus in 24 languages from 17 jurisdictions. The MultiLegalPile corpus, which includes diverse legal data sources with varying licenses, allows for pretraining NLP models under fair use, with more permissive licenses for the Eurlex Resources and Legal mC4 subsets. We pretrain two RoBERTa models and one Longformer multilingually, as well as 24 monolingual models, one on each language-specific subset, and evaluate them on LEXTREME. Additionally, we evaluate the English and multilingual models on LexGLUE. Our multilingual models set a new state of the art (SotA) on LEXTREME, and our English models do the same on LexGLUE. We release the dataset, the trained models, and all of the code under the most open licenses possible.
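Since the abstract says the dataset and models are released openly, a minimal sketch of how one might access them follows, assuming they are published on the Hugging Face Hub. The dataset id, config name, and model id used below are hypothetical placeholders; consult the authors' release page for the actual identifiers. Streaming is used so the 689GB corpus is not downloaded in full.

```python
# Minimal sketch: loading a MultiLegalPile subset and a released model.
# All Hub identifiers below are hypothetical placeholders.
from datasets import load_dataset
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Stream one language-specific subset instead of downloading the whole corpus.
dataset = load_dataset(
    "joelniklaus/Multi_Legal_Pile",  # hypothetical dataset id
    "de_caselaw",                    # hypothetical config: German case-law subset
    split="train",
    streaming=True,
)
print(next(iter(dataset)))  # inspect one raw legal document

# Load one of the released multilingual RoBERTa models, e.g. for further
# pretraining or fine-tuning on LEXTREME / LexGLUE tasks.
model_id = "joelniklaus/legal-multilingual-roberta-base"  # hypothetical model id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)
```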
