XL-Suite: cross-lingual synthetic training and evaluation data for open-ended generation

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Cross-lingual open-ended generation -- responding in a language different from that of the query -- is an important yet understudied problem. This work proposes XL-Instruct, a novel technique for generating high-quality synthetic data, and introduces XL-AlpacaEval, a new benchmark for evaluating cross-lingual generation capabilities of large language models (LLMs). Our experiments show that fine-tuning with just 8K instructions generated using XL-Instruct significantly improves model performance, increasing the win rate against GPT-4o-mini from 7.4% to 21.5% and improving on several fine-grained quality metrics. Moreover, base LLMs fine-tuned on XL-Instruct exhibit strong zero-shot improvements to same-language question answering, as shown on our machine-translated m-AlpacaEval. These consistent gains highlight the promising role of XL-Instruct in the post-training of multilingual LLMs. Finally, we publicly release XL-Suite, a collection of training and evaluation data to facilitate research in cross-lingual open-ended generation.
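The abstract reports win rates against GPT-4o-mini (7.4% rising to 21.5% after fine-tuning). As a point of reference only, the minimal sketch below shows how a pairwise win rate is commonly computed from judge verdicts; it is not the paper's evaluation code, and the function name, labels, and tie-handling convention are illustrative assumptions.

```python
# Illustrative sketch only, not the XL-AlpacaEval evaluation code.
# Shows one common way to turn pairwise judge verdicts into a win rate
# against a reference model (e.g. GPT-4o-mini).
from collections import Counter


def win_rate(verdicts: list[str]) -> float:
    """Fraction of prompts where the judge preferred the candidate model.

    `verdicts` holds one label per evaluation prompt:
    "candidate", "reference", or "tie".
    """
    counts = Counter(verdicts)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    # Counting ties as half a win is one common convention; the paper's
    # exact convention may differ.
    return (counts["candidate"] + 0.5 * counts["tie"]) / total


if __name__ == "__main__":
    example = ["candidate", "reference", "tie", "candidate", "reference"]
    print(f"win rate: {win_rate(example):.1%}")  # 50.0%
```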
Original language: English
Title of host publication: Findings of the Association for Computational Linguistics: EMNLP 2025
Editors: Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Place of publication: Suzhou, China
Publisher: Association for Computational Linguistics
Pages: 10418-10432
Number of pages: 15
ISBN (Print): 9798891763357
DOIs
Publication status: Published - 01 Nov 2025
