Abstract
Cross-lingual open-ended generation -- responding in a language different from that of the query -- is an important yet understudied problem. This work proposes XL-Instruct, a novel technique for generating high-quality synthetic data, and introduces XL-AlpacaEval, a new benchmark for evaluating the cross-lingual generation capabilities of large language models (LLMs). Our experiments show that fine-tuning with just 8K instructions generated using XL-Instruct significantly improves model performance, increasing the win rate against GPT-4o-mini from 7.4% to 21.5% and improving on several fine-grained quality metrics. Moreover, base LLMs fine-tuned on XL-Instruct exhibit strong zero-shot improvements on same-language question answering, as shown on our machine-translated m-AlpacaEval. These consistent gains highlight the promising role of XL-Instruct in the post-training of multilingual LLMs. Finally, we publicly release XL-Suite, a collection of training and evaluation data to facilitate research in cross-lingual open-ended generation.
| Original language | English |
|---|---|
| Title of host publication | Findings of the Association for Computational Linguistics: EMNLP 2025 |
| Editors | Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng |
| Place of Publication | Suzhou, China |
| Publisher | Association for Computational Linguistics |
| Pages | 10418–10432 |
| Number of pages | 15 |
| ISBN (Print) | 9798891763357 |
| DOIs | |
| Publication status | Published – 01 Nov 2025 |