Despite great progress, the field of Natural Language Processing (NLP) still struggles to ensure the robustness and fairness of its models. So far, NLP has prioritized data size over data quality. Yet growing evidence suggests that the diversity of data, a key dimension of data quality, is crucial for fair and robust NLP models. Many researchers are therefore trying to create more diverse datasets, but there is no clear path for them to follow. Even the fundamental question “How can we measure the diversity of a dataset?” is currently wide open. We still lack the tools and theoretical insights to understand, improve, and leverage data diversity in NLP.
DataDivers will 1) develop the first-ever framework for measuring data diversity in NLP datasets; 2) investigate how data diversity impacts NLP model behavior; and 3) develop novel approaches that harness data diversity for fairer and more robust NLP models. We operationally define the diversity of a text collection as the variability of its texts along specific dimensions (e.g., semantic, lexical, and sociolinguistic). Sociolinguistic diversity, in particular, is an overlooked but crucial dimension that this project aims to address.
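To make this operational definition concrete, here is a minimal Python sketch of how variability along two of these dimensions might be quantified. The specific metrics chosen here (type-token ratio for lexical diversity, mean pairwise cosine distance over TF-IDF vectors for semantic diversity) are illustrative assumptions, not the project's actual framework.

```python
# Illustrative sketch: two simple per-dimension diversity scores for a
# text collection. Metric choices are assumptions, not DataDivers' method.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances


def lexical_diversity(texts: list[str]) -> float:
    """Type-token ratio over the pooled corpus: unique tokens / total tokens."""
    tokens = [tok for text in texts for tok in text.lower().split()]
    return len(set(tokens)) / len(tokens) if tokens else 0.0


def semantic_diversity(texts: list[str]) -> float:
    """Mean pairwise cosine distance between text vectors.

    TF-IDF is a cheap stand-in here; a real framework would more likely
    use sentence embeddings from a pretrained encoder.
    """
    vectors = TfidfVectorizer().fit_transform(texts)
    dist = cosine_distances(vectors)
    n = dist.shape[0]
    # Average over the upper triangle, i.e. all unordered text pairs.
    return float(dist[np.triu_indices(n, k=1)].mean()) if n > 1 else 0.0


corpus = [
    "The cat sat on the mat.",
    "A feline rested on the rug.",
    "Quarterly revenue grew by twelve percent.",
]
print(f"lexical diversity:  {lexical_diversity(corpus):.3f}")
print(f"semantic diversity: {semantic_diversity(corpus):.3f}")
```

Higher scores indicate more variability along that dimension; a full framework would add further dimensions (e.g., sociolinguistic) and principled ways to compare scores across datasets.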
DataDivers will break new ground by taking a comprehensive view of data diversity, which is urgently needed for robust and fair NLP. Its approach will be both theoretical and empirical: it will combine insights from disciplines that have already developed methodologies for quantifying data diversity with rigorous empirical experimentation. DataDivers will also adopt a unique dual perspective, measuring diversity both at the dataset level and, for individual features, across contexts. Finally, it will use its framework to develop diversity-informed methods for data collection and model training.
This five-year project (2025–2029) is funded by an ERC Starting Grant.