

Poster
in
Workshop: DMLR: Data-centric Machine Learning Research

Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias

Yue Yu · Yuchen Zhuang · Jieyu Zhang · Yu Meng · Alex Ratner · Ranjay Krishna · Jiaming Shen · Chao Zhang


Abstract:

Large language models (LLMs) have recently been leveraged as training data generators for text classification. While previous research has explored different approaches to training models on generated data, it tends to rely on simple class-conditional prompts, which may limit the diversity of the generated data and inherit the systematic biases of the LLM. We therefore investigate training data generation with diversely attributed prompts (e.g., specifying the length and style), which have the potential to yield diverse and attributed generated data. Our investigation focuses on datasets with high cardinality (many classes) and diverse domains, where we demonstrate that attributed prompts outperform simple class-conditional prompts in terms of the resulting model's performance. We also present a comprehensive empirical study of data generation covering vital aspects such as bias, diversity, and efficiency. Our findings highlight two key observations: first, synthetic datasets generated by simple prompts exhibit significant biases, such as regional bias; second, attribute diversity plays a pivotal role in enhancing model performance.
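To make the contrast concrete, below is a minimal sketch (not the authors' code) of the difference between a simple class-conditional prompt and an attributed prompt for synthetic data generation. The label set, attribute names, and attribute values are illustrative assumptions, not the paper's actual configuration; the actual prompts would be sent to an LLM to produce labeled training texts.

```python
# Minimal sketch (illustrative, not the authors' implementation):
# contrasting a simple class-conditional prompt with an attributed prompt
# for generating synthetic text-classification data.
import random

# Hypothetical label set for a topic-classification task.
CLASSES = ["sports", "politics", "technology"]

# Illustrative attribute pools; the paper varies attributes such as length and style.
ATTRIBUTES = {
    "length": ["short (1-2 sentences)", "medium (one paragraph)", "long (several paragraphs)"],
    "style": ["formal news report", "casual blog post", "social media comment"],
    "subtopic": {
        "sports": ["basketball", "tennis", "marathon running"],
        "politics": ["elections", "foreign policy", "local government"],
        "technology": ["AI research", "consumer gadgets", "cybersecurity"],
    },
}

def simple_prompt(label: str) -> str:
    # Simple class-conditional prompt: only the target class is specified.
    return f"Write a text about {label}."

def attributed_prompt(label: str, rng: random.Random) -> str:
    # Attributed prompt: randomly sampled attributes diversify each request,
    # which is the mechanism the paper credits for more diverse generated data.
    length = rng.choice(ATTRIBUTES["length"])
    style = rng.choice(ATTRIBUTES["style"])
    subtopic = rng.choice(ATTRIBUTES["subtopic"][label])
    return (
        f"Write a {length} text in the style of a {style} about {label}, "
        f"focusing on {subtopic}."
    )

if __name__ == "__main__":
    rng = random.Random(0)
    for label in CLASSES:
        print("Simple:    ", simple_prompt(label))
        print("Attributed:", attributed_prompt(label, rng))
```

Sampling attribute combinations per request spreads the generated examples across lengths, styles, and subtopics, rather than letting the LLM default to its most probable (and potentially biased) phrasing for each class.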
