Afternoon Poster
in
Workshop: Artificial Intelligence & Human Computer Interaction

Creating a Bias-Free Dataset of Food Delivery App Reviews with Data Poisoning Attacks

Hyunmin Lee · SeungYoung Oh · JinHyun Han · Hyunggu Jung


Abstract:

Although artificial intelligence (AI) models have produced many benefits and achievements, they can also cause unexpected consequences when they are biased. One of the main causes of bias in AI models is data poisoning attacks. It is therefore important for AI model developers to understand how biased their training data is when selecting a training dataset for building fair AI models. While researchers have released several datasets intended for training, existing studies have not considered the possibility that such datasets carry data poisoning attacks in the form of biased entries. To address this gap, we created and validated a dataset that reflects the possibility of bias in individual reviews of food delivery apps. This study contributes to the community of AI model developers aiming to create fair AI models by proposing, as an example, a bias-free dataset of food delivery app reviews with data poisoning attacks.
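To make the threat model concrete, here is a minimal, hypothetical sketch (not taken from the paper) of how a data poisoning attack can bias a review training set: an attacker injects near-duplicate negative reviews, shifting the label distribution a model would learn from. All names and data below are illustrative assumptions.

```python
# Hypothetical illustration: a data poisoning attack that skews the
# sentiment label distribution of a food delivery app review dataset.
from collections import Counter

def label_distribution(reviews):
    """Return the fraction of each sentiment label in a review dataset."""
    counts = Counter(label for _, label in reviews)
    total = len(reviews)
    return {label: counts[label] / total for label in counts}

# A small clean dataset: (review text, sentiment label)
clean = [
    ("Fast delivery, great food", "positive"),
    ("Driver was polite", "positive"),
    ("Order arrived cold", "negative"),
    ("Easy to use app", "positive"),
]

# Poisoning attack: inject many near-duplicate negative reviews
# targeting the app, overwhelming the genuine signal.
poison = [("Terrible app, never order here", "negative")] * 8
poisoned = clean + poison

print(label_distribution(clean))     # mostly "positive"
print(label_distribution(poisoned))  # now dominated by "negative"
```

A model trained on the poisoned set would learn a distorted view of user sentiment, which is the kind of dataset-level bias the proposed dataset is meant to help developers detect before training.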