

Workshop

Incentives in Machine Learning

Boi Faltings · Yang Liu · David Parkes · Goran Radanovic · Dawn Song

Keywords: incentives, federated learning, economic perspective of data, information elicitation

Artificial Intelligence (AI) systems, and Machine Learning systems in particular, often depend on information provided by multiple agents. Federated learning is the best-known example, but the same holds for sensor data, crowdsourced human computation, and human trajectory inputs for inverse reinforcement learning. However, eliciting accurate data can be costly, whether because of the effort invested in obtaining it, as in crowdsourcing, or because of the need to maintain automated systems, as in distributed sensor systems. Low-quality data not only degrades the performance of AI systems but may also pose safety concerns. It therefore becomes important to verify the correctness of data, to aggregate it intelligently, and to provide incentives that promote effort and high-quality contributions. At the recent Workshop on Federated Learning at NeurIPS 2019, four of the six panel members named incentives as the most important open issue.

This workshop aims to advance the understanding of this aspect of Machine Learning, both theoretically and empirically. We particularly encourage contributions on the following questions:
- How can high-quality, credible data be collected for machine learning systems from self-interested and possibly malicious agents, given the game-theoretic properties of the problem? (See the sketch after this list.)
- How can the quality of data supplied by self-interested and possibly malicious agents be evaluated, and how should such data be optimally aggregated?
- How can machine learning be used within game-theoretic mechanisms that facilitate the collection of high-quality data?
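As a purely illustrative sketch, not part of the workshop materials, the first question is often approached with peer-prediction mechanisms such as output agreement, in which an agent is rewarded when its report on a shared task matches the report of a randomly chosen peer. The snippet below assumes that setting; the names `peer_payment`, `reports`, and `bonus` are hypothetical.

```python
import random

def peer_payment(reports: dict[str, int], bonus: float = 1.0) -> dict[str, float]:
    """Illustrative output-agreement rule: pay each agent `bonus` if their
    report matches that of a randomly chosen peer (hypothetical names)."""
    agents = list(reports)
    payments: dict[str, float] = {}
    for agent in agents:
        peers = [a for a in agents if a != agent]
        peer = random.choice(peers)  # requires at least two agents
        payments[agent] = bonus if reports[agent] == reports[peer] else 0.0
    return payments

# Example: three crowd workers label the same item.
print(peer_payment({"alice": 1, "bob": 1, "carol": 0}))
```

Under suitable assumptions, such agreement-based payments make truthful reporting an equilibrium, but they can also reward uninformative collusion (e.g., every agent reporting the same label), which is precisely the kind of game-theoretic subtlety this workshop invites work on.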
