

Poster in Workshop on Human-Machine Collaboration and Teaming

CrowdPlay: Crowdsourcing demonstrations for learning human-AI interaction

Matthias Gerstgrasser


Abstract:

Crowdsourcing has been instrumental in driving AI advances that rely on large-scale data. At the same time, reinforcement learning has seen rapid progress through the development of an almost plug-and-play software ecosystem around standard libraries such as OpenAI Gym and Baselines. In this paper, we aim to fill a gap at the intersection of the two: enabling large-scale collection of human behavioral data in standard AI environments, alongside AI agents trained with standard libraries, in order to support new advances in offline learning and human-AI interaction research. To this end, we present CrowdPlay, a complete crowdsourcing pipeline for any standard RL environment, including OpenAI Gym (made available under an open-source license), together with a large-scale, publicly available crowdsourced dataset of human gameplay demonstrations in Atari 2600 games, including human-AI multiagent data. To pair human and AI agents in the same environment, CrowdPlay interfaces directly with standard RL training pipelines, allowing trained agents to be deployed with minimal overhead. We hope that this will drive improvements in the design of algorithms that account for the complexity of human behavioral data, and serve as a platform for evaluating human-AI cooperation methods. Our code and dataset are available at (URL redacted for blind review).
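The kind of data CrowdPlay collects can be illustrated with the standard OpenAI Gym interface the abstract refers to. The sketch below is an assumption-laden illustration, not CrowdPlay's actual API: it uses only the classic Gym environment loop (pre-0.26 reset/step signatures) to log (observation, action, reward) steps, with the environment id and policy as placeholders; in CrowdPlay the action would come from a crowdworker or a trained agent rather than the random fallback used here.

```python
# Illustrative sketch only; CrowdPlay's own API is not shown. This uses the
# standard OpenAI Gym loop (classic pre-0.26 API) to log demonstration steps.
import gym


def collect_episode(env_id="CartPole-v1", policy=None):
    """Roll out one episode and return a list of (obs, action, reward) steps."""
    env = gym.make(env_id)
    trajectory = []
    obs = env.reset()
    done = False
    while not done:
        # A human player or a trained agent would supply the action here;
        # we fall back to a random action for the sketch.
        action = policy(obs) if policy is not None else env.action_space.sample()
        next_obs, reward, done, info = env.step(action)
        trajectory.append((obs, action, reward))
        obs = next_obs
    env.close()
    return trajectory


if __name__ == "__main__":
    steps = collect_episode()
    print(f"Collected {len(steps)} steps of demonstration data.")
```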
