Machine learning algorithms are increasingly used to guide decisions by human experts, including judges, doctors, and managers. Researchers and policymakers, however, have raised concerns that these systems might inadvertently exacerbate societal biases. To measure and mitigate such potential bias, there has recently been an explosion of competing mathematical definitions of what it means for an algorithm to be fair. But there is a problem: nearly all of the prominent definitions of fairness suffer from subtle shortcomings that can lead to serious adverse consequences when used as an objective. In this tutorial, we illustrate these shortcomings, which lie at the foundation of the nascent field of algorithmic fairness, drawing on ideas from machine learning, economics, and legal theory. In doing so, we hope to offer researchers and practitioners a path toward advancing the field.