Online learning typically aims to perform nearly as well as the best hypothesis in hindsight. For some hypothesis classes, though, even finding the best hypothesis offline is challenging. In such cases, local search techniques are often employed, and only local optimality is guaranteed. For online decision-making with such hypothesis classes, we introduce local regret, a generalization of regret that compares performance only against nearby hypotheses. We then present a general algorithm that can minimize local regret for arbitrary locality graphs. We also show that certain forms of structure in the graph can be exploited to drastically simplify learning. These algorithms are then demonstrated on a diverse set of online problems (some previously unexplored): online disjunct learning, online Max-SAT, and online decision tree learning.
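The notion of local regret can be sketched in a toy setting. Everything here is illustrative, not the paper's formal definition: hypotheses are taken to be bitmasks, the locality graph connects hypotheses at Hamming distance one, and the learner is simplified to playing a single fixed hypothesis whose cumulative loss is compared against its best graph neighbor rather than the global best.

```python
def neighbors(h, n_bits):
    """Hypothesis h together with all hypotheses at Hamming distance 1
    (an illustrative choice of locality graph)."""
    return [h] + [h ^ (1 << i) for i in range(n_bits)]

def local_regret(h_played, losses, n_bits):
    """Toy local regret: cumulative loss of the played hypothesis minus
    the cumulative loss of its best nearby hypothesis.

    losses[t][h] is the loss of hypothesis h at round t.
    """
    rounds = range(len(losses))
    learner_loss = sum(losses[t][h_played] for t in rounds)
    best_nearby = min(
        sum(losses[t][g] for t in rounds)
        for g in neighbors(h_played, n_bits)
    )
    return learner_loss - best_nearby

# Two rounds, four hypotheses (2-bit masks); hypothesis 1 is lossless.
losses = [[1, 0, 1, 1],
          [1, 0, 1, 1]]
print(local_regret(0, losses, n_bits=2))  # 2: hypothesis 1 is a neighbor of 0
```

Note how the comparator set shrinks from the whole class to a graph neighborhood; a globally poor hypothesis can still have zero local regret if no neighbor outperforms it.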