Group fairness definitions make assumptions about the underlying decision problem that restrict them to classification settings. Numerous bespoke interpretations of group fairness definitions exist, each attempting to extend them to a specific application. In an effort to generalize group fairness definitions beyond classification, Blandin & Kash (2021) explore using utility functions to define group fairness measures. In addition to the decision-maker's utility function, they introduce a benefit function that represents an individual's utility from encountering a given decision-maker policy. Using this framework, we interpret fairness problems as multi-objective optimization: we aim to maximize both the decision-maker's utility and the individuals' benefit while reducing the individual benefit difference across protected groups. We demonstrate an instantiation of this multi-objective approach in a reinforcement learning simulation.
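The three objectives named above can be combined in several ways; one common choice in multi-objective reinforcement learning is linear scalarization. The sketch below is a minimal, hypothetical illustration (the function name, weights, and gap measure are assumptions, not the paper's formulation): it folds the decision-maker's utility, the mean individual benefit, and a penalty on the per-group benefit gap into a single scalar reward.

```python
import numpy as np

def scalarized_reward(dm_utility, benefits, groups,
                      w_util=1.0, w_benefit=1.0, w_gap=1.0):
    """Hypothetical scalarization of the three fairness objectives.

    dm_utility : float, the decision-maker's utility for this step
    benefits   : per-individual benefits under the current policy
    groups     : protected-group label for each individual
    The weights w_* trade off the objectives; their values are assumptions.
    """
    benefits = np.asarray(benefits, dtype=float)
    groups = np.asarray(groups)
    mean_benefit = benefits.mean()
    # Benefit difference across protected groups: here taken as the
    # spread between the best- and worst-off group's mean benefit.
    group_means = [benefits[groups == g].mean() for g in np.unique(groups)]
    gap = max(group_means) - min(group_means)
    # Reward utility and benefit, penalize the group benefit gap.
    return w_util * dm_utility + w_benefit * mean_benefit - w_gap * gap

# Example: group "a" has mean benefit 1.0, group "b" has 0.25,
# so the gap of 0.75 is subtracted from utility + mean benefit.
r = scalarized_reward(dm_utility=2.0,
                      benefits=[1.0, 0.5, 1.0, 0.0],
                      groups=["a", "b", "a", "b"])
# → 1.875  (2.0 + 0.625 - 0.75)
```

An RL agent trained on this scalarized signal optimizes a fixed trade-off among the objectives; choosing the weights (or exploring the Pareto front directly) is itself a design decision.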