Deep Flow Networks
Ozan Candogan ⋅ Ayoub Foussoul
Abstract
We introduce Deep Flow Networks (DFNs), a new class of discrete function approximators. DFNs are inspired by and generalize minimum-cost flow value functions, which map node imbalances on a subset of nodes to the optimal flow cost. Such functions are known to be M-convex (Murota, 2003) and admit efficient optimization. On the theoretical side, we prove that DFNs are universal approximators for discrete functions on $\mathbb{Z}^d$ that admit convex extensions to $\mathbb{R}^d$, and characterize their optimization complexity in terms of their deviation from the M-convex regime. Guided by these results, we develop a practical DFN implementation for learning from data. Finally, we evaluate our implementation empirically on data from different ground-truth functions, showing that DFNs achieve strong approximation accuracy while being substantially faster to optimize than benchmark approaches.
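To make the object DFNs generalize concrete, the following sketch evaluates a minimum-cost flow value function on a toy three-node network, mapping a node imbalance $s$ (supply at a source, demand at a sink) to the optimal flow cost. This is purely illustrative, not the paper's DFN implementation; the network, costs, and capacities are invented for the example, and a generic LP solver from SciPy stands in for a dedicated flow algorithm.

```python
# Toy minimum-cost flow value function (illustrative; network data is made up).
import numpy as np
from scipy.optimize import linprog

# Directed edges (u, v) with per-unit costs and a common capacity.
edges = [(0, 1), (0, 2), (1, 2)]
costs = [1, 3, 1]
cap = 2

# Node-edge incidence matrix: +1 at the tail, -1 at the head of each edge,
# so A @ x equals the vector of net outflows (node imbalances).
n_nodes = 3
A = np.zeros((n_nodes, len(edges)))
for j, (u, v) in enumerate(edges):
    A[u, j] = 1.0
    A[v, j] = -1.0

def flow_value(s):
    """Optimal cost of routing s units from node 0 to node 2."""
    b = np.array([s, 0.0, -s])  # imbalances: supply s at node 0, demand s at node 2
    res = linprog(costs, A_eq=A, b_eq=b,
                  bounds=[(0, cap)] * len(edges), method="highs")
    return res.fun

values = [round(flow_value(s)) for s in range(5)]
print(values)  # the marginal costs are nondecreasing in s
```

On this instance the cheap path $0 \to 1 \to 2$ (cost 2 per unit) saturates first, after which flow spills onto the direct edge $0 \to 2$ (cost 3 per unit), so the value function's increments are nondecreasing, the one-dimensional shadow of the (M-)convexity discussed in the abstract.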