Uncertainty-Constrained Trustworthiness for Graph Learning
Abstract
Graph learning is increasingly deployed in critical and sensitive domains, raising pressing demands for trustworthiness properties such as robustness and fairness. However, these properties are often undermined by various perturbations, which induce distributional uncertainty and compromise the trustworthiness of graph learning. To address this, we propose DICT, a novel framework that models distributional uncertainty to achieve trustworthy graph learning. Specifically, DICT formulates a unified optimization objective that captures perturbation-induced distributional shifts in graph topology, node features, and labels, and minimizes the worst-case risk over the resulting uncertainty set. Directly optimizing this objective in its primal form, however, yields an infinite-dimensional problem. To make it tractable, we leverage strong duality and the local Lipschitz continuity of the loss to reformulate the objective as a finite-dimensional min-max problem. We focus on robustness and fairness as the primary instantiations of DICT because they are not only critical in real-world applications but also provide transferable modeling principles for broader trustworthiness objectives. By expressing fairness through an uncertainty set, DICT unifies robustness and fairness within a single optimization framework. Extensive experiments across diverse benchmarks and backbones demonstrate that DICT consistently improves both robustness and fairness, validating its effectiveness and adaptability.
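The worst-case objective and its dual reformulation described in the abstract can be sketched as a standard distributionally robust optimization problem; the notation below is illustrative (the symbols $B_\epsilon$, $c$, and $f_\theta$ are our own shorthand, not necessarily the paper's), assuming a transport-based uncertainty set of radius $\epsilon$ around the observed graph distribution $P$.

```latex
% Primal: minimize the worst-case risk over an uncertainty set
% B_\epsilon(P) of distributions within radius \epsilon of the
% observed distribution P over graphs G and labels y.
\min_{\theta} \;\sup_{Q \in B_\epsilon(P)}\;
  \mathbb{E}_{(G, y) \sim Q}\big[\ell(f_\theta(G), y)\big]

% Dual: under strong duality, the infinite-dimensional supremum over
% distributions reduces to a finite-dimensional penalized maximization
% over individual perturbed samples (G', y'), where c is a transport
% cost covering shifts in topology, features, and labels:
\min_{\theta} \;\inf_{\lambda \ge 0}\;
  \lambda\epsilon +
  \mathbb{E}_{(G, y) \sim P}
  \Big[\sup_{(G', y')} \ell(f_\theta(G'), y')
       - \lambda\, c\big((G', y'), (G, y)\big)\Big]
```

The inner supremum in the dual is over single perturbed examples rather than over distributions, which is what makes the reformulated problem tractable; the local Lipschitz continuity of $\ell$ mentioned in the abstract is what keeps this inner maximization well-behaved.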