

Poster
in
Workshop: Geometry-grounded Representation Learning and Generative Modeling

Joint Diffusion Processes as an Inductive Bias in Sheaf Neural Networks

Ferran Hernandez Caralt · Guillermo Bernardez · Iulia Duta · Eduard Alarcon · Pietro Liò

Keywords: [ Sheaf Neural Networks ] [ Geometry ] [ Sheaves ] [ Topology ] [ Heterophily ] [ Opinion Dynamics ] [ ODEs ] [ Oversmoothing ] [ Graph Neural Networks ] [ Synthetic Data ]


Abstract: Sheaf Neural Networks (SNNs) naturally extend Graph Neural Networks (GNNs) by endowing the graph with a cellular sheaf, equipping nodes and edges with vector spaces and defining linear mappings between them. While this attached geometric structure has proven useful in analyzing heterophily and oversmoothing, the methods by which the sheaf is computed so far do not always guarantee good performance in such settings. In this work, drawing inspiration from opinion dynamics, we propose two novel sheaf learning approaches that (i) provide a more intuitive understanding of the involved structure maps, (ii) introduce a useful inductive bias for heterophily and oversmoothing, and (iii) infer the sheaf in a way that does not scale with the number of features, thus using fewer learnable parameters than existing methods. In our evaluation, we show the limitations of the real-world benchmarks used so far on SNNs, and design a new synthetic task --leveraging the symmetries of $n$-dimensional ellipsoids-- that enables us to better assess the strengths and weaknesses of sheaf-based models. Our extensive experimentation on these novel datasets reveals valuable insights into the scenarios and contexts where SNNs in general --and our proposed approaches in particular-- can be beneficial.
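To make the sheaf construction above concrete, the following is a minimal NumPy sketch of sheaf diffusion on a toy graph: each node carries a $d$-dimensional stalk, each edge endpoint carries a restriction map, and features evolve by discretised diffusion under the sheaf Laplacian. The graph, stalk dimension, random restriction maps, and step size are all illustrative assumptions, not the paper's learned setup (the paper learns the maps from data).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy cycle graph: 4 nodes, stalk dimension d = 2 (illustrative choices).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n, d = 4, 2

def unit_map():
    # Random d x d restriction map, rescaled to unit spectral norm so that
    # the diffusion step below is provably stable.
    M = rng.standard_normal((d, d))
    return M / np.linalg.norm(M, 2)

# One restriction map per (edge, endpoint) pair: F[e] = (F_ue, F_ve).
F = {e: (unit_map(), unit_map()) for e in edges}

def sheaf_laplacian(x):
    """Blockwise sheaf Laplacian: (L x)_u = sum_e F_ue^T (F_ue x_u - F_ve x_v)."""
    out = np.zeros_like(x)
    for (u, v), (Fu, Fv) in F.items():
        diff = Fu @ x[u] - Fv @ x[v]
        out[u] += Fu.T @ diff
        out[v] -= Fv.T @ diff
    return out

def dirichlet_energy(x):
    """Sheaf Dirichlet energy: sum over edges of ||F_ue x_u - F_ve x_v||^2."""
    return sum(np.sum((Fu @ x[u] - Fv @ x[v]) ** 2)
               for (u, v), (Fu, Fv) in F.items())

# Discretised sheaf diffusion: x <- x - alpha * L_F x, which performs
# gradient descent on the Dirichlet energy (monotone decrease for small alpha).
x = rng.standard_normal((n, d))
e_before = dirichlet_energy(x)
for _ in range(50):
    x = x - 0.05 * sheaf_laplacian(x)
e_after = dirichlet_energy(x)
```

When every restriction map is the identity, this reduces to ordinary graph Laplacian diffusion, whose fixed points are constant signals; non-trivial maps change the harmonic space, which is the mechanism SNNs exploit against oversmoothing and heterophily.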
