Poster
Tue Jul 15 11:00 AM -- 01:30 PM (PDT) @ East Exhibition Hall A-B #E-3009
Propagate and Inject: Revisiting Propagation-Based Feature Imputation for Graphs with Partially Observed Features
Daeho Um · Sunoh Kim · Jiwoong Park · Jongin Lim · Seong Jin Ahn · Seulki Park
[OpenReview]

In this paper, we address learning tasks on graphs with missing features, broadening the applicability of graph neural networks to real-world graph-structured data. We identify a critical limitation of existing imputation methods based on feature propagation: they produce channels whose values are nearly identical across all nodes, and these low-variance channels contribute very little to performance on graph learning tasks. To overcome this issue, we introduce synthetic features that target the root cause of low-variance channel production, thereby increasing the variance in these channels. By preventing propagation-based imputation methods from generating meaningless feature values shared across all nodes, our synthetic feature propagation scheme mitigates severe performance degradation, even under extreme missing rates. Extensive experiments demonstrate the effectiveness of our approach across various graph learning tasks with missing features, ranging from low to extremely high missing rates. Additionally, we provide both empirical evidence and a theoretical proof to validate the low-variance problem. The source code is available at https://github.com/daehoum1/fisf.
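To make the setting concrete, the sketch below shows a generic propagation-based imputation baseline of the kind the abstract critiques: missing feature entries are filled by repeatedly diffusing features over the symmetrically normalized adjacency matrix while resetting observed entries to their known values. This is an illustrative NumPy sketch, not the authors' proposed method (their synthetic-feature injection is in the linked repository); the function name and iteration count are assumptions. At high missing rates, the imputed values within a channel tend toward nearly identical values across nodes, which is the low-variance problem the paper targets.

```python
import numpy as np

def propagate_impute(adj, x, known_mask, num_iters=40):
    """Generic propagation-based feature imputation (illustrative sketch).

    adj:        (n, n) symmetric adjacency matrix (no self-loops needed)
    x:          (n, c) feature matrix with missing entries set to 0
    known_mask: (n, c) boolean mask, True where the entry is observed
    Returns an (n, c) matrix with missing entries imputed by diffusion.
    """
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    # Symmetrically normalized adjacency: D^{-1/2} A D^{-1/2}
    a_hat = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]

    x = x.astype(float).copy()
    x_obs = x[known_mask]          # observed values, kept fixed
    for _ in range(num_iters):
        x = a_hat @ x              # diffuse features to neighbors
        x[known_mask] = x_obs      # re-inject the observed entries
    return x
```

For example, on a 5-node path graph with a single observed (nonzero) entry, the imputed channel values at the unobserved nodes end up close to one another, illustrating how extreme missing rates yield low-variance channels.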