Talk

Re-revisiting Learning on Hypergraphs: Confidence Interval and Subgradient Method

Chenzi Zhang · Shuguang Hu · Zhihao Gavin Tang · Hubert Chan

C4.6 & C4.7

Abstract:

We revisit semi-supervised learning on hypergraphs. As in previous approaches, our method uses a convex program whose objective function is not everywhere differentiable. We exploit the non-uniqueness of the optimal solutions and consider confidence intervals, which give the exact ranges of values that unlabeled vertices can take in any optimal solution. Moreover, we give a much simpler approach for solving the convex program, based on the subgradient method. Our experiments on real-world datasets confirm that our confidence interval approach on hypergraphs outperforms existing methods, and that our subgradient method gives faster running times when the number of vertices is much larger than the number of edges.
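To make the subgradient approach concrete, below is a minimal Python sketch of one subgradient step on a standard hypergraph objective from this line of work, Phi(f) = sum_e w_e * (max_{u in e} f_u - min_{v in e} f_v)^2, with labeled vertices clamped to their given labels. The objective, the diminishing step size, and names such as subgradient_step are illustrative assumptions for exposition, not the paper's exact convex program.

    import numpy as np

    def subgradient_step(f, hyperedges, weights, labeled, step):
        """One subgradient step for minimizing
            Phi(f) = sum_e w_e * (max_{u in e} f[u] - min_{v in e} f[v])**2
        while keeping the entries of f at indices `labeled` fixed.
        (Illustrative objective; the paper's convex program may differ.)"""
        g = np.zeros_like(f)
        for e, w in zip(hyperedges, weights):
            e = np.asarray(e)
            u = e[np.argmax(f[e])]     # any maximizer within the hyperedge
            v = e[np.argmin(f[e])]     # any minimizer within the hyperedge
            diff = f[u] - f[v]
            g[u] += 2.0 * w * diff     # partial subgradient w.r.t. f[u]
            g[v] -= 2.0 * w * diff     # partial subgradient w.r.t. f[v]
        f_new = f - step * g
        f_new[labeled] = f[labeled]    # project back: labels stay fixed
        return f_new

    # Toy usage: 5 vertices, 2 hyperedges, vertices 0 and 4 labeled.
    f = np.array([1.0, 0.0, 0.0, 0.0, -1.0])
    hyperedges = [(0, 1, 2), (2, 3, 4)]
    weights = [1.0, 1.0]
    labeled = [0, 4]
    for t in range(1, 201):
        f = subgradient_step(f, hyperedges, weights, labeled, step=0.5 / t)

Since the max and min inside each hyperedge may be attained at several vertices, the objective is not everywhere differentiable; picking any one maximizer and minimizer per hyperedge still yields a valid subgradient, which is what makes this simple iteration applicable.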
