

Poster

Relational Learning in Pre-Trained Models: A Theory from Hypergraph Recovery Perspective

Yang Chen · Cong Fang · Zhouchen Lin · Bing Liu

Hall C 4-9 #917
[ Paper PDF ]
Thu 25 Jul 2:30 a.m. PDT — 4 a.m. PDT

Abstract:

Foundation Models (FMs) have demonstrated remarkable insights into the relational dynamics of the world, raising a crucial question: how do these models acquire an understanding of the world's hybrid relations? Traditional statistical learning, particularly for prediction problems, may overlook the rich, inherently structured information in the data, especially the relationships between objects. We introduce a mathematical model that formalizes relational learning as hypergraph recovery to study the pre-training of FMs. In our framework, the world is represented as a hypergraph, with data abstracted as random samples from hyperedges. We theoretically examine whether a Pre-Trained Model (PTM) can recover this hypergraph and analyze its data efficiency, establishing near-minimax-optimal rates. By integrating rich graph theory into the study of PTMs, our mathematical framework offers powerful tools for an in-depth understanding of pre-training from a unique perspective and can be applied to various scenarios. As an example, we extend the framework to entity alignment in multimodal learning.
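The setup described in the abstract can be illustrated with a small toy simulation. The sketch below is purely hypothetical and not taken from the paper: it represents the world as a hypergraph over a handful of entities, generates "pre-training data" as random subsets of hyperedges, and then recovers the hyperedges as maximal cliques of the observed pairwise co-occurrence graph. All names, sizes, and the clique-closure heuristic are illustrative assumptions, standing in for (not reproducing) the paper's recovery analysis.

```python
import random
from itertools import combinations

# Toy world: a hypergraph over 6 entities with 3 hidden hyperedges (relations).
# Everything here is an illustrative assumption, not the paper's construction.
entities = list(range(6))
hidden = [frozenset({0, 1, 2}), frozenset({2, 3}), frozenset({3, 4, 5})]

random.seed(0)

def observe(edge, k=2):
    """One data point: a random size-k subset of a single hyperedge."""
    return frozenset(random.sample(sorted(edge), min(k, len(edge))))

# "Pre-training corpus": many partial views of the hidden hyperedges.
data = [observe(random.choice(hidden)) for _ in range(1000)]

# Pairwise co-occurrence graph induced by the observations.
cooc = {pair for obs in data for pair in combinations(sorted(obs), 2)}

def supported(subset):
    """A subset is plausible if all of its pairs have been seen together."""
    return all(pair in cooc for pair in combinations(sorted(subset), 2))

# Recover hyperedges as maximal supported subsets (maximal cliques of cooc).
candidates = [frozenset(c)
              for r in range(2, len(entities) + 1)
              for c in combinations(entities, r)
              if supported(c)]
recovered = [c for c in candidates if not any(c < d for d in candidates)]
print("recovered:", sorted(map(sorted, recovered)))
```

With enough samples, every pair inside each hyperedge co-occurs and the three hidden hyperedges are recovered exactly; in the abstract's terms, data efficiency asks how many such samples are needed before recovery succeeds.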
