

Poster in Workshop: Knowledge and Logical Reasoning in the Era of Data-driven Learning

Revealing the Intrinsic Ability of Generative Language Models in Relation Prediction

Qi Li · Lyuwen Wu · Luoyi Fu · Xinbing Wang · Shiyu Liang


Abstract:

Traditional paradigms for relation prediction usually concatenate a pre-trained architecture with a specialized relation predictor and then fine-tune the combination to adapt to the new domain. Recently, large generative language models (GLMs) have exhibited powerful text-generation capabilities across general domains without the need for further fine-tuning. A natural question therefore arises: can we build an accurate relation predictor from pre-trained GLMs without further fine-tuning? To answer this question, we first establish a data pipeline that derives four relation prediction datasets from text generation datasets in the same domain on which the GLMs are further pre-trained. Second, we propose a closed-form relation predictor that requires no additional fine-tuning. Finally, we conduct experiments with BART and T5 models of different sizes to compare our method against the baseline and observe significant performance improvements. For example, on the Delve (1K) dataset with the BART-large model, our method achieves an FPR of 5.30% at 95% TPR, whereas the baseline yields approximately 40% FPR.
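
The abstract does not spell out the closed-form predictor, so the sketch below only illustrates the general idea it describes: scoring a candidate relation with a frozen, pre-trained seq2seq GLM (BART here) via conditional log-likelihood, and reporting FPR at 95% TPR. The model choice, the scoring function, and the way the relation is verbalized are all assumptions for illustration, not the authors' actual method.

```python
import torch
import numpy as np
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical setup: a frozen BART-large scores how plausible a verbalized
# relation statement is given its context. No fine-tuning is performed.
model_name = "facebook/bart-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).eval()

@torch.no_grad()
def relation_score(context: str, statement: str) -> float:
    """Average token log-likelihood of `statement` conditioned on `context`."""
    enc = tokenizer(context, return_tensors="pt")
    labels = tokenizer(statement, return_tensors="pt").input_ids
    out = model(**enc, labels=labels)
    return -out.loss.item()  # higher score = more plausible relation

def fpr_at_95_tpr(pos_scores, neg_scores) -> float:
    """FPR at the threshold where 95% of positive pairs are accepted."""
    thresh = np.percentile(pos_scores, 5)  # keep ~95% of positives above it
    return float(np.mean(np.asarray(neg_scores) >= thresh))
```

Averaging the token log-likelihood (rather than summing it) is one simple way to keep scores comparable across statements of different lengths; whether the paper's closed-form predictor does this is not stated in the abstract.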
