Poster in Workshop: AI for Science: Scaling in AI for Scientific Discovery
Retrieve to Explain: Evidence-driven Predictions with Language Models
Ravi Patel · Angus Brayne · Rogier Hintzen · Daniel Jaroslawicz · Georgiana Neculae · Dane Corneil
Keywords: [ Shapley values ] [ Drug discovery ] [ Language Model ] [ Machine Learning ] [ explainability ] [ retrieval ] [ Multimodal ] [ biomedical ] [ target identification ] [ R2E ] [ Retrieve to Explain ] [ genetics ] [ human-in-the-loop ]
Language models hold incredible promise for enabling scientific discovery by synthesizing massive research corpora. Many complex scientific research questions have multiple plausible answers, each supported by evidence of varying strength. However, existing language models lack the capability to quantitatively and faithfully compare answer plausibility in terms of supporting evidence. To address this issue, we introduce Retrieve to Explain (R2E), a retrieval-based language model. R2E scores and ranks all possible answers to a research question based on evidence retrieved from a document corpus. The architecture represents each answer only in terms of its supporting evidence, with the answer itself masked. This allows us to extend feature attribution methods, such as Shapley values, to transparently attribute each answer's score back to its supporting evidence at inference time. The architecture also allows R2E to incorporate new evidence without retraining, including non-textual data modalities templated into natural language. We evaluate R2E on the challenging task of drug target identification from scientific literature, a human-in-the-loop process where failures are extremely costly and explainability is paramount. When predicting whether drug targets will subsequently be confirmed as efficacious in clinical trials, R2E not only matches non-explainable literature-based models but also surpasses a genetics-based target identification approach used throughout the pharmaceutical industry.
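To make the evidence-attribution idea concrete, here is a minimal, self-contained sketch (not the authors' implementation) of how Shapley values can attribute an answer's score to individual retrieved evidence snippets. The scorer `score_fn` stands in for an evidence-conditioned model that sees only the evidence for a masked answer; the `toy_score` function and the example snippets are purely hypothetical, and exact enumeration over evidence subsets is used only because the example is tiny.

```python
from itertools import combinations
from math import factorial
from typing import Callable, Sequence


def shapley_attributions(
    evidence: Sequence[str],
    score_fn: Callable[[Sequence[str]], float],
) -> list[float]:
    """Exact Shapley values attributing an answer's score to each evidence snippet.

    `score_fn` is assumed to score a masked candidate answer from a subset of its
    retrieved evidence (an empty subset yields a baseline score). Exact enumeration
    is exponential in the number of snippets, so a real system would rely on an
    approximation; this is for illustration only.
    """
    n = len(evidence)
    values = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in combinations(others, size):
                # Standard Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                without_i = [evidence[j] for j in subset]
                with_i = without_i + [evidence[i]]
                values[i] += weight * (score_fn(with_i) - score_fn(without_i))
    return values


if __name__ == "__main__":
    # Toy stand-in for an evidence-conditioned scorer: counts keyword hits.
    def toy_score(snippets: Sequence[str]) -> float:
        return float(sum("inhibits" in s or "efficacy" in s for s in snippets))

    evidence = [
        "Gene X inhibits the disease pathway in knockout mice.",
        "A phase II trial reported efficacy for a Gene X antagonist.",
        "Gene X is expressed in many tissues.",
    ]
    for snippet, phi in zip(evidence, shapley_attributions(evidence, toy_score)):
        print(f"{phi:+.3f}  {snippet}")
```

Because the answer itself is masked and only the evidence is scored, each attribution value can be read directly as "how much this piece of evidence contributed to the answer's ranking", which is the property the abstract describes.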