

Spotlight in Workshop: AI for Science: Scaling in AI for Scientific Discovery

Overconfident Oracles: Limitations of In Silico Sequence Design Benchmarking

Shikha Surana · Nathan Grinsztajn · Timothy Atkinson · Paul Duckworth · Thomas Barrett

Keywords: [ Evaluation ] [ Benchmarks ] [ Biological sequence design ]


Abstract:

Machine learning methods can automate the in silico design of biological sequences, aiming to reduce costs and accelerate medical research. Given limited access to wet labs, in silico design methods commonly use an oracle model to evaluate de novo generated sequences. However, the use of different oracle models across methods makes it challenging to compare them reliably, motivating the question: are in silico sequence design benchmarks reliable? In this work, we examine 12 sequence design methods that utilise ML oracles common in the literature and find significant challenges with their cross-consistency and reproducibility. Indeed, oracles differing in architecture, or even just in training seed, are shown to yield conflicting relative performance rankings, with our analysis pointing to poor out-of-distribution generalisation as a key issue. To address these challenges, we propose supplementing the evaluation with a suite of biophysical measures that assess the viability of generated sequences and limit the out-of-distribution sequences the oracle is required to score, thereby improving the robustness of the design procedure. Our work aims to highlight potential pitfalls in the current evaluation process and to contribute to the development of robust benchmarks, ultimately driving the improvement of in silico design methods.
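
The seed-sensitivity failure mode described in the abstract is easy to reproduce in a toy setting. The sketch below is a minimal, hypothetical reconstruction, not the paper's experimental setup: the data is synthetic and scikit-learn random forests stand in for the oracles. It trains two oracles that differ only in random seed and compares how they rank the same out-of-distribution designs.

```python
# Hypothetical sketch: two oracles, identical except for training seed,
# can rank the same de novo designs very differently when those designs
# drift out of the training distribution.
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Stand-in for featurised training sequences with measured fitness labels.
X_train = rng.normal(size=(500, 32))
y_train = X_train[:, :4].sum(axis=1) + rng.normal(scale=0.5, size=500)

# Stand-in for de novo designs that lie outside the training distribution.
X_designs = rng.normal(loc=3.0, size=(100, 32))

# Two oracles: same architecture, same data, different seeds.
oracle_a = RandomForestRegressor(random_state=1).fit(X_train, y_train)
oracle_b = RandomForestRegressor(random_state=2).fit(X_train, y_train)

rank_corr, _ = spearmanr(oracle_a.predict(X_designs),
                         oracle_b.predict(X_designs))
print(f"Spearman rank correlation between oracle rankings: {rank_corr:.2f}")
# A low correlation means the two "equivalent" oracles would crown
# different design methods as best, undermining benchmark reliability.
```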
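The proposed mitigation can be sketched in the same spirit. The snippet below is a hedged illustration, assuming Biopython's ProteinAnalysis and thresholds chosen for demonstration only (the paper's actual suite of biophysical measures is not specified here): coarse viability checks screen designs before the oracle scores them, so fewer out-of-distribution sequences reach the oracle.

```python
# Hedged sketch of viability filtering prior to oracle scoring.
# Thresholds below are illustrative, not the paper's.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

def is_viable(sequence: str) -> bool:
    """Return True if the sequence passes coarse biophysical filters."""
    analysis = ProteinAnalysis(sequence)
    return (
        analysis.instability_index() < 40.0   # < 40 is conventionally "stable"
        and -1.0 < analysis.gravy() < 1.0     # exclude extreme hydropathy
    )

designs = ["MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", "PPPPPPPPPPPPPPPPPPPP"]
screened = [s for s in designs if is_viable(s)]
print(screened)  # only sequences passing the filters would be oracle-scored
```

Filtering in this way does not make the oracle itself more accurate; it narrows the oracle's inputs toward the regime where its predictions are trustworthy.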
