Poster
in
Workshop: AI for Science: Scaling in AI for Scientific Discovery

ScaLES: Scalable Latent Exploration Score for Pre-Trained Generative Networks

Omer Ronen · Ahmed Imtiaz Humayun · Randall Balestriero · Richard Baraniuk · Bin Yu

Keywords: [ VAE ] [ Drug discovery ] [ Bayesian Optimization ] [ latent space optimization ]


Abstract:

We develop the Scalable Latent Exploration Score (ScaLES) to mitigate over-exploration in Latent Space Optimization (LSO), a popular method for solving black-box discrete optimization problems. LSO performs continuous optimization within the latent space of a Variational Autoencoder (VAE) and is known to be susceptible to over-exploration, which manifests in unrealistic solutions that reduce its practicality. ScaLES is an exact, theoretically motivated method that leverages the trained decoder's approximation of the data distribution. It can be computed with any existing decoder, e.g., from a VAE, without additional training, architectural changes, or access to the training data. Our evaluation across five LSO benchmark tasks and three VAE architectures demonstrates that ScaLES enhances the quality of the solutions while maintaining high objective values, yielding improvements over existing baselines. We believe ScaLES' ability to identify out-of-distribution regions, together with its differentiability and computational tractability, will open new avenues for LSO. To help the reviewers assess ScaLES, we include an anonymous Colab replicating some results.
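To make the setting concrete, the following is a minimal sketch of regularized latent space optimization: gradient ascent in a VAE's latent space on the black-box objective plus a penalty that discourages leaving the well-modeled region. All components here are toy stand-ins (an affine "decoder", a quadratic objective, and a squared-norm penalty in place of the actual ScaLES score, whose formula is not given in the abstract); the real method plugs a trained decoder and a decoder-derived score into the same loop.

```python
import numpy as np

# Toy stand-ins (hypothetical; the real setup uses a trained VAE decoder
# and a black-box objective such as a molecular property).
def decode(z):
    # toy "decoder": affine map from latent space to data space
    W = np.array([[1.0, 0.5], [-0.5, 1.0]])
    return W @ z

def objective(x):
    # toy black-box objective to maximize
    target = np.array([2.0, 1.0])
    return -np.sum((x - target) ** 2)

def penalty(z):
    # generic stay-near-the-prior regularizer; a placeholder for a
    # decoder-based score like ScaLES, NOT its actual formula
    return np.sum(z ** 2)

def lso_step(z, lam=0.1, lr=0.05, eps=1e-4):
    # one finite-difference gradient-ascent step on
    # objective(decode(z)) - lam * penalty(z)
    grad = np.zeros_like(z)
    for i in range(len(z)):
        dz = np.zeros_like(z)
        dz[i] = eps
        f_plus = objective(decode(z + dz)) - lam * penalty(z + dz)
        f_minus = objective(decode(z - dz)) - lam * penalty(z - dz)
        grad[i] = (f_plus - f_minus) / (2 * eps)
    return z + lr * grad

z = np.zeros(2)
for _ in range(200):
    z = lso_step(z)
```

In practice the penalty weight `lam` trades off objective value against staying in-distribution, which is exactly the tension the paper's evaluation measures (solution quality versus objective value).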
