

Poster
in
Workshop: Accessible and Efficient Foundation Models for Biological Discovery

High-Resolution In Silico Painting with Generative Models

Trang Le

Keywords: [ bioimage analysis ] [ label free ] [ microscopy ] [ cell painting ] [ vqgan ] [ generative model ]


Abstract:

Label-free organelle prediction is a longstanding challenge in cellular imaging, given its promise to circumvent the numerous drawbacks of fluorescence microscopy, including high cost, cytotoxicity, and time-consuming workflows. Recent advances in deep learning have introduced numerous effective algorithms, primarily deterministic, for predicting fluorescence patterns from transmitted light microscopy images. However, existing models often perform poorly or are limited to specific datasets, imaging modalities, and magnifications, so no universal solution exists. In this paper, we present a simplified VQGAN training scheme that is easily adapted to different input/output channel configurations for image-to-image translation tasks. We applied the algorithm to generate multi-channel organelle staining outputs from bright field inputs, equivalent to the popular Cell Painting assay. The same algorithm also placed first in the ISBI 2024 Light My Cell challenge.
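
The core idea described in the abstract, a VQGAN-style generator whose first and last layers take configurable channel counts so the same architecture can map, for example, a single-channel bright field image to a multi-channel Cell Painting-style output, could be sketched roughly as below. This is a minimal illustrative example, not the authors' implementation; all class names, layer sizes, and hyperparameters are assumptions, and the adversarial and perceptual losses of a full VQGAN are omitted.

```python
# Hypothetical sketch of a channel-configurable VQGAN-style generator.
import torch
import torch.nn as nn


class VectorQuantizer(nn.Module):
    """Nearest-neighbour codebook lookup with a straight-through estimator."""

    def __init__(self, num_codes: int = 512, code_dim: int = 64):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (B, C, H, W) -> flatten spatial positions to (B*H*W, C)
        b, c, h, w = z.shape
        flat = z.permute(0, 2, 3, 1).reshape(-1, c)
        dist = torch.cdist(flat, self.codebook.weight)   # distance to every code
        idx = dist.argmin(dim=1)                          # nearest code per position
        quantized = self.codebook(idx).view(b, h, w, c).permute(0, 3, 1, 2)
        # Straight-through: gradients pass to the encoder as if quantization were identity.
        return z + (quantized - z).detach()


class ChannelConfigurableVQModel(nn.Module):
    """Tiny VQGAN-style encoder/quantizer/decoder with configurable I/O channels."""

    def __init__(self, in_channels: int = 1, out_channels: int = 5, code_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, code_dim, 4, stride=2, padding=1),
        )
        self.quantizer = VectorQuantizer(code_dim=code_dim)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(code_dim, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, out_channels, 4, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.quantizer(self.encoder(x)))


if __name__ == "__main__":
    # 1-channel bright field in, 5-channel painted stack out (shape check only).
    model = ChannelConfigurableVQModel(in_channels=1, out_channels=5)
    brightfield = torch.randn(2, 1, 256, 256)
    painted = model(brightfield)
    print(painted.shape)  # torch.Size([2, 5, 256, 256])
```

Because only the input and output convolutions depend on the channel counts, the same backbone can, in principle, be reused across different label-free prediction tasks by changing two constructor arguments.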
