Understanding which inductive biases could be helpful for the unsupervised learning of object-centric representations of natural scenes is challenging. In this paper, we use neural style transfer to generate datasets where objects have complex textures while still retaining ground-truth annotations. We find that methods that use a single module to reconstruct both the shape and visual appearance of each object learn more useful representations and achieve better object separation. In addition, we observe that adjusting the latent space size is insufficient to improve segmentation performance. Finally, the downstream usefulness of the representations is significantly more strongly correlated with segmentation quality than with reconstruction accuracy.
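To make the dataset-generation step concrete, the following Python sketch shows one common way to re-texture a rendered multi-object scene with Gatys-style neural style transfer while keeping the ground-truth annotations valid. This is an illustrative assumption, not necessarily the paper's exact pipeline: it uses PyTorch with a pretrained VGG19 feature extractor, and the file paths content.png and style.png are hypothetical placeholders.

import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

def load(path, size=128):
    img = Image.open(path).convert("RGB")
    t = transforms.Compose([transforms.Resize((size, size)), transforms.ToTensor()])
    return t(img).unsqueeze(0).to(device)

content = load("content.png")  # rendered multi-object scene (placeholder path)
style = load("style.png")      # natural texture image (placeholder path)

# Pretrained VGG19 as a fixed feature extractor (torchvision >= 0.13 API).
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

MEAN = torch.tensor([0.485, 0.456, 0.406], device=device).view(1, 3, 1, 1)
STD = torch.tensor([0.229, 0.224, 0.225], device=device).view(1, 3, 1, 1)
STYLE_LAYERS = {0, 5, 10, 19, 28}  # conv1_1 .. conv5_1 in torchvision's VGG19
CONTENT_LAYER = 21                 # conv4_2

def features(x):
    # Returns style-layer activations and the content-layer activation.
    h, styles, content_feat = (x - MEAN) / STD, [], None
    for i, layer in enumerate(vgg):
        h = layer(h)
        if i in STYLE_LAYERS:
            styles.append(h)
        if i == CONTENT_LAYER:
            content_feat = h
    return styles, content_feat

def gram(f):
    # Gram matrix of feature maps, the standard style statistic.
    _, c, hh, ww = f.shape
    f = f.view(c, hh * ww)
    return f @ f.t() / (c * hh * ww)

with torch.no_grad():
    style_grams = [gram(f) for f in features(style)[0]]
    content_target = features(content)[1]

# Optimize the stylized image, starting from the content image so that the
# scene layout (and therefore the ground-truth masks) is preserved.
x = content.clone().requires_grad_(True)
opt = torch.optim.Adam([x], lr=0.02)
for step in range(300):
    opt.zero_grad()
    s_feats, c_feat = features(x)
    style_loss = sum(F.mse_loss(gram(f), g) for f, g in zip(s_feats, style_grams))
    loss = 1e6 * style_loss + F.mse_loss(c_feat, content_target)
    loss.backward()
    opt.step()
    x.data.clamp_(0, 1)  # keep the image in valid pixel range

transforms.ToPILImage()(x.detach().squeeze(0).cpu()).save("stylized.png")

Because the optimization starts from the content image and only matches feature statistics, object positions and shapes are preserved, so the original instance masks still align with the stylized output.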
Author Information
Samuele Papa (University of Amsterdam)
I have a background in information engineering, computer engineering, and artificial intelligence. In 2021 I completed both an MSc (cum laude) in Computer Engineering at the University of Padova and an MSc in Human-Centered Artificial Intelligence at the Technical University of Denmark. During my studies, I focused on fundamental research in deep learning, specifically on obtaining useful representations of images to enable the automation of higher-level cognitive tasks. I am now a PhD candidate in the POP-AART lab (2021-2024), a collaboration between Elekta, the University of Amsterdam, and the Netherlands Cancer Institute that aims at personalized online radiotherapy using artificial intelligence methods. The lab is supervised by Jan-Jakob Sonke and Efstratios Gavves. My research focuses on using deep generative models to improve the quality of Cone Beam Computed Tomography (CBCT) images while enforcing geometric and pathological integrity.
Ole Winther (Technical University of Denmark and University of Copenhagen)
Andrea Dittadi (Technical University of Denmark)
More from the Same Authors
- 2021: Representation Learning for Out-of-distribution Generalization in Downstream Tasks »
  Frederik Träuble · Andrea Dittadi · Manuel Wüthrich · Felix Widmaier · Peter Gehler · Ole Winther · Francesco Locatello · Olivier Bachem · Bernhard Schölkopf · Stefan Bauer
- 2022 Poster: SCHA-VAE: Hierarchical Context Aggregation for Few-Shot Generation »
  Giorgio Giannone · Ole Winther
- 2022 Poster: Generalization and Robustness Implications in Object-Centric Learning »
  Andrea Dittadi · Samuele Papa · Michele De Vita · Bernhard Schölkopf · Ole Winther · Francesco Locatello
- 2022 Spotlight: SCHA-VAE: Hierarchical Context Aggregation for Few-Shot Generation »
  Giorgio Giannone · Ole Winther
- 2022 Spotlight: Generalization and Robustness Implications in Object-Centric Learning »
  Andrea Dittadi · Samuele Papa · Michele De Vita · Bernhard Schölkopf · Ole Winther · Francesco Locatello
- 2021 Poster: On Disentangled Representations Learned from Correlated Data »
  Frederik Träuble · Elliot Creager · Niki Kilbertus · Francesco Locatello · Andrea Dittadi · Anirudh Goyal · Bernhard Schölkopf · Stefan Bauer
- 2021 Oral: On Disentangled Representations Learned from Correlated Data »
  Frederik Träuble · Elliot Creager · Niki Kilbertus · Francesco Locatello · Andrea Dittadi · Anirudh Goyal · Bernhard Schölkopf · Stefan Bauer