Fully Zero-Shot Image Dehazing
Abstract
Image dehazing, an important image restoration problem, aims to recover clear scene content from images degraded by atmospheric haze. Existing dehazing methods rely on observing the distribution of hazy images during training: supervised approaches typically depend on synthetic datasets, which leads to poor generalization in real-world scenarios, while unsupervised methods are constrained by the limited diversity of observed haze conditions, since real hazy images are difficult to collect, and fail to generalize to unseen haze types. To address these challenges, we propose the first fully zero-shot dehazing framework, which is trained without any hazy images. The framework is built upon a set of representations that remain invariant across clean and hazy images and thus bridge the two domains; this invariance is both theoretically derived and experimentally validated. Consequently, we formulate dehazing as a conditional generative modeling problem and train a diffusion model using only abundant, readily available clean images, with their invariant representations as the conditioning input. At test time, the same representations extracted from hazy images serve as the conditional input that guides the diffusion process toward the clean-image distribution. Quantitative analyses verify the effectiveness of the proposed representations, and extensive experiments on various real-world hazy datasets demonstrate our framework’s remarkable generalization ability, significantly outperforming existing methods. Our code will be released after the review process.
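To make the training recipe described above concrete, the sketch below shows a minimal conditional-diffusion training loop over clean images only. It is purely illustrative: the abstract does not specify the invariant representation, the network architecture, or the noise schedule, so the extractor `extract_invariant_repr`, the `Denoiser` module, and all hyperparameters here are assumed placeholders rather than the paper's actual components.

```python
# Minimal sketch (PyTorch) of training a conditional diffusion model on clean
# images, conditioned on a haze-invariant representation. All names and
# hyperparameters are illustrative assumptions, not the paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F


def extract_invariant_repr(x):
    # Placeholder for the invariant representation: a real implementation would
    # compute features that stay (near-)identical for a clean image and its
    # hazy counterpart. Here we simply downsample the image.
    return F.avg_pool2d(x, kernel_size=8)


class Denoiser(nn.Module):
    # Toy conditional denoiser: predicts the noise added to a clean image,
    # conditioned on the invariant representation (upsampled and concatenated).
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def forward(self, noisy, cond):
        cond_up = F.interpolate(cond, size=noisy.shape[-2:], mode="bilinear",
                                align_corners=False)
        return self.net(torch.cat([noisy, cond_up], dim=1))


# Standard DDPM-style linear noise schedule (assumed, not specified in the abstract).
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

model = Denoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)


def training_step(clean_batch):
    # Train on clean images alone: corrupt them with Gaussian noise at a random
    # timestep and learn to predict that noise, conditioned on the invariant
    # representation of the same clean image.
    t = torch.randint(0, T, (clean_batch.shape[0],))
    a_bar = alphas_bar[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(clean_batch)
    noisy = a_bar.sqrt() * clean_batch + (1.0 - a_bar).sqrt() * noise
    cond = extract_invariant_repr(clean_batch)
    loss = F.mse_loss(model(noisy, cond), noise)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()


# Example step on random data; at test time the same extractor would be applied
# to a hazy image and the result used to condition reverse diffusion sampling.
print(training_step(torch.rand(4, 3, 64, 64)))
```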