NanoFLUX: Distillation-Driven Compression of Large Text-to-Image Generation Models for Mobile Devices
Ruchika Chavhan ⋅ Malcolm Chadwick ⋅ Alberto Gil Couto Pimentel Ramos ⋅ Luca Morreale ⋅ Mehdi Noroozi ⋅ Abhinav Mehrotra
Abstract
While large-scale text-to-image diffusion models continue to improve in visual quality, their increasing scale has widened the gap between state-of-the-art models and on-device solutions. To address this gap, we introduce NanoFLUX, a 2.4B-parameter text-to-image flow-matching model distilled from the 17B FLUX.1-Schnell using a progressive compression pipeline designed to preserve generation quality. Our contributions include: (1) a model compression strategy driven by pruning redundant components of the diffusion transformer, reducing its size from 12B to 2B parameters; (2) a ResNet-based token downsampling mechanism that reduces latency by letting intermediate blocks operate on lower-resolution tokens while preserving high-resolution processing elsewhere; (3) a novel text-encoder distillation approach that leverages visual signals from early layers of the denoiser during sampling. Empirically, NanoFLUX generates $512 \times 512$ images in approximately 2.5 seconds on mobile devices, demonstrating the feasibility of high-quality on-device text-to-image generation.
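To make contribution (2) concrete, the sketch below illustrates the general idea of token downsampling in a transformer denoiser: early and late blocks run on the full token grid, while intermediate blocks operate on a 2x2-pooled grid, reconnected through a residual skip. This is a minimal NumPy illustration of the concept only; the pooling factor, the use of average pooling and nearest-neighbour upsampling, and the function names are assumptions, not the paper's actual ResNet-based modules.

```python
import numpy as np

def downsample_tokens(tokens, grid_h, grid_w):
    """2x2 average-pool a sequence of image tokens laid out on a spatial grid.

    tokens: (grid_h * grid_w, dim) array of patch tokens.
    Returns ((grid_h // 2) * (grid_w // 2), dim).
    """
    d = tokens.shape[-1]
    grid = tokens.reshape(grid_h, grid_w, d)
    pooled = grid.reshape(grid_h // 2, 2, grid_w // 2, 2, d).mean(axis=(1, 3))
    return pooled.reshape(-1, d)

def upsample_tokens(tokens, grid_h, grid_w):
    """Nearest-neighbour upsample low-resolution tokens back to the full grid."""
    d = tokens.shape[-1]
    grid = tokens.reshape(grid_h // 2, grid_w // 2, d)
    up = grid.repeat(2, axis=0).repeat(2, axis=1)
    return up.reshape(-1, d)

def run_with_token_downsampling(tokens, grid_h, grid_w, early, middle, late):
    """Run early/late blocks at full resolution and middle blocks on 4x fewer
    tokens, with a ResNet-style residual skip around the down/upsample path.

    early, middle, late: callables standing in for stacks of transformer blocks
    (hypothetical placeholders, not the actual NanoFLUX architecture).
    """
    x = early(tokens)
    low = middle(downsample_tokens(x, grid_h, grid_w))  # cheap: 1/4 the tokens
    x = x + upsample_tokens(low, grid_h, grid_w)        # residual reconnection
    return late(x)
```

Because the middle blocks see only a quarter of the tokens, their attention cost drops roughly 16x, while the residual skip preserves the high-resolution signal for the final blocks.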