Poster
UniMC: Taming Diffusion Transformer for Unified Keypoint-Guided Multi-Class Image Generation
Qin Guo · Ailing Zeng · Dongxu Yue · Ceyuan Yang · Yang Cao · Hanzhong Guo · Fei Shen · Wei Liu · Xihui Liu · Dan Xu
West Exhibition Hall B2-B3 #W-218
Although keypoint-guided Text-to-Image diffusion models have advanced significantly, existing mainstream approaches struggle to control the generation of more general non-rigid objects beyond humans (e.g., animals), and find it difficult to generate multiple overlapping humans and animals from keypoint controls alone. These challenges stem from two main issues: the inherent limitations of existing controllable methods and the lack of suitable datasets. First, we design a DiT-based framework, named UniMC, to explore unified controllable multi-class image generation. UniMC integrates instance- and keypoint-level conditions into compact tokens that encode attributes such as class, bounding box, and keypoint coordinates. This design overcomes the limitation of previous methods, which struggled to distinguish instances and classes because they relied on skeleton images as conditions. Second, we propose HAIG-2.9M, a large-scale, high-quality, and diverse dataset designed for keypoint-guided human and animal image generation. HAIG-2.9M comprises 786K images with 2.9M instances and features extensive annotations, including keypoints, bounding boxes, and fine-grained captions for both humans and animals, together with rigorous manual inspection to ensure annotation accuracy. Extensive experiments demonstrate the high quality of HAIG-2.9M and the effectiveness of UniMC, particularly under heavy occlusion and in multi-class scenarios.
Our approach endows text-to-image diffusion models with the ability to generate humans and animals guided by user-provided keypoints, thereby enhancing the usability of controllable generative models.
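To make the token-based conditioning described in the abstract concrete, the sketch below shows one plausible way to pack each instance's class, bounding box, and keypoint coordinates into a single compact condition token that a DiT backbone could attend to. This is a minimal illustration, not the authors' implementation: the module name `InstanceTokenEncoder`, its projection layers, and all dimensions are assumptions for the sake of the example.

```python
# Minimal sketch (assumed, not the paper's code) of packing per-instance
# conditions -- class id, bounding box, keypoint coordinates -- into one
# compact token per instance.
import torch
import torch.nn as nn

class InstanceTokenEncoder(nn.Module):
    def __init__(self, num_classes: int, num_keypoints: int, dim: int = 256):
        super().__init__()
        self.class_emb = nn.Embedding(num_classes, dim)    # e.g. human, dog, cat, ...
        self.box_proj = nn.Linear(4, dim)                   # (x1, y1, x2, y2), normalized
        self.kpt_proj = nn.Linear(num_keypoints * 3, dim)   # (x, y, visibility) per keypoint
        self.fuse = nn.Linear(3 * dim, dim)                 # fuse into one token per instance

    def forward(self, class_ids, boxes, keypoints):
        # class_ids: (B, N), boxes: (B, N, 4), keypoints: (B, N, K, 3)
        c = self.class_emb(class_ids)
        b = self.box_proj(boxes)
        k = self.kpt_proj(keypoints.flatten(-2))
        return self.fuse(torch.cat([c, b, k], dim=-1))      # (B, N, dim) condition tokens

# Usage: tokens for 3 instances (e.g. two people and a dog) in one image.
enc = InstanceTokenEncoder(num_classes=10, num_keypoints=17)
tokens = enc(torch.tensor([[0, 0, 1]]), torch.rand(1, 3, 4), torch.rand(1, 3, 17, 3))
print(tokens.shape)  # torch.Size([1, 3, 256])
```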