ACO-MoE-LoRA: Evolving-while-Training for Adapting Segment Anything Model 2 to Specialized Domains
Abstract
Static fine-tuning paradigms impose rigid structural constraints on foundation models such as the Segment Anything Model 2 (SAM2), limiting their adaptability to the varying complexity of specialized downstream tasks. To overcome this limitation, we propose ACO-MoE-LoRA, a dynamic framework that introduces an "Evolving-while-Training" strategy by synergizing Ant Colony Optimization (ACO) with a Latent Space Mixture-of-Experts (MoE) architecture. Central to our method is the ACO-ConvLoRA module, which employs a pheromone-guided routing mechanism to actively govern expert selection and topological evolution. By formulating expert assignment as an evolutionary pathfinding problem, this module mitigates the routing collapse that afflicts standard learned gating and enables elastic adjustment of LoRA ranks via weight slicing, bridging discrete structural search with continuous parameter training. Extensive experiments across 16 challenging datasets demonstrate that our framework consistently outperforms leading static adapters while avoiding the local optima that limit recent dynamic heuristics. This work presents a self-organizing solution that harmonizes swarm intelligence with gradient-based optimization for efficiently adapting foundation models to specialized domains.
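To make the two mechanisms named in the abstract concrete, the sketch below illustrates, in PyTorch, how pheromone-guided routing over (expert, rank) paths could be combined with elastic LoRA ranks realized by weight slicing. It is a minimal illustration under stated assumptions, not the paper's implementation: the class names (`ElasticLoRAExpert`, `PheromoneRouter`, `ACOMoELoRALayer`), the reward rule, and all hyperparameters are hypothetical, and linear LoRA stands in for the convolutional adapters of the actual ACO-ConvLoRA module.

```python
# Illustrative sketch (not the authors' code) of pheromone-guided expert routing
# and elastic LoRA ranks via weight slicing. All names and constants are assumptions.
import torch
import torch.nn as nn


class ElasticLoRAExpert(nn.Module):
    """A LoRA expert whose effective rank is chosen at runtime by slicing its weights."""

    def __init__(self, d_in, d_out, r_max):
        super().__init__()
        self.A = nn.Parameter(torch.randn(r_max, d_in) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(d_out, r_max))        # up-projection, zero-init

    def forward(self, x, r):
        # Use only the first r rows/columns: an elastic-rank update without re-allocation.
        return x @ self.A[:r].T @ self.B[:, :r].T


class PheromoneRouter(nn.Module):
    """ACO-inspired router: pheromone trails bias the discrete choice of an (expert, rank) path."""

    def __init__(self, num_experts, rank_choices, alpha=1.0, beta=1.0, evaporation=0.1):
        super().__init__()
        self.rank_choices = rank_choices
        self.alpha, self.beta, self.evaporation = alpha, beta, evaporation
        # One pheromone value per (expert, rank) path; this is not a gradient-trained gate.
        self.register_buffer("pheromone", torch.ones(num_experts, len(rank_choices)))

    def sample(self, heuristic=None):
        # Classic ACO transition rule: p ∝ tau^alpha * eta^beta.
        eta = torch.ones_like(self.pheromone) if heuristic is None else heuristic
        scores = self.pheromone.pow(self.alpha) * eta.pow(self.beta)
        probs = (scores / scores.sum()).flatten()
        idx = torch.multinomial(probs, 1).item()
        expert_idx, rank_idx = divmod(idx, len(self.rank_choices))
        return expert_idx, self.rank_choices[rank_idx], (expert_idx, rank_idx)

    def deposit(self, path, reward):
        # Evaporate all trails, then reinforce the sampled path with the observed reward.
        # Evaporation keeps under-used experts reachable, discouraging routing collapse.
        expert_idx, rank_idx = path
        self.pheromone.mul_(1.0 - self.evaporation)
        self.pheromone[expert_idx, rank_idx] += reward


class ACOMoELoRALayer(nn.Module):
    """Frozen base layer plus a pheromone-routed mixture of elastic-rank LoRA experts."""

    def __init__(self, d_in, d_out, num_experts=4, r_max=16, rank_choices=(4, 8, 16)):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        for p in self.base.parameters():
            p.requires_grad_(False)  # the base model stays frozen
        self.experts = nn.ModuleList(ElasticLoRAExpert(d_in, d_out, r_max)
                                     for _ in range(num_experts))
        self.router = PheromoneRouter(num_experts, list(rank_choices))

    def forward(self, x):
        expert_idx, r, path = self.router.sample()
        self._last_path = path  # remembered so the training loop can deposit pheromone
        return self.base(x) + self.experts[expert_idx](x, r)


# Usage: gradients update only the sampled expert slice (continuous parameter training),
# while pheromone deposits steer which (expert, rank) paths are explored (discrete search).
layer = ACOMoELoRALayer(d_in=64, d_out=64)
x, target = torch.randn(8, 64), torch.randn(8, 64)
loss = nn.functional.mse_loss(layer(x), target)
loss.backward()
layer.router.deposit(layer._last_path, reward=1.0 / (loss.item() + 1e-6))
```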