Structured Progressive Knowledge Activation for LLM-Driven Neural Architecture Search
Zhen Liu ⋅ Yuhan Liu ⋅ Jinjun Wang ⋅ Wei Song ⋅ Jianyi Liu ⋅ Jingwen Fu
Abstract
This paper addresses a key challenge in Neural Architecture Search (NAS): integrating established architectural knowledge while exploring new designs under expensive evaluations. Large language models (LLMs) are promising assistants for NAS because they can translate rich architectural and coding priors into executable code edits. In practice, however, seemingly local revisions often propagate into non-local behavioral and performance shifts, because a single edit can inadvertently couple multiple interacting functional factors, a phenomenon we refer to as functional entanglement. To make LLM knowledge usable under such entanglement, we propose Structured Progressive Knowledge Activation (SPARK), which activates relevant priors by explicitly selecting the functional factor to modify and conditioning the edit on that factor. This factor-conditioned editing reduces entangled side effects and yields more targeted, reliable architecture modifications. On CLRS-DFS, SPARK reduces the number of training evaluations by 28.1$\times$ over EvoPrompting and improves OOD accuracy by +15.6 points, with essentially unchanged compute ($\sim$453K MACs).