

Oral in Workshop: ES-FoMo II: 2nd Workshop on Efficient Systems for Foundation Models

Prompt-prompted Adaptive Structured Pruning for Efficient LLM Generation

Harry Dong · Beidi Chen · Yuejie Chi

Fri 26 Jul 12:30 a.m. PDT — 12:45 a.m. PDT

Abstract: Large language models (LLMs) have remarkable utility, but this comes at a considerable computational cost at deployment. Fortunately, some methods such as pruning or mixture of experts exploit sparsity in transformer feedforward (FF) blocks to gain speedups and reduce memory, yet these techniques can be costly and inflexible in practice, as they often require training or are restricted to specific types of architectures. To address this, we introduce GRIFFIN, a novel training-free method that selects unique FF experts at the sequence level for efficient generation across a plethora of LLMs with different non-ReLU activation functions. This is possible due to a critical observation that many trained LLMs naturally produce highly structured FF activation patterns within a sequence, which we call flocking. GRIFFIN maintains the original model's performance with little to no degradation on a variety of tasks, all while improving latency (e.g., 1.29$\times$ and 1.25$\times$ speed-ups in Gemma 7B and Llama 2 13B, respectively, on an NVIDIA L40). Code can be found at \url{https://github.com/hdong920/GRIFFIN}.
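The sketch below illustrates the general idea described in the abstract: score the intermediate FF neurons of a gated (LLaMA/Gemma-style) feedforward block on the prompt during prefill, keep the most active ones, and generate with the pruned block. This is a minimal illustrative sketch, not the authors' implementation (see the linked GitHub repository for that); the names `GatedFFN`, `select_ff_neurons`, and `prune_ffn`, as well as the specific scoring rule, are assumptions for illustration.

```python
# Minimal sketch of sequence-level FF expert selection in the spirit of GRIFFIN.
# Assumes a LLaMA/Gemma-style gated feedforward block in PyTorch.
# All names and the exact scoring rule are illustrative, not the authors' API.
import torch
import torch.nn as nn


class GatedFFN(nn.Module):
    """Gated feedforward block: down(act(gate(x)) * up(x))."""

    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.gate_proj = nn.Linear(d_model, d_ff, bias=False)
        self.up_proj = nn.Linear(d_model, d_ff, bias=False)
        self.down_proj = nn.Linear(d_ff, d_model, bias=False)
        self.act = nn.SiLU()  # a non-ReLU activation, as in Llama 2 / Gemma

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down_proj(self.act(self.gate_proj(x)) * self.up_proj(x))


def select_ff_neurons(ffn: GatedFFN, prompt_hidden: torch.Tensor,
                      keep_frac: float = 0.5) -> torch.Tensor:
    """Score FF neurons on the prompt and keep the top fraction.

    prompt_hidden: (seq_len, d_model) hidden states entering the FF block
    during prefill. Neurons are scored by the norm of their per-token
    normalized activations aggregated over the sequence, relying on the
    "flocking" observation that the active neurons are consistent within
    a sequence.
    """
    with torch.no_grad():
        z = ffn.act(ffn.gate_proj(prompt_hidden)) * ffn.up_proj(prompt_hidden)  # (seq, d_ff)
        z = z / (z.norm(dim=-1, keepdim=True) + 1e-6)   # normalize each token's activation
        scores = z.norm(dim=0)                          # aggregate over the sequence -> (d_ff,)
        k = max(1, int(keep_frac * scores.numel()))
        return torch.topk(scores, k).indices


def prune_ffn(ffn: GatedFFN, idx: torch.Tensor) -> GatedFFN:
    """Build a smaller FF block containing only the selected neurons."""
    d_model = ffn.gate_proj.in_features
    pruned = GatedFFN(d_model, idx.numel())
    with torch.no_grad():
        pruned.gate_proj.weight.copy_(ffn.gate_proj.weight[idx])
        pruned.up_proj.weight.copy_(ffn.up_proj.weight[idx])
        pruned.down_proj.weight.copy_(ffn.down_proj.weight[:, idx])
    return pruned


if __name__ == "__main__":
    # Toy usage: select experts from a random "prompt" and generate with the pruned block.
    ffn = GatedFFN(d_model=64, d_ff=256)
    prompt_hidden = torch.randn(16, 64)          # 16 prompt tokens
    idx = select_ff_neurons(ffn, prompt_hidden)  # chosen once per sequence, training-free
    small_ffn = prune_ffn(ffn, idx)
    out = small_ffn(torch.randn(1, 64))          # cheaper FF pass during generation
    print(out.shape)
```

Because the selection happens once per sequence from the prompt activations and requires no gradient updates, it is training-free and reusable across generation steps; the actual speedups reported above come from the corresponding reduction in FF compute.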
