effGen: Enabling Small Language Models as Capable Autonomous Agents
Gaurav Srivastava ⋅ Aafiya Hussain ⋅ Chi Wang ⋅ Yingyan (Celine) Lin ⋅ Xuan Wang
Abstract
Most language model agentic systems today are built and optimized for large language models (e.g., GPT, Claude, Gemini) accessed via API calls. While powerful, this approach faces several limitations, including high token costs and privacy concerns in sensitive applications. We introduce $\textbf{effGen}$, an open-source agentic framework optimized for small language models (SLMs) that enables effective, efficient, and secure local deployment. $\textbf{effGen}$ makes four major contributions: $\textbf{(1) Enhanced tool-calling}$ with prompt optimization that compresses contexts by 70-80% while preserving task semantics, $\textbf{(2) Intelligent task decomposition}$ that breaks complex queries into parallel or sequential subtasks based on their dependencies, $\textbf{(3) Complexity-based routing}$ that uses five factors to make informed pre-execution decisions, and $\textbf{(4) A unified memory system}$ combining short-term, long-term, and vector-based storage. Additionally, $\textbf{effGen}$ unifies multiple agent protocols (MCP, A2A, ACP) for cross-protocol communication. Results on 13 benchmarks show that $\textbf{effGen}$ outperforms LangChain, AutoGen, and Smolagents with $\textbf{higher success rates}$, $\textbf{faster execution}$, and $\textbf{lower memory usage}$. Our results reveal that prompt optimization and complexity routing exhibit complementary scaling behavior: optimization benefits SLMs more (an 11.2% gain at 1.5B vs. 2.4% at 32B), while routing benefits larger models more (3.6% at 1.5B vs. 7.9% at 32B); combined, they provide consistent gains across all scales.
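The complexity-based routing described in contribution (3) can be illustrated with a minimal sketch. The five factor names and thresholds below are hypothetical stand-ins, not effGen's actual factors: each factor is scored in [0, 1] from surface features of the query, and the averaged score decides before execution whether the query is answered directly or handed to task decomposition.

```python
def score_factors(text: str) -> list[float]:
    """Score five illustrative complexity factors, each clamped to [0, 1].
    (Factor definitions are assumptions for illustration, not effGen's.)"""
    words = text.lower().split()
    length = min(len(words) / 50.0, 1.0)  # longer queries tend to be harder
    tool_hints = min(sum(w in {"search", "compute", "fetch"} for w in words) / 3.0, 1.0)
    conjunctions = min(sum(w in {"and", "then", "after"} for w in words) / 3.0, 1.0)
    questions = min(text.count("?") / 2.0, 1.0)  # multi-part questions
    numerals = min(sum(w.isdigit() for w in words) / 3.0, 1.0)  # arithmetic hints
    return [length, tool_hints, conjunctions, questions, numerals]

def route(text: str, threshold: float = 0.4) -> str:
    """Average the five factor scores; route complex queries to decomposition."""
    score = sum(score_factors(text)) / 5.0
    return "decompose" if score >= threshold else "direct"
```

A simple query such as "What is 2+2?" scores low on all five factors and is routed directly, while a multi-step query mentioning several tools and conjunctions crosses the threshold and is decomposed first.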