Poster in Workshop: Interactive Learning with Implicit Human Feedback

SwiftSage: A Generative Agent with Fast and Slow Thinking for Complex Interactive Tasks

Yuchen Lin · Yicheng Fu · Karina Yang · Prithviraj Ammanabrolu · Faeze Brahman · Shiyu Huang · Chandra Bhagavatula · Yejin Choi · Xiang Ren


Abstract:

We introduce SwiftSage, a novel agent framework inspired by the dual-process theory of human cognition, designed to excel in action planning for complex interactive reasoning tasks. SwiftSage integrates the strengths of behavior cloning and prompting large language models (LLMs) to enhance task completion performance. The framework comprises two primary modules: the Swift module, representing fast and intuitive thinking, and the Sage module, emulating deliberate thought processes. The Swift module is a small encoder-decoder LM fine-tuned on the oracle agent's action trajectories, while the Sage module employs LLMs such as GPT-4 for subgoal planning and grounding. We develop a heuristic method to harmoniously integrate the two modules, resulting in a more efficient and robust problem-solving process. On 30 tasks from the ScienceWorld benchmark, SwiftSage significantly outperforms other methods such as SayCan, ReAct, and Reflexion, demonstrating its effectiveness in solving complex real-world tasks.
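The sketch below illustrates one way a fast/slow control loop of this kind could be wired together: a small, cheap policy acts by default, and a slower LLM planner is consulted only when the fast policy is unsure or an action fails. All class names, the confidence threshold, and the escalation rule here are illustrative assumptions, not the authors' actual heuristic or implementation.

```python
# Hypothetical sketch of a fast/slow agent loop in the spirit of SwiftSage.
# SwiftModule, SageModule, env_step, and the escalation heuristic are
# stand-ins chosen for illustration only.

from typing import Callable, List, Tuple


class SwiftModule:
    """Stand-in for the small encoder-decoder LM (fast, intuitive thinking)."""

    def propose(self, observation: str) -> Tuple[str, float]:
        # A real module would decode an action and a confidence score
        # from the current observation.
        return "look around", 0.9


class SageModule:
    """Stand-in for an LLM prompted to plan subgoals (slow, deliberate thinking)."""

    def plan(self, observation: str, history: List[str]) -> List[str]:
        # A real module would prompt an LLM such as GPT-4 and ground its
        # subgoals into executable actions.
        return ["open door", "go to kitchen"]


def run_episode(
    env_step: Callable[[str], Tuple[str, bool, bool]],
    max_steps: int = 50,
    conf_threshold: float = 0.5,
) -> List[str]:
    """Toy control loop: act with Swift by default, escalate to Sage when
    Swift's confidence is low or the last action failed (one possible heuristic)."""
    swift, sage = SwiftModule(), SageModule()
    history: List[str] = []
    buffered_plan: List[str] = []
    obs, done = "start", False

    for _ in range(max_steps):
        if done:
            break
        if buffered_plan:
            # Follow the queued plan from the slow module first.
            action = buffered_plan.pop(0)
        else:
            action, conf = swift.propose(obs)
            if conf < conf_threshold:
                # Escalate to deliberate planning.
                buffered_plan = sage.plan(obs, history)
                action = buffered_plan.pop(0)
        obs, done, failed = env_step(action)
        history.append(action)
        if failed:
            # A failed action also triggers replanning with the slow module.
            buffered_plan = sage.plan(obs, history)
    return history
```

In this toy version the environment is any callable returning (observation, done, failed); the key design point it illustrates is that the expensive LLM is invoked sparingly, only when the cheap policy is unreliable.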
