

Poster

A Hierarchical Adaptive Multi-Task Reinforcement Learning Framework for Multiplier Circuit Design

Zhihai Wang · Jie Wang · Dongsheng Zuo · Ji Yunjie · Xilin Xia · Yuzhe Ma · Jianye Hao · Mingxuan Yuan · Yongdong Zhang · Feng Wu

Hall C 4-9 #2717
[ Paper PDF ]
Tue 23 Jul 4:30 a.m. PDT — 6 a.m. PDT

Abstract:

Multiplier design, which aims to explore a large combinatorial design space to simultaneously optimize multiple conflicting objectives, is a fundamental problem in the integrated circuits industry. Although traditional approaches tackle the multi-objective multiplier optimization problem with manually designed heuristics, reinforcement learning (RL) offers a promising approach to discovering high-speed and area-efficient multipliers. However, existing RL-based methods struggle to find Pareto-optimal circuit designs for all possible preferences, i.e., weights over objectives, in a sample-efficient manner. To address this challenge, we propose a novel hierarchical adaptive (HAVE) multi-task reinforcement learning framework. The hierarchical framework consists of a meta-agent that generates diverse multiplier preferences, and an adaptive multi-task agent that collaboratively optimizes multipliers conditioned on the dynamic preferences given by the meta-agent. To the best of our knowledge, HAVE is the first to closely approximate Pareto-optimal circuit designs over the entire preference space with high sample efficiency. Experiments on multipliers across a wide range of input widths demonstrate that HAVE significantly Pareto-dominates state-of-the-art approaches, achieving up to 28% larger hypervolume. Moreover, experiments demonstrate that multipliers designed by HAVE generalize well to large-scale computation-intensive circuits.
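The abstract describes an agent conditioned on a preference, i.e., a weight vector over conflicting objectives such as delay and area. A common way to realize this (a minimal sketch, not the paper's actual method; the function names and the uniform preference sampler are illustrative assumptions) is linear scalarization, where a meta-agent-like component samples a preference on the simplex and the task agent maximizes the negated weighted sum of the objective costs:

```python
import random

def scalarize(objectives, preference):
    """Collapse multiple objective costs into one scalar reward.

    `objectives` are costs to minimize (e.g. (delay, area)), so the
    scalarized reward is their negated weighted sum: higher is better.
    """
    assert len(objectives) == len(preference)
    return -sum(w * o for w, o in zip(preference, objectives))

def sample_preference(n_objectives, rng=random):
    """Illustrative stand-in for a meta-agent: draw a random point
    on the probability simplex (weights are non-negative, sum to 1)."""
    raw = [rng.random() for _ in range(n_objectives)]
    total = sum(raw)
    return [r / total for r in raw]

# A low-delay design scores better under a delay-heavy preference.
fast_big = (1.0, 4.0)    # (delay, area)
slow_small = (4.0, 1.0)
delay_heavy = [0.9, 0.1]
assert scalarize(fast_big, delay_heavy) > scalarize(slow_small, delay_heavy)
```

Conditioning one agent on dynamically sampled preferences, rather than training a separate agent per fixed weight vector, is what lets a single policy cover the whole preference space.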

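The "28% larger hypervolume" claim refers to the hypervolume indicator, the standard quality measure for an approximated Pareto front: the volume of objective space dominated by the front, bounded by a reference point. For two minimized objectives it reduces to a sum of rectangle areas. The sketch below is a generic 2-D implementation under that convention (the reference point and example fronts are made up for illustration):

```python
def hypervolume_2d(points, ref):
    """Hypervolume (area) dominated by a 2-D front, both objectives
    minimized. `ref` is a reference point dominated by every point;
    a larger hypervolume means a better front."""
    # Keep only non-dominated points, sorted by the first objective.
    front = []
    best_second = float("inf")
    for x, y in sorted(set(points)):
        if y < best_second:
            front.append((x, y))
            best_second = y
    # Sweep left to right, summing the rectangle each point adds
    # between its own y and the previous (higher) y, out to ref.
    hv, prev_y = 0.0, ref[1]
    for x, y in front:
        hv += (ref[0] - x) * (prev_y - y)
        prev_y = y
    return hv

# Two hypothetical (delay, area) fronts against reference point (5, 5):
front_a = [(1, 4), (2, 2), (4, 1)]
front_b = [(2, 4), (3, 3)]
print(hypervolume_2d(front_a, (5, 5)))  # 11.0
print(hypervolume_2d(front_b, (5, 5)))  # 5.0
```

Here `front_a` dominates more of the objective space than `front_b`, so it would be the preferred set of multiplier designs under this metric.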