We present Circuit-GNN, a graph neural network (GNN) model for designing distributed circuits. Today, designing distributed circuits is a slow process that can take months from an expert engineer. Our model both automates and speeds up the process. The model learns to simulate the electromagnetic (EM) properties of distributed circuits; hence, it can replace traditional EM simulators, which typically take tens of minutes for each design iteration. Further, by leveraging neural networks' differentiability, we can use our model to solve the inverse problem: given desirable EM specifications, we propagate the gradient to optimize the circuit parameters and topology to satisfy the specifications. We exploit the flexibility of GNNs to create one model that works for circuits with different topologies and different numbers of sub-components, which allows our neural network to support a much larger design space than previous deep learning circuit design methods. Applying gradient descent on graph structures is non-trivial; we develop a novel multi-loop gradient descent algorithm with local reparameterization to address this challenge. We compare our model with a commercial simulator and show that it reduces simulation time by four orders of magnitude. We also demonstrate the value of our model by using it to design a Terahertz channelizer, a difficult task that requires a specialized expert. The results show that our model produces a channelizer whose performance is as good as a manually optimized design, and can save the expert several weeks of iterative topology exploration and parameter optimization. Most interestingly, our model comes up with new designs that differ from the limited templates commonly used by engineers in the field, hence significantly expanding the design space.
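The listing itself contains no code; the following is a minimal, hypothetical PyTorch sketch of the forward/inverse pattern the abstract describes: a message-passing network that maps a graph of circuit sub-components to a predicted frequency response, followed by gradient descent through the frozen network to tune component geometry toward a target specification. All names (CircuitGNNSketch), feature dimensions, and the chain topology are illustrative assumptions, not the authors' implementation; in particular, the paper's multi-loop gradient descent with local reparameterization over topologies is not reproduced here.

```python
# Minimal sketch (assumption, not the authors' code) of the pattern in the
# abstract: nodes are resonators described by geometry features, edges connect
# coupled resonators, and the GNN predicts the circuit's transfer function.
import torch
import torch.nn as nn

class CircuitGNNSketch(nn.Module):
    def __init__(self, node_dim=4, hidden=64, n_freq=64, n_rounds=3):
        super().__init__()
        self.encode = nn.Linear(node_dim, hidden)
        self.message = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU())
        self.update = nn.GRUCell(hidden, hidden)
        self.readout = nn.Linear(hidden, n_freq)  # predicted |S21| per frequency bin
        self.n_rounds = n_rounds

    def forward(self, x, edges):
        # x: (n_nodes, node_dim) resonator geometry; edges: (2, n_edges) directed pairs
        h = torch.relu(self.encode(x))
        src, dst = edges
        for _ in range(self.n_rounds):
            m = self.message(torch.cat([h[src], h[dst]], dim=-1))
            agg = torch.zeros_like(h).index_add_(0, dst, m)  # sum incoming messages
            h = self.update(agg, h)
        return self.readout(h.mean(dim=0))  # pool node states -> circuit response

# Inverse design: freeze the (notionally trained) network and treat the node
# geometry itself as the optimization variable, descending toward a target band.
model = CircuitGNNSketch()
for p in model.parameters():
    p.requires_grad_(False)

geometry = torch.randn(4, 4, requires_grad=True)   # 4 resonators, 4 params each
edges = torch.tensor([[0, 1, 2], [1, 2, 3]])       # a simple chain topology
target = torch.zeros(64)
target[20:30] = 1.0                                # desired pass band
opt = torch.optim.Adam([geometry], lr=1e-2)
for step in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(geometry, edges), target)
    loss.backward()                                # gradient flows to the geometry
    opt.step()
```

Because a GNN shares its message and update functions across all nodes and edges, the same weights apply to circuits with any number of resonators, which is what lets one model cover many topologies as the abstract claims.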
Author Information
Guo Zhang (MIT)
Hao He (MIT)
Dina Katabi (MIT)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Poster: Circuit-GNN: Graph Neural Networks for Distributed Circuit Design
  Wed. Jun 12th 01:30 -- 04:00 AM Room Pacific Ballroom #248
More from the Same Authors
- 2023 Poster: Change is Hard: A Closer Look at Subpopulation Shift
  Yuzhe Yang · Haoran Zhang · Dina Katabi · Marzyeh Ghassemi
- 2023 Poster: Taxonomy-Structured Domain Adaptation
  Tianyi Liu · Zihao Xu · Hao He · Guang-Yuan Hao · Guang-He Lee · Hao Wang
- 2021 Poster: Delving into Deep Imbalanced Regression
  Yuzhe Yang · Kaiwen Zha · Ying-Cong Chen · Hao Wang · Dina Katabi
- 2021 Oral: Delving into Deep Imbalanced Regression
  Yuzhe Yang · Kaiwen Zha · Ying-Cong Chen · Hao Wang · Dina Katabi
- 2020 Poster: Continuously Indexed Domain Adaptation
  Hao Wang · Hao He · Dina Katabi
- 2019 Poster: ME-Net: Towards Effective Adversarial Robustness with Matrix Estimation
  Yuzhe Yang · Guo Zhang · Zhi Xu · Dina Katabi
- 2019 Oral: ME-Net: Towards Effective Adversarial Robustness with Matrix Estimation
  Yuzhe Yang · Guo Zhang · Zhi Xu · Dina Katabi
- 2017 Poster: Learning Sleep Stages from Radio Signals: A Conditional Adversarial Architecture
  Mingmin Zhao · Shichao Yue · Dina Katabi · Tommi Jaakkola · Matt Bianchi
- 2017 Talk: Learning Sleep Stages from Radio Signals: A Conditional Adversarial Architecture
  Mingmin Zhao · Shichao Yue · Dina Katabi · Tommi Jaakkola · Matt Bianchi