We propose Neuro-Symbolic Hierarchical Rule Induction (HRI), an efficient, interpretable neuro-symbolic model for solving Inductive Logic Programming (ILP) problems. The model is built from a pre-defined set of meta-rules organized in a hierarchical structure; first-order rules are invented by learning embeddings that match facts to the body predicates of a meta-rule. To instantiate the model, we design an expressive set of generic meta-rules and demonstrate that they generate a consequential fragment of Horn clauses. As a differentiable model, HRI can be trained via both supervised learning and reinforcement learning. To converge to interpretable rules, we inject controlled noise to avoid local optima and employ an interpretability-regularization term. We empirically validate our model on various tasks (ILP, visual genome, reinforcement learning) against relevant state-of-the-art methods, including traditional ILP methods and neuro-symbolic models.
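The core mechanism — differentiably selecting which predicate fills a meta-rule's body slot by comparing embeddings — can be illustrated with a minimal sketch. All names here (`soft_match`, the toy predicates, the similarity/temperature choices) are hypothetical illustrations, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: each predicate symbol gets a learnable embedding.
predicates = ["parent", "grandparent", "sibling"]
dim = 8
pred_emb = {p: rng.normal(size=dim) for p in predicates}

# A body-predicate slot of a meta-rule, e.g. B1 in H(X,Z) <- B1(X,Y), B2(Y,Z),
# also carries an embedding; matching it against predicate embeddings gives a
# soft (differentiable) choice of which predicate instantiates the slot.
slot_emb = rng.normal(size=dim)

def soft_match(slot, embeddings, temperature=1.0):
    """Softmax over dot-product similarities between a slot and all predicates."""
    names = list(embeddings)
    sims = np.array([slot @ embeddings[n] for n in names]) / temperature
    weights = np.exp(sims - sims.max())  # numerically stable softmax
    weights /= weights.sum()
    return dict(zip(names, weights))

weights = soft_match(slot_emb, pred_emb)
# Lowering the temperature sharpens the distribution toward one predicate,
# mimicking how training is pushed to converge to a discrete, readable rule.
sharp = soft_match(slot_emb, pred_emb, temperature=0.1)
```

In this sketch, the softmax weights play the role of a relaxed predicate assignment; annealing the temperature (or, as in the abstract, adding noise and an interpretability-regularization term) drives the assignment toward a single predicate per slot, from which a symbolic rule can be read off.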
Author Information
Claire Glanois (IT University Copenhagen)
Zhaohui Jiang (Shanghai Jiao Tong University)
Xuening Feng (Shanghai Jiao Tong University)
Paul Weng (Shanghai Jiao Tong University)
Matthieu Zimmer (Shanghai Jiao Tong University)
[Actively looking for a research scientist position.] Matthieu Zimmer received his Ph.D. in computer science from the University of Lorraine in 2018 and his M.S. in computer science from University Pierre and Marie Curie in 2014. Since 2018, he has been a postdoctoral researcher at the joint institute of the University of Michigan and Shanghai Jiao Tong University in China. His current research interests include deep reinforcement learning, transfer learning, multi-agent systems, and meta-learning.
Dong Li (Huawei Noah's Ark Lab)
Wulong Liu (Huawei Noah's Ark Lab)
Jianye Hao (Huawei Noah's Ark Lab)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Spotlight: Neuro-Symbolic Hierarchical Rule Induction »
  Thu. Jul 21st 03:35 -- 03:40 PM, Room 301 - 303
More from the Same Authors
- 2022 Poster: Plan Your Target and Learn Your Skills: Transferable State-Only Imitation Learning via Decoupled Policy Optimization »
  Minghuan Liu · Zhengbang Zhu · Yuzheng Zhuang · Weinan Zhang · Jianye Hao · Yong Yu · Jun Wang
- 2022 Spotlight: Plan Your Target and Learn Your Skills: Transferable State-Only Imitation Learning via Decoupled Policy Optimization »
  Minghuan Liu · Zhengbang Zhu · Yuzheng Zhuang · Weinan Zhang · Jianye Hao · Yong Yu · Jun Wang
- 2022 Poster: Learning Pseudometric-based Action Representations for Offline Reinforcement Learning »
  Pengjie Gu · Mengchen Zhao · Chen Chen · Dong Li · Jianye Hao · Bo An
- 2022 Spotlight: Learning Pseudometric-based Action Representations for Offline Reinforcement Learning »
  Pengjie Gu · Mengchen Zhao · Chen Chen · Dong Li · Jianye Hao · Bo An
- 2021 Poster: Learning Fair Policies in Decentralized Cooperative Multi-Agent Reinforcement Learning »
  Matthieu Zimmer · Claire Glanois · Umer Siddique · Paul Weng
- 2021 Spotlight: Learning Fair Policies in Decentralized Cooperative Multi-Agent Reinforcement Learning »
  Matthieu Zimmer · Claire Glanois · Umer Siddique · Paul Weng
- 2020 Poster: Learning Fair Policies in Multi-Objective (Deep) Reinforcement Learning with Average and Discounted Rewards »
  Umer Siddique · Paul Weng · Matthieu Zimmer