

Poster
in
Workshop: Knowledge and Logical Reasoning in the Era of Data-driven Learning

Explanatory Learning: Towards Artificial Scientific Discovery

Antonio Norelli · Giorgio Mariani · Luca Moschella · Andrea Santilli · Giambattista Parascandolo · Simone Melzi · Emanuele Rodola


Abstract:

Explanations are the fuel of progress, the fundamental tool through which humans have increased their agency, gaining ever more control over their future throughout history. So far, producing explanations has been a unique prerogative of humans, who have greatly improved the process over the last centuries with the emergence of the scientific method. In this work, we try to formalize this epistemological breakthrough to make it digestible by a machine, with the ultimate goal of building an artificial scientist and breaking the human monopoly on producing new symbolic explanations. Our Explanatory Learning (EL) framework builds on the field of Machine Learning. Unlike traditional AI methods based on human-coded interpreters, such as program synthesis, EL builds upon the notion that a true artificial scientist can only emerge when a machine can autonomously interpret symbols. Consequently, EL requires a learned interpreter, trained on a limited set of raw strings hiding explanations, paired with observations of the corresponding phenomena, akin to a science book written in hieroglyphics. To exemplify the challenges of EL, we present Odeen, a basic environment that simulates a small universe full of phenomena to explain. Finally, we introduce Critical Rationalist Networks (CRNs), a deep learning approach to EL aligned with the Popperian view of knowledge acquisition. Using Odeen as a testbed, we show how CRNs outperform standard empiricist end-to-end approaches of similar size and architecture (Transformers) in discovering explanations for unseen phenomena.
