Poster in Workshop: Interpretable Machine Learning in Healthcare
Solving inverse problems with deep neural networks driven by sparse signal decomposition in a physics-based dictionary
Gaetan Rensonnet
Deep neural networks (DNNs) have an impressive ability to invert very complex models, i.e. to learn the generative parameters from a model's output. Once trained, the forward pass of a DNN is often much faster than the traditional, optimization-based methods used to solve inverse problems. This speed, however, comes at the cost of lower interpretability, a fundamental limitation in most medical applications. We propose an approach for solving general inverse problems that combines the efficiency of DNNs with the interpretability of traditional analytical methods. The measurements are first projected onto a dense dictionary of model-based responses. The resulting sparse representation is then fed to a DNN whose architecture is driven by the problem's physics, for fast parameter learning. Our method can handle generative forward models that are costly to evaluate, and it exhibits accuracy and computation times similar to those of a fully-learned DNN while maintaining high interpretability and being easier to train. Concrete results are shown on an example of model-based brain parameter estimation from magnetic resonance imaging (MRI).
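A minimal sketch of the two-stage pipeline described in the abstract, under assumptions not stated there: non-negative least squares (scipy.optimize.nnls) stands in for the sparse decomposition onto the dictionary, and a small fully-connected PyTorch network stands in for the physics-driven DNN. The dictionary, signal sizes, and network architecture are hypothetical placeholders, not the authors' actual design.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# --- Stage 1: physics-based dictionary and sparse projection ---------------
# In the actual method the atoms would be simulated model responses; here a
# random non-negative matrix is used as a stand-in.
n_meas, n_atoms = 64, 500
D = np.abs(rng.standard_normal((n_meas, n_atoms)))
D /= np.linalg.norm(D, axis=0)  # unit-norm atoms

def sparse_projection(signal: np.ndarray) -> np.ndarray:
    """Project a measured signal onto the dictionary. NNLS yields a sparse,
    non-negative weight vector over the model-based atoms, which is the
    interpretable intermediate representation."""
    weights, _residual = nnls(D, signal)
    return weights

# --- Stage 2: DNN mapping sparse weights to generative parameters ----------
n_params = 3  # number of model parameters to estimate (hypothetical)
net = nn.Sequential(
    nn.Linear(n_atoms, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, n_params),
)

# Toy usage: a synthetic signal generated as a sparse combination of atoms.
true_w = np.zeros(n_atoms)
true_w[rng.choice(n_atoms, size=5, replace=False)] = rng.random(5)
y = D @ true_w + 0.01 * rng.standard_normal(n_meas)

w_hat = sparse_projection(y)                      # interpretable sparse code
theta_hat = net(torch.from_numpy(w_hat).float())  # fast parameter estimate
print(theta_hat.detach().numpy())
```

In the real pipeline the network would be trained on dictionary-generated (simulated) data, so the costly forward model is only evaluated once when building the dictionary; the untrained network above merely illustrates the data flow.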