
Poster in Workshop: The Synergy of Scientific and Machine Learning Modelling (SynS & ML)

An $\mathcal{A}$-adaptive Loop Unrolled Architecture for Solving Inverse Problems with Forward Model Mismatch

Peimeng Guan · Naveed Iqbal · Mark Davenport · Mudassir Masood

Keywords: [ model inexactness ] [ forward model mismatch ] [ variable splitting ] [ loop unrolling ] [ inverse problem ]


Abstract: In inverse problems (IPs) the goal is to recover an underlying signal from noisy measurements generated according to a known forward model. Classical methods for solving IPs typically minimize a least-squares data-fidelity term together with a predetermined regularization function, which often leads to unsatisfactory reconstructions. The \emph{loop unrolling} (LU) architecture addresses this issue by unrolling the optimization iterations into a sequence of neural networks that, in effect, learn a regularization function from data. While LU is currently a state-of-the-art method in many applications, its success hinges on the accuracy of the forward model. This assumption can be limiting in many physical applications because of model simplifications or uncertainties in the apparatus. To address forward model mismatch, this work introduces a forward model residual network; with an extra variable splitting step, the proposed method can adapt to uncertain forward models accordingly. By jointly learning the updates to the reconstruction and to the forward model, the method achieves a $\sim$2 dB PSNR improvement on image blind deblurring and seismic blind deconvolution tasks.
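The flavor of the unrolled, model-adaptive update described in the abstract can be sketched in a few lines. This is a minimal NumPy illustration under assumptions of my own, not the authors' implementation: the learned per-layer regularization network is replaced by soft-thresholding, and the learned forward-model residual network is replaced by fixed per-layer correction matrices `delta_As`; all function and variable names here are hypothetical.

```python
import numpy as np

def soft_threshold(x, tau):
    # Stand-in for the learned regularization network in each unrolled layer.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def unrolled_recon(y, A0, delta_As, n_layers=10, step=0.1, tau=0.001):
    """Sketch of an A-adaptive loop-unrolled reconstruction.

    y        : measurement vector
    A0       : assumed (possibly inexact) forward model matrix
    delta_As : per-layer forward-model corrections; fixed matrices here,
               standing in for the learned forward model residual network
    """
    x = A0.T @ y  # simple back-projection initialization
    for k in range(n_layers):
        Ak = A0 + delta_As[k]              # corrected forward model at layer k
        grad = Ak.T @ (Ak @ x - y)         # data-fidelity gradient step
        x = soft_threshold(x - step * grad, tau)  # learned-prox stand-in
    return x
```

With exact per-layer corrections this reduces to ISTA on the true forward model, which is the intuition behind the mismatch-adaptive scheme: reconstruction and forward-model updates are interleaved rather than fixing the model up front.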
