Un-trained convolutional neural networks have emerged as highly successful tools for image recovery and restoration. They can solve standard inverse problems such as denoising and compressive sensing with excellent results by simply fitting a neural network model to the measurements from a single image or signal, without the need for any additional training data. For some applications, this critically requires additional regularization in the form of early stopping of the optimization. For signal recovery from a few measurements, however, un-trained convolutional networks have an intriguing self-regularizing property: even though the network can perfectly fit any image, when trained with gradient descent until convergence it recovers a natural image from few measurements. In this paper, we provide numerical evidence for this property and study it theoretically. We show that, without any further regularization, an un-trained convolutional neural network can approximately reconstruct signals and images that are sufficiently structured from a near-minimal number of random measurements.
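The recovery procedure the abstract describes is simple enough to sketch in a few lines. Below is a minimal sketch, assuming PyTorch; the decoder architecture, the dimensions, the Gaussian measurement matrix, and the Adam optimizer are illustrative placeholders rather than the paper's exact setup, and the unknown image is stubbed with random data.

```python
# Minimal sketch: recover a signal from compressive measurements y = A x
# by fitting the weights of an un-trained convolutional network to y,
# with no training data and no early stopping. Architecture, sizes, and
# optimizer are illustrative assumptions, not the paper's exact setup.
import torch
import torch.nn as nn

n, m = 64 * 64, 1000                      # signal dimension, number of measurements
A = torch.randn(m, n) / m ** 0.5          # random Gaussian measurement matrix

class UntrainedDecoder(nn.Module):
    """Small convolutional decoder mapping a fixed random seed to an image."""
    def __init__(self, k=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear"),
            nn.Conv2d(k, k, 1), nn.ReLU(), nn.BatchNorm2d(k),
            nn.Upsample(scale_factor=2, mode="bilinear"),
            nn.Conv2d(k, k, 1), nn.ReLU(), nn.BatchNorm2d(k),
            nn.Conv2d(k, 1, 1),
        )
    def forward(self, z):
        return self.net(z)

z = torch.randn(1, 64, 16, 16)            # fixed random input; never optimized
G = UntrainedDecoder()
y = A @ torch.rand(n)                     # measurements of the (unknown) image x

opt = torch.optim.Adam(G.parameters(), lr=1e-3)
for _ in range(5000):                     # run to (near) convergence; no early stopping
    opt.zero_grad()
    x_hat = G(z).reshape(-1)              # candidate image generated by the network
    loss = ((A @ x_hat - y) ** 2).sum()   # measurement-consistency loss only
    loss.backward()
    opt.step()

x_rec = G(z).detach().reshape(64, 64)     # reconstructed image
```

The point of the sketch is that the only objective is measurement consistency; per the paper's claim, the convolutional architecture itself acts as the regularizer, so no penalty term or stopping rule appears in the loop.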
Author Information
Reinhard Heckel (Rice University)
Mahdi Soltanolkotabi (University of Southern California)
Mahdi Soltanolkotabi is an assistant professor in the Ming Hsieh Department of Electrical and Computer Engineering and the Department of Computer Science at the University of Southern California, where he holds an Andrew and Erna Viterbi Early Career Chair. Prior to joining USC, he completed his PhD in electrical engineering at Stanford in 2014 and was a postdoctoral researcher in the EECS department at UC Berkeley during the 2014-2015 academic year. His research focuses on developing the mathematical foundations of data analysis at the confluence of optimization, machine learning, signal processing, high-dimensional statistics, computational imaging, and artificial intelligence. Mahdi is the recipient of the Packard Fellowship in Science and Engineering, a Sloan Research Fellowship, an NSF CAREER award, an Air Force Office of Scientific Research Young Investigator award (AFOSR YIP), and a Google Faculty Research Award.
More from the Same Authors
- 2021 Poster: PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models »
  Chaoyang He · Shen Li · Mahdi Soltanolkotabi · Salman Avestimehr
- 2021 Spotlight: PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models »
  Chaoyang He · Shen Li · Mahdi Soltanolkotabi · Salman Avestimehr
- 2021 Poster: Measuring Robustness in Deep Learning Based Compressive Sensing »
  Mohammad Zalbagi Darestani · Akshay Chaudhari · Reinhard Heckel
- 2021 Poster: Generalization Guarantees for Neural Architecture Search with Train-Validation Split »
  Samet Oymak · Mingchen Li · Mahdi Soltanolkotabi
- 2021 Oral: Measuring Robustness in Deep Learning Based Compressive Sensing »
  Mohammad Zalbagi Darestani · Akshay Chaudhari · Reinhard Heckel
- 2021 Spotlight: Generalization Guarantees for Neural Architecture Search with Train-Validation Split »
  Samet Oymak · Mingchen Li · Mahdi Soltanolkotabi
- 2021 Poster: Data augmentation for deep learning based accelerated MRI reconstruction with limited data »
  Zalan Fabian · Reinhard Heckel · Mahdi Soltanolkotabi
- 2021 Spotlight: Data augmentation for deep learning based accelerated MRI reconstruction with limited data »
  Zalan Fabian · Reinhard Heckel · Mahdi Soltanolkotabi
- 2019 Poster: Overparameterized Nonlinear Learning: Gradient Descent Takes the Shortest Path? »
  Samet Oymak · Mahdi Soltanolkotabi
- 2019 Oral: Overparameterized Nonlinear Learning: Gradient Descent Takes the Shortest Path? »
  Samet Oymak · Mahdi Soltanolkotabi