Beyond Backprop: Online Alternating Minimization with Auxiliary Variables
Anna Choromanska · Benjamin Cowen · Sadhana Kumaravel · Ronny Luss · Mattia Rigotti · Irina Rish · Paolo DiAchille · Viatcheslav Gurev · Brian Kingsbury · Ravi Tejwani · Djallel Bouneffouf

Tue Jun 11 06:30 PM -- 09:00 PM (PDT) @ Pacific Ballroom #57

Despite significant recent advances in deep neural networks, training them remains a challenge due to the highly non-convex nature of the objective function. State-of-the-art methods rely on error backpropagation, which suffers from several well-known issues, such as vanishing and exploding gradients, the inability to handle non-differentiable nonlinearities or to parallelize weight updates across layers, and biological implausibility. These limitations continue to motivate exploration of alternative training algorithms, including several recently proposed auxiliary-variable methods which break the complex nested objective function into local subproblems. However, those techniques are mainly offline (batch), which limits their applicability to extremely large datasets, as well as to online, continual, or reinforcement learning. The main contribution of our work is a novel online (stochastic/mini-batch) alternating minimization (AM) approach for training deep neural networks, together with the first theoretical convergence guarantees for AM in stochastic settings and promising empirical results on a variety of architectures and datasets.
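To make the general idea concrete, the sketch below illustrates online (mini-batch) alternating minimization with an auxiliary variable for a one-hidden-layer ReLU network. It is not the authors' exact algorithm: the squared loss, the quadratic coupling penalty, and all names and hyperparameter values (lam, lr, the number of inner steps on the auxiliary variable) are assumptions made for illustration.

# Minimal sketch of online AM with an auxiliary variable (NOT the paper's
# exact method). The nested objective ||W2 relu(W1 X) - Y||^2 is broken into
# local subproblems by an auxiliary pre-activation A with a quadratic penalty
# lam * ||A - W1 X||^2; each variable is updated in turn on every mini-batch.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h, d_out, batch = 8, 16, 4, 32      # illustrative sizes
W1 = rng.normal(scale=0.1, size=(d_h, d_in))
W2 = rng.normal(scale=0.1, size=(d_out, d_h))
lam, lr = 1.0, 0.05                          # hypothetical penalty weight and step size
relu = lambda z: np.maximum(z, 0.0)

for step in range(200):
    # Fresh mini-batch from a synthetic regression task (stand-in for real data).
    X = rng.normal(size=(d_in, batch))
    Y = np.tanh(2.0 * X[:d_out])             # arbitrary target function

    # Initialize the auxiliary variable at the current forward pre-activation.
    A = W1 @ X

    # (1) A-step: a few (sub)gradient steps on the local objective in A alone:
    #     ||W2 relu(A) - Y||^2 + lam * ||A - W1 X||^2
    for _ in range(5):
        H = relu(A)
        grad_A = 2 * (W2.T @ (W2 @ H - Y)) * (A > 0) + 2 * lam * (A - W1 @ X)
        A -= lr * grad_A

    # (2) W2-step: stochastic gradient step on ||W2 relu(A) - Y||^2.
    H = relu(A)
    W2 -= lr * 2 * (W2 @ H - Y) @ H.T / batch

    # (3) W1-step: stochastic gradient step on lam * ||A - W1 X||^2,
    #     a purely local, layer-wise least-squares subproblem.
    W1 -= lr * 2 * lam * (W1 @ X - A) @ X.T / batch

    if step % 50 == 0:
        loss = np.mean((W2 @ relu(W1 @ X) - Y) ** 2)
        print(f"step {step:3d}  batch loss {loss:.4f}")

Note that no gradient is ever propagated through the full composition of layers: each weight matrix sees only its own local subproblem, which is what allows layer-parallel updates and non-differentiable nonlinearities in AM-style methods.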

Author Information

Anna Choromanska (New York University)
Benjamin Cowen (New York University)
Sadhana Kumaravel (IBM Research)
Ronny Luss (IBM Research)
Mattia Rigotti (IBM Research AI)
Irina Rish (IBM Research AI)
Paolo DiAchille (IBM Research)
Viatcheslav Gurev (IBM Research)
Brian Kingsbury (IBM Research)
Ravi Tejwani (MIT)
Djallel Bouneffouf (IBM Research)
