Talk
Tunable Efficient Unitary Neural Networks (EUNN) and their application to RNNs
Li Jing · Yichen Shen · Tena Dubcek · John E Peurifoy · Scott Skirlo · Yann LeCun · Max Tegmark · Marin Soljačić
Using unitary (instead of general) matrices in artificial neural networks (ANNs) is a promising way to solve the gradient explosion/vanishing problem, as well as to enable ANNs to learn long-term correlations in the data. This approach appears particularly promising for Recurrent Neural Networks (RNNs). In this work, we present a new architecture for implementing Efficient Unitary Neural Networks (EUNNs); its main advantages can be summarized as follows. Firstly, the representation capacity of the unitary space in an EUNN is fully tunable, ranging from a subspace of SU(N) to the entire unitary space. Secondly, the computational complexity for training an EUNN is merely $\mathcal{O}(1)$ per parameter. Finally, we test the performance of EUNNs on the standard copying task, the pixel-permuted MNIST digit recognition benchmark, as well as the Speech Prediction Test (TIMIT). We find that our architecture significantly outperforms both other state-of-the-art unitary RNNs and the LSTM architecture, in terms of the final performance and/or the wall-clock training speed. EUNNs are thus promising alternatives to RNNs and LSTMs for a wide variety of applications.
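As a rough illustration of the idea behind the abstract (not the authors' implementation, whose exact layer arrangement is described in the paper): a unitary matrix can be parametrized as a product of layers of 2x2 planar rotations with phases. Unitarity then holds for any parameter values, and each parameter only affects a single 2x2 block, which is why the per-parameter cost can be $\mathcal{O}(1)$. A minimal NumPy sketch, where the function names, the alternating even/odd pairing, and the layer count are all illustrative assumptions:

```python
import numpy as np

def rotation_layer(n, thetas, phis, offset=0):
    """One layer of 2x2 unitary rotations acting on disjoint index pairs.

    Each pair (i, i+1), starting at `offset`, gets a rotation with angle
    theta and phase phi; the layer is unitary by construction.
    """
    U = np.eye(n, dtype=complex)
    for k, i in enumerate(range(offset, n - 1, 2)):
        t, p = thetas[k], phis[k]
        U[i, i] = np.exp(1j * p) * np.cos(t)
        U[i, i + 1] = -np.exp(1j * p) * np.sin(t)
        U[i + 1, i] = np.sin(t)
        U[i + 1, i + 1] = np.cos(t)
    return U

def tunable_unitary(n, num_layers, rng):
    """Product of `num_layers` rotation layers.

    Representation capacity grows with the number of layers, from a small
    subspace up to (essentially) the full n x n unitary group -- this is
    the "tunable" aspect described in the abstract.
    """
    W = np.eye(n, dtype=complex)
    for layer in range(num_layers):
        offset = layer % 2  # alternate even/odd pairings between layers
        npairs = len(range(offset, n - 1, 2))
        thetas = rng.uniform(0, 2 * np.pi, npairs)
        phis = rng.uniform(0, 2 * np.pi, npairs)
        W = rotation_layer(n, thetas, phis, offset) @ W
    return W

rng = np.random.default_rng(0)
W = tunable_unitary(8, 4, rng)
# Unitarity holds regardless of the sampled parameters:
print(np.allclose(W.conj().T @ W, np.eye(8)))  # True
```

Because the matrix is unitary by construction, gradient descent on the angles and phases can never leave the unitary manifold, which is what keeps recurrent gradients from exploding or vanishing.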
Author Information
Li Jing (Massachusetts Institute of Technology)
Yichen Shen (MIT)
Tena Dubcek (MIT)
John E Peurifoy (MIT)
Scott Skirlo (MIT)
Yann LeCun (New York University)
Max Tegmark (MIT)
Marin Soljačić (MIT)
Related Events (a corresponding poster, oral, or spotlight)
- 2017 Poster: Tunable Efficient Unitary Neural Networks (EUNN) and their application to RNNs
  Tue. Aug 8th, 08:30 AM -- 12:00 PM, Room Gallery #47
More from the Same Authors
- 2022: Deep Learning and Symbolic Regression for Discovering Parametric Equations
  Samuel Kim · Michael Zhang · Peter Y. Lu · Marin Soljačić
- 2022: Pre-Train Your Loss: Easy Bayesian Transfer Learning with Informative Prior
  Ravid Shwartz-Ziv · Micah Goldblum · Hossein Souri · Sanyam Kapoor · Chen Zhu · Yann LeCun · Andrew Wilson
- 2022: What Do We Maximize In Self-Supervised Learning?
  Ravid Shwartz-Ziv · Randall Balestriero · Yann LeCun
- 2018 Poster: Adversarially Regularized Autoencoders
  Jake Zhao · Yoon Kim · Kelly Zhang · Alexander Rush · Yann LeCun
- 2018 Oral: Adversarially Regularized Autoencoders
  Jake Zhao · Yoon Kim · Kelly Zhang · Alexander Rush · Yann LeCun
- 2018 Poster: Comparing Dynamics: Deep Neural Networks versus Glassy Systems
  Marco Baity-Jesi · Levent Sagun · Mario Geiger · Stefano Spigler · Gérard Ben Arous · Chiara Cammarota · Yann LeCun · Matthieu Wyart · Giulio Biroli
- 2018 Oral: Comparing Dynamics: Deep Neural Networks versus Glassy Systems
  Marco Baity-Jesi · Levent Sagun · Mario Geiger · Stefano Spigler · Gérard Ben Arous · Chiara Cammarota · Yann LeCun · Matthieu Wyart · Giulio Biroli