Latent prediction models, exemplified by multi-layer networks, employ hidden variables to automate the discovery of abstract features. They typically pose nonconvex optimization problems, and effective semi-definite programming (SDP) relaxations have been developed to enable globally optimal solutions (Aslan et al., 2014). However, these models rely on nonparametric training of layer-wise kernel representations and are therefore restricted to transductive learning, which slows down test-time prediction. In this paper, we develop a new inductive learning framework for parametric transfer functions using matching losses. The resulting formulation for ReLU utilizes completely positive matrices, and the inductive learner not only delivers superior accuracy but also offers an order-of-magnitude speedup over SDP while retaining constant approximation guarantees.
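To make the matching-loss idea concrete, below is a minimal illustrative sketch, not the paper's exact formulation: for the ReLU transfer f(z) = max(z, 0), one natural potential is F(z) = max(z, 0)^2 / 2, whose derivative is f, and the induced Bregman divergence D_F acts as a matching loss for that transfer. The names F, f, and bregman_matching_loss are our own illustrative choices.

```python
import numpy as np

# Illustrative sketch of a Bregman matching loss for the ReLU transfer.
# Assumption: potential F(z) = max(z, 0)^2 / 2, so that F'(z) = max(z, 0) = ReLU(z).

def F(z):
    """Convex potential whose gradient is the ReLU transfer."""
    return np.maximum(z, 0.0) ** 2 / 2.0

def f(z):
    """ReLU transfer function, the gradient of F."""
    return np.maximum(z, 0.0)

def bregman_matching_loss(z, z_star):
    """Bregman divergence D_F(z || z_star) = F(z) - F(z_star) - f(z_star) * (z - z_star)."""
    return F(z) - F(z_star) - f(z_star) * (z - z_star)

# Example: compare pre-activations z against target pre-activations z_star.
z = np.array([-1.0, 0.5, 2.0])
z_star = np.array([0.0, 1.0, 2.0])
# Nonnegative; zero exactly when z and z_star yield the same ReLU output.
print(bregman_matching_loss(z, z_star))  # [0.0, 0.125, 0.0]
```

Because F is convex, this loss is nonnegative and vanishes precisely when the two pre-activations produce the same ReLU output, which is the defining property that makes matching losses well suited to training parametric transfer functions.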
Author Information
Vignesh Ganapathiraman (University of Illinois at Chicago)
Zhan Shi (University of Illinois at Chicago)
Xinhua Zhang (University of Illinois at Chicago)
Yaoliang Yu (University of Waterloo)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Oral: Inductive Two-Layer Modeling with Parametric Bregman Transfer
  Thu. Jul 12th, 11:30--11:50 AM, Room A6
More from the Same Authors
- 2023 Poster: Actor-Critic Alignment for Offline-to-Online Reinforcement Learning
  Zishun Yu · Xinhua Zhang
- 2023 Poster: Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks
  Yiwei Lu · Gautam Kamath · Yaoliang Yu
- 2023 Poster: Poisoning Generative Replay in Continual Learning to Promote Forgetting
  Siteng Kang · Zhan Shi · Xinhua Zhang
- 2021 Poster: Generalised Lipschitz Regularisation Equals Distributional Robustness
  Zac Cranko · Zhan Shi · Xinhua Zhang · Richard Nock · Simon Kornblith
- 2021 Spotlight: Generalised Lipschitz Regularisation Equals Distributional Robustness
  Zac Cranko · Zhan Shi · Xinhua Zhang · Richard Nock · Simon Kornblith
- 2020 Poster: Tails of Lipschitz Triangular Flows
  Priyank Jaini · Ivan Kobyzev · Yaoliang Yu · Marcus Brubaker
- 2020 Poster: Convex Representation Learning for Generalized Invariance in Semi-Inner-Product Space
  Yingyi Ma · Vignesh Ganapathiraman · Yaoliang Yu · Xinhua Zhang
- 2020 Poster: Stronger and Faster Wasserstein Adversarial Attacks
  Kaiwen Wu · Allen Wang · Yaoliang Yu
- 2019 Poster: Monge blunts Bayes: Hardness Results for Adversarial Training
  Zac Cranko · Aditya Menon · Richard Nock · Cheng Soon Ong · Zhan Shi · Christian Walder
- 2019 Poster: Sum-of-Squares Polynomial Flow
  Priyank Jaini · Kira A. Selby · Yaoliang Yu
- 2019 Oral: Monge blunts Bayes: Hardness Results for Adversarial Training
  Zac Cranko · Aditya Menon · Richard Nock · Cheng Soon Ong · Zhan Shi · Christian Walder
- 2019 Oral: Sum-of-Squares Polynomial Flow
  Priyank Jaini · Kira A. Selby · Yaoliang Yu
- 2019 Poster: Distributional Reinforcement Learning for Efficient Exploration
  Borislav Mavrin · Hengshuai Yao · Linglong Kong · Kaiwen Wu · Yaoliang Yu
- 2019 Oral: Distributional Reinforcement Learning for Efficient Exploration
  Borislav Mavrin · Hengshuai Yao · Linglong Kong · Kaiwen Wu · Yaoliang Yu
- 2018 Poster: Efficient and Consistent Adversarial Bipartite Matching
  Rizal Fathony · Sima Behpour · Xinhua Zhang · Brian Ziebart
- 2018 Oral: Efficient and Consistent Adversarial Bipartite Matching
  Rizal Fathony · Sima Behpour · Xinhua Zhang · Brian Ziebart