Poster
Lossless or Quantized Boosting with Integer Arithmetic
Richard Nock · Robert C Williamson
In supervised learning, efficiency often starts with the choice of a good loss: support vector machines popularised the hinge loss, AdaBoost popularised the exponential loss, and so on. Recent trends in machine learning have highlighted the need for training routines to meet tight requirements on communication, bandwidth, energy, operations, encoding, and more. Fitting the often decades-old state-of-the-art training routines into these new constraints rarely comes without pain, uncertainty, or a reduction in the original guarantees. Our paper starts with the design of a new strictly proper, canonical, twice differentiable loss called the Q-loss. Importantly, its mirror update over (arbitrary) rational inputs uses only integer arithmetic -- more precisely, the sole operations $+, -, \times, /, |.|$. We build a learning algorithm which, under mild assumptions, achieves lossless boosting-compliant training. We give conditions under which its main memory footprint, the example weights, can be quantized while keeping the whole algorithm boosting-compliant. Experiments show that the algorithm can converge faster than AdaBoost during the early boosting rounds, even with a weight storage that can be 30+ times smaller. Lastly, we show that the Bayes risk of the Q-loss can be used as a node splitting criterion for decision trees and guarantees optimal boosting convergence.
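The Q-loss mirror update itself is defined in the paper rather than in this abstract, so the following is only a minimal Python sketch of the general idea alluded to above: when example weights, margins and the step size are kept as exact rationals (integer numerator/denominator pairs), a multiplicative boosting-style weight update needs nothing beyond integer +, -, × and /. The function name update_weights, the parameter eta and the particular update factor are illustrative assumptions, not the paper's update rule.

    # Hedged illustration: exact rational arithmetic for a boosting-style weight
    # update. This is NOT the paper's Q-loss mirror update; it only shows that
    # rationals stored as integer pairs keep every step in integer arithmetic.
    from fractions import Fraction

    def update_weights(weights, margins, eta):
        # Hypothetical multiplicative update: scale each weight by a rational
        # factor depending on the example's margin, then renormalise exactly.
        scaled = [w * (Fraction(1) - eta * m) for w, m in zip(weights, margins)]
        total = sum(scaled)
        return [w / total for w in scaled]

    # Three examples with uniform initial weights and rational margins.
    weights = [Fraction(1, 3)] * 3
    margins = [Fraction(1, 2), Fraction(-1, 4), Fraction(3, 4)]
    weights = update_weights(weights, margins, eta=Fraction(1, 8))
    print(weights)  # [Fraction(15, 46), Fraction(33, 92), Fraction(29, 92)]

Because every intermediate value is a ratio of integers, no floating-point rounding ever enters the update, which is the kind of exactness the abstract's "lossless" claim refers to.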
Author Information
Richard Nock (Data61, The Australian National University and the University of Sydney)
Bob C Williamson (ANU)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Oral: Lossless or Quantized Boosting with Integer Arithmetic
  Wed Jun 12th 09:30 -- 09:35 PM, Room 103
More from the Same Authors
- 2020 Poster: Supervised learning: no loss no cry
  Richard Nock · Aditya Menon
- 2019 Poster: Fairness risk measures
  Robert C Williamson · Aditya Menon
- 2019 Oral: Fairness risk measures
  Robert C Williamson · Aditya Menon
- 2019 Poster: Monge blunts Bayes: Hardness Results for Adversarial Training
  Zac Cranko · Aditya Menon · Richard Nock · Cheng Soon Ong · Zhan Shi · Christian Walder
- 2019 Oral: Monge blunts Bayes: Hardness Results for Adversarial Training
  Zac Cranko · Aditya Menon · Richard Nock · Cheng Soon Ong · Zhan Shi · Christian Walder
- 2019 Poster: Boosted Density Estimation Remastered
  Zac Cranko · Richard Nock
- 2019 Oral: Boosted Density Estimation Remastered
  Zac Cranko · Richard Nock
- 2018 Poster: Variational Network Inference: Strong and Stable with Concrete Support
  Amir Dezfouli · Edwin Bonilla · Richard Nock
- 2018 Oral: Variational Network Inference: Strong and Stable with Concrete Support
  Amir Dezfouli · Edwin Bonilla · Richard Nock
- 2017 Workshop: Human in the Loop Machine Learning
  Richard Nock · Cheng Soon Ong