Oral
Generative Trees: Adversarial and Copycat
Richard Nock · Mathieu Guillame-Bert
Wed Jul 20 10:15 AM -- 10:35 AM (PDT) @ Room 327 - 329
While Generative Adversarial Networks (GANs) achieve spectacular results on unstructured data like images, there is still a gap on tabular data, data for which state-of-the-art supervised learning still favours decision tree (DT)-based models. This paper proposes a new path forward for the generation of tabular data, exploiting decades-old understanding of the supervised task's best components for DT induction, from losses (properness) and models (tree-based) to algorithms (boosting). The properness condition on the supervised loss (which postulates the optimality of Bayes rule) leads us to a variational GAN-style loss formulation which is tight when discriminators meet a calibration property trivially satisfied by DTs and which, under common assumptions about the supervised loss, yields "one loss to train against them all" for the generator: the $\chi^2$. We then introduce tree-based generative models, generative trees (GTs), meant to mirror on the generative side the good properties of DTs for classifying tabular data, along with a boosting-compliant adversarial training algorithm for GTs. We also introduce copycat training, in which the generator copies at run time the underlying tree (graph) of the discriminator DT and completes it for the hardest discriminative task, with boosting-compliant convergence. We test our algorithms on tasks including fake/real distinction and missing data imputation.
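As a rough illustration of the "one loss to train against them all" claim (a sketch under the assumption that the loss in question is the standard Pearson $\chi^2$ divergence; the paper's variational derivation may state it differently), the generator can be pictured as minimising

\[
  % Assumption: standard Pearson chi-squared divergence between the real
  % distribution P (density p) and the generated distribution Q (density q).
  \chi^2(P \,\|\, Q) \;=\; \int \frac{\bigl(p(x) - q(x)\bigr)^2}{q(x)} \,\mathrm{d}x ,
\]

where P (density p) is the real tabular distribution and Q (density q) is the distribution induced by the generative tree; the divergence vanishes exactly when Q matches P, so driving it down pushes the generated samples toward the real data.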
Author Information
Richard Nock (Google Research)
Mathieu Guillame-Bert (Google)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Generative Trees: Adversarial and Copycat
  Wed. Jul 20th through Thu. Jul 21st, Room Hall E #1207
More from the Same Authors
- 2022 Poster: Neural Network Poisson Models for Behavioural and Neural Spike Train Data
  Moein Khajehnejad · Forough Habibollahi · Richard Nock · Ehsan Arabzadeh · Peter Dayan · Amir Dezfouli
- 2022 Spotlight: Neural Network Poisson Models for Behavioural and Neural Spike Train Data
  Moein Khajehnejad · Forough Habibollahi · Richard Nock · Ehsan Arabzadeh · Peter Dayan · Amir Dezfouli
- 2022 Poster: Being Properly Improper
  Tyler Sypherd · Richard Nock · Lalitha Sankar
- 2022 Spotlight: Being Properly Improper
  Tyler Sypherd · Richard Nock · Lalitha Sankar
- 2021 Poster: The Impact of Record Linkage on Learning from Feature Partitioned Data
  Richard Nock · Stephen J Hardy · Wilko Henecka · Hamish Ivey-Law · Jakub Nabaglo · Giorgio Patrini · Guillaume Smith · Brian Thorne
- 2021 Spotlight: The Impact of Record Linkage on Learning from Feature Partitioned Data
  Richard Nock · Stephen J Hardy · Wilko Henecka · Hamish Ivey-Law · Jakub Nabaglo · Giorgio Patrini · Guillaume Smith · Brian Thorne
- 2021 Poster: Generalised Lipschitz Regularisation Equals Distributional Robustness
  Zac Cranko · Zhan Shi · Xinhua Zhang · Richard Nock · Simon Kornblith
- 2021 Spotlight: Generalised Lipschitz Regularisation Equals Distributional Robustness
  Zac Cranko · Zhan Shi · Xinhua Zhang · Richard Nock · Simon Kornblith
- 2020 Poster: Supervised learning: no loss no cry
  Richard Nock · Aditya Menon
- 2019 Poster: Monge blunts Bayes: Hardness Results for Adversarial Training
  Zac Cranko · Aditya Menon · Richard Nock · Cheng Soon Ong · Zhan Shi · Christian Walder
- 2019 Poster: Lossless or Quantized Boosting with Integer Arithmetic
  Richard Nock · Robert C Williamson
- 2019 Oral: Lossless or Quantized Boosting with Integer Arithmetic
  Richard Nock · Robert C Williamson
- 2019 Oral: Monge blunts Bayes: Hardness Results for Adversarial Training
  Zac Cranko · Aditya Menon · Richard Nock · Cheng Soon Ong · Zhan Shi · Christian Walder
- 2019 Poster: Boosted Density Estimation Remastered
  Zac Cranko · Richard Nock
- 2019 Oral: Boosted Density Estimation Remastered
  Zac Cranko · Richard Nock
- 2018 Poster: Variational Network Inference: Strong and Stable with Concrete Support
  Amir Dezfouli · Edwin Bonilla · Richard Nock
- 2018 Oral: Variational Network Inference: Strong and Stable with Concrete Support
  Amir Dezfouli · Edwin Bonilla · Richard Nock
- 2017 Workshop: Human in the Loop Machine Learning
  Richard Nock · Cheng Soon Ong