Landmark codes underpin reliable physical-layer communication: Reed-Muller, BCH, convolutional, Turbo, LDPC, and Polar codes. Each is a linear code and represents a mathematical breakthrough. Their impact on humanity is huge: each of these codes has been used in global wireless communication standards (satellite, WiFi, cellular). The reliability of communication over the classical additive white Gaussian noise (AWGN) channel provides a benchmark for ranking the different codes. In this paper, we construct KO codes, a computationally efficient family of deep-learning-driven (encoder, decoder) pairs that outperform the state-of-the-art reliability performance on the standardized AWGN channel. KO codes beat state-of-the-art Reed-Muller and Polar codes, under low-complexity successive cancellation decoding, in the challenging short-to-medium block-length regime on the AWGN channel. We show that the gains of KO codes come primarily from the nonlinear mapping of information bits directly to transmit symbols (bypassing modulation), while still admitting an efficient, high-performance decoder. The key technical innovation that makes this possible is the design of a novel family of neural architectures inspired by the computation tree of the Kronecker Operation (KO) central to Reed-Muller and Polar codes. These architectures pave the way for the discovery of a much richer class of hitherto unexplored nonlinear algebraic structures.
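For readers unfamiliar with the Kronecker structure the abstract refers to: a length-2^m Reed-Muller or Polar code encodes with the m-fold Kronecker power of the 2x2 kernel [[1,0],[1,1]], with information bits carried on a chosen subset of input positions and the rest frozen to zero. The following is a minimal NumPy sketch of that classical linear baseline, not the authors' KO code; the function names and the example frozen-bit pattern are ours, for illustration only.

```python
import numpy as np

# 2x2 Kronecker kernel shared by Reed-Muller and Polar codes.
G2 = np.array([[1, 0], [1, 1]], dtype=np.uint8)

def kronecker_generator(m: int) -> np.ndarray:
    """Return the 2^m x 2^m generator: the m-fold Kronecker power of G2."""
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(m):
        G = np.kron(G, G2)
    return G

def encode(u: np.ndarray, m: int) -> np.ndarray:
    """Encode a length-2^m input vector u over GF(2): c = u @ G mod 2."""
    return (u @ kronecker_generator(m)) % 2

# Example: length-8 block (m = 3). Positions 0-3 are frozen to zero
# (an illustrative choice, not a standard-specified pattern); the
# remaining positions carry the information bits.
u = np.array([0, 0, 0, 0, 1, 0, 1, 1], dtype=np.uint8)
print(encode(u, 3))
```

The KO architecture described in the abstract keeps the computation tree induced by this recursive Kronecker structure but, roughly speaking, replaces its linear node operations with learned nonlinear maps, which is what lets the encoder output transmit symbols directly rather than bits.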
Author Information
Ashok Vardhan Makkuva (UIUC)
Xiyang Liu (University of Washington)
Mohammad Vahid Jamali (University of Michigan)
Hessam Mahdavifar (University of Michigan)
Sewoong Oh (University of Washington)
Pramod Viswanath (UIUC)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Spotlight: KO codes: inventing nonlinear encoding and decoding for reliable wireless communication via deep-learning
  Thu. Jul 22nd 01:20 -- 01:25 AM
More from the Same Authors
- 2021: Robust and Differentially Private Mean Estimation
  Xiyang Liu
- 2023: Improving multimodal datasets with image captioning
  Thao Nguyen · · Gabriel Ilharco · Sewoong Oh · Ludwig Schmidt
- 2023 Poster: CRISP: Curriculum based Sequential neural decoders for Polar code family
  S Ashwin Hebbar · Viraj Nadkarni · Ashok Vardhan Makkuva · Suma Bhat · Sewoong Oh · Pramod Viswanath
- 2022 Poster: MAML and ANIL Provably Learn Representations
  Liam Collins · Aryan Mokhtari · Sewoong Oh · Sanjay Shakkottai
- 2022 Spotlight: MAML and ANIL Provably Learn Representations
  Liam Collins · Aryan Mokhtari · Sewoong Oh · Sanjay Shakkottai
- 2022 Poster: De novo mass spectrometry peptide sequencing with a transformer model
  Melih Yilmaz · William Fondrie · Wout Bittremieux · Sewoong Oh · William Noble
- 2022 Spotlight: De novo mass spectrometry peptide sequencing with a transformer model
  Melih Yilmaz · William Fondrie · Wout Bittremieux · Sewoong Oh · William Noble
- 2021 Poster: Defense against backdoor attacks via robust covariance estimation
  Jonathan Hayase · Weihao Kong · Raghav Somani · Sewoong Oh
- 2021 Spotlight: Defense against backdoor attacks via robust covariance estimation
  Jonathan Hayase · Weihao Kong · Raghav Somani · Sewoong Oh
- 2020 Poster: Optimal transport mapping via input convex neural networks
  Ashok Vardhan Makkuva · Amirhossein Taghvaei · Sewoong Oh · Jason Lee
- 2020 Poster: InfoGAN-CR and ModelCentrality: Self-supervised Model Training and Selection for Disentangling GANs
  Zinan Lin · Kiran Thekumparampil · Giulia Fanti · Sewoong Oh
- 2020 Poster: Meta-learning for Mixed Linear Regression
  Weihao Kong · Raghav Somani · Zhao Song · Sham Kakade · Sewoong Oh
- 2019 Poster: Breaking the gridlock in Mixture-of-Experts: Consistent and Efficient Algorithms
  Ashok Vardhan Makkuva · Pramod Viswanath · Sreeram Kannan · Sewoong Oh
- 2019 Oral: Breaking the gridlock in Mixture-of-Experts: Consistent and Efficient Algorithms
  Ashok Vardhan Makkuva · Pramod Viswanath · Sreeram Kannan · Sewoong Oh