Convolutional Learnable-Group Weightless Neural Network
Abstract
Weightless Neural Networks (WNNs) built from interconnected Lookup Tables (LUTs) have attracted attention for inference with extremely compact models, but achieving competitive accuracy under such tight resource budgets remains challenging. To address this challenge, we introduce the Convolutional Learnable-Group Weightless Neural Network (CLGN). CLGN constructs convolutional layers from LUTs and incorporates a learnable GroupSum connection, improving the accuracy of WNNs while keeping implementation resource consumption low. Moreover, we propose a hierarchical training strategy to improve training efficiency. We evaluate CLGN in two edge-computing scenarios: (1) FPGA, where we measure accuracy, latency, throughput, power consumption, LUT usage, and parameter size; and (2) microprocessor, where we measure latency and memory usage. Compared with state-of-the-art solutions, CLGN achieves superior accuracy while consuming fewer implementation resources.
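The LUT-plus-GroupSum idea summarized above can be illustrated with a minimal sketch. This is a hedged, hypothetical example of a generic WNN forward pass (random connections, fixed binary tables, and a fixed even grouping of LUT outputs per class); all sizes and names are illustrative assumptions, and the paper's learnable grouping and hierarchical training are not shown.

```python
import random

random.seed(0)

N_INPUTS = 16     # binary input bits (hypothetical size)
BITS_PER_LUT = 4  # address bits read by each LUT
N_LUTS = 8        # total LUTs, split evenly across classes
N_CLASSES = 2

# Each LUT reads a random subset of the input bits and stores a
# 2^BITS_PER_LUT truth table of binary entries (illustrative only).
conn = [[random.randrange(N_INPUTS) for _ in range(BITS_PER_LUT)]
        for _ in range(N_LUTS)]
tables = [[random.randint(0, 1) for _ in range(2 ** BITS_PER_LUT)]
          for _ in range(N_LUTS)]

def lut_forward(x_bits):
    """Evaluate every LUT: pack its selected input bits into a table address."""
    outs = []
    for lut in range(N_LUTS):
        addr = 0
        for b in conn[lut]:
            addr = (addr << 1) | x_bits[b]
        outs.append(tables[lut][addr])
    return outs

def group_sum(lut_out):
    """Fixed GroupSum: each class score is the sum of its group of LUT outputs."""
    per_class = N_LUTS // N_CLASSES
    return [sum(lut_out[c * per_class:(c + 1) * per_class])
            for c in range(N_CLASSES)]

x = [random.randint(0, 1) for _ in range(N_INPUTS)]
scores = group_sum(lut_forward(x))
pred = scores.index(max(scores))  # predicted class index
```

In CLGN the assignment of LUT outputs to groups is learned rather than fixed as here, which is what the fixed even split in `group_sum` stands in for.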