Two-Stage Unit Tying for Simplifying Differentiable Logic Gate Networks
Abstract
Differentiable logic gate networks map learned models directly to gate-level circuits, enabling ultra-low-latency inference, yet their logic footprint often exceeds FPGA capacity budgets. Tightly fitting a trained model to a target FPGA therefore requires a post-training mechanism for trading off network complexity against accuracy, analogous to pruning in standard neural networks. To this end, we introduce unit tying: a simplification that forces selected gates to constants (0 or 1), enabling constant propagation and the elimination of downstream logic. However, we observe that naively extending pruning criteria to logic networks is unreliable under such near-discrete modifications. We therefore propose a two-stage algorithm for unit tying: (i) a fast Gauss–Newton screening step under a teacher-referenced logit-distortion objective, which constructs a high-recall overshoot set, and (ii) a refinement step that corrects approximation and interaction-driven errors using a small number of finite-difference evaluations. On CIFAR-10 and MNIST, our method consistently improves the accuracy–area trade-off over common saliency baselines, yielding substantial post-synthesis LUT reductions of up to 48% on CIFAR-10 and 43% on MNIST with modest accuracy degradation.
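To make the two-stage selection concrete, the sketch below illustrates the flow on a toy relaxed network: a Gauss–Newton screening score for the teacher-referenced logit distortion induced by tying each gate to its nearest constant, followed by finite-difference re-evaluation of the screened overshoot set. This is a minimal sketch under assumed interfaces, not the paper's implementation; all names here (`gates`, `logits`, `distortion`, the linear readout, and the tying budget) are hypothetical.

```python
import torch
from torch.autograd.functional import jvp

torch.manual_seed(0)

# Toy stand-in for a relaxed (differentiable) logic network: soft gate
# outputs in [0, 1] feed a fixed linear readout that produces class logits.
# (Hypothetical setup; a real network would be the trained gate circuit.)
n_gates, n_classes, budget = 64, 10, 8
gates = torch.rand(n_gates)                          # relaxed gate outputs
readout = torch.randn(n_gates, n_classes) / n_gates ** 0.5

def logits(g):
    return g @ readout

teacher = logits(gates).detach()                     # teacher-referenced logits

def distortion(g):
    # Squared logit distortion relative to the frozen teacher logits.
    return 0.5 * ((logits(g) - teacher) ** 2).sum()

# Stage (i): Gauss-Newton screening. Tying gate i to its nearest constant
# c_i perturbs the gates by d = (c_i - g_i) e_i; the GN model of the
# induced distortion is r^T (J d) + 0.5 ||J d||^2, with J the logit
# Jacobian and r the current residual (zero here, since the teacher is
# the untied network itself).
def gn_score(i):
    d = torch.zeros(n_gates)
    d[i] = torch.round(gates[i]).item() - gates[i].item()
    _, jd = jvp(logits, (gates,), (d,))              # directional derivative J d
    return 0.5 * (jd ** 2).sum().item()

scores = torch.tensor([gn_score(i) for i in range(n_gates)])
overshoot = torch.argsort(scores)[: 2 * budget]      # high-recall overshoot set

# Stage (ii): refinement. Re-rank the overshoot set with exact
# finite-difference evaluations: actually tie each candidate gate and
# measure the true distortion, correcting screening errors.
def fd_distortion(i):
    g = gates.clone()
    g[i] = torch.round(g[i])                         # tie gate i to 0 or 1
    return distortion(g).item()

measured = {int(i): fd_distortion(int(i)) for i in overshoot}
tied = sorted(measured, key=measured.get)[:budget]
print("gates tied to constants:", tied)
```

In this linear toy the Gauss–Newton model is exact, so refinement changes nothing; in an actual gate network the logits are nonlinear in the relaxed gate outputs and tied candidates interact, which is the error the refinement stage is meant to correct.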