Activation with Intrinsic-Extrinsic Consensus
Abstract
Artificial Neural Networks (ANNs) are powerful tools for complex decision-making tasks. While existing activation mechanisms often promote sparsity through thresholding, they lack explicit awareness of feature-channel relevance, leaving networks continually exposed to interference from noisy channels. Such irrelevant activation signals can propagate through the network and adversely affect the final decision. Inspired by the observations that channel relevance is reflected both in intrinsic activity levels and in extrinsic decision weights, and that these two aspects exhibit strong consensus, we propose AIEC (Activation with Intrinsic-Extrinsic Consensus), a novel activation mechanism that identifies and suppresses irrelevant feature channels during training. Built on a basic threshold activation, AIEC combines an intrinsic Activation-Counting Unit that tracks channel activation statistics, an extrinsic Decision-Making Unit that learns channel decision weights, and a Consensus Gatekeeping Unit that suppresses irrelevant channels based on the agreement between the intrinsic and extrinsic relevance assessments. Extensive experiments demonstrate that AIEC effectively suppresses irrelevant channels and encourages sparser representations. Furthermore, AIEC is compatible with a wide range of mainstream ANN architectures and outperforms existing activation mechanisms across multiple tasks and domains.
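The three units described above can be illustrated with a minimal NumPy sketch. Everything here is an assumption for illustration only: the variable names (`activation_counts`, `decision_weights`), the batch-wise counting rule, and the 0.2 relevance cutoff are placeholders, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels = 8
threshold = 0.5  # basic threshold activation (assumed value)

# Intrinsic Activation-Counting Unit: running count of how often each
# channel's pre-activation exceeds the threshold across a batch.
activation_counts = np.zeros(n_channels)
batch = rng.normal(size=(100, n_channels))
activation_counts += (batch > threshold).sum(axis=0)

# Extrinsic Decision-Making Unit: per-channel decision weights.
# In a real network these are learned; here they are random stand-ins.
decision_weights = np.abs(rng.normal(size=n_channels))

# Normalize both relevance signals to [0, 1] for comparison.
intrinsic = activation_counts / max(activation_counts.max(), 1.0)
extrinsic = decision_weights / decision_weights.max()

# Consensus Gatekeeping Unit: suppress a channel only when BOTH the
# intrinsic and extrinsic assessments agree that it is irrelevant.
irrelevant = (intrinsic < 0.2) & (extrinsic < 0.2)
gate = np.where(irrelevant, 0.0, 1.0)

# Gated threshold activation applied to a new input.
x = rng.normal(size=n_channels)
out = np.maximum(x - threshold, 0.0) * gate
```

The key design choice the sketch captures is the conjunction in the gate: a channel is suppressed only when the intrinsic and extrinsic assessments agree, so disagreement between the two signals leaves the channel active.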