Automated Neuron Labels For Protein Language Models Enable Generative Steering
Abstract
Protein language models (PLMs) encode rich biological information, yet their internal neuron representations are poorly understood. We introduce the first automated framework for labeling every neuron in a PLM with concise, biologically grounded natural language descriptions. Unlike prior approaches that rely on sparse autoencoders or manual annotation, our method scales to hundreds of thousands of neurons and reveals that individual neurons are selectively sensitive to diverse biochemical properties and structural features, including charge, GRAVY score, secondary structure, and canonical domains. Building on this interpretability, we develop a novel activation-guided steering method that directly manipulates neuron activations to generate proteins with desired traits. This approach enables rapid convergence to target biochemical properties such as hydrophobicity and instability, and successfully produces structural motifs such as zinc fingers.
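The activation-guided steering described above can be sketched as follows. This is a minimal illustration of the general idea (interpolating selected neuron activations toward a trait-associated target value during generation), not the authors' implementation; the neuron indices, target value, and interpolation strength below are hypothetical.

```python
import numpy as np

def steer_activations(hidden, neuron_ids, target_value, alpha=1.0):
    """Push selected neuron activations toward a target value.

    hidden:       (seq_len, d_model) activations from one layer
    neuron_ids:   indices of neurons whose label matches the desired trait
    target_value: activation level associated with the trait (hypothetical)
    alpha:        interpolation strength in [0, 1]
    """
    steered = hidden.copy()
    steered[:, neuron_ids] = (
        (1 - alpha) * steered[:, neuron_ids] + alpha * target_value
    )
    return steered

# Toy example: amplify two hypothetical "hydrophobicity" neurons
# in a random activation matrix.
rng = np.random.default_rng(0)
h = rng.normal(size=(5, 16))
out = steer_activations(h, neuron_ids=[3, 7], target_value=2.5, alpha=0.8)
```

In practice such an intervention would run inside the model's forward pass (e.g. via a layer hook) at every decoding step, so that sampled sequences drift toward the labeled trait.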