Newton-coupled Dual-Teacher Semi-supervised Learning Framework
Abstract
Most semi-supervised learning frameworks rely on a single teacher that transfers zeroth-order supervision through pseudo-labels, constraining the student to imitate categorical outputs without access to the local loss geometry. This design often leads to unstable optimization and limited generalization when labels are scarce. We propose TTN (Two-Teachers Newton-guided Learning), a dual-teacher framework that integrates complementary supervision from MAE and DINOv3 and optimizes the student through a Newton-step update. The two teachers provide multi-scale structural and semantic cues; their pseudo-labels and local Hessians are fused by confidence weighting into a unified second-order supervision signal. The student then updates its parameters with the fused curvature as a preconditioner, enabling stable convergence and geometry-consistent learning. TTN consistently outperforms existing single-teacher and consistency-based semi-supervised methods on ImageNet, CIFAR-10, SVHN, and STL-10, demonstrating that combining multi-view self-supervised teachers with curvature-guided optimization yields robust and efficient semi-supervised learning.
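The fusion-and-update mechanism described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the authors' implementation: it assumes each teacher exposes a soft pseudo-label and a diagonal Hessian approximation, that confidence is the maximum class probability, and that the student takes a damped diagonal Newton step; all function names and values are illustrative.

```python
import numpy as np

def fuse_teachers(p_a, p_b, h_a, h_b):
    """Confidence-weighted fusion of two teachers' soft pseudo-labels
    and diagonal curvature estimates (confidence = max class probability)."""
    w_a, w_b = p_a.max(), p_b.max()
    z = w_a + w_b
    w_a, w_b = w_a / z, w_b / z
    p = w_a * p_a + w_b * p_b        # fused pseudo-label (still a distribution)
    h = w_a * h_a + w_b * h_b        # fused diagonal curvature
    return p, h

def newton_step(theta, grad, h, lr=1.0, eps=1e-6):
    """Damped Newton update: precondition the gradient by the fused
    diagonal curvature instead of taking a plain first-order step."""
    return theta - lr * grad / (h + eps)

if __name__ == "__main__":
    p_a = np.array([0.7, 0.2, 0.1])   # e.g. an MAE-derived teacher head
    p_b = np.array([0.6, 0.3, 0.1])   # e.g. a DINOv3-derived teacher head
    h_a = np.array([2.0, 1.0, 0.5])   # per-parameter curvature estimates
    h_b = np.array([1.0, 1.0, 1.0])
    p, h = fuse_teachers(p_a, p_b, h_a, h_b)
    theta = np.zeros(3)
    grad = theta - p                  # gradient of 0.5 * ||theta - p||^2
    theta = newton_step(theta, grad, h)
    print(p.round(3), h.round(3), theta.round(3))
```

In this toy quadratic, directions with higher fused curvature receive proportionally smaller steps, which is the stabilizing effect the curvature-guided update is meant to provide.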