Coupled Training with Privileged Features and Unlabeled Data
Abstract
In many prediction problems, extra information is available during training (for example, measurements that are expensive or slow to collect) but will not be available when the model is deployed. A common strategy is to first train a model on all of the training information, then use its predictions on unlabeled examples to train a second model that uses only the inputs available at test time. However, when the extra training-only information is weak or noisy, this two-step approach can mislead the deployable model and even hurt its accuracy. We propose a joint training method that learns the two models together, so that the deployable model benefits from the extra information only when it actually helps, rather than inheriting the first model's mistakes. We provide guarantees describing when joint training improves prediction accuracy, and we analyze a simple alternating training algorithm for large, high-dimensional models. Experiments on synthetic data and medical prediction tasks show that our approach avoids these failure modes and consistently outperforms standard two-step baselines.
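To make the setup concrete, the following is a minimal NumPy sketch contrasting the two-step baseline with one plausible coupled alternating scheme, using logistic models on synthetic data. The specific coupling (a symmetric consistency penalty on the unlabeled pool with weight lam), the loss weights, and all names are illustrative assumptions, not the paper's exact objective or algorithm.

```python
# Two-step privileged-feature baseline vs. a coupled alternating scheme.
# Everything here is a hedged sketch: the consistency coupling and all
# hyperparameters are illustrative assumptions, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    z = np.clip(z, -30.0, 30.0)  # avoid overflow in exp
    return 1.0 / (1.0 + np.exp(-z))

def grad_logloss(w, X, targets):
    # Gradient of the average logistic loss of w on (X, targets);
    # targets may be hard 0/1 labels or soft pseudo-labels.
    p = sigmoid(X @ w)
    return X.T @ (p - targets) / len(targets)

# Synthetic data: x is available at test time, s is privileged (train-only).
n_lab, n_unlab, d = 200, 2000, 10
X_lab = rng.normal(size=(n_lab, d))
X_unlab = rng.normal(size=(n_unlab, d))
w_true = rng.normal(size=d)
# Privileged feature: a noisy view of the true margin.
S_lab = (X_lab @ w_true + rng.normal(scale=2.0, size=n_lab))[:, None]
S_unlab = (X_unlab @ w_true + rng.normal(scale=2.0, size=n_unlab))[:, None]
y_lab = (rng.random(n_lab) < sigmoid(X_lab @ w_true)).astype(float)

XS_lab = np.hstack([X_lab, S_lab])      # teacher inputs: x plus s
XS_unlab = np.hstack([X_unlab, S_unlab])

def two_step(steps=2000, lr=0.5):
    # Step 1: train the teacher on labeled data with all features.
    w_t = np.zeros(d + 1)
    for _ in range(steps):
        w_t -= lr * grad_logloss(w_t, XS_lab, y_lab)
    # Step 2: fit the x-only student to the teacher's pseudo-labels
    # on the unlabeled pool; teacher errors are inherited wholesale.
    pseudo = sigmoid(XS_unlab @ w_t)
    w_s = np.zeros(d)
    for _ in range(steps):
        w_s -= lr * grad_logloss(w_s, X_unlab, pseudo)
    return w_s

def coupled(steps=2000, lr=0.5, lam=0.5):
    # Alternating updates: each model fits the labels directly, and the
    # two are tied by a consistency term on unlabeled data, so the
    # student leans on the teacher only insofar as both match the labels.
    w_t, w_s = np.zeros(d + 1), np.zeros(d)
    for _ in range(steps):
        # Teacher step: labeled loss + consistency with current student.
        w_t -= lr * (grad_logloss(w_t, XS_lab, y_lab)
                     + lam * grad_logloss(w_t, XS_unlab, sigmoid(X_unlab @ w_s)))
        # Student step: labeled loss + consistency with current teacher.
        w_s -= lr * (grad_logloss(w_s, X_lab, y_lab)
                     + lam * grad_logloss(w_s, X_unlab, sigmoid(XS_unlab @ w_t)))
    return w_s

# Compare the two deployable (x-only) students on fresh test data.
X_test = rng.normal(size=(5000, d))
y_test = (X_test @ w_true > 0).astype(float)
for name, w in [("two-step", two_step()), ("coupled", coupled())]:
    acc = np.mean((X_test @ w > 0) == y_test)
    print(f"{name} student accuracy: {acc:.3f}")
```

Under this (assumed) formulation, setting lam = 0 recovers purely supervised students, while large lam approaches the two-step behavior of trusting the teacher's pseudo-labels; the alternating structure mirrors the kind of simple alternating algorithm the abstract refers to, without claiming to reproduce its guarantees.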