

Contributed Talk in Workshop: Challenges in Deploying and Monitoring Machine Learning Systems

MLDemon: Deployment Monitoring for Machine Learning Systems

Tony Ginart


Abstract:

Post-deployment monitoring of the performance of ML systems is critical for ensuring reliability, especially as new user inputs can differ from the training distribution. Here we propose a novel approach, MLDemon, for ML DEployment MONitoring. MLDemon integrates both unlabeled features and a small number of labeled examples that arrive over time to produce a real-time estimate of the ML model's current performance. Subject to budget constraints, MLDemon decides when to acquire additional, potentially costly, labels to verify the model. On temporal datasets with diverse distribution drifts and models, MLDemon substantially outperforms existing monitoring approaches. Moreover, we provide theoretical analysis to show that MLDemon is minimax rate optimal up to logarithmic factors and is provably robust against broad distribution drifts, whereas prior approaches are not.
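The setting the abstract describes can be pictured as a streaming loop: a deployed model makes predictions on incoming inputs, and the monitor occasionally spends part of a labeling budget to obtain ground truth and update a real-time accuracy estimate. The sketch below is a minimal illustration of that setting only, not the MLDemon policy itself; `predict_fn`, `label_oracle`, and the fixed random `query_prob` are hypothetical placeholders, whereas MLDemon decides adaptively when to query labels.

```python
import random

def monitor_stream(predict_fn, stream, label_oracle, label_budget, query_prob=0.05):
    """Toy deployment-monitoring loop: maintain a running accuracy estimate
    for a deployed model by occasionally requesting ground-truth labels,
    subject to a total labeling budget.

    Illustrative sketch only, NOT the MLDemon algorithm: MLDemon chooses when
    to query adaptively from the unlabeled feature stream, while this
    placeholder queries at a fixed random rate.
    """
    n_correct = 0
    n_labeled = 0
    estimates = []
    for t, x in enumerate(stream):
        y_hat = predict_fn(x)  # deployed model's prediction on the new input
        # Decide whether to spend one of the costly labels on this point.
        if n_labeled < label_budget and random.random() < query_prob:
            y_true = label_oracle(t)        # costly on-demand ground truth
            n_labeled += 1
            n_correct += int(y_hat == y_true)
        # Real-time performance estimate from the labels gathered so far.
        estimates.append(n_correct / n_labeled if n_labeled else None)
    return estimates
```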

Authors: Tony Ginart (Stanford University), Martin Zhang (Harvard School of Public Health), James Zou (Stanford University)