Fri Jul 17 05:00 AM -- 01:35 PM (PDT)
Challenges in Deploying and Monitoring Machine Learning Systems
Alessandra Tosi · Nathan Korda · Neil Lawrence

Workshop Home Page

Until recently, Machine Learning was applied in industry mostly by consulting academics, by data scientists within larger companies, and by a small number of dedicated Machine Learning research labs at a few of the world's most innovative tech companies. Over the last few years we have seen a dramatic rise in companies providing Machine Learning software-as-a-service tools, with the aim of democratizing access to the benefits of Machine Learning. All of these efforts have revealed major hurdles to ensuring the continual delivery of good performance from deployed Machine Learning systems. These hurdles range from challenges in MLOps, to fundamental problems with deploying certain algorithms, to the legal and ethical issues surrounding letting algorithms make decisions for your business.

This workshop invites papers on the challenges in deploying and monitoring ML systems. It encourages submissions on: MLOps for deployed ML systems (such as testing, debugging, and monitoring ML systems and models, and deploying ML at scale); the ethics of deploying ML systems (such as ensuring fairness, trust, and transparency, and providing privacy and security); useful tools and programming languages for deploying ML systems; specific challenges of deploying reinforcement learning, and of performing continual learning and continual delivery, in ML systems; and, finally, data challenges for deployed ML systems.

Opening remarks (Talk)
Deploying Machine Learning Models in a Developing Country (Invited talk)
System-wide Monitoring Architectures with Explanations (Invited talk)
First Break (Break)
Bridging the gap between research and production in machine learning (Invited talk)
Monitoring and explainability of models in production (Contributed talk)
Gradient-Based Monitoring of Learning Machines (Contributed talk)
Not Your Grandfather's Test Set: Reducing Labeling Effort for Testing (Contributed talk)
Carbontracker: Tracking and Predicting the Carbon Footprint of Training Deep Learning Models (Contributed talk)
Serverless inferencing on Kubernetes (Contributed talk)
Do You Sign Your Model? (Contributed talk)
PareCO: Pareto-aware Channel Optimization for Slimmable Neural Networks (Contributed talk)
Technology Readiness Levels for Machine Learning Systems (Contributed talk)
Poster session
Second Break (Break)
Open Problems Panel (Panel)
Third Break (Break)
Conservative Exploration in Bandits and Reinforcement Learning (Invited talk)
Successful Data Science in Production Systems: It’s All About Assumptions (Invited talk)
Panel discussion (Panel)