

Contributed talk in Workshop: Challenges in Deploying and Monitoring Machine Learning Systems

Serverless inferencing on Kubernetes

Clive Cox


Abstract:

Organisations are increasingly putting machine learning models into production at scale. The growing popularity of serverless scale-to-zero paradigms presents an opportunity for deploying machine learning models while mitigating infrastructure costs when many models are not in continuous use. We will discuss the KFServing project, which builds on the Knative serverless paradigm to provide a serverless machine learning inference solution with a consistent and simple interface for data scientists to deploy their models. We will show how it solves the challenges of autoscaling GPU-based inference and discuss some of the lessons learnt from using it in production.
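To illustrate the kind of interface described above, here is a minimal sketch of a KFServing InferenceService manifest, assuming the v1alpha2 API that was current around the time of this talk; the model name and storage URI are illustrative placeholders, and `minReplicas: 0` is the setting that lets Knative scale the service to zero between requests:

```yaml
# Hypothetical example: deploy a scikit-learn model with KFServing.
# The storageUri below is a placeholder, not a guaranteed-to-exist bucket.
apiVersion: serving.kubeflow.org/v1alpha2
kind: InferenceService
metadata:
  name: sklearn-example
spec:
  default:
    predictor:
      minReplicas: 0          # allow scale-to-zero when idle
      sklearn:
        storageUri: "gs://example-bucket/models/sklearn/example"
```

Applying such a manifest with `kubectl apply -f` would create a versioned, autoscaled inference endpoint without the data scientist writing any serving code; the same pattern extends to other frameworks and to GPU-backed predictors by adding resource requests.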
