

Poster in Workshop: Localized Learning: Decentralized Model Updates via Non-Global Objectives

Towards Modular Machine Learning Pipelines

Aditya Modi · Jivat Neet Kaur · Maggie Makar · Pavan Mallapragada · Amit Sharma · Emre Kiciman · Adith Swaminathan

Keywords: [ causal ] [ consistent ] [ modularity ] [ ML components ] [ pipeline ] [ coupling ] [ regularizer ] [ independently trainable ]


Abstract:

Pipelines of Machine Learning (ML) components are a popular and effective approach to divide and conquer many business-critical problems. A pipeline architecture implies a specific division of the overall problem; however, current ML training approaches do not enforce this implied division. Consequently, ML components can become coupled to one another after they are trained, which causes insidious effects. For instance, even when one coupled ML component in a pipeline is improved in isolation, the end-to-end pipeline performance can degrade. In this paper, we develop a conceptual framework to study ML coupling in pipelines and design new modularity regularizers that can eliminate coupling during ML training. We show that the resulting ML pipelines become modular (i.e., their components can be trained independently of one another) and discuss the tradeoffs of our approach versus existing approaches to pipeline optimization.
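To make the idea of a modularity regularizer concrete, here is a minimal sketch (not the authors' method, which the abstract does not detail): a two-component pipeline f → g is trained end to end, plus a hypothetical penalty that anchors f's output to a declared intermediate interface z, so g can later be retrained or swapped against z independently of f. All shapes, names, and the specific penalty form are illustrative assumptions.

```python
# Illustrative sketch of a two-component pipeline with a "modularity"-style
# penalty. The penalty ties the upstream component's output to a declared
# intermediate target, so the downstream component can be trained against
# that interface alone. Names and the penalty form are assumptions, not the
# paper's actual regularizer.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data: x -> intermediate quantity z -> final label y
x = torch.randn(256, 10)
z = x @ torch.randn(10, 4)                       # "true" intermediate quantity
y = (z.sum(dim=1, keepdim=True) > 0).float()

f = nn.Linear(10, 4)                             # upstream component
g = nn.Linear(4, 1)                              # downstream component
opt = torch.optim.Adam(list(f.parameters()) + list(g.parameters()), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
lam = 1.0                                        # penalty strength (assumed)

for step in range(500):
    opt.zero_grad()
    h = f(x)                                     # intermediate representation
    task_loss = bce(g(h), y)                     # end-to-end objective
    modularity_penalty = ((h - z) ** 2).mean()   # keep f on its declared interface
    (task_loss + lam * modularity_penalty).backward()
    opt.step()

# Because f is anchored to the interface z, a replacement downstream component
# can be trained against z alone, without touching f: the components decouple.
g_new = nn.Linear(4, 1)
opt_g = torch.optim.Adam(g_new.parameters(), lr=1e-2)
for step in range(500):
    opt_g.zero_grad()
    bce(g_new(z), y).backward()
    opt_g.step()
```

Without the penalty term, f is free to drift to whatever representation happens to suit the current g, which is exactly the coupling the abstract describes: improving g in isolation can then hurt the end-to-end pipeline.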
