

Poster

Optimizing Complex Machine Learning Systems with Black-box and Differentiable Components

Zhiliang Chen · Chuan-Sheng Foo · Bryan Kian Hsiang Low


Abstract:

Machine learning (ML) models in the real world typically do not exist in isolation. They are usually part of a complex system (e.g., healthcare systems, self-driving cars) containing multiple ML and black-box components. Unfortunately, optimizing system performance, which requires jointly training all ML components, presents a significant challenge because the number of system parameters is extremely high and the system has no analytical form. To circumvent this, we introduce A-BAD-BO, a novel algorithm that uses each ML component's local loss as an auxiliary indicator of system performance. A-BAD-BO uses Bayesian optimization (BO) to optimize the local loss configuration of a system in a lower-dimensional space and exploits the differentiable structure of the ML components to recover optimal system parameters from the optimized configuration. We prove that A-BAD-BO converges to optimal system parameters by showing that it is asymptotically no-regret. We use A-BAD-BO to optimize several synthetic and real-world complex systems, including a prompt engineering pipeline for large language models containing millions of system parameters. Our results demonstrate that A-BAD-BO yields better system optimality than gradient-driven baselines and is more sample-efficient than pure BO algorithms.
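The sketch below illustrates the high-level idea as described in this abstract only: BO searches over a low-dimensional configuration of per-component local-loss targets, and each differentiable component is then trained by gradient descent to hit its proposed target before the black-box system metric is queried. The toy two-component system, the linear components, the GP-UCB acquisition, and all function names and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the A-BAD-BO idea (assumptions noted above).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

# Toy "complex system": two differentiable linear components feeding a
# black-box scoring function we can only query, not differentiate.
X1, y1 = rng.normal(size=(100, 5)), rng.normal(size=100)  # data for component 1
X2, y2 = rng.normal(size=(100, 3)), rng.normal(size=100)  # data for component 2

def local_loss(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

def train_to_target(w, X, y, target, lr=0.05, steps=500):
    """Gradient descent on a component's local loss, stopping once the
    proposed target local-loss value is (approximately) reached."""
    for _ in range(steps):
        if local_loss(w, X, y) <= target:
            break
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def black_box_system_score(w1, w2):
    """Stand-in for the non-analytical system metric (queried, never differentiated)."""
    return -(local_loss(w1, X1, y1) + 0.5 * local_loss(w2, X2, y2)
             + 0.1 * np.sin(5 * w1[0]))  # higher is better

# BO over the 2-D space of local-loss targets (one per component),
# instead of over the full system parameter space.
bounds = np.array([[0.2, 2.0], [0.2, 2.0]])
targets_seen, scores_seen = [], []

def propose_targets():
    """GP-UCB acquisition over random candidate target configurations."""
    cands = rng.uniform(bounds[:, 0], bounds[:, 1], size=(256, 2))
    if len(targets_seen) < 3:                      # warm-up with random picks
        return cands[0]
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(np.array(targets_seen), np.array(scores_seen))
    mu, sigma = gp.predict(cands, return_std=True)
    return cands[np.argmax(mu + 2.0 * sigma)]      # UCB rule

best_score = -np.inf
for it in range(20):
    t1, t2 = propose_targets()
    # Recover parameters from the proposed local-loss configuration by
    # exploiting each component's differentiable structure.
    w1 = train_to_target(np.zeros(5), X1, y1, t1)
    w2 = train_to_target(np.zeros(3), X2, y2, t2)
    score = black_box_system_score(w1, w2)
    targets_seen.append([t1, t2]); scores_seen.append(score)
    best_score = max(best_score, score)
    print(f"iter {it:2d}  targets=({t1:.2f},{t2:.2f})  system score={score:.3f}")

print(f"best system score found: {best_score:.3f}")
```

Because the BO search space here is the number of components rather than the number of parameters, it stays low-dimensional even when each component has many parameters, which is the sample-efficiency argument the abstract makes against pure BO over raw parameters.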
