
Poster
in
Workshop: Differentiable Almost Everything: Differentiable Relaxations, Algorithms, Operators, and Simulators

TaskMet: Task-Driven Metric Learning for Model Learning

Dishank Bansal · Ricky T. Q. Chen · Mustafa Mukadam · Brandon Amos


Abstract:

Deep learning models are often trained for use in some downstream task. Models trained solely for prediction accuracy may nevertheless perform poorly on the desired downstream task. We propose using the task's loss to learn a metric that parameterizes the loss used to train the prediction model. This approach does not alter the optimal prediction model itself; rather, it shapes model learning to emphasize the information that matters for the downstream task. This gives us the best of both worlds: a prediction model trained in the original prediction space that is also valuable for the desired downstream task. We validate our approach through experiments in two main settings: 1) decision-focused model learning, with portfolio optimization and budget allocation, and 2) reinforcement learning in noisy environments with distracting states.
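The bilevel structure described in the abstract can be sketched in a toy example. The setup below is entirely hypothetical (a linear model, a diagonal metric over output dimensions, and a task that only uses the first output coordinate) and is not the authors' implementation; it only illustrates the idea of updating a metric through the downstream task loss while the model itself is still trained on the prediction loss.

```python
import torch

torch.manual_seed(0)

# Hypothetical data: 2-D targets, but the downstream task only
# uses the first output coordinate.
X = torch.randn(128, 3)
W_true = torch.randn(3, 2)
Y = X @ W_true + 0.1 * torch.randn(128, 2)

def task_loss(pred):
    # Toy downstream task: only output dimension 0 matters.
    return ((pred[:, 0] - Y[:, 0]) ** 2).mean()

# Learnable diagonal metric over output dimensions (log-parameterized
# so the weights stay positive).
log_m = torch.zeros(2, requires_grad=True)

def train_model(log_m, steps=15, lr=0.1):
    # Inner loop: train the prediction model under the metric-weighted
    # prediction loss, keeping the unrolled gradient-descent updates
    # differentiable with respect to log_m.
    W = torch.zeros(3, 2, requires_grad=True)
    m = torch.exp(log_m)
    for _ in range(steps):
        loss = ((X @ W - Y) ** 2 * m).mean()
        (g,) = torch.autograd.grad(loss, W, create_graph=True)
        W = W - lr * g
    return W

# Outer loop: update the metric so that the model it induces
# performs well on the downstream task.
opt = torch.optim.Adam([log_m], lr=0.1)
for _ in range(20):
    W = train_model(log_m)
    loss = task_loss(X @ W)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The learned metric upweights the task-relevant output dimension.
metric = torch.exp(log_m).detach()
```

Here the inner training loop is unrolled and differentiated directly, which is one simple way to realize the bilevel problem; the key property from the abstract is preserved: the model is trained in the original prediction space, and only the metric is driven by the task loss.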
