PromptBoosting: Black-Box Text Classification with Ten Forward Passes
Bairu Hou · Joe O'Connor · Jacob Andreas · Shiyu Chang · Yang Zhang

Tue Jul 25 05:00 PM -- 06:30 PM (PDT) @ Exhibit Hall 1 #828

We describe PromptBoosting, a query-efficient procedure for building a text classifier from a neural language model (LM) without access to the LM's parameters, gradients, or hidden representations. This form of "black-box" classifier training has become increasingly important as the cost of training and inference in large-scale LMs has grown. But existing black-box LM classifier learning approaches are themselves computationally inefficient, typically specializing LMs to the target task by searching in a large space of (discrete or continuous) prompts using zeroth-order optimization methods. Instead of directly optimizing in prompt space, PromptBoosting obtains a small pool of prompts via a gradient-free approach and then constructs a large pool of weak learners by pairing these prompts with different elements of the LM's output distribution. These weak learners are then ensembled using the AdaBoost algorithm. The entire learning process requires only a small number of forward passes and no backward pass. Experiments show that PromptBoosting achieves state-of-the-art performance in multiple black-box few-shot classification tasks, and matches or outperforms full fine-tuning in both few-shot and standard learning paradigms, while training 10x faster than existing black-box methods.
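The boosting step described above can be sketched in a few lines. Below is a minimal, hedged illustration of the AdaBoost loop over a pool of pre-computed weak-learner predictions; the array shapes, function names, and toy data are illustrative assumptions, not the authors' implementation. In PromptBoosting, each row of `weak_preds` would come from one (prompt, verbalizer) pair read off the LM's output distribution.

```python
import numpy as np

def adaboost(weak_preds, y, rounds):
    """Ensemble pre-computed weak-learner predictions with AdaBoost.

    weak_preds: (L, N) array of +/-1 predictions from L weak learners
                (illustratively, one row per prompt/verbalizer pair).
    y: (N,) array of +/-1 labels.
    Returns a list of (learner_index, alpha) pairs.
    """
    L, N = weak_preds.shape
    w = np.full(N, 1.0 / N)            # per-example weights
    ensemble = []
    for _ in range(rounds):
        errs = (weak_preds != y) @ w   # weighted error of each learner
        t = int(np.argmin(errs))
        err = max(errs[t], 1e-10)
        if err >= 0.5:                 # no learner beats chance; stop
            break
        alpha = 0.5 * np.log((1 - err) / err)
        ensemble.append((t, alpha))
        # up-weight the examples this learner got wrong, then renormalize
        w *= np.exp(-alpha * y * weak_preds[t])
        w /= w.sum()
    return ensemble

def predict(ensemble, weak_preds):
    """Weighted vote of the selected weak learners."""
    score = sum(alpha * weak_preds[t] for t, alpha in ensemble)
    return np.sign(score)

# Toy example: three learners, each wrong on a different example.
weak = np.array([[-1, 1, 1],
                 [ 1, -1, 1],
                 [ 1, 1, -1]])
y = np.array([1, 1, 1])
ens = adaboost(weak, y, rounds=3)
```

Note that training only consumes the cached prediction matrix, which is why the procedure needs forward passes solely to populate `weak_preds` and no backward passes at all.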

Author Information

Bairu Hou (University of California, Santa Barbara)
Joe O'Connor (Massachusetts Institute of Technology)
Jacob Andreas (Massachusetts Institute of Technology)
Shiyu Chang (University of California, Santa Barbara)
Yang Zhang (MIT-IBM Watson AI Lab)