Meta-learning Hyperparameter Performance Prediction with Neural Processes
Ying WEI · Peilin Zhao · Junzhou Huang

Wed Jul 21 05:45 AM -- 05:50 AM (PDT)

The surrogate that predicts the performance of hyperparameters has been a key component of sequential model-based hyperparameter optimization. In practical applications, a trial of a hyperparameter configuration may be so costly that a surrogate is expected to return an optimal configuration with as few trials as possible. Observing that human experts draw on their expertise with a machine learning model by trying configurations that once performed well on other datasets, we are inspired to build a trial-efficient surrogate by transferring the meta-knowledge learned from historical trials on other datasets. We propose an end-to-end surrogate named Transfer Neural Processes (TNP) that learns a comprehensive set of meta-knowledge, including the parameters of historical surrogates, historical trials, and initial configurations for other datasets. Experiments on extensive OpenML datasets and three computer vision datasets demonstrate that the proposed algorithm achieves state-of-the-art performance with at least one order of magnitude fewer trials.
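To make the role of the surrogate concrete, the loop below is a minimal, hypothetical sketch of sequential model-based optimization: a surrogate scores candidate configurations, and only the most promising one is actually evaluated each round. The toy nearest-neighbor surrogate, the `objective` function, and all names here are illustrative stand-ins; they are not the TNP model, which instead learns a neural-process surrogate warm-started with meta-knowledge transferred from trials on other datasets.

```python
import random

def objective(lr):
    # Hypothetical "validation accuracy" of a model trained with
    # learning rate lr; a real trial would train and evaluate a model.
    return 1.0 - (lr - 0.1) ** 2

def surrogate_predict(trials, candidate):
    # Toy surrogate: predict a candidate's performance from the nearest
    # previously observed trial. A learned model (e.g. a Gaussian process,
    # or TNP's neural process) would take this role in practice.
    nearest = min(trials, key=lambda t: abs(t[0] - candidate))
    return nearest[1]

def smbo(n_init=3, n_iters=10, seed=0):
    rng = random.Random(seed)
    # Warm-start trials drawn at random; TNP would instead transfer
    # initial configurations and surrogate parameters from other datasets.
    trials = [(lr, objective(lr))
              for lr in (rng.uniform(0.0, 1.0) for _ in range(n_init))]
    for _ in range(n_iters):
        candidates = [rng.uniform(0.0, 1.0) for _ in range(50)]
        # Evaluate only the candidate the surrogate scores highest,
        # keeping the number of costly trials small.
        best = max(candidates, key=lambda c: surrogate_predict(trials, c))
        trials.append((best, objective(best)))
    return max(trials, key=lambda t: t[1])
```

Each iteration spends one real trial; the trial efficiency the abstract targets comes from making the surrogate's ranking of candidates accurate with as few such trials as possible.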

Author Information

Ying WEI (City University of Hong Kong)
Peilin Zhao (Tencent AI Lab)
Junzhou Huang (University of Texas at Arlington / Tencent AI Lab)
