Fast Certified Robust Training with Short Warmup
Zhouxing Shi · Yihan Wang · Huan Zhang · Jinfeng Yi · Cho-Jui Hsieh

State-of-the-art (SOTA) methods for certified robust training, including interval bound propagation (IBP) and CROWN-IBP, usually rely on a long warmup schedule with hundreds or thousands of epochs and are thus costly. In this paper, we identify two important issues, namely exploded bounds at initialization and an imbalance in ReLU activation states, which make certified training difficult and unstable and previously required a long warmup. For fast training with a short warmup, we propose three improvements: a weight initialization for IBP training, fully adding Batch Normalization (BN), and regularization during warmup to tighten certified bounds and balance ReLU activation states. With only a short warmup for fast training, we already outperform literature SOTA models trained with hundreds or thousands of epochs under the same network architecture.
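For readers unfamiliar with the bound computation that certified training builds on, below is a minimal sketch of interval bound propagation (IBP) through a small fully-connected ReLU network. This is an illustrative reimplementation rather than the authors' code; the network sizes, the perturbation radius `eps`, and the helper name `ibp_bounds` are arbitrary choices made for this example.

```python
import torch
import torch.nn as nn

def ibp_bounds(layers, x, eps):
    """Propagate the input interval [x - eps, x + eps] through linear/ReLU layers."""
    lb, ub = x - eps, x + eps
    for layer in layers:
        if isinstance(layer, nn.Linear):
            # For an affine layer, propagate the interval via its center and radius:
            # new_center = W c + b, new_radius = |W| r.
            center = (ub + lb) / 2
            radius = (ub - lb) / 2
            new_center = center @ layer.weight.t() + layer.bias
            new_radius = radius @ layer.weight.abs().t()
            lb, ub = new_center - new_radius, new_center + new_radius
        elif isinstance(layer, nn.ReLU):
            # ReLU is monotone, so it can be applied to both bounds directly.
            lb, ub = lb.clamp(min=0), ub.clamp(min=0)
    return lb, ub

# Tiny example network and a random input point.
net = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
x = torch.rand(1, 784)
lb, ub = ibp_bounds(list(net), x, eps=0.1)
print(lb.shape, ub.shape)  # lower/upper bounds on the 10 output logits
```

Certified training minimizes a loss computed from such output bounds, and the warmup schedule the paper targets gradually increases `eps` from 0 to its target value during the early epochs.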

Author Information

Zhouxing Shi (University of California, Los Angeles)
Yihan Wang (UCLA)
Huan Zhang (UCLA)
Jinfeng Yi (JD AI Research)
Cho-Jui Hsieh (UCLA)
