
Lower Bounds on Cross-Entropy Loss in the Presence of Test-time Adversaries
Arjun Nitin Bhagoji · Daniel Cullina · Vikash Sehwag · Prateek Mittal

Thu Jul 22 07:25 AM -- 07:30 AM (PDT)

Understanding the fundamental limits of robust supervised learning has emerged as a problem of immense interest, from both practical and theoretical standpoints. In particular, it is critical to determine classifier-agnostic bounds on the training loss to establish when learning is possible. In this paper, we determine optimal lower bounds on the cross-entropy loss in the presence of test-time adversaries, along with the corresponding optimal classification outputs. Our formulation of the bound as the solution to an optimization problem is general enough to encompass any loss function that depends on soft classifier outputs. We also propose, and prove the correctness of, a bespoke algorithm to compute this lower bound efficiently, allowing us to determine lower bounds for multiple practical datasets of interest. We use our lower bounds as a diagnostic tool to assess the effectiveness of current robust training methods and find a gap from optimality at larger adversarial budgets. Finally, we investigate the possibility of using the optimal classification outputs as soft labels to empirically improve robust training.
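The abstract does not spell out the construction, but the core intuition can be illustrated with a toy sketch. When an adversary with budget eps can perturb test inputs, two differently-labeled training points whose eps-balls overlap can be driven to a common point, where any classifier must emit a single soft output; on such a pair, the summed cross-entropy is at least 2·log(2). Summing over vertex-disjoint conflicting pairs (a matching in the conflict graph) gives a valid, classifier-agnostic lower bound. This is a simplification, not the paper's algorithm (which solves the optimization exactly and yields a tighter, optimal bound); the function name cross_entropy_lower_bound and the binary L2 setting are assumptions for illustration.

# Hedged sketch of the conflict-graph intuition, NOT the authors' algorithm:
# binary labels, L2 adversary with budget eps.
import itertools
import math

import networkx as nx
import numpy as np


def cross_entropy_lower_bound(x, y, eps):
    """Toy average cross-entropy lower bound under an L2 test-time
    adversary with budget eps; labels y are in {0, 1}."""
    n = len(y)
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i, j in itertools.combinations(range(n), 2):
        # eps-balls around differently-labeled points intersect, so the
        # adversary can map both examples to a common perturbed point.
        if y[i] != y[j] and np.linalg.norm(x[i] - x[j]) <= 2 * eps:
            g.add_edge(i, j)
    # Edges in a matching are vertex-disjoint, so their loss
    # contributions add up.
    matching = nx.max_weight_matching(g, maxcardinality=True)
    # The best common soft output on a conflicting pair is (1/2, 1/2),
    # costing log(2) per example, i.e. 2*log(2) per matched pair.
    return 2 * math.log(2) * len(matching) / n


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.vstack([rng.normal(0.0, 1.0, (50, 2)),
                   rng.normal(1.5, 1.0, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)
    for eps in (0.0, 0.25, 0.5):
        print(eps, cross_entropy_lower_bound(x, y, eps))

As eps grows, more eps-balls overlap, the matching grows, and the bound rises toward log(2) per example, which matches the abstract's observation that the gap between robust training methods and the optimal loss appears at larger budgets.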

Author Information

Arjun Nitin Bhagoji (University of Chicago)
Daniel Cullina (Penn State University)
Vikash Sehwag (Princeton University)
Prateek Mittal (Princeton University)
