Conference papers

A PAC-Bayes Analysis of Adversarial Robustness

Abstract: We propose the first general PAC-Bayesian generalization bounds for adversarial robustness, which estimate, at test time, how invariant a model will be to imperceptible perturbations of the input. Instead of deriving a worst-case analysis of the risk of a hypothesis over all possible perturbations, we leverage the PAC-Bayesian framework to bound the risk averaged over the perturbations for majority votes (over the whole class of hypotheses). Our theoretically founded analysis has the advantage of providing general bounds that (i) are valid for any kind of attack, (ii) are tight thanks to the PAC-Bayesian framework, and (iii) can be directly minimized during the learning phase to obtain a model that is robust to different attacks at test time.
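The abstract's central contrast — bounding the risk *averaged* over perturbations of a majority vote, rather than the worst case over all perturbations — can be illustrated numerically. The following is a minimal sketch, not the paper's algorithm: the hypothesis class, posterior weights, perturbation model, and all function names are hypothetical choices made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a posterior "rho" over n_h linear classifiers;
# the majority vote weights their sign predictions by rho.
n_h, d = 50, 10
hypotheses = rng.normal(size=(n_h, d))   # one linear classifier per row
rho = np.full(n_h, 1.0 / n_h)            # uniform posterior weights

def majority_vote(X):
    # rho-weighted vote of sign(w . x) over the hypothesis class.
    votes = np.sign(X @ hypotheses.T)    # shape (n, n_h)
    return np.sign(votes @ rho)

def averaged_adversarial_risk(X, y, eps=0.1, n_pert=100):
    # Empirical counterpart of the averaged (not worst-case) adversarial
    # risk: the error of the majority vote, averaged over random
    # perturbations drawn uniformly in an eps-box around each input.
    errs = []
    for _ in range(n_pert):
        delta = rng.uniform(-eps, eps, size=X.shape)
        errs.append(np.mean(majority_vote(X + delta) != y))
    return float(np.mean(errs))

# Toy data labeled by a ground-truth linear rule.
X = rng.normal(size=(200, d))
w_star = rng.normal(size=d)
y = np.sign(X @ w_star)

risk = averaged_adversarial_risk(X, y)
print(risk)
```

A worst-case analysis would instead take, for each input, the maximum error over all perturbations in the ball; averaging over perturbations, as sketched here, is what makes the quantity amenable to PAC-Bayesian bounding and direct minimization during training.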
Contributor: Paul Viallard
Submitted on: Tuesday, October 26, 2021 - 8:26:05 PM
Last modification on: Monday, July 4, 2022 - 9:10:31 AM




  • HAL Id: hal-03145332, version 2
  • arXiv: 2102.11069


Paul Viallard, Guillaume Vidot, Amaury Habrard, Emilie Morvant. A PAC-Bayes Analysis of Adversarial Robustness. Thirty-fifth Conference on Neural Information Processing Systems (NeurIPS 2021), NIPS: Neural Information Processing Systems Foundation, Dec 2021, Virtual-only Conference, Australia. ⟨hal-03145332v2⟩


