Conference paper, 2020

Advocating for Multiple Defense Strategies against Adversarial Examples

Abstract

It has been empirically observed that defense mechanisms designed to protect neural networks against ℓ∞ adversarial examples offer poor performance against ℓ2 adversarial examples, and vice versa. In this paper, we conduct a geometrical analysis that validates this observation. We then provide a number of empirical insights to illustrate the effect of this phenomenon in practice, and review some of the existing defense mechanisms that attempt to defend against multiple attack types by mixing defense strategies. Based on our numerical experiments, we discuss the relevance of this approach and state open questions for the adversarial examples community.
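To make the "mixing defense strategies" idea concrete, below is a minimal PyTorch sketch of a randomized mixture of defended classifiers, e.g. one model adversarially trained against ℓ∞ attacks and another against ℓ2 attacks. The class name MixedDefense, the placeholder models and the uniform sampling weights are illustrative assumptions, not the authors' exact construction.

```python
import random
import torch
import torch.nn as nn

class MixedDefense(nn.Module):
    """Illustrative sketch: route each input batch to one randomly chosen
    defended model (e.g. an l_inf-robust and an l_2-robust classifier)."""

    def __init__(self, models, weights=None):
        super().__init__()
        self.models = nn.ModuleList(models)
        # Uniform mixture by default; weights are a free design choice.
        self.weights = weights or [1.0 / len(models)] * len(models)

    def forward(self, x):
        # Sample one defended classifier per forward pass.
        model = random.choices(self.models, weights=self.weights, k=1)[0]
        return model(x)

# Hypothetical usage with two pre-trained defended classifiers:
# model_linf = ...  # trained with l_inf adversarial training
# model_l2   = ...  # trained with l_2 adversarial training
# defense = MixedDefense([model_linf, model_l2])
# logits = defense(torch.randn(8, 3, 32, 32))
```

The point of the sketch is only that an attacker crafting perturbations in a single norm now faces a classifier whose effective decision rule changes across queries; whether and when this helps is exactly the question the paper discusses.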
Main file

2012.02632.pdf (369.53 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03118649 , version 1 (22-01-2021)

Identifiers

  • HAL Id : hal-03118649 , version 1

Cite

Alexandre Araujo, Laurent Meunier, Rafael Pinot, Benjamin Negrevergne. Advocating for Multiple Defense Strategies against Adversarial Examples. Workshop on Machine Learning for CyberSecurity (MLCS@ECML-PKDD), Sep 2020, Ghent, Belgium. ⟨hal-03118649⟩
