Automatic Face Anonymization in Visual Data: Are we really well protected? - Archive ouverte HAL
Conference paper, Year: 2016

Automatic Face Anonymization in Visual Data: Are we really well protected?

Natacha Ruchaud
  • Role: Author
Jean-Luc Dugelay
  • Role: Author
  • PersonId : 1016456

Abstract

With the proliferation of digital visual data across diverse domains (video surveillance, social networks, the media, etc.), privacy concerns are increasing. Obscuring faces in images and videos is one way to preserve privacy while maintaining a certain level of quality and intelligibility of the video. The most popular filters are black masking (blackening), pixelization and blurring. Even though these filters appear effective at first sight in terms of human perception, we demonstrate in this article that as soon as the category and the strength of the filter used to obscure faces can be (automatically) identified, powerful ad-hoc approaches exist in the literature that can partially cancel the impact of such filters with regard to automatic face recognition. Hence, the evaluation is expressed in terms of the face recognition rate on clean, obscured and de-obscured face images.

Figure 1: Respectively: "20 minutes", a French magazine, using the pixelization filter; "Crimes", a French TV program, using the blurring filter; and Street View by Google, using the blurring filter.
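For illustration only, the three filters named in the abstract can be sketched as minimal NumPy operations on a face crop, assumed here to be an (H, W, 3) uint8 array; the function names and parameter defaults are mine, not taken from the paper:

```python
import numpy as np

def blacken(face):
    """Black masking: replace the whole face region with black pixels."""
    return np.zeros_like(face)

def pixelize(face, block=8):
    """Pixelization: replace each block x block tile by its mean colour.

    Assumes H and W are multiples of `block`."""
    h, w, c = face.shape
    tiles = face.reshape(h // block, block, w // block, block, c)
    means = tiles.mean(axis=(1, 3), keepdims=True)
    return np.broadcast_to(means, tiles.shape).reshape(h, w, c).astype(face.dtype)

def box_blur(face, k=9):
    """Blurring: naive k x k box filter with edge padding."""
    pad = k // 2
    padded = np.pad(face.astype(float), ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    h, w, _ = face.shape
    acc = np.zeros(face.shape, dtype=float)
    for dy in range(k):            # sum the k*k shifted copies of the image,
        for dx in range(k):        # then divide to get the neighbourhood mean
            acc += padded[dy:dy + h, dx:dx + w]
    return (acc / (k * k)).astype(face.dtype)
```

The paper's point is that once the filter category and strength (e.g. `block` or `k` above) are identified, de-obscuration approaches from the literature can partially undo such transforms with respect to automatic face recognition, so none of them should be assumed robust.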
Main file
sec-publi-4914.pdf (543.16 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01367565 , version 1 (16-09-2016)

Identifiers

  • HAL Id : hal-01367565 , version 1

Cite

Natacha Ruchaud, Jean-Luc Dugelay. Automatic Face Anonymization in Visual Data: Are we really well protected?. Electronic Imaging, Feb 2016, San Francisco, United States. ⟨hal-01367565⟩

Collections

CNRS EURECOM
166 Views
700 Downloads
