Distillation défensive



under construction

Definition

An adversarial training technique in which a second model is trained to reproduce the softened probability outputs of a first model, making the resulting classifier less susceptible to adversarial exploitation.

French

Distillation défensive

English

Defensive Distillation

Defensive distillation is an adversarial training technique that adds flexibility to an algorithm’s classification process so that the model is less susceptible to exploitation. In distillation training, one model is trained to predict the output probabilities of another model that was trained earlier, on a baseline standard that emphasizes accuracy.
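
Concretely, the "output probabilities" that the second model learns to predict are the softened class probabilities of the first model. In the usual formulation of defensive distillation these come from a temperature-scaled softmax over the first model's logits z (the temperature T is part of the standard technique but is not spelled out in the text above):

\[ q_i = \frac{\exp(z_i / T)}{\sum_j \exp(z_j / T)} \]

A high temperature spreads probability mass over several classes, which is exactly the uncertainty used below.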

The first model is trained with “hard” labels to achieve maximum accuracy, for example requiring a 100% probability that a biometric scan matches the fingerprint on record. The problem is that the algorithm does not compare every single pixel, since that would take too much time. If an attacker learns which features and parameters the system scans for, the scammer can submit a fake fingerprint image containing just the handful of right pixels that satisfy the system’s programming, which generates a false positive match.

The first model then provides “soft” labels, for example a 95% probability that a fingerprint matches the biometric scan on record. This uncertainty is used to train the second model, which acts as an additional filter. Since there is now an element of randomness in obtaining a perfect match, the second or “distilled” algorithm is far more robust and can spot spoofing attempts more easily. It is now far more difficult for a scammer to “game the system” and artificially create a perfect match for both algorithms simply by mimicking the first model’s training scheme.
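
As an illustration, here is a minimal sketch of the two-step training procedure described above, assuming PyTorch; the model objects, data loader, learning rate and temperature value are hypothetical placeholders, not details from the source text.

import torch
import torch.nn.functional as F

T = 20.0  # distillation temperature (illustrative value)

def train_teacher(teacher, loader, epochs=10):
    # Step 1: the first model is trained on the original "hard" labels.
    opt = torch.optim.SGD(teacher.parameters(), lr=0.01, momentum=0.9)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = F.cross_entropy(teacher(x) / T, y)
            loss.backward()
            opt.step()

def train_distilled(teacher, student, loader, epochs=10):
    # Step 2: the second ("distilled") model is trained to match the first
    # model's softened probability outputs instead of the hard labels.
    teacher.eval()
    opt = torch.optim.SGD(student.parameters(), lr=0.01, momentum=0.9)
    for _ in range(epochs):
        for x, _ in loader:
            with torch.no_grad():
                soft_labels = F.softmax(teacher(x) / T, dim=1)  # "soft" labels
            opt.zero_grad()
            log_probs = F.log_softmax(student(x) / T, dim=1)
            loss = F.kl_div(log_probs, soft_labels, reduction="batchmean")
            loss.backward()
            opt.step()

In the standard formulation, the distilled student is then used at temperature 1 at test time, which gives a smoother decision surface than the first model's.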


Source: DeepAI.org
