« Hugging Face »: difference between revisions


   
   

Revision as of 16:12, 8 January 2024

under construction

CYBERSECURITY

Definition

XXXXXXXXX

French

Piratage IA

English

AI Jacking

AI jacking is a recent cybersecurity term describing a specific kind of cyberattack that targets artificial intelligence (AI) systems. It primarily affects popular AI platforms such as Hugging Face, and it is concerning because a single attack can affect a large number of users at once.

The attack exploits the way Hugging Face renames its models and datasets. Normally, when a model or dataset is given a new name, the old name redirects to the new one.
But if an attacker registers the abandoned old name, everything that still references it resolves to the attacker's repository, which can serve harmful or tampered content in place of the original. This is especially dangerous in machine learning, where the integrity of models and training data is critical.
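
For illustration, here is a minimal Python sketch of why the redirect behaviour matters, using huggingface_hub; the repository names and commit hash below are hypothetical. Because the old name keeps resolving after a rename, code that references it keeps working silently. Pinning the exact commit hash of an audited revision is a common mitigation: a repository re-registered by an attacker under the old name will not contain that commit, so the download fails instead of fetching tampered files.

 # Illustrative sketch only: the repository names and commit hash are hypothetical.
 from huggingface_hub import snapshot_download
 
 # After a rename, the old repository name still resolves because
 # Hugging Face redirects it to the new name, so this call keeps working.
 path = snapshot_download(repo_id="old-org/sentiment-model")
 
 # Mitigation: pin the exact commit hash of the revision that was audited.
 # If the old name is later re-registered by an attacker, that commit will
 # not exist in the attacker's repository and the download fails loudly.
 path = snapshot_download(
     repo_id="old-org/sentiment-model",
     revision="0123456789abcdef0123456789abcdef01234567",  # hypothetical hash
 )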

Source: Techopedia



CYBERSECURITY GLOSSARY

Contributors: Imane Meziani, wiki