== Definition ==
FP stands for frequent pattern. The FP-growth algorithm is an improved version of the Apriori algorithm that is widely used for frequent pattern mining. It is an analytical process that finds frequent patterns or associations in datasets.


== French ==
'''algorithme FP-growth'''


== English ==
'''FP-growth algorithm'''

'''frequent pattern growth algorithm'''
 
In the first pass, the algorithm counts the occurrences of items (attribute-value pairs) in the dataset of transactions, and stores these counts in a 'header table'. In the second pass, it builds the FP-tree structure by inserting transactions into a trie.
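
A minimal first-pass sketch in Python (the language of the ichi source); the example transactions and the names <code>transactions</code>, <code>min_support</code> and <code>header_table</code> are illustrative, not taken from the source:

<syntaxhighlight lang="python">
from collections import Counter

# Illustrative transaction database: each transaction is a list of item names.
transactions = [
    ["bread", "milk", "beer"],
    ["bread", "milk"],
    ["milk", "beer"],
    ["bread", "beer"],
    ["bread", "milk", "beer"],
]
min_support = 3  # absolute minimum support (number of transactions)

# First pass: count how often each item occurs across the whole dataset.
item_counts = Counter(item for t in transactions for item in t)

# The header table keeps only the items that meet the minimum support.
header_table = {item: n for item, n in item_counts.items() if n >= min_support}
print(header_table)  # e.g. {'bread': 4, 'milk': 4, 'beer': 4}
</syntaxhighlight>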
 
Items in each transaction have to be sorted by descending order of their frequency in the dataset before being inserted so that the tree can be processed quickly. Items in each transaction that do not meet the minimum support requirement are discarded. If many transactions share most frequent items, the FP-tree provides high compression close to tree root.
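
Continuing the same illustrative sketch, the second pass filters and reorders each transaction before inserting it into the prefix tree; the <code>FPNode</code> class below is an assumed minimal design, not a reference implementation:

<syntaxhighlight lang="python">
# Second pass, reusing `transactions` and `header_table` from the previous sketch.

class FPNode:
    """One node of the FP-tree: an item, a count, a parent link and child nodes."""
    def __init__(self, item, parent=None):
        self.item = item
        self.count = 0
        self.parent = parent
        self.children = {}  # item -> FPNode

def insert_transaction(root, ordered_items):
    """Insert one already-filtered, already-sorted transaction into the trie."""
    node = root
    for item in ordered_items:
        child = node.children.get(item)
        if child is None:
            child = FPNode(item, parent=node)
            node.children[item] = child
        child.count += 1  # shared prefixes only bump counters: this is the compression
        node = child

root = FPNode(None)  # the root carries no item
for t in transactions:
    # Discard infrequent items, then sort by descending global frequency
    # (ties broken alphabetically so shared prefixes line up deterministically).
    ordered = sorted((i for i in t if i in header_table),
                     key=lambda i: (-header_table[i], i))
    insert_transaction(root, ordered)
</syntaxhighlight>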
 
Recursive processing of this compressed version of the main dataset grows frequent itemsets directly, instead of generating candidate items and testing them against the entire database (as in the Apriori algorithm).
 
Growth begins from the bottom of the header table, i.e. the item with the smallest support, by finding all sorted transactions that end in that item. Call this item ''I''.
 
A new conditional tree is created, which is the original FP-tree projected onto ''I''. The supports of all nodes in the projected tree are re-counted, with each node getting the sum of its children's counts. Nodes (and hence subtrees) that do not meet the minimum support are pruned. Recursive growth ends when no individual items conditional on ''I'' meet the minimum support threshold. The resulting paths from root to ''I'' will be frequent itemsets. After this step, processing continues with the next least-supported header item of the original FP-tree.
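
The projection onto ''I'' can be sketched as follows, again reusing the illustrative <code>FPNode</code> tree built above. A full implementation would keep node links in the header table instead of traversing the whole tree, and would then re-run the two construction passes on the weighted prefix paths to build the conditional tree and recurse:

<syntaxhighlight lang="python">
# Projection sketch, reusing FPNode, root and header_table from the sketches above.

def nodes_with_item(node, item):
    """Collect every tree node carrying `item` (a header-table node-link list would
    normally make this lookup direct; a plain traversal keeps the sketch short)."""
    found = []
    for child in node.children.values():
        if child.item == item:
            found.append(child)
        found.extend(nodes_with_item(child, item))
    return found

def conditional_pattern_base(root, item):
    """For each occurrence of `item`, return its prefix path (root excluded) together
    with that occurrence's count: the weighted data the conditional tree is built from."""
    base = []
    for node in nodes_with_item(root, item):
        path, parent = [], node.parent
        while parent is not None and parent.item is not None:
            path.append(parent.item)
            parent = parent.parent
        if path:
            base.append((list(reversed(path)), node.count))
    return base

# Growth starts from the least-supported item of the header table.
suffix_item = min(header_table, key=header_table.get)
print(conditional_pattern_base(root, suffix_item))
</syntaxhighlight>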
 
Once the recursive process has completed, all frequent itemsets will have been found, and association rule creation begins.
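
In practice the whole pipeline is usually run through an existing implementation. A minimal usage sketch with the mlxtend library (mlxtend is not mentioned by the sources, and exact signatures can vary between releases):

<syntaxhighlight lang="python">
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import fpgrowth, association_rules

dataset = [
    ["bread", "milk", "beer"],
    ["bread", "milk"],
    ["milk", "beer"],
    ["bread", "beer"],
    ["bread", "milk", "beer"],
]

# One-hot encode the transactions into a boolean DataFrame.
te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(dataset).transform(dataset), columns=te.columns_)

# Mine frequent itemsets with FP-growth, then derive association rules from them.
frequent_itemsets = fpgrowth(onehot, min_support=0.6, use_colnames=True)
rules = association_rules(frequent_itemsets, metric="confidence", min_threshold=0.7)
print(frequent_itemsets)
print(rules[["antecedents", "consequents", "support", "confidence"]])
</syntaxhighlight>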
 
==Sources==
 
[https://en.wikipedia.org/wiki/Association_rule_learning#FP-growth_algorithm Source : Wikipedia, Machine Learning]


[https://ichi.pro/fr/comprendre-et-creer-un-algorithme-fp-growth-en-python-238291391263554 Source : ichi]


 
[https://khaledtannir.net/blog/2012/07/12/lalgorithme-fp-growth-les-bases-13/  Source : khaledtannir ]  
[[Catégorie:vocabulary]]
[[Catégorie:GRAND LEXIQUE FRANÇAIS]]
[[Catégorie:Wikipedia-IA‎]]
