Recherche de règles d'association
Version of 17 September 2019 at 08:12
Under construction
Definition
...
French
...
English
Association Rule Learning
Association rule learning is about discovering rules for observing A given B. It is closely related to clustering in that we are attempting to find connections between events; the difference lies in the approach. Instead of drawing bounds around a region and checking whether an observation falls into that bucket, we use the frequency of a collection of discrete observations to build priors: what is the probability of observing A, or B, or C? From these, we work out the probability of observing A given B, or of observing C given A and B. This is simply Bayesian statistics: P(A), P(B | A), P(C | A, B), and so forth.
Machine Learning for Beginners – a How-to Guide https://opendatascience.com/machine-learning-for-beginners/
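To illustrate the idea described in the passage above (this sketch is not from the source), the following Python snippet estimates P(A), P(B | A) and P(C | A, B) from item frequencies in a small, made-up set of transactions; the item names and data are purely hypothetical.

    # Minimal sketch (illustrative only): empirical probabilities and
    # conditional probabilities computed from transaction frequencies,
    # as in association rule learning. Data is made up.
    transactions = [
        {"bread", "milk"},
        {"bread", "milk", "butter"},
        {"milk", "butter"},
        {"bread", "butter"},
        {"bread", "milk", "butter"},
    ]

    def prob(items, given=frozenset()):
        """Empirical P(items | given): fraction of transactions containing
        all of `items`, among those already containing all of `given`."""
        items, given = set(items), set(given)
        base = [t for t in transactions if given <= t]
        if not base:
            return 0.0
        return sum(1 for t in base if items <= t) / len(base)

    print(prob({"bread"}))                            # P(bread)          = 0.8
    print(prob({"milk"}, given={"bread"}))            # P(milk | bread)   = 0.75
    print(prob({"butter"}, given={"bread", "milk"}))  # P(butter | bread, milk) ≈ 0.67

In association-rule terms, prob({"bread"}) corresponds to the support of an itemset and prob({"milk"}, given={"bread"}) to the confidence of the rule bread → milk.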
Contributors: Gérard Pelletier, Imane Meziani, Jean Benoît Morel, wiki