Valeur informationnelle


== Definition ==
The quality of being informative; intended to provide information.


== French ==
'''valeur informationnelle'''

'''contenu informatif'''

'''contenu en information'''


== English ==
'''informativeness'''
<!-- The learning of mixture models can be viewed as a clustering problem. Indeed, given data samples independently generated from a mixture of distributions, we often would like to find the {\it correct target clustering} of the samples according to which component distribution they were generated from. For a clustering problem, practitioners often choose to use the simple k-means algorithm. k-means attempts to find an {\it optimal clustering} that minimizes the sum-of-squares distance between each point and its cluster center. In this paper, we consider fundamental (i.e., information-theoretic) limits of the solutions (clusterings) obtained by optimizing the sum-of-squares distance. In particular, we provide sufficient conditions for the closeness of any optimal clustering and the correct target clustering assuming that the data samples are generated from a mixture of spherical Gaussian distributions. We also generalize our results to log-concave distributions. Moreover, we show that under similar or even weaker conditions on the mixture model, any optimal clustering for the samples with reduced dimensionality is also close to the correct target clustering. These results provide intuition for the informativeness of k-means (with and without dimensionality reduction) as an algorithm for learning mixture models. -->
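The hidden abstract above describes k-means as seeking a clustering that minimizes the sum-of-squares distance between each point and its cluster center. As a minimal sketch of that objective only (not of the cited paper's information-theoretic analysis), the Python below implements plain Lloyd's k-means on a toy mixture of two spherical Gaussians; the function name and the test data are illustrative assumptions.

<syntaxhighlight lang="python">
# Minimal sketch of Lloyd's k-means: alternate between assigning each
# point to its nearest center and moving each center to its cluster mean,
# which locally minimizes the sum-of-squares objective.
# The name lloyd_kmeans and the toy data below are illustrative assumptions.
import numpy as np

def lloyd_kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize centers by sampling k distinct data points.
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: each point goes to its nearest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each center moves to the mean of its cluster.
        new_centers = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    # Sum-of-squares distance that k-means tries to minimize.
    sse = ((X - centers[labels]) ** 2).sum()
    return labels, centers, sse

# Toy mixture of two spherical Gaussians, the setting the abstract assumes.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
labels, centers, sse = lloyd_kmeans(X, k=2)
print(centers, sse)
</syntaxhighlight>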


== Sources ==


[https://arxiv.org/abs/1703.10534 Source: arXiv]


[[Catégorie:Publication]]
[[Catégorie:GRAND LEXIQUE FRANÇAIS]]
