Optimisation par essaim de particules

Domain


Optimization algorithm
Artificial intelligence
Coulombe

Definition

Particle swarm optimization (PSO) is a stochastic optimization method based on reproducing the social behaviour of animals moving in a swarm. PSO optimizes a problem by iteratively trying to improve a candidate solution within a population of candidate solutions, moving particles through the search space according to simple mathematical formulas over the particles' position and velocity. Each particle's movement is influenced by its own position and history, but also by its neighbourhood. This is expected to move the swarm toward the best solutions.
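For reference, the "simple mathematical formulas" mentioned above are usually written as the following velocity and position updates (a standard textbook formulation, not taken from the cited source):

\begin{aligned}
v_i^{t+1} &= \omega\, v_i^{t} + c_1 r_1 \left(p_i - x_i^{t}\right) + c_2 r_2 \left(g - x_i^{t}\right) \\
x_i^{t+1} &= x_i^{t} + v_i^{t+1}
\end{aligned}

where x_i and v_i are the position and velocity of particle i, p_i its best known position, g the best position known to its neighbourhood (or to the whole swarm), \omega the inertia weight, c_1 and c_2 the cognitive and social coefficients, and r_1, r_2 random numbers drawn uniformly from [0, 1].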

French

optimisation par essaim de particules

Source:

https://archipel.uqam.ca/6189/1/D2572.pdf

English

Particle swarm optimization

In computer science, particle swarm optimization (PSO) is a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. It solves a problem by having a population of candidate solutions, here dubbed particles, and moving these particles around in the search-space according to simple mathematical formulae over the particle's position and velocity. Each particle's movement is influenced by its local best known position, but is also guided toward the best known positions in the search-space, which are updated as better positions are found by other particles. This is expected to move the swarm toward the best solutions.
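
To make the loop described above concrete, here is a minimal sketch of the global-best variant in Python. The function name, swarm size, and coefficient values (inertia w, cognitive c1, social c2) are illustrative assumptions, not drawn from the source.

import numpy as np

def pso(objective, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    # Minimal global-best PSO sketch; parameter values are illustrative, not from the source.
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    dim = len(bounds)

    # Start particles uniformly inside the bounds, with zero initial velocity.
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)

    # Personal bests start at the initial positions; the global best is the best of those.
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()
    g_val = float(pbest_val.min())

    for _ in range(n_iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Velocity update: inertia + pull toward personal best + pull toward global best.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        # Position update, clipped so particles stay inside the search bounds.
        x = np.clip(x + v, lo, hi)

        # Re-evaluate and update personal bests, then the swarm's best known position.
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        if pbest_val.min() < g_val:
            g_val = float(pbest_val.min())
            g = pbest[np.argmin(pbest_val)].copy()

    return g, g_val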

PSO is originally attributed to Kennedy, Eberhart and Shi[1][2] and was first intended for simulating social behaviour,[3] as a stylized representation of the movement of organisms in a bird flock or fish school. The algorithm was simplified and it was observed to be performing optimization. The book by Kennedy and Eberhart[4] describes many philosophical aspects of PSO and swarm intelligence. An extensive survey of PSO applications is made by Poli.[5][6] Recently, a comprehensive review on theoretical and experimental works on PSO has been published by Bonyadi and Michalewicz.[7]

PSO is a metaheuristic as it makes few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions. However, metaheuristics such as PSO do not guarantee an optimal solution is ever found. Also, PSO does not use the gradient of the problem being optimized, which means PSO does not require that the optimization problem be differentiable as is required by classic optimization methods such as gradient descent and quasi-Newton methods.
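
As a small illustration of this last point, the sketch above can be applied directly to a non-differentiable objective (a hypothetical example, chosen only to show that no gradient is needed):

# The absolute-value objective below has no gradient at its minimum; PSO never asks for one.
objective = lambda p: abs(p[0] - 3.0) + abs(p[1] + 1.0)
best, best_val = pso(objective, bounds=[(-10.0, 10.0), (-10.0, 10.0)])
print(best, best_val)  # expected to land near (3, -1) with a value close to 0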