Apprentissage automatique minuscule



under construction

Definition

Tiny machine learning (TinyML) is the application of machine learning on small, low-power embedded devices: compact, low-latency models run directly on edge hardware such as microcontrollers, sensors and wearables, instead of sending raw data to remote servers for processing.

French

TinyML

English

TinyML

Massive models trained on millions of instances, like GPT-3 or DALL-E, may grab the headlines, but TinyML is on the rise. Simply put, TinyML is the long-awaited fusion of embedded systems with machine learning. The IoT paradigm has largely relied on shuttling raw data from edge devices, from smartwatches to electricity meters, to large conventional servers that then run complex machine learning algorithms. Over the last few years, however, the cost (and size) of processing power has dropped rapidly, while the cost of data transfer has remained largely the same. TinyML is a natural answer: run small models directly on the device that produces the data, rather than paying to ship that data to computationally expensive server-side models.
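
A minimal sketch of that workflow, assuming TensorFlow Lite as the toolchain (the tiny model architecture and the input shape below are made up for illustration): the model is converted to the compact .tflite format and then queried locally, the way an edge device would, so no raw sensor data has to cross the network.

```python
import numpy as np
import tensorflow as tf

# A deliberately tiny model, the kind that can fit on a microcontroller.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Convert to the compact .tflite flatbuffer that edge runtimes understand.
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()
print(f"on-device model size: {len(tflite_bytes)} bytes")

# On-device inference: the lightweight interpreter runs where the data is
# produced, so raw readings never leave the device.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

sensor_reading = np.random.rand(1, 4).astype(np.float32)  # stand-in for local sensor data
interpreter.set_tensor(inp["index"], sensor_reading)
interpreter.invoke()
print("local prediction:", interpreter.get_tensor(out["index"]))
```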

Bigger is not always better when it comes to models. A low-power, low-latency model running on an edge device can be the better choice when data transfer is costly or difficult (for example, because the area lacks cellular or wired networking), a rapid response is desirable, and the model can be reduced to a relatively small size. A trail camera used by wildlife researchers to photograph a particular species does not need a state-of-the-art deep learning image recognition model on board, but it does need to operate in austere settings for prolonged periods. Similarly, devices used for predictive maintenance and anomaly detection on assets such as oil pipelines or overland high-voltage lines often have to operate outside the reach of ubiquitous wireless connectivity. TinyML is the trend that evolved in response to these challenges.
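
One common way to shrink a model enough for such devices is post-training quantization, which stores weights (and, with calibration data, activations) as 8-bit integers instead of 32-bit floats. A hedged sketch, again assuming TensorFlow Lite; the model and the representative calibration data below are synthetic placeholders:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Baseline: plain float32 conversion.
float_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Quantized: weights (and activations, thanks to the representative dataset)
# are stored as 8-bit integers, typically shrinking the model by roughly 4x.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_data():
    # A few typical inputs let the converter calibrate activation ranges;
    # synthetic here, real sensor samples in practice.
    for _ in range(100):
        yield [np.random.rand(1, 64).astype(np.float32)]

converter.representative_dataset = representative_data
quant_bytes = converter.convert()

print(f"float32 model:        {len(float_bytes)} bytes")
print(f"int8-quantized model: {len(quant_bytes)} bytes")
```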

Source: medium.com



Contributors: Imane Meziani, wiki