Quantification


under construction

Definition

Quantisation reduces the size of a neural network by converting its weights and biases from their original floating-point format (e.g. 32-bit) to a lower-precision format (e.g. 8-bit), lowering the memory and computational requirements for inference and training.

French

Quantification

English

Quantisation

Quantisation allows us to reduce the size of our neural networks by converting the network's weights and biases from their original floating-point format (e.g. 32-bit) to a lower-precision format (e.g. 8-bit). The original floating-point format can vary depending on factors such as the model's architecture and training process. The ultimate purpose of quantisation is to reduce the model's size, thereby lowering the memory and computational requirements for running inference and training. Quantisation can quickly become fiddly if you attempt to quantise models yourself.

Source: towardsdatascience

Source: mathworks
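
To make the 32-bit-to-8-bit conversion concrete, here is a minimal sketch of affine (asymmetric) per-tensor quantisation of a float32 weight matrix to int8 using NumPy. The function names (`quantise_int8`, `dequantise`) and the per-tensor scheme are illustrative assumptions, not any particular library's API.

```python
import numpy as np

def quantise_int8(weights):
    """Affine (asymmetric) per-tensor quantisation of float32 weights to int8."""
    w_min, w_max = float(weights.min()), float(weights.max())
    # Map the observed float range [w_min, w_max] onto the int8 range [-128, 127].
    scale = (w_max - w_min) / 255.0 or 1.0  # avoid division by zero for constant tensors
    zero_point = int(round(-128 - w_min / scale))
    q = np.clip(np.round(weights / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantise(q, scale, zero_point):
    """Recover approximate float32 values from the int8 representation."""
    return (q.astype(np.float32) - zero_point) * scale

# A 256 x 256 float32 weight matrix shrinks from 262144 bytes to 65536 bytes (4x smaller).
w = np.random.randn(256, 256).astype(np.float32)
q, scale, zp = quantise_int8(w)
print(f"float32: {w.nbytes} bytes -> int8: {q.nbytes} bytes")
print(f"max reconstruction error: {np.abs(w - dequantise(q, scale, zp)).max():.5f}")
```

Production frameworks refine this basic scheme, for example with per-channel scales and calibration over real activation data, which is where the "fiddly" part of do-it-yourself quantisation comes in.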

Contributors: Claude Coulombe, Marie Alfaro, wiki