<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="fr">
	<id>https://datafranca.org/wiki/index.php?action=history&amp;feed=atom&amp;title=DINO</id>
	<title>DINO - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://datafranca.org/wiki/index.php?action=history&amp;feed=atom&amp;title=DINO"/>
	<link rel="alternate" type="text/html" href="https://datafranca.org/wiki/index.php?title=DINO&amp;action=history"/>
	<updated>2026-04-10T01:50:04Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.39.5</generator>
	<entry>
		<id>https://datafranca.org/wiki/index.php?title=DINO&amp;diff=116520&amp;oldid=prev</id>
		<title>Pitpitt: Page created with « ==en construction==  == Définition == XXXXXXXXX  == Français == &#039;&#039;&#039;DINOv3&#039;&#039;&#039;  == Anglais == &#039;&#039;&#039;DINOv3&#039;&#039;&#039;   A self-supervised model trained without the need for manual data annotations. The method leverages simple yet effective strategies to scale both dataset and model size, achieving state-of-the-art performance across a broad range of vision tasks without requiring fine-tuning. The paper presents a versatile vision foundation model that significantly outp... »</title>
		<link rel="alternate" type="text/html" href="https://datafranca.org/wiki/index.php?title=DINO&amp;diff=116520&amp;oldid=prev"/>
		<updated>2025-08-18T13:37:47Z</updated>

		<summary type="html">&lt;p&gt;Page créée avec « ==en construction==  == Définition == XXXXXXXXX  == Français == &amp;#039;&amp;#039;&amp;#039; DINO v3 &amp;#039;&amp;#039;&amp;#039;  == Anglais == &amp;#039;&amp;#039;&amp;#039;DINO v3&amp;#039;&amp;#039;&amp;#039;   A self-supervised model trained without the need for manual data annotations. The method leverages simple yet effective strategies to scale both dataset and model size, achieving state-of-the-art performance across a broad range of vision tasks without requiring fine-tuning. The paper presents a versatile vision foundation model that significantly outp... »&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;==en construction==&lt;br /&gt;
&lt;br /&gt;
== Définition ==&lt;br /&gt;
XXXXXXXXX&lt;br /&gt;
&lt;br /&gt;
== Français ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;DINOv3&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
== Anglais ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;DINOv3&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
 A self-supervised model trained without the need for manual data annotations. The method leverages simple yet effective strategies to scale both dataset and model size, achieving state-of-the-art performance across a broad range of vision tasks without requiring fine-tuning. The paper presents a versatile vision foundation model that significantly outperforms specialized approaches on dense prediction tasks while maintaining competitive performance on global recognition tasks.&lt;br /&gt;
 &lt;br /&gt;
 Self-supervised learning holds the promise of eliminating the need for manual data annotation, enabling models to scale effortlessly to massive datasets and larger architectures. Because it is not tailored to specific tasks or domains, this training paradigm has the potential to learn visual representations from diverse sources, from natural to aerial images, using a single algorithm.&lt;br /&gt;
 &lt;br /&gt;
 DINOv3 represents a significant step forward in self-supervised learning, demonstrating that carefully designed training at scale can produce versatile vision foundation models that match or exceed specialized approaches across diverse tasks.&lt;br /&gt;
&lt;br /&gt;
== Source ==&lt;br /&gt;
&lt;br /&gt;
[https://ai.meta.com/research/publications/dinov3/ Source: ai.meta.com]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Catégorie:vocabulary]]&lt;/div&gt;</summary>
		<author><name>Pitpitt</name></author>
	</entry>
</feed>