<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://datafranca.org/wiki/index.php?action=history&amp;feed=atom&amp;title=Mixture-of-Recursions</id>
	<title>Mixture-of-Recursions - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://datafranca.org/wiki/index.php?action=history&amp;feed=atom&amp;title=Mixture-of-Recursions"/>
	<link rel="alternate" type="text/html" href="https://datafranca.org/wiki/index.php?title=Mixture-of-Recursions&amp;action=history"/>
	<updated>2026-04-10T10:34:35Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.39.5</generator>
	<entry>
		<id>https://datafranca.org/wiki/index.php?title=Mixture-of-Recursions&amp;diff=115163&amp;oldid=prev</id>
		<title>Pitpitt on 24 July 2025 at 13:08</title>
		<link rel="alternate" type="text/html" href="https://datafranca.org/wiki/index.php?title=Mixture-of-Recursions&amp;diff=115163&amp;oldid=prev"/>
		<updated>2025-07-24T13:08:45Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;fr&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Version précédente&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Version du 24 juillet 2025 à 09:08&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l10&quot;&gt;Ligne 10 :&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Ligne 10 :&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;#039;&amp;#039;&amp;#039;Mixture-of-Recursions&amp;#039;&amp;#039;&amp;#039;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;#039;&amp;#039;&amp;#039;Mixture-of-Recursions&amp;#039;&amp;#039;&amp;#039;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br/&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br/&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt; Learning Dynamic Recursive Depths for Adaptive Token-Level Computation &lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-added&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt; &lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-added&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;  A unified framework that combines the two axes of efficiency inside a single Recursive Transformer. MoR reuses a shared stack of layers across recursion steps to achieve parameter efficiency, while lightweight routers enable adaptive token-level thinking by dynamically assigning different recursion depths to individual tokens. This allows MoR to focus quadratic attention computation only among tokens still active at a given recursion depth, further improving memory access efficiency by selectively caching only their key-value pairs. Beyond these core mechanisms, we also propose a KV sharing variant that reuses KV pairs from the first recursion, specifically designed to decrease prefill latency and memory footprint. Across model scales ranging from 135M to 1.7B parameters, MoR forms a new Pareto frontier: at equal training FLOPs and smaller model sizes, it significantly lowers validation perplexity and improves fewshot accuracy, while delivering higher throughput compared with vanilla and existing recursive baselines. These gains demonstrate that MoR is an effective path towards large-model quality without incurring large-model cost&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;  A unified framework that combines the two axes of efficiency inside a single Recursive Transformer. MoR reuses a shared stack of layers across recursion steps to achieve parameter efficiency, while lightweight routers enable adaptive token-level thinking by dynamically assigning different recursion depths to individual tokens. This allows MoR to focus quadratic attention computation only among tokens still active at a given recursion depth, further improving memory access efficiency by selectively caching only their key-value pairs. Beyond these core mechanisms, we also propose a KV sharing variant that reuses KV pairs from the first recursion, specifically designed to decrease prefill latency and memory footprint. Across model scales ranging from 135M to 1.7B parameters, MoR forms a new Pareto frontier: at equal training FLOPs and smaller model sizes, it significantly lowers validation perplexity and improves fewshot accuracy, while delivering higher throughput compared with vanilla and existing recursive baselines. These gains demonstrate that MoR is an effective path towards large-model quality without incurring large-model cost&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br/&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br/&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Pitpitt</name></author>
	</entry>
	<entry>
		<id>https://datafranca.org/wiki/index.php?title=Mixture-of-Recursions&amp;diff=115162&amp;oldid=prev</id>
		<title>Pitpitt: Page created with « ==under construction==  == Definition == XXXXXXXXX  == French == &#039;&#039;&#039; XXXXXXXXX &#039;&#039;&#039;  == English == &#039;&#039;&#039;Mixture-of-Recursions&#039;&#039;&#039;   Learning Dynamic Recursive Depths for Adaptive Token-Level Computation     A unified framework that combines the two axes of efficiency inside a single Recursive Transformer. MoR reuses a shared stack of layers across recursion steps to achieve parameter efficiency, while lightweight routers enable adaptive token-level thinking by dyna... »</title>
		<link rel="alternate" type="text/html" href="https://datafranca.org/wiki/index.php?title=Mixture-of-Recursions&amp;diff=115162&amp;oldid=prev"/>
		<updated>2025-07-24T13:06:46Z</updated>

		<summary type="html">&lt;p&gt;Page créée avec « ==en construction==  == Définition == XXXXXXXXX  == Français == &amp;#039;&amp;#039;&amp;#039; XXXXXXXXX &amp;#039;&amp;#039;&amp;#039;  == Anglais == &amp;#039;&amp;#039;&amp;#039;Mixture-of-Recursions&amp;#039;&amp;#039;&amp;#039;   Learning Dynamic Recursive Depths for Adaptive Token-Level Computation     A unified framework that combines the two axes of efficiency inside a single Recursive Transformer. MoR reuses a shared stack of layers across recursion steps to achieve parameter efficiency, while lightweight routers enable adaptive token-level thinking by dyna... »&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;==under construction==&lt;br /&gt;
&lt;br /&gt;
== Definition ==&lt;br /&gt;
XXXXXXXXX&lt;br /&gt;
&lt;br /&gt;
== French ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039; XXXXXXXXX &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
== English ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Mixture-of-Recursions&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
 Learning Dynamic Recursive Depths for Adaptive Token-Level Computation &lt;br /&gt;
 &lt;br /&gt;
 A unified framework that combines the two axes of efficiency inside a single Recursive Transformer. MoR reuses a shared stack of layers across recursion steps to achieve parameter efficiency, while lightweight routers enable adaptive token-level thinking by dynamically assigning different recursion depths to individual tokens. This allows MoR to focus quadratic attention computation only among tokens still active at a given recursion depth, further improving memory access efficiency by selectively caching only their key-value pairs. Beyond these core mechanisms, we also propose a KV sharing variant that reuses KV pairs from the first recursion, specifically designed to decrease prefill latency and memory footprint. Across model scales ranging from 135M to 1.7B parameters, MoR forms a new Pareto frontier: at equal training FLOPs and smaller model sizes, it significantly lowers validation perplexity and improves few-shot accuracy, while delivering higher throughput compared with vanilla and existing recursive baselines. These gains demonstrate that MoR is an effective path towards large-model quality without incurring large-model cost.&lt;br /&gt;
&lt;br /&gt;
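A minimal sketch of the routing idea, assuming a PyTorch-style implementation; every name here is illustrative rather than the authors' code, and for simplicity the shared block still attends over the full sequence, whereas MoR restricts attention (and KV caching) to the tokens still active at each depth:&lt;br /&gt;
&lt;pre&gt;
# Hypothetical sketch, not the paper's implementation: one shared block is
# applied up to n_recursions times, and a lightweight router assigns each
# token its own recursion depth.
import torch
import torch.nn as nn

class MoRSketch(nn.Module):
    def __init__(self, d_model=64, n_recursions=3):
        super().__init__()
        # One shared stack of layers, reused at every recursion step
        # (parameter efficiency: weights are tied across depths).
        self.shared_block = nn.TransformerEncoderLayer(
            d_model, nhead=4, batch_first=True)
        # Lightweight router: one score per token, mapped to a
        # recursion depth in {1, ..., n_recursions}.
        self.router = nn.Linear(d_model, 1)
        self.n_recursions = n_recursions

    def forward(self, x):  # x: (batch, seq, d_model)
        scores = torch.sigmoid(self.router(x)).squeeze(-1)        # (B, S)
        depths = (scores * self.n_recursions).ceil().clamp(min=1)
        h = x
        for step in range(1, self.n_recursions + 1):
            active = depths &gt;= step                             # (B, S)
            if not active.any():
                break
            out = self.shared_block(h)
            # Only tokens still active at this depth are updated; the rest
            # keep their last hidden state (their key-value pairs would
            # likewise stop being recomputed and cached in MoR).
            h = torch.where(active.unsqueeze(-1), out, h)
        return h

model = MoRSketch()
print(model(torch.randn(2, 10, 64)).shape)  # torch.Size([2, 10, 64])
&lt;/pre&gt;&lt;br /&gt;
&lt;br /&gt;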
== Source ==&lt;br /&gt;
&lt;br /&gt;
[https://www.alphaxiv.org/abs/2507.10524 Source: alphaxiv]&lt;br /&gt;
&lt;br /&gt;
[[Catégorie:vocabulary]]&lt;/div&gt;</summary>
		<author><name>Pitpitt</name></author>
	</entry>
</feed>