<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="fr">
	<id>https://datafranca.org/wiki/index.php?action=history&amp;feed=atom&amp;title=InternVL</id>
	<title>InternVL - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://datafranca.org/wiki/index.php?action=history&amp;feed=atom&amp;title=InternVL"/>
	<link rel="alternate" type="text/html" href="https://datafranca.org/wiki/index.php?title=InternVL&amp;action=history"/>
	<updated>2026-04-09T17:46:05Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.39.5</generator>
	<entry>
		<id>https://datafranca.org/wiki/index.php?title=InternVL&amp;diff=117036&amp;oldid=prev</id>
		<title>Pitpitt: Created page with « ==under construction==  == Definition == XXXXXXXXX  == French == &#039;&#039;&#039;InternVL&#039;&#039;&#039;  == English == &#039;&#039;&#039;InternVL&#039;&#039;&#039;  A new family of open-source multimodal large language models that significantly advances versatility, reasoning, and efficiency. The models range from 1B to 241B parameters and achieve state-of-the-art performance among open-source models while narrowing the gap with commercial systems such as GPT-5.  InternVL3.5 achieves impressive per... »</title>
		<link rel="alternate" type="text/html" href="https://datafranca.org/wiki/index.php?title=InternVL&amp;diff=117036&amp;oldid=prev"/>
		<updated>2025-09-20T14:10:02Z</updated>

		<summary type="html">&lt;p&gt;Created page with « ==under construction==  == Definition == XXXXXXXXX  == French == &amp;#039;&amp;#039;&amp;#039;InternVL&amp;#039;&amp;#039;&amp;#039;  == English == &amp;#039;&amp;#039;&amp;#039;InternVL&amp;#039;&amp;#039;&amp;#039;  A new family of open-source multimodal large language models that significantly advances versatility, reasoning, and efficiency. The models range from 1B to 241B parameters and achieve state-of-the-art performance among open-source models while narrowing the gap with commercial systems such as GPT-5.  InternVL3.5 achieves impressive per... »&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;==under construction==&lt;br /&gt;
&lt;br /&gt;
== Definition ==&lt;br /&gt;
XXXXXXXXX&lt;br /&gt;
&lt;br /&gt;
== French ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;InternVL&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
== English ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;InternVL&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
A new family of open-source multimodal large language models that significantly advances versatility, reasoning, and efficiency. The models range from 1B to 241B parameters and achieve state-of-the-art performance among open-source models while narrowing the gap with commercial systems such as GPT-5.&lt;br /&gt;
InternVL3.5 achieves impressive performance across multiple benchmarks. The largest model, InternVL3.5-241B-A28B, attains state-of-the-art results among open-source models and narrows the performance gap with GPT-5 to just 3.9% on general multimodal tasks. On reasoning benchmarks, the models show substantial improvements, with InternVL3.5-8B achieving 73.4 on MMMU and InternVL3.5-241B-A28B reaching 77.7. The Cascade RL framework yields up to a 16.0% improvement in overall reasoning performance over its predecessor, InternVL3.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Source ==&lt;br /&gt;
&lt;br /&gt;
[https://huggingface.co/papers/2508.18265 Source: huggingface]&lt;br /&gt;
&lt;br /&gt;
[[Catégorie:vocabulary]]&lt;/div&gt;</summary>
		<author><name>Pitpitt</name></author>
	</entry>
</feed>