<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://datafranca.org/wiki/index.php?action=history&amp;feed=atom&amp;title=Botshit</id>
	<title>Botshit - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://datafranca.org/wiki/index.php?action=history&amp;feed=atom&amp;title=Botshit"/>
	<link rel="alternate" type="text/html" href="https://datafranca.org/wiki/index.php?title=Botshit&amp;action=history"/>
	<updated>2026-04-03T19:06:21Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.39.5</generator>
	<entry>
		<id>https://datafranca.org/wiki/index.php?title=Botshit&amp;diff=113485&amp;oldid=prev</id>
		<title>Pitpitt: Page created with « ==under construction==  == Definition == XXXXXXXXX  == French == &#039;&#039;&#039;Botshit&#039;&#039;&#039;  == English == &#039;&#039;&#039;Botshit&#039;&#039;&#039;    Advances in large language model (LLM) technology enable chatbots to generate and analyze content for our work. Generative chatbots do this work by ‘predicting’ responses rather than ‘knowing’ the meaning of their responses. This means chatbots can produce coherent-sounding but inaccurate or fabricated content, referred to as ‘hallucinations... »</title>
		<link rel="alternate" type="text/html" href="https://datafranca.org/wiki/index.php?title=Botshit&amp;diff=113485&amp;oldid=prev"/>
		<updated>2025-07-08T13:07:51Z</updated>

		<summary type="html">&lt;p&gt;Page créée avec « ==en construction==  == Définition == XXXXXXXXX  == Français == &amp;#039;&amp;#039;&amp;#039; Botshit&amp;#039;&amp;#039;&amp;#039;  == Anglais == &amp;#039;&amp;#039;&amp;#039;Botshit&amp;#039;&amp;#039;&amp;#039;    Advances in large language model (LLM) technology enable chatbots to generate and analyze content for our work. Generative chatbots do this work by ‘predicting’ responses rather than ‘knowing’ the meaning of their responses. This means chatbots can produce coherent sounding but inaccurate or fabricated content, referred to as ‘hallucinations... »&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;==under construction==&lt;br /&gt;
&lt;br /&gt;
== Definition ==&lt;br /&gt;
XXXXXXXXX&lt;br /&gt;
&lt;br /&gt;
== French ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Botshit&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
== English ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Botshit&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Advances in large language model (LLM) technology enable chatbots to generate and analyze content for our work. Generative chatbots do this work by ‘predicting’ responses rather than ‘knowing’ the meaning of their responses. This means chatbots can produce coherent-sounding but inaccurate or fabricated content, referred to as ‘hallucinations’. When humans use this untruthful content for tasks, it becomes what we call ‘botshit’. This article focuses on how to use chatbots for content-generation work while mitigating the epistemic risks (i.e., risks to the process of producing knowledge) associated with botshit.&lt;br /&gt;
&lt;br /&gt;
== Source ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Catégorie:ENGLISH]]&lt;/div&gt;</summary>
		<author><name>Pitpitt</name></author>
	</entry>
</feed>