People are increasingly concerned about whether the news stories they read were really written by a person. A new study looked into how labeling AI-written stories as such might affect readership.
We have all heard about the downsides of artificial intelligence, especially when it comes to AI-generated content. Hollywood writers went on strike a year ago over the possibility of being replaced by computers. Lawyers have even been caught using AI programs like ChatGPT from OpenAI to write their legal briefs, only to be exposed when the AI made mistakes.
And how do you even know whether what you are reading right now was written by a person or a computer program? Well, obviously an AI program would never be so critical of AI, would it? Then again, maybe it would, precisely so that the reader would not suspect the story was written by AI.
News consumers are wary of AI-generated headlines, often perceiving them as potentially inaccurate. With the proliferation of AI-generated content online, social media platforms have begun to label such content. Sacha Altay and Fabrizio Gilardi conducted two pre-registered online experiments involving 4,976 participants from the US and UK to investigate the impact of labeling headlines as AI-generated. Respondents evaluated 16 headlines, each of which was either true or false and either AI-generated or human-generated.
In the first study, participants were randomly assigned to one of four conditions: no headlines labeled as AI-generated, AI-generated headlines labeled as AI-generated, human-generated headlines labeled as AI-generated, or false headlines labeled as false. The results show that respondents rated headlines labeled as AI-generated as less accurate and were less willing to share them, regardless of whether the headlines were true or false, and regardless of whether they were created by humans or AI. The effect of labeling headlines as AI-generated was, however, three times smaller than the effect of labeling headlines as false.
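For readers who want the structure of the experiment laid out concretely, here is a minimal Python sketch of the headline grid and the four labeling conditions as described above. The identifiers, the even four-headlines-per-cell split, and the random-assignment code are illustrative assumptions, not the authors' actual materials.

```python
import itertools
import random

# 16 headlines crossing veracity (true/false) with source (AI/human).
# The even 4-per-cell split is an assumption made for illustration.
headlines = [
    {"id": idx, "veracity": veracity, "source": source}
    for idx, (veracity, source, _) in enumerate(
        itertools.product(["true", "false"], ["ai", "human"], range(4))
    )
]

# The four between-subjects conditions of the first study.
CONDITIONS = [
    "control",              # no headlines labeled
    "ai_labeled_as_ai",     # AI-generated headlines labeled "AI-generated"
    "human_labeled_as_ai",  # human-written headlines labeled "AI-generated"
    "false_labeled_false",  # false headlines labeled "false"
]

def label_for(headline, condition):
    """Return the label (if any) a headline carries under a condition."""
    if condition == "ai_labeled_as_ai" and headline["source"] == "ai":
        return "AI-generated"
    if condition == "human_labeled_as_ai" and headline["source"] == "human":
        return "AI-generated"
    if condition == "false_labeled_false" and headline["veracity"] == "false":
        return "false"
    return None  # control condition, or headline not targeted

# Random assignment of the 4,976 participants to conditions.
assignments = {pid: random.choice(CONDITIONS) for pid in range(4976)}
```

Each participant would then rate every headline's accuracy and their willingness to share it, with labels applied (or not) according to their assigned condition.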
To understand the mechanism behind this AI aversion, the authors experimentally manipulated the definition of "AI-generated" that participants were shown. They discovered that the aversion stems from the assumption that headlines labeled as AI-generated were produced entirely by AI, without any human oversight. While there is widespread support for labeling AI-generated content, the authors emphasize the importance of transparency about what these labels mean in order to avoid unintended negative consequences. They argue that to maximize the impact of such labels, false AI-generated content should be explicitly labeled as false, rather than solely as AI-generated.