
ChatGPT is making up fake Guardian articles. Here’s how we’re responding

SUMMARY

⁃ Article from the Guardian: a left-wing newspaper, directly concerned by the issue

⁃ The newspaper was alerted to a supposed Guardian article that seemed odd


⁃ It sounded real and appeared to be written by a real journalist
⁃ After a long investigation, they could find no trace of it

⁃ Reason: the article had been invented by ChatGPT


⁃ Its fluency and content made it completely believable

⁃ The main worry: the invention of sources, which is dangerous for trusted news organizations and journalists
⁃ It creates distrust and feeds conspiracy theories and skepticism

⁃ Huge popularity of ChatGPT: 100 million monthly users in January (a faster rise than TikTok) + heavily used by students for
assignments
⁃ A domino effect on leading companies (Microsoft, Google), each rolling out its own AI system
⁃ Although it might seem strange, the technology has been normalized very quickly

⁃ Even though it brings many advantages, it nevertheless poses a major danger to responsible
reporting by fostering misinformation

COMMENTARY
⁃ The emergence of AI and new technologies is changing the way we access the news.
⁃ A breakthrough that has become not only a useful tool for the media but also a harmful one.

To what extent has AI altered our perception of truth in the media?

I - A tool that has revolutionized our access to information

⁃ Today all the news is on demand, a click away


⁃ Huge diversity of platforms
⁃ AI = the decline of print newspapers in favor of digital news; journalistic tasks are now taken over by AI (10 times)

II - While the media used to be a vector of truth, recent advances have turned it into a vector of fiction

⁃ Means of distraction from the truth: clickbait, personalized ads, viral news promoted at the expense of relevant
stories
⁃ Fosters malicious acts and digital crimes: deepfakes, fake news
⁃ Creates distrust and increases skepticism
⁃ Impacts our behavior: we become “consumers of news”, preferring convenience over fact-checking

III - What are the solutions to contain it?

⁃ Digital media companies establish policies, e.g. Google and Facebook, to counter the spread of fake news.
⁃ Laws governing liability for fake publications: the Communications Decency Act (CDA) immunizes websites from liability
for content published by third parties
⁃ The only way to combat harmful forms of AI is to cultivate human intelligence: this requires awareness and
knowledge
⁃ It’s our responsibility to check and verify the reliability of sources

ChatGPT is making up fake Guardian articles. Here’s how we’re responding


The Guardian | Chris Moran | Thu 6 Apr 2023

Last month one of our journalists received an interesting email. A researcher had come across mention of a Guardian article, written
by the journalist on a specific subject from a few years before. But the piece was proving elusive on our website and in search. Had
the headline perhaps been changed since it was launched? Had it been removed intentionally from the website because of a problem
we’d identified? Or had we been forced to take it down by the subject of the piece through legal means?
The reporter couldn’t remember writing the specific piece, but the headline certainly sounded like something they would have written.
It was a subject they were identified with and had a record of covering. Worried that there may have been some mistake at our end,
they asked colleagues to go back through our systems to track it down. Despite the detailed records we keep of all our content, and
especially around deletions or legal issues, they could find no trace of its existence.

Why? Because it had never been written.

Luckily the researcher had told us that they had carried out their research using ChatGPT. In response to being asked about articles on
this subject, the AI had simply made some up. Its fluency, and the vast training data it is built on, meant that the existence of the
invented piece even seemed believable to the person who absolutely hadn’t written it.
Huge amounts have been written about generative AI’s tendency to manufacture facts and events. But this specific wrinkle – the
invention of sources – is particularly troubling for trusted news organisations and journalists whose inclusion adds legitimacy and
weight to a persuasively written fantasy. And for readers and the wider information ecosystem, it opens up whole new questions about
whether citations can be trusted in any way, and could well feed conspiracy theories about the mysterious removal of articles on
sensitive issues that never existed in the first place.

If this seems like an edge case, it’s important to note that ChatGPT, from a cold start in November, registered 100 million monthly
users in January. TikTok, unquestionably a digital phenomenon, took nine months to hit the same level. Since that point we’ve seen
Microsoft implement the same technology in Bing, putting pressure on Google to follow suit with Bard.
They are now implementing these systems into Google Workspace and Microsoft 365, which have a 90% plus share of the
market between them. A recent study of 1,000 students in the US found that 89% have used ChatGPT to help with a homework
assignment. The technology, with all its faults, has been normalised at incredible speed, and is now at the heart of systems that act as
the key point of discovery and creativity for a significant portion of the world.

It’s easy to get sucked into the detail on generative AI, because it is inherently opaque. The ideas and implications, already explored
by academics across multiple disciplines, are hugely complex, the technology is developing rapidly, and companies with huge existing
market shares are integrating it as fast as they can to gain competitive advantages, disrupt each other and above all satisfy
shareholders.

But the question for responsible news organisations is simple, and urgent: what can this technology do right now, and how can it
benefit responsible reporting at a time when the wider information ecosystem is already under pressure from misinformation,
polarisation and bad actors. (…)
