Information reckons with ChatGPT, fake news risk

The growing popularity of artificial intelligence tools such as ChatGPT, the software that has been talked about for weeks and in which Microsoft is investing heavily, is extending the debate to the world of information as well.

In some cases this technology may simplify tasks; in others it may increase the risk of fake breaking news.


Researchers at NewsGuard Technologies put the tool to the test on a sample of 100 already known hoaxes: for 80 of these, ChatGPT generated false narratives. Meanwhile, the website Cnet has discontinued its use of artificial intelligence for articles, while at a U.S. university ChatGPT passed a law school exam, demonstrating that these tools offer possibilities yet to be understood and managed.

What did researchers do

Researchers at NewsGuard Technologies tried to build conspiracy theories by relying on artificial intelligence (AI). In 80 percent of the cases, ChatGPT generated false and misleading claims on current topics including Covid-19 and Ukraine. "The results," they explain, "confirm the fears and concerns expressed by OpenAI itself (the company that created ChatGPT, ed.) about how the tool could be used if it fell into the wrong hands. To the eyes of those unfamiliar with the topics covered in this report, the findings could easily seem legitimate and even authoritative." However, NewsGuard verified that ChatGPT "has safeguards in place to prevent the spread of some examples of misinformation. For some hoaxes, it took as many as five attempts to lead the chatbot to provide incorrect information."

The Associated Press case

Several newsrooms have been using automation for some time. The Associated Press uses artificial intelligence to produce sports stories based on data and models; Dow Jones, Bloomberg and Reuters use it to streamline news coverage of corporate earnings and the stock market. But now that artificial intelligence has become so advanced and accessible, notes the Axios website, it has become more difficult for newsrooms to draw the line between using AI and over-relying on the technology. The technology site Cnet, for example, announced a few days ago that it was pausing experiments with AI after being accused of poor accuracy in some articles written with this very technology. Meanwhile, ChatGPT is being put to the test in several fields. At a U.S. law school, it passed law exams, and the results were good enough that some professors said the system could lead to widespread cheating and even spell the end of traditional teaching methods.

"We need to be able to understand the complexities of the world we are entering: if used well, AI can do wonderful things, but if used incorrectly or for cheating, it can generate great difficulties,"

– stresses Guido Di Fraia, pro-rector of Milan's Iulm University, which just today inaugurated a new artificial intelligence laboratory.