'Garbage in, garbage out': AI fails to debunk disinformation, study finds
When it comes to combating disinformation ahead of the U.S. presidential elections, artificial intelligence and chatbots are failing, a media research group has found.
The latest audit by the research group NewsGuard found that generative AI tools struggle to effectively respond to false narratives.
In its latest audit of 10 leading chatbots, compiled in September, NewsGuard found that AI will repeat misinformation 18% of the time and offer a nonresponse 38.33% of the time — leading to a “fail rate” of almost 40%, according to NewsGuard.
“These chatbots clearly struggle when it comes to handling prompt inquiries related to news and information,” said McKenzie Sadeghi, the audit’s author. “There's a lot of sources out there, and the chatbots might not be able to discern between which ones are reliable versus which ones aren't.”
NewsGuard has a database of false news narratives that circulate, encompassing global wars and U.S. politics, Sadeghi told VOA.
Every month, researchers feed trending false narratives into leading chatbots in three different forms: innocent user prompts, leading questions and “bad actor” prompts. From there, the researchers measure if AI repeats, fails to respond or debunks the claims.
AI repeats false narratives mostly in response to bad actor prompts, which mirror the tactics used by foreign influence campaigns to spread disinformation. Around 70% of the instances where AI repeated falsehoods were in response to bad actor prompts, as opposed to leading prompts or innocent user prompts.
Foreign influence campaigns are able to take advantage of such flaws, according to the Office of the Director of National Intelligence. Russia, Iran and China have used generative AI to “boost their respective U.S. election influence efforts,” according to an intelligence report released last month.
As an example of how easily AI chatbots can be misled, Sadeghi cited a NewsGuard study in June that found AI would repeat Russian disinformation if it “masqueraded” as coming from an American local news source.
From myths about migrants to falsehoods about FEMA, the spread of disinformation and misinformation has been a consistent theme throughout the 2024 election cycle.
“Misinformation isn’t new, but generative AI is definitely amplifying these patterns and behaviors,” Sejin Paik, an AI researcher at Georgetown University, told VOA.
Because the technology behind AI is constantly changing and evolving, it is often unable to detect erroneous information, Paik said. This leads to not only issues with the factuality of AI’s output, but also the consistency.
NewsGuard also found that two-thirds of “high quality” news sites block generative AI models from using their media coverage. As a result, AI often has to learn from lower-quality, misinformation-prone news sources, according to the watchdog.
This can be dangerous, experts say. Much of the non-paywalled media that AI trains on is either “propaganda” or “deliberate strategic communication,” media scholar Matt Jordan told VOA.
“AI doesn't know anything: It doesn't sift through knowledge, and it can't evaluate claims,” Jordan, a media professor at Penn State, told VOA. “It just repeats based on huge numbers.”
AI has a tendency to repeat “bogus” news because statistically, it tends to be trained on skewed and biased information, he added. He called this a “garbage in, garbage out model.”
NewsGuard aims to set the standard for measuring accuracy and trustworthiness in the AI industry through monthly surveys, Sadeghi said.
The sector is growing fast, even as issues around disinformation are flagged. The generative AI industry has experienced monumental growth in the past few years. OpenAI’s ChatGPT currently reports 200 million weekly users, more than double from last year, according to Reuters.
The growth in popularity of these tools leads to another problem in their output, according to Anjana Susarla, a professor in Responsible AI at Michigan State University. Since there is such a high quantity of information going in — from users and external sources — it is hard to detect and stop the spread of misinformation.
Many users are still willing to believe the outputs of these chatbots are true, Susarla said.
“Sometimes, people can trust AI more than they trust human beings,” she told VOA.
The solution to this may be bipartisan regulation, she added. She hopes that the government will encourage social media platforms to regulate malicious misinformation.
Jordan, on the other hand, believes the solution is with media audiences.
“The antidote to misinformation is to trust in reporters and news outlets instead of AI,” he told VOA. “People sometimes think that it's easier to trust a machine than it is to trust a person. But in this case, it's just a machine spewing out what untrustworthy people have said.”
Source: voanews.com/Jocelyn Mintz
Trending News
Kofi Job Construction CEO donates to 1,000 widows and vulnerable individuals
00:51Zoomlion employees honoured for dedication at end-of-year party
01:01ORAL Is not a witch-hunting committee – Mahama
15:59Ghanaian citizen sues Wesley Girls’ SHS for allegedly denying Muslim students right to practise their religion
01:43#2024Polls: Patrick Yaw Boamah secures victory in Okaikwei Central after EC's re-collation
00:51#2024 Polls: EC nullifies Parliamentary results for Dome Kwabenya, 2 others
16:48Wisdom, strength, and God's blessings for our new leaders – Akufo-Addo congratulates Mahama, MPs-elect
01:27Akufo-Addo blames NDC leadership for post-election chaos
00:42Support us in carrying out our mandate – EC to citizens
15:31Take firm action against threats to our country's stability, harmony – Akufo-Addo to Security Services
01:14