Recently, an AI-generated news story about the resignation of SEC Chairman Gary Gensler spread like wildfire across the internet. The story originated from a single source that lacked authority. However, given the crypto industry's prevailing hostility toward Gensler, the news gained widespread attention, and some news outlets even reported it as true.
Contrary to the initial reports, the news was later confirmed to be fake. Charles Gasparino, a FOX Business Network (FBN) correspondent, unequivocally denied the resignation rumors. An analysis of the alleged story's text with ZeroGPT, an AI-generated-text detector, indicated that 96.8% of the content had been produced by an artificial intelligence tool.
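Detectors like ZeroGPT return a "percent AI-generated" score that readers then interpret against some cutoff. The sketch below is purely illustrative: the function name, the 50% threshold, and the verdict labels are assumptions for the example, not ZeroGPT's actual API or methodology.

```python
# Illustrative only: how a "percent AI-generated" score, like the 96.8%
# figure reported for the Gensler story, might be mapped to a verdict.
# The threshold and labels are assumptions, not ZeroGPT's real behavior.

def classify_detector_score(ai_percentage: float, threshold: float = 50.0) -> str:
    """Map a detector's 'percent AI-generated' score to a rough verdict."""
    if not 0.0 <= ai_percentage <= 100.0:
        raise ValueError("score must be between 0 and 100")
    if ai_percentage >= threshold:
        return "likely AI-generated"
    return "likely human-written"

# The score cited for the fake Gensler article:
print(classify_detector_score(96.8))  # likely AI-generated
```

Even with such a score in hand, the verdict is probabilistic; it flags text for scrutiny but does not replace source verification.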
The challenge lies in AI tools' inability to assess the reliability of sources. When these tools come across information from sources they perceive as reliable, they often treat it as true without verifying the source or seeking confirmation, as a competent journalist would. In this case, the lack of official confirmation should have raised suspicions about the story's veracity: the SEC could hardly conceal the resignation of its chairman without serious repercussions.
Some news platforms did label the story an "indiscretion," implying it was unconfirmed. However, the critical error was the failure to verify the original source. The source was not individuals with factual knowledge but AI-based text-generation software. Hence, it was not a rumor but simply fake news produced by poorly programmed AI software that failed to draw exclusively from reliable sources.
The question arises as to who fabricated this fake news. While the AI software played a role in generating the article, it likely did not invent the story itself. The software may have written the text without specifying that the source was unreliable. It is also possible that the original fabrication was the work of a satirical author, and the AI software failed to distinguish satire from reality.
Experienced journalists recognized that the story lacked confirmation and doubted it. The baseless allegations emerged from an article on CryptoAlert and gained traction through dubious Twitter accounts like WhaleChart, neither of which is known for credibility. Only those unfamiliar with reliable verification methods, including the AI software itself, fell for the false news.
This incident raises concerns about the potential impact of AI-generated fake news. It poses a significant problem for non-experts, as most people lack the means to verify the authenticity of news stories independently. However, experienced journalists, accustomed to distinguishing fabricated from confirmed news, are less likely to be affected.
Unfortunately, the quality of journalism today often falls short, with many individuals spreading unverified news. To guard against AI-generated misinformation, it is crucial to rely only on reliable sources and to disregard information whose provenance cannot be established.