Thursday, August 24, 2023

Over 1,000 AI-Powered Spam Bots Uncovered in Scam Operation; Regulatory Challenges Ensue

People have discovered another way to use ChatGPT, the AI chatbot. Unfortunately, some misuse it to create spam and fake content on social media. A recent study from Indiana University’s Observatory on Social Media examines how malicious actors use ChatGPT. The researchers, Kai-Cheng Yang and Filippo Menczer, found that ChatGPT’s ability to produce trustworthy-looking text is being exploited to run groups of automated accounts, known as “botnets,” on platforms such as X (formerly Twitter), where they spread misleading information. The study, shared last month, sheds light on this concerning issue.

Botnets are networks of numerous malicious bot accounts that run spam campaigns on social media platforms, often evading existing anti-spam filters. These networks serve various purposes; in this case, they were promoting fraudulent cryptocurrencies and NFTs.

The bot accounts in this scheme work to lure individuals into investing in fraudulent cryptocurrencies and even resort to stealing from their legitimate crypto wallets. In one investigation led by Yang and Menczer, a network of more than 1,000 active bots was identified on X. These bots interacted with one another, including replying to each other’s posts with ChatGPT-generated text, and they frequently used selfies stolen from real human profiles to fabricate fake personas.

The advent of social media has granted malicious entities an inexpensive means to access a broad audience and capitalize on fabricated or deceptive content. Menczer emphasized that new AI tools have further exacerbated this issue, making it even more cost-effective to generate substantial volumes of false yet plausible content. Consequently, this has overwhelmed social media platforms’ already fragile moderation mechanisms.

Emergence of Deceptive AI-Powered Content Ecosystem

Over the last few years, social media bots—accounts controlled partially or entirely by software—have been consistently deployed to amplify misinformation about various events, from elections to public health emergencies such as the COVID-19 pandemic. Previously, these bots were easy to spot because of their mechanical behaviour and unconvincing artificial identities.

However, the introduction of generative AI tools like ChatGPT has revolutionized this landscape by enabling the rapid creation of text and media that closely resemble human-generated content. Yang told Insider, “The advancement of AI tools will distort the idea of online information permanently.”

The AI bots discovered by the researchers were primarily engaged in sharing deceptive information about fraudulent cryptocurrency and NFT campaigns. They also promoted suspicious websites covering similar topics, suggesting those sites might have been written with tools similar to ChatGPT.

In addition to their presence on social media, ChatGPT-like tools have been utilized to create low-quality news websites that often spread false information. NewsGuard, a private company that assesses the credibility of news and information websites, has identified over 400 AI-generated websites during its ongoing evaluations since April.


Challenges in Detecting AI-Generated Content as Technology Advances

These websites generate revenue through automated advertising technology, which places ads on websites regardless of their credibility or nature. Both NewsGuard and the Indiana University researchers independently identified AI-generated spam content by looking for a telltale characteristic currently common among chatbots.

When ChatGPT encounters a prompt that violates OpenAI’s policies or involves private information, it generates a predefined response such as, “I’m sorry, but I cannot comply with this request.” Experts search for these canned responses in content posted by automated accounts, whether on a website or within a tweet, and use them as a starting point to map out the wider spam network.
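As a rough illustration of how such a search might work, the short Python sketch below scans collected posts for known canned refusal phrases. The phrase list, data format, and account-flagging logic are illustrative assumptions, not the researchers’ actual tooling.

```python
# Minimal sketch of refusal-phrase screening, assuming posts have already
# been collected as (account, text) pairs. The phrase list and data shapes
# are illustrative only; the study's actual pipeline is not public.
from collections import defaultdict

# Canned chatbot refusals that sometimes leak into bot posts verbatim.
REFUSAL_PHRASES = [
    "i'm sorry, but i cannot comply with this request",
    "as an ai language model",
]

def flag_suspect_accounts(posts):
    """Return accounts whose posts contain known chatbot refusal phrases.

    posts: iterable of (account_id, text) tuples.
    """
    hits = defaultdict(list)
    for account_id, text in posts:
        lowered = text.lower()
        for phrase in REFUSAL_PHRASES:
            if phrase in lowered:
                hits[account_id].append(text)
                break
    return dict(hits)

# Example usage with made-up data:
sample = [
    ("@coin_bot_42", "I'm sorry, but I cannot comply with this request."),
    ("@real_user", "Great thread about gardening!"),
]
print(flag_suspect_accounts(sample))  # flags only @coin_bot_42
```

A hit on a single account can then be used to trace the accounts it interacts with, which is how one leaked phrase can expose a much larger network.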

However, there is growing concern among experts that as chatbots improve their ability to imitate humans, these distinctive signs will fade away. This could make it significantly more challenging to identify content that AI generates.

Wei Xu, a professor of computer science at the Georgia Institute of Technology, mentioned to Insider that the ability to detect and filter AI-generated content will diminish as malicious users take advantage of this technology. This might result in a troublesome cycle where AI-generated content becomes progressively harder to distinguish.

Xu’s concerns might soon become a reality. Europol, the European Union’s law enforcement agency, has predicted that as much as 90% of internet content could be generated by AI by 2026.

Xu emphasized that in the absence of proper regulations, as long as more substantial incentives and minimal costs exist for producing AI-generated content, malicious actors will consistently outpace those attempting to counter it.

Challenges in Content Detection and the Quest for Effective Safeguards

Recent research conducted by European scholars revealed that the current AI content detection tools, such as ZeroGPT and the OpenAI AI Text Classifier, lack reliability. These tools often struggle to accurately differentiate between content created by humans and that generated by AI. The study highlighted that OpenAI’s detection service had such a low level of accuracy that the company chose to discontinue it.

In July, the Biden administration announced that major AI players, including Google, Microsoft, and OpenAI, had provided assurances to the White House regarding their efforts to establish measures that mitigate AI-related risks.

According to the White House, one such precaution involved affixing a concealed label to AI-generated content to aid individuals in distinguishing it from human-generated content. However, Menczer, who co-authored the research at Indiana University, doubted that these safeguards would effectively deter malicious actors who disregard them.

Yang suggested that a more dependable way to identify bots is to track suspect accounts’ activity patterns on social media, including whether they have a history of spreading false claims and how diverse the language and content of their previous posts are.
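One simple, hypothetical proxy for that kind of language-diversity check is sketched below in Python: it scores how repetitive an account’s recent posts are using a distinct-word ratio. The metric, sample data, and any threshold a platform might apply are assumptions for illustration, not the method used in the study.

```python
# Rough sketch of one behavioural signal: how repetitive an account's
# recent posts are. The distinct-word ratio is an illustrative stand-in
# for the richer language- and content-diversity checks Yang describes.
import re

def lexical_diversity(posts):
    """Ratio of unique words to total words across an account's posts."""
    words = []
    for text in posts:
        words.extend(re.findall(r"[a-z']+", text.lower()))
    return len(set(words)) / len(words) if words else 0.0

bot_like = [
    "Buy $SCAMCOIN now, 100x guaranteed!",
    "Buy $SCAMCOIN now, 100x guaranteed!!",
    "Buy $SCAMCOIN today, 100x guaranteed!",
]
human_like = [
    "Tried a new ramen place downtown, highly recommend.",
    "Anyone else watching the match tonight?",
    "Finally finished that book I kept putting off.",
]

print(lexical_diversity(bot_like))    # low ratio: repetitive, bot-like
print(lexical_diversity(human_like))  # higher ratio: varied, human-like
```

In practice such a score would be only one signal among many, combined with posting cadence, interaction patterns, and fact-check history before an account is treated as suspect.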

Yang added that people need to adopt a more sceptical stance toward anything they encounter online. He emphasized that generative AI will significantly influence the overall information landscape.
