In May of this year, Indiana University researchers discovered a ChatGPT-based botnet on the social network X (formerly Twitter). The botnet, dubbed Fox8 after its association with cryptocurrency sites of the same name, consisted of 1,140 accounts. Many of them used ChatGPT to compose posts and to reply to one another. The auto-generated content encouraged unsuspecting people to click on links to websites promoting cryptocurrencies.
Experts are sure that the Fox8 botnet is only the tip of the iceberg, considering how popular large language models and chatbots have become. “The only reason we noticed this particular botnet is that it was sloppy,” said Indiana University professor Filippo Menczer. The researchers discovered the botnet by searching the platform for the telltale phrase “As an AI language model…”, which ChatGPT sometimes produces when declining to answer sensitive prompts. They then manually analyzed the accounts and identified those controlled by bots.
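A minimal sketch of that kind of phrase search, in Python. Everything here is illustrative: the sample posts, account handles, and the flag_suspect_posts helper are assumptions, since the article does not describe the researchers’ actual tooling.

```python
import re

# The self-revealing phrase the researchers searched for: bots that paste
# ChatGPT refusals verbatim expose themselves with boilerplate like this.
TELLTALE = re.compile(r"\bas an ai language model\b", re.IGNORECASE)

def flag_suspect_posts(posts):
    """Return the (account, text) pairs whose text contains the phrase.

    `posts` is assumed to be an iterable of (account_id, text) pairs
    gathered elsewhere, e.g. via the platform's search API.
    """
    return [(account, text) for account, text in posts if TELLTALE.search(text)]

# Hypothetical sample data for illustration only.
sample = [
    ("@coin_fan_42", "As an AI language model, I cannot endorse this token."),
    ("@human_user", "Just had a great coffee this morning!"),
]

for account, text in flag_suspect_posts(sample):
    print(f"suspect account {account}: {text[:60]}")
```

As the article notes, the flagged accounts still had to be reviewed manually; a phrase match alone only narrows the search.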
Despite its “carelessness,” the botnet published many persuasive messages promoting cryptocurrency sites. The apparent ease with which OpenAI’s AI can be exploited for fraud suggests that more advanced botnets could use AI-based chatbots in more sophisticated ways without being detected. “No good villain would make a mistake like that,” jokes Menczer.
ChatGPT and other chatbots use large language models to generate text in response to a prompt. Given enough training data, massive processing power, and feedback from human testers, these bots can respond remarkably well to a wide variety of inputs. But they can also produce hate speech, express social biases, fabricate information, and help hackers create malware.
“It fools both the platform and the users,” says Menczer of the ChatGPT-based botnet. If the social media algorithm detects that a post is getting a lot of engagement (even if that activity comes from other bots), it will show the post to more people. Governments and organizations looking to conduct disinformation campaigns are likely already developing or deploying such tools, Menczer said.
Researchers have long been concerned that the technology behind ChatGPT could pose a misinformation risk, and OpenAI even delayed the release of a previous version of the system because of these concerns. William Wang, a professor at the University of California, Santa Barbara, believes that many spam pages are already generated automatically, and says that as AI improves, it is becoming increasingly difficult for humans to detect such material.
Wang’s lab has developed a method for automatically recognizing text generated by ChatGPT. However, running it is expensive because it relies on the OpenAI API, and the underlying model is constantly being updated. X could be a fertile testing ground for such tools. Menczer notes that malicious bot activity has increased dramatically of late, and that the sharply higher prices for the social network’s API have made it more difficult for researchers to study the problem.
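Wang’s actual detector is not described in the article. As a rough illustration of one common family of approaches (scoring how predictable a text looks to a language model), here is a sketch using a small open model via the Hugging Face transformers library instead of the OpenAI API; it is not Wang’s implementation, and the threshold is a made-up placeholder.

```python
import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# A small open language model stands in for the detector's scoring model.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2; machine-generated text often
    scores lower (more predictable) than human-written text."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    return math.exp(out.loss.item())

# THRESHOLD is a placeholder; a real detector would calibrate it on
# labeled samples of human and machine text.
THRESHOLD = 40.0

def looks_machine_generated(text: str) -> bool:
    return perplexity(text) < THRESHOLD

print(looks_machine_generated("As an AI language model, I cannot provide financial advice."))
```

Such statistical detectors also illustrate the cost problem the article mentions: every text to be checked requires a full pass through a language model, and the signal degrades each time the generating model improves.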
The Fox8 botnet was only taken down after the researchers published a paper about it in July, even though they had reported it back in May. Menczer says his group has since stopped notifying X about its botnet research. “They don’t react very much,” he says. “They don’t really have the staff.”
The usage policies of AI developers prohibit employing their chatbots for fraud or misinformation. OpenAI has not yet commented on the botnet’s use of ChatGPT.