Summary: This article discusses the use of artificial intelligence and machine learning in cyberwarfare and fraud management, specifically focusing on a software that generates social media bots.
Threat Actor: Meliorator Software
Victim: Social media platforms and users
Key Points:
- Meliorator Software is a tool that generates social media bots, which can be used for various purposes including spreading disinformation and manipulating public opinion.
- The use of AI and machine learning in cyberwarfare and fraud management highlights the evolving tactics and strategies employed by threat actors.
Artificial Intelligence & Machine Learning, Cyberwarfare / Nation-State Attacks, Fraud Management & Cybercrime
Meliorator Software Generates Social Media Bots
U.S. federal authorities seized two web domains they said supported an artificial intelligence-driven disinformation network run by the Russian domestic intelligence agency and affiliates of a state-run propaganda broadcaster.
The Department of Justice searched nearly 1,000 accounts on social media platform X, formerly Twitter. The Kremlin used the accounts to spread pro-Moscow propaganda generated by Meliorator, a bespoke AI application developed in Russia to create bot social media accounts and the disinformation they disseminate. The tool is capable of responding to direct messages in real time, the FBI said in an advisory issued jointly with government agencies from the Netherlands and Canada.
As of June, Meliorator is capable of running only on X, although the advisory warns that its developers likely intend to extend its reach to other social media networks.
Cyberthreat analysts warn that Russian disinformation production has steadily grown over the last decade and is aimed at the United States and members of international alliances such as NATO. A 2022 RAND analysis of Russian disinformation says that, contrary to popular portrayals, Russian disinformation operations are neither well-organized nor well-resourced. But even a modest investment can result in disinformation reaching “broad and varied international audiences,” it says. NewsGuard, the rating system for news and information websites, recently found that popular AI chatbots have ingested Russian disinformation and are prone to regurgitating false narratives (see: Popular Chatbots Spout Russian Misinformation, Finds Study).
“Russia intended to use this bot farm to disseminate AI-generated foreign disinformation, scaling their work with the assistance of AI to undermine our partners in Ukraine and influence geopolitical narratives favorable to the Russian government,” said FBI Director Christopher Wray.
An affidavit filed by an FBI agent outlines the investigation and says Meliorator is the brainchild of an executive at state-controlled TV network RT, formerly Russia Today. Less than a year into the software’s development, the executive began working with an officer in the Federal Security Service, Russia’s successor to the Soviet Union’s KGB. The disinformation operation registered at least 968 accounts on X between June 2022 and March 2024.
The affidavit identifies the lead developer of Meliorator only as another RT employee. The FBI followed a digital trail to that individual through a string of Gmail addresses used to illegally register the domains behind the bot social media accounts. The trail, created through secondary emails given to Google as account-recovery contacts, eventually led to an account created in 2015 with a yandex.ru email address as a backup and a telephone number containing the Russian international country code. The developer supplied Google with payment information from Qiwi – Russia’s equivalent to PayPal – and a Russian tax region.
The FBI said the developer at one point apparently forgot to use a VPN when accessing a Gmail address, revealing a real IP address that resolved to a Moscow-based telecommunications provider.
Meliorator operators focused on avoiding automated detection. The tool can create three different types of accounts that vary in richness of detail. Operators use accounts that contain AI-generated profile photos and biographical data including name and location to initially distribute disinformation. Another account type – created with data scraped online to create a persona with apparently no AI ties – “is used to mirror and amplify disinformation shared by bot and non-bot accounts,” the advisory states. A third account archetype contains little, if any, information in its profile, and its role is restricted to liking disinformation messaging.
The bot accounts typically avoid interacting with accounts that have more than 100,000 followers, in order to blend into the larger social media environment. Meliorator auto-assigns a proxy IP address to each account based on where the bot is supposedly located.
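The detection-evasion rules described above can be illustrated with a short, hypothetical sketch. The source does not describe Meliorator's actual code; the class names, the proxy-pool mapping, and the function signatures below are all illustrative assumptions. Only the general behavior (a 100,000-follower interaction cap and location-matched proxy assignment) comes from the advisory:

```python
from dataclasses import dataclass

# Threshold described in the advisory: bots avoid very large accounts.
FOLLOWER_INTERACTION_CAP = 100_000

@dataclass
class BotAccount:
    # The three archetypes the advisory describes: a rich AI-generated
    # persona, a scraped-data amplifier, and a minimal "liker" profile.
    archetype: str            # "persona", "amplifier", or "liker"
    claimed_location: str     # location stated in the bot's profile
    proxy_ip: str = ""        # assigned to match claimed_location

def assign_proxy(account: BotAccount, proxy_pool: dict[str, str]) -> BotAccount:
    # Meliorator reportedly auto-assigns a proxy IP matching the bot's
    # claimed location; proxy_pool here is a hypothetical mapping.
    account.proxy_ip = proxy_pool.get(account.claimed_location, "0.0.0.0")
    return account

def should_interact(target_follower_count: int) -> bool:
    # Bots reportedly skip high-follower accounts to blend in.
    return target_follower_count <= FOLLOWER_INTERACTION_CAP
```

For example, a "persona" bot claiming a U.S. location would be routed through a U.S. proxy and would engage a 5,000-follower account but not a 250,000-follower one.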
Source: https://www.bankinfosecurity.com/us-busts-russian-ai-driven-disinformation-operation-a-25729