New research from Recorded Future's Insikt Group outlines a collaborative investigation by threat intelligence analysts and R&D engineers into the potential malicious uses of artificial intelligence (AI) by threat actors. The team experimented with a variety of AI models, including large language models, multimodal image models, and text-to-speech models, without any fine-tuning or additional training, to approximate the resources threat actors might realistically have.
Their findings suggest that in 2024, the most likely malicious applications of AI will be targeted deepfakes and influence operations. Deepfakes created with open-source tools can be used to impersonate executives, while AI-generated audio and video can enhance social engineering campaigns. The cost of producing content for influence operations is expected to drop significantly, making it easier to clone websites or create fake media outlets. AI can also assist malware developers in evading detection and help threat actors with reconnaissance, such as identifying vulnerable industrial systems or locating sensitive facilities.
Screenshot from a spoofed conference call impersonating Recorded Future executives (Source: Recorded Future)
The report also highlights current limitations: openly available open-source models are only nearly as effective as state-of-the-art models, and the security measures built into commercial AI solutions must first be bypassed. It anticipates significant investment in deepfake and generative AI technologies across multiple sectors, which will enhance the capabilities of threat actors regardless of their resource level and increase the number of organizations at risk.
Organizations are advised to prepare for these threats by treating their executives' voices and likenesses, their website and branding, and their public imagery as part of their attack surface. They should also anticipate more sophisticated uses of AI, such as self-augmenting malware that evades detection, which will necessitate stealthier detection methods.
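As one small illustration of treating website and branding as attack surface (a sketch under assumptions, not a technique prescribed in the report), the snippet below flags candidate domains that sit within a small edit distance of a legitimate domain, a common first pass for spotting cloned or spoofed sites. The domain names and threshold are hypothetical.

```python
# Minimal sketch: flag domains that look like small edits of a legitimate one.
# Domain names and the threshold of 3 are illustrative assumptions only.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

legitimate = "recordedfuture.com"  # hypothetical brand domain
candidates = ["record3dfuture.com", "recordedfuture.net", "example.org"]

for domain in candidates:
    distance = edit_distance(legitimate, domain)
    if distance <= 3:  # a few character edits suggest a possible lookalike
        print(f"Review {domain} (distance {distance} from {legitimate})")
```

In practice this kind of check would feed a human review queue rather than an automated block list, since many near-matches are benign.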
Key findings from specific use cases include:
- Deepfakes impersonating executives can be generated with open-source tools and require training clips shorter than one minute, though challenges remain, such as bypassing consent mechanisms for live cloning.
- AI can enable effective disinformation campaigns and assist in cloning legitimate websites, though human intervention is required for creating believable spoofs.
- Generative AI can help malware evade detection by altering source code, though maintaining functionality post-obfuscation remains a challenge.
- Multimodal AI can process public imagery for reconnaissance purposes, but translating this data into actionable intelligence remains difficult without human analysis (a brief illustration follows this list).
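To make that last finding concrete, here is a minimal sketch, assuming commodity open-source tooling rather than the Insikt Group's actual workflow, that runs a publicly available image-captioning model over a single photo. It shows the kind of coarse description multimodal AI can extract from public imagery; turning such output into actionable intelligence still requires a human analyst. The model choice and file path are illustrative.

```python
# Illustrative only: caption a local photo with an open-source vision-language
# model (BLIP via Hugging Face transformers). Requires `transformers`, `torch`,
# and `Pillow`; the image filename is a hypothetical placeholder.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

model_id = "Salesforce/blip-image-captioning-base"
processor = BlipProcessor.from_pretrained(model_id)
model = BlipForConditionalGeneration.from_pretrained(model_id)

image = Image.open("facility_photo.jpg").convert("RGB")  # hypothetical public image
inputs = processor(images=image, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=30)
caption = processor.decode(output_ids[0], skip_special_tokens=True)

print(caption)  # a one-line scene description; verification and context still fall to a human
```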
To read the entire analysis, click here to download the report as a PDF.
Source: Original Post