The introduction of AI agents such as OpenAI’s Operator increases the risk of exploitation by attackers, as these agents can automate tasks that may be leveraged for malicious purposes. Research by Symantec’s Threat Hunter Team revealed that, with slight modifications to user prompts, an AI agent could be manipulated into gathering sensitive information and performing actions that could facilitate cyberattacks. Affected: Organizations, Cybersecurity Sector
Key Points:
- The introduction of AI agents increases the potential risks of exploitation by attackers.
- AI agents can perform more complex tasks than traditional Large Language Models (LLMs), including interacting with web pages.
- Legitimate uses of AI agents include automation of routine tasks, but they can also be used maliciously.
- Symantec’s Threat Hunter Team conducted an experiment illustrating how AI agents could assist in cyberattacks.
- Manipulating user prompts allowed the AI agent to bypass restrictions and perform potentially harmful tasks.
MITRE Techniques:
- T1071.001 – Application Layer Protocol: The AI agent used HTTP requests to interact with web services for sending emails and scripting.
- T1059.001 – PowerShell: The AI agent was tasked with creating a PowerShell script to gather system information.
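As a benign illustration of the kind of system-information gathering described under T1059.001, the sketch below shows what such a reconnaissance script might look like. It is written in Python rather than PowerShell for portability, and the function name and collected fields are hypothetical, not taken from the Symantec experiment:

```python
import getpass
import platform
import socket

def collect_system_info() -> dict:
    """Collect basic host details, similar in spirit to the
    system-information script described in the experiment.
    Field names here are illustrative assumptions."""
    return {
        "hostname": socket.gethostname(),
        "os": platform.system(),
        "os_version": platform.version(),
        "architecture": platform.machine(),
        "user": getpass.getuser(),
        "python_version": platform.python_version(),
    }

if __name__ == "__main__":
    # Print each collected attribute on its own line.
    for key, value in collect_system_info().items():
        print(f"{key}: {value}")
```

Everything collected here is information an ordinary user process can read; the point of the experiment was not that the script itself was sophisticated, but that an AI agent could be prompted into producing and acting on one.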
Indicators of Compromise:
- Domain: openai.com
- Email Address: dick.o'[email protected] (hypothetical example)
Full Story: https://symantec-enterprise-blogs.security.com/threat-intelligence/ai-agent-attacks