AI Security Report 2024: Explore Key AI Trends and Risks

Key Findings

  • Explosive AI growth: Enterprise AI/ML transactions surged by 595% between April 2023 and January 2024.
  • Concurrent rise in blocked AI traffic: Even as enterprise AI usage accelerates, enterprises block 18.5% of all AI transactions, a 577% increase in blocked transactions that signals rising security concerns.
  • Primary industries driving AI traffic: Manufacturing accounts for 21% of all AI transactions in the Zscaler security cloud, followed by Finance and Insurance (20%) and Services (17%).
  • Clear AI leaders: the most popular AI/ML applications for enterprises by transaction volume are ChatGPT, Drift, OpenAI, Writer, and LivePerson.
  • Global AI adoption: the top five countries generating the most enterprise AI transactions are the US, India, the UK, Australia, and Japan.
  • A new AI threat landscape: AI is empowering threat actors in unprecedented ways, including AI-driven phishing campaigns, deepfake and social engineering attacks, polymorphic ransomware, enterprise attack surface discovery, exploit generation, and more.

The new era of AI-driven threats

The risks of AI are bidirectional: inside the enterprise, rapidly expanding AI usage raises data protection concerns, reflected in the 18.5% of AI transactions that enterprises now block, while from outside enterprise walls, businesses face a continuous wave of threats that now includes AI-driven attacks. The reality is that virtually every type of existing threat can be aided by AI, which translates to attacks launched at unprecedented speed, sophistication, and scale. Meanwhile, the future possibilities are limitless, meaning that enterprises face an unknown set of unknowns when it comes to AI-driven cyberattacks.

ThreatLabz provides insights into numerous evolving threat types, including:

  • AI impersonation: AI deepfakes, sophisticated social engineering attacks, misinformation, and more.
  • AI-generated phishing campaigns: end-to-end campaign generation, along with a ThreatLabz case study in creating a phishing login page using ChatGPT in just seven simple prompts.
  • AI-driven malware and ransomware: how threat actors are leveraging AI automation across numerous stages of the attack chain.
  • Using ChatGPT to generate vulnerability exploits: ThreatLabz shows how easy it is to create exploit PoCs, in this case for Log4j (CVE-2021-44228) and Apache HTTP Server path traversal (CVE-2021-41773).
  • Dark chatbots: diving into the proliferation of dark web GPT models like FraudGPT and WormGPT that lack security guardrails. 
  • And much more…
