GenAI models are easily compromised – Help Net Security

Summary: A recent survey reveals that 95% of cybersecurity experts lack confidence in the security measures for Generative AI (GenAI), while red team data indicates that prompt attacks can easily manipulate these models. As businesses increasingly adopt GenAI, they expose themselves to significant vulnerabilities that traditional cybersecurity measures fail to address.

Threat Actor: Anyone | prompt attacks
Victim: Businesses | businesses using GenAI

Key Point :

  • 95% of cybersecurity experts express low confidence in GenAI security measures.
  • 200,000 players successfully manipulated the AI educational game Gandalf, highlighting the ease of exploiting GenAI vulnerabilities.
  • 35% of respondents fear LLM reliability and accuracy, while 34% are concerned about data privacy and security.
  • Only 22% of organizations have adopted AI-specific threat modeling for GenAI threats.
  • Preparedness and adoption of security measures vary significantly across industries, with finance showing higher security practices than education.

95% of cybersecurity experts express low confidence in GenAI security measures while red team data shows anyone can easily hack GenAI models, according to Lakera.

GenAI security measures

Attack methods specific to GenAI, or prompt attacks, are easily used by anyone to manipulate the applications, gain unauthorized access, steal confidential data and take unauthorized actions. Realizing this, only 5% of the 1,000 cybersecurity experts surveyed have confidence in the security measures protecting their GenAI applications, even though 90% actively use or explore GenAI.

“With just a few well-crafted words, even a novice can manipulate AI systems, leading to unintended actions and data breaches,” said David Haber, CEO at Lakera. “As businesses increasingly rely on GenAI to accelerate innovation and manage sensitive tasks, they unknowingly expose themselves to new vulnerabilities that traditional cybersecurity measures don’t address. The combination of high adoption and low preparedness may not be that surprising in an emerging area, but the stakes have never been higher. ”

With GenAI everyone is a potential hacker

Gandalf, an AI educational game created by Lakera, has attracted more than one million players including cybersecurity experts attempting to breach its defenses. Remarkably, 200,000 of these players have successfully completed seven levels of the game, demonstrating their ability to manipulate GenAI models into taking unintended actions.

This provides an incredibly valuable reference point for the magnitude of the problem. Entering commands using their native language and a bit of creativity allowed these players to trick Gandalf’s level seven in only 45 minutes on average. This stark example underscores a troubling truth: everyone is now a potential hacker and businesses require a new approach to security for GenAI.

“The race to adopt GenAI, fueled by C-suite demands, makes security preparedness more vital now than at any pivotal moment in technology’s evolution. GenAI is a once in a lifetime disruption,” said Joe Sullivan, ex-CSO of Cloudflare, Uber, and Meta (Facebook), and advisor to Lakera. “To harness its potential, though, businesses must consider its challenges and that, hands down, is the security risk. Being prepared and mitigating that risk is the #1 job at hand for those companies leading adoption.”

LLM reliability and accuracy is the number 1 barrier to adoption

35% of respondents are fearful of LLM reliability and accuracy, while 34% are concerned with data privacy and security. The lack of skilled personnel accounts for 28% of the concerns.

45% of respondents are exploring GenAI use cases; 42% are actively using and implementing GenAI. Only 9% of respondents reported having no current plans to adopt LLMs.

Only 22% of respondents have adopted ​​AI-specific threat modeling to prepare for GenAI specific threats.

The level of preparedness and adoption of security measures varies significantly across industries. For instance, the finance sector, which comprises 8% of respondents, shows a higher inclination towards stringent security practices, with 20% of organizations having dedicated AI security teams and 27% rating their preparedness at the highest levels (4 or 5 out of 5).

In contrast, the education sector, represented by 12% of respondents, has only 9% of organizations with dedicated AI security teams, and just 15% rate their preparedness at the highest levels. These contrasts underscore the varying levels of urgency and regulatory pressures faced by different industries.

Source: https://www.helpnetsecurity.com/2024/08/22/genai-security-measures