The AI balancing act: Unlocking potential, dealing with security issues, complexity – Help Net Security

Summary: The integration of AI and GenAI technologies presents both challenges and opportunities for organizations, particularly in terms of security risks and AI literacy. Many companies face disruptions due to insufficient AI maturity, leading to increased data threats and cautious adoption of these technologies.

Threat Actor: Various cyber threat actors
Victim: Organizations

Key Points:

  • 16% of organizations experience disruptions due to insufficient AI maturity.
  • 46% of documented data policy violations involve proprietary source code sharing within GenAI apps.
  • 41% of cyber teams hide security incidents due to job security concerns.
  • 62% of CISOs lack confidence in their workforce’s ability to identify GenAI-related cyberattacks.
  • 22% of employees admit to breaching company rules regarding GenAI usage.
  • 87% of respondents are concerned about data exfiltration risks associated with GenAI tools.
  • 63% of organizations have established limitations on data entry into GenAI applications.

The rapid integration of AI and GenAI technologies creates a complex mix of challenges and opportunities for organizations. While the potential benefits are clear, many companies struggle with AI literacy, cautious adoption, and the risks of immature implementation. This has led to disruptions, particularly in security, where data threats, deepfakes, and AI misuse are on the rise.

GenAI security risks

16% of organizations experience disruptions due to insufficient AI maturity

Action1 | 2024 AI Impact on Sysadmins: Survey Report | July 2024

  • While sysadmins recognize AI’s potential, significant gaps in education, cautious organizational adoption, and insufficient AI maturity hinder widespread implementation, leading to mixed results and disruptions in 16% of organizations.
  • 60% of sysadmins (down from 73% last year) acknowledge a lack of understanding of how to leverage AI practically, indicating a persistent gap in AI literacy.

The most urgent security risks for GenAI users are all data-related

Netskope | Cloud and Threat Report: AI Apps in the Enterprise | July 2024

  • With increased use, enterprises have seen a surge in proprietary source code being shared within GenAI apps, accounting for 46% of all documented data policy violations.

Worried about job security, cyber teams hide security incidents

VikingCloud | 2024 Cyber Threat Landscape Report: Cyber Risks, Opportunities, & Resilience | May 2024

  • The most worrying AI threats include GenAI model prompt hacking (46%), Large Language Model (LLM) data poisoning (38%), Ransomware as a Service (37%), GenAI processing chip attacks (26%), Application Programming Interface (API) breaches (24%), and GenAI phishing (23%).
  • 41% say GenAI has the most potential to address cyber alert fatigue.

AI’s rapid growth puts pressure on CISOs to adapt to new security risks

Trellix | Mind of the CISO: Decoding the GenAI Impact | May 2024

  • 62% of respondents agree they don’t have full confidence in their organization’s workforce’s ability to identify cyberattacks incorporating GenAI.
  • 92% of CISOs said AI and GenAI have made them contemplate their future in the role, raising serious questions about how policy and regulation need to adapt to bolster the role of the CISO and enable organizations to secure their systems effectively.

22% of employees admit to breaching company rules with GenAI

1Password | Balancing act: Security and productivity in the age of AI | April 2024

  • 92% of security pros have security concerns around generative AI, with specific apprehensions including employees entering sensitive company data into an AI tool (48%), using AI systems trained with incorrect or malicious data (44%), and falling for AI-enhanced phishing attempts (42%).
  • And a relatively small, but significant, group of employees (22%) admit to knowingly violating company rules on the use of generative AI.

Security pros are cautiously optimistic about AI

Cloud Security Alliance and Google Cloud | The State of AI and Security Survey Report | April 2024

  • 25% of respondents expressed concerns that AI could be more advantageous to malicious parties.

AI tools put companies at risk of data exfiltration

Code42 | Data Exposure Report 2024 | March 2024

  • As today’s risks are increasingly driven by AI and GenAI, the way employees work, and the proliferation of cloud applications, respondents state they need more visibility into source code sent to repositories (88%), files sent to personal cloud accounts (87%), and customer relationship management (CRM) system data downloads (90%).
  • 87% are concerned their employees may inadvertently expose sensitive data to competitors by inputting it into GenAI.
  • 87% are concerned their employees are not following their GenAI policy.

Businesses banning or limiting use of GenAI over privacy risks

Cisco | 2024 Data Privacy Benchmark Study | February 2024

  • More than 90% of respondents believe AI requires new techniques to manage data and risk.
  • Among the top concerns, businesses cited the threats to an organization’s legal and Intellectual Property rights (69%) and the risk of disclosure of information to the public or competitors (68%).
  • Most organizations are aware of these risks and are putting in place controls to limit exposure: 63% have established limitations on what data can be entered, 61% have limits on which employees can use GenAI tools, and 27% said their organization had banned GenAI applications altogether for the time being.
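In practice, the data-entry limitations Cisco's respondents describe are often enforced by a pre-submission policy filter that screens prompts before they reach a GenAI app. The following is a minimal illustrative sketch, not any vendor's actual implementation; the pattern names and rules are assumptions chosen for the example.

```python
import re

# Hypothetical policy categories an organization might block from GenAI prompts.
# The patterns below are deliberately simple illustrations, not production rules.
BLOCKED_PATTERNS = {
    # Secret-like tokens, e.g. "sk-..." style API keys
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    # Crude signal that proprietary source code is being pasted in
    "source_code": re.compile(r"\b(?:def |class |#include |import )\w+"),
    # Email addresses as a stand-in for personal/customer data
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def check_prompt(text: str) -> list[str]:
    """Return the policy categories the prompt violates (empty list = allowed)."""
    return [name for name, pat in BLOCKED_PATTERNS.items() if pat.search(text)]

def is_allowed(text: str) -> bool:
    """True if the prompt may be forwarded to the GenAI app."""
    return not check_prompt(text)
```

A real deployment would pair this kind of gate with logging and user feedback, but even a sketch like this shows why 63% of organizations can limit data entry without banning GenAI outright.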

Source: https://www.helpnetsecurity.com/2024/08/15/ai-genai-security-risks