[Cyware] Despite Bans, AI Code Tools Widespread in Organizations

Summary: A Checkmarx report reveals that while 99% of organizations are using AI code-generating tools, only 15% explicitly prohibit their use, highlighting a significant gap in governance and security strategies. The report emphasizes the concerns of security professionals regarding the risks associated with generative AI, including AI hallucinations and the lack of secure coding practices.


Key Points:

  • 15% of organizations prohibit AI tools for code generation, yet 99% still use them.
  • Only 29% have established governance for generative AI usage.
  • 70% of security professionals lack a centralized strategy for generative AI.
  • 47% of respondents are interested in allowing AI to make unsupervised changes to code.
  • 80% are concerned about security threats from developers using AI.

Organizations are concerned about security threats stemming from developers using AI, according to a new Checkmarx report.

The cloud-native application security provider found that while 15% of organizations explicitly prohibit the use of AI tools for code generation, 99% say that AI code-generating tools are being used regardless.

Meanwhile, just 29% of organizations have established any form of governance for the use of generative AI.


These findings are part of the firm’s Seven Steps to Safely Use Generative AI in Application Security report, published on July 25, 2024.

The report included findings from 900 CISOs and application security professionals in companies in North America, Europe and Asia-Pacific with annual revenue of $750m or more.

CISOs Grapple with Generative AI Strategies

The report found that 70% of security professionals say there is no centralized strategy for generative AI, with purchasing decisions made on an ad-hoc basis by individual departments.

The company noted that CISOs are looking to establish the right kind of governance so that their application development teams can use AI coding tools safely.

According to Checkmarx, 47% of respondents indicated interest in allowing AI to make unsupervised changes to code.

However, generative AI is currently unable to follow secure coding practices or produce truly secure code, which is prompting some security teams to consider AI-driven security tools to manage the growing volume of AI-generated code from development teams.

Many respondents are worried about generative AI risks such as AI hallucinations, and 80% are concerned about security threats stemming from developers using AI.

“Enterprise CISOs are grappling with the need to understand and manage new risks around generative AI without stifling innovation and becoming roadblocks within their organizations,” said Sandeep Johri, CEO at Checkmarx. “GenAI can help time-pressured development teams scale to produce more code more quickly, but emerging problems such as AI hallucinations usher in a new era of risk that can be hard to quantify.”

Source: https://www.infosecurity-magazine.com/news/ai-code-tools-widespread-in