Summary: The content discusses the use of unsanctioned apps, including AI, by cybersecurity professionals and the risks associated with it.
Threat Actor: N/A
Victim: N/A
Key Points:
- 73% of cybersecurity professionals have used unsanctioned apps, including AI, in the past year.
- Most professionals acknowledged data loss, lack of visibility and control, and data breaches as the top risks of using unauthorized tools.
- 10% admitted that the use of shadow SaaS and AI tools led to a data breach.
- AI use has been restricted to specific roles or completely banned by some organizations due to security concerns.
Almost three-quarters (73%) of cybersecurity professionals have used unsanctioned apps, including AI, in the past year, according to a new poll from Next DLP.
The security vendor interviewed 250 security pros at the recent Infosecurity Europe and RSA Conference industry events, in the UK and US respectively.
Its findings revealed that a majority of industry professionals don’t practice what they preach when it comes to shadow IT.
Most acknowledged data loss (65%), lack of visibility and control (62%) and data breaches (52%) as the top risks of using unauthorized tools. A further one in 10 admitted that use of shadow SaaS and AI tools led to a data breach, according to the study.
AI use has been singled out by many IT security teams as a potential security risk, with half of respondents saying it has been restricted to specific roles in their organization, while nearly a fifth (16%) said it has been banned completely. A further 46% said they’ve rolled out tools and policies to control employee use of generative AI.
Yet in general, Next DLP found that IT teams aren’t being proactive enough about managing employee use of potentially risky apps. Specifically:
- Only 37% of security professionals said they had developed policies for using these tools
- Just half had received guidance and updated policies on shadow SaaS and AI in the past six months
- A fifth claimed they’d never received policies/guidance on shadow SaaS and AI
- A fifth of respondents were unaware of corporate policies or training to mitigate shadow IT risk
Time for a Shadow IT Plan
“Clearly, there is a disparity between employee confidence in using these unauthorized tools and the organization’s ability to defend against the risks,” argued Next DLP CSO Chris Denbigh-White.
“Security teams should evaluate the extent of shadow SaaS and AI usage, identify frequently used tools, and provide approved alternatives. This will limit potential risks and ensure confidence is deserved, not misplaced.”
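The article does not describe how such an evaluation would be carried out. As a rough, hedged illustration of the first step Denbigh-White mentions, the sketch below tallies requests to SaaS domains that are not on a sanctioned allow-list, using a web proxy log as the data source. The log format, file name, and domain lists are hypothetical placeholders, not anything referenced in the poll or by Next DLP.

```python
# Illustrative sketch only (not from the article): one way a security team might
# estimate the extent of shadow SaaS usage, by counting proxy-log requests to
# SaaS domains that are not on the organization's sanctioned list.
# The CSV layout, file name, and domain sets below are hypothetical examples.
import csv
from collections import Counter

SANCTIONED = {"office365.com", "salesforce.com", "slack.com"}        # example allow-list
SAAS_WATCHLIST = {"openai.com", "notion.so", "dropbox.com"}          # example SaaS domains to watch

def shadow_saas_report(proxy_log_csv: str) -> Counter:
    """Count requests to watched SaaS domains that are not sanctioned."""
    hits = Counter()
    with open(proxy_log_csv, newline="") as f:
        for row in csv.DictReader(f):          # assumes the log has a 'domain' column
            domain = row.get("domain", "").lower()
            if domain in SAAS_WATCHLIST and domain not in SANCTIONED:
                hits[domain] += 1
    return hits

if __name__ == "__main__":
    for domain, count in shadow_saas_report("proxy_log.csv").most_common():
        print(f"{domain}: {count} requests to an unsanctioned SaaS service")
```

A report like this would only surface frequently used unsanctioned tools; deciding which to block and which to replace with approved alternatives remains a policy decision for the security team.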
The challenge of shadow IT has grown to the point where the UK’s National Cyber Security Centre (NCSC) released guidance in 2023 on how to manage it.
Some 11% of organizations that experienced cybersecurity incidents between 2021 and 2023 attributed them to the use of shadow IT, according to Kaspersky.
Source: https://www.infosecurity-magazine.com/news/most-security-pros-shadow-saas-ai