Summary: The integration of Generative AI (GenAI) into enterprise environments has surged, raising significant security concerns, particularly around the sharing of sensitive data. A recent report highlights the risks of “shadow AI,” in which employees use personal accounts to access GenAI applications, increasing the likelihood of data exfiltration. To counter these risks, organizations are adopting stricter policies and controls around AI usage and data security.
Affected: Enterprise organizations
Keypoints:
- A 30-fold increase in sensitive data shared with GenAI applications over the past year.
- 72% of enterprise users access GenAI apps using personal accounts, raising governance concerns.
- Organizations are taking a proactive “block first” approach to unapproved AI applications to mitigate risk (see the sketch after this list).
- 54% of organizations are shifting towards local AI infrastructure, introducing new security challenges.
- Security frameworks from OWASP, NIST, and MITRE can help organizations address AI-specific vulnerabilities.
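To illustrate the “block first” posture noted above, here is a minimal sketch of a default-deny allowlist check for outbound GenAI requests. The domain names, the `APPROVED_GENAI_DOMAINS` list, and the `is_request_allowed` helper are hypothetical placeholders rather than anything named in the report; in practice this logic would be enforced at a secure web gateway, proxy, or DLP layer rather than in application code.

```python
# Minimal "block first" sketch: deny GenAI traffic unless the destination
# is on a corporate-approved allowlist. All domains below are hypothetical.

from urllib.parse import urlparse

# Only sanctioned GenAI endpoints are permitted; everything else is
# blocked by default until explicitly reviewed and approved.
APPROVED_GENAI_DOMAINS = {
    "genai.example-corp.internal",   # hypothetical managed internal deployment
    "api.approved-vendor.example",   # hypothetical sanctioned vendor API
}


def is_request_allowed(url: str) -> bool:
    """Return True only if the request targets an approved GenAI domain."""
    host = urlparse(url).hostname or ""
    return host in APPROVED_GENAI_DOMAINS


if __name__ == "__main__":
    for url in (
        "https://genai.example-corp.internal/v1/chat",
        "https://free-genai-tool.example/upload",  # shadow AI via personal account
    ):
        verdict = "ALLOW" if is_request_allowed(url) else "BLOCK"
        print(f"{verdict}: {url}")
```

The default-deny design mirrors the “block first” approach described in the report: unapproved destinations are rejected until they pass a governance review, rather than being allowed until a problem is detected.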
Source: https://thecyberexpress.com/ai-powered-productivity-or-security-nightmare/