As AI continues to capture everyone’s attention, security for AI has become a popular topic in the marketplace of ideas. Security for AI is capturing the media cycle; AI security startups are coming out of stealth left and right; and incumbents are scrambling to release AI-relevant security features. It is clear security teams are concerned about AI.
But what does “AI security” mean, exactly?
Frankly, we don’t really know what security for AI means yet because we still don’t know what AI development means. “Security for X” typically arrives after X has matured — think cloud, network, Web apps — but AI remains a moving target.
Still, there are a few distinct problem categories emerging as a part of AI security. These line up with the concerns of different roles within an organization, so it is unclear whether they easily merge, though of course they do have some overlap.
These problems are:
-
Visibility
-
Data leak prevention
-
AI model control
-
Building secure AI applications
Let’s tackle them one at a time.
1. Visibility
Security always starts with visibility, and securing AI applications is no different. Chances are many teams in your organization are using and building AI applications right now. Some might have the knowledge, resources, and security savviness to do it right, but others probably don’t. Each team could be using a different technology to build their applications and applying different standards to ensure they work correctly. To standardize practices, some organizations create specialized teams to inventory and review all AI applications, While that is not an easy task in the enterprise, visibility is important enough to begin this process.
2. Data Leak Prevention
When ChatGPT was first launched, many enterprises went down the same route of trying desperately to block it. Every week had new headlines about companies losing their IP to AI because an employee copy-pasted highly confidential data to the chat so they could ask for a summary or a funny poem about it. This was really all anybody could talk about for a few weeks.
Since you cannot control ChatGPT itself, nor the other AIs that started appearing on the consumer market, this has become a sprawling challenge. Enterprises issue acceptable use policies with approved enterprise AI services, but those are not easy to enforce. This problem got so much attention that OpenAI, which caused the scare in the first place, changed its policies to allow users to opt out of being included in the training set and organizations to pay to opt out on behalf of all their users.
This issue — users pasting the wrong information into an app it does not belong to — seems similar to what data loss prevention (DLP) and cloud access security broker (CASB) solutions were created to solve. Whether enterprises can use these tools created for conventional data to protect data within AI remains to be discovered.
3. AI Model Control
Think about SQL injection, which boosted the application security testing industry. It arises when data is translated as instructions, resulting in allowing people who manipulate application data (i.e. users) to manipulate application instruction (i.e. its behavior). With years of severe issues wreaking havoc on Web applications, application development frameworks have risen to the challenge and now safely handle user input. If you’re using a modern framework and going through its paved road, SQL injection is for all practical purposes a solved problem.
One of the weird things about AI from an engineer’s perspective is that they mix instructions and data. You tell the AI what you want it to do with text, and then you let your users add some more text into essentially the same input. As you could expect, this results in users being able to change the instructions. Using clever prompts lets you do that even if the application builder really tried to prevent it, a problem we all know today as prompt injection.
For AI application developers, trying to control these uncontrollable models is a real challenge. This is a security concern, but it is also a predictability and usability concern.
4. Building Secure AI Applications
Once you allow AI to act on the user’s behalf and chain those actions one after the other you’ve reached uncharted territory. Can you really tell if the AI is doing things it should be doing to meet its goal? If you could think of and list everything the AI might need to do then you arguably wouldn’t need AI in the first place.
Importantly, this problem is about how AI interacts with the world, and so it is as much about the world as it is about the AI. Most Copilot apps are proud to inherit existing security controls by impersonating users, but are user security controls really all that strict? Can we really count on user-assigned and managed permissions to protect sensitive data from a curious AI?
A Finishing Thought
Trying to say anything about where AI or by extension AI security will end up is trying to predict the future. As the Danish proverb says, it’s difficult to make predictions, especially about the future. As AI development and usage continue to evolve, the security landscape is bound to evolve with them.
Source: Original Post
“An interesting youtube video that may be related to the article above”