AI cybersecurity needs to be as multi-layered as the system it’s protecting – Help Net Security

Summary: Cybercriminals are exploiting large language models (LLMs) to execute sophisticated attacks, including jailbreaking and data poisoning, which pose significant risks to enterprises. Effective protection against these threats requires a comprehensive understanding of security vulnerabilities and the implementation of robust cybersecurity measures throughout the AI lifecycle.

Threat Actor: Cybercriminals | cybercriminals
Victim: Enterprises | enterprises

Key Point :

  • Cybercriminals can manipulate LLMs through jailbreaking and malicious prompts, leading to harmful outputs and data extraction.
  • Recent attacks have included large-scale fraud schemes that exploit vulnerabilities in AI systems, such as tax fraud and unemployment claim fraud.
  • Protective measures should encompass design, development, deployment, and operational phases to mitigate security risks effectively.
  • Regular security testing, proper data sanitization, and robust encryption are essential to safeguard AI systems from attacks.
  • Industry-wide collaboration and prioritization of AI security tools are critical for addressing emerging cybersecurity threats.

Cybercriminals are beginning to take advantage of the new malicious options that large language models (LLMs) offer them. LLMs make it possible to upload documents with hidden instructions that are executed by connected system components. This is a boon to cybercriminals and, thus, a substantive risk to the enterprises using them.

AI cybersecurity needs

LLMs can be tricked in many ways. Cybercriminals can input malicious prompts that trick the LLM into overriding its guardrails (i.e., generating harmful outputs), a process termed jailbreaking. They can also influence a model’s capabilities, poison the data, or instruct the LLM to execute malicious instructions on the attacker’s request. Malicious prompts can also lead to model and data extraction, and the model itself may contain functionality enabling backdoors. All these attacks put sensitive information at risk.

Attacks against AI systems occurring within the last two years have used some form of adversarial machine learning (ML). Examples of these attacks include full-scale tax fraud in China, where attackers fraudulently acquired $77 million by creating fake shell companies and sending invoices to victims the tax system recognized as clients, and unemployment claim fraud in California, in which attackers withdrew $3.4 million in falsified unemployment benefits by collecting real identities to create fake driver licenses, thus exploiting flaws in the system’s identity verification process.

Protecting against attacks such as these begins with understanding security vulnerabilities and the frequency, source, and extent of cyber harm they can produce. From there, cybersecurity solutions fall into four key categories: design, development, deployment, and operation.

Design

By altering the technical design and development of AI before its training and deployment, companies can reduce their security vulnerabilities before they begin. For example, even selecting the correct model architecture has considerable implications, with each AI model exhibiting particular affinities to mitigate specific types of prompt injection or jailbreaks. Identifying the correct AI model for a given use case is important to its success, and this is equally true regarding security.

Development

Developing an AI system with embedded cybersecurity begins with how training data is prepared and processed. Training data must be sanitized and a filter to limit ingested training data is essential. Input restoration jumbles an adversary’s ability to evaluate the input-output relationship of an AI model by adding an extra layer of randomness.

Companies should create constraints to reduce potential distortions of the learning model through Reject-On-Negative-Impact training. After that, regular security testing and vulnerability scanning of the AI model should be performed continuously.

During deployment, developers should validate modifications and potential tampering through cryptographic checks. Library loading abuse can be prevented through tight restrictions on the software’s ability to load unstructured code. Encryption of sensitive data is non-negotiable.

Deployment

Organizations should practice good security hygiene. Their AI lifecycle should be well documented, coupled with a comprehensive inventory of AI initiatives aligned with an organization’s AI risk governance. External stakeholder feedback must be collected and integrated into system design. Staff training, red teaming, ongoing research of the AI threat landscape, and strong supply chain security must be common practices.

Operation

Above all, AI cybersecurity requires a combination of tools and methods. It’s an ongoing process throughout operation and maintenance. This can include limiting the total number of queries a user can perform.

Model obfuscation effectively alters model properties to deviate from the typical operation an extraction cyberattack would anticipate. Content safety systems can sanitize input and output from an LLM, and adversarial input detection can screen query traffic before it’s sent to the model for inference.

Preventing the cybersecurity threats facing new tech from manifesting will not be simple or easy. It’s a process that requires multiple tools and methods used in tandem. These AI security tools and strategies do exist and are becoming increasingly mature each day—a critical component that is missing is an industry-wide push in making their use a priority.

Source: https://www.helpnetsecurity.com/2024/09/09/ai-cybersecurity-needs