Protect AI’s June 2024 Vulnerability Report

Summary: This report summarizes Protect AI’s proactive approach to identifying and addressing security risks in AI systems, focusing on vulnerabilities in the tools used to build machine learning models across the OSS AI/ML supply chain.

Threat Actor: N/A

Victim: N/A

Key Points:

  • Protect AI’s huntr is the world’s first AI/ML bug bounty program, where a community of 15,000+ members hunts for vulnerabilities in the OSS AI/ML supply chain.
  • The tools used in the supply chain to build machine learning models are vulnerable to unique security threats, potentially leading to complete system takeovers.
  • This report highlights 31 vulnerabilities, including critical ones found in the Triton Inference Server and the Intel Neural Compressor.

Executive Summary:

At Protect AI we are taking a proactive approach to identifying and addressing security risks in AI systems, to provide the world with critical intelligence on vulnerabilities and how to fix them. 

Protect AI’s huntr is the world’s first AI/ML bug bounty program. Our community of 15,000+ members hunts for impactful vulnerabilities across the entire OSS AI/ML supply chain. Through our own research and the huntr community, we’ve found that the tools used in the supply chain to build the machine learning models that power AI applications are vulnerable to unique security threats. These tools are open source and downloaded thousands of times a month to build enterprise AI systems. They can also come out of the box with vulnerabilities that lead directly to complete system takeovers, such as unauthenticated remote code execution or local file inclusion. This report contains 31 vulnerabilities, including critical vulnerabilities found in the Triton Inference Server and the Intel Neural Compressor. You can find the details of all of this month’s vulnerabilities in the table below, or head over to protectai.com/sightline to search the comprehensive database of huntr findings and download tools to detect, assess, and remediate them within your organization’s AI supply chain.

All vulnerabilities were reported to the maintainers a minimum of 45 days prior to publishing this report, and we continue to work with maintainers to ensure a timely fix prior to publication. The table also includes our recommendations for actions to take immediately if you have these projects in production. If you need help mitigating these vulnerabilities in the meantime, please reach out; we’re here to help: community@protectai.com.

Log Injection in Triton Inference Server

https://sightline.protectai.com/vulnerabilities/979e1aa6-534e-4478-9bc2-dee60a1971a8/assess

Impact: Allows attackers to inject arbitrary log entries, potentially hiding malicious activities or misleading investigations.

The Triton Inference Server is vulnerable to log injection due to insufficient sanitization of user input in log entries. Attackers can exploit this to forge logs, mislead investigations, or execute ANSI escape sequences that could harm the log viewer’s system.
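
A common mitigation for this class of issue is to neutralize newlines and terminal escape sequences in untrusted values before they reach the log sink. The following is a minimal Python sketch of that idea, not Triton’s actual code path; the `sanitize_for_log` function and the `untrusted_model_name` value are hypothetical.

```python
import re
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("inference")

# Matches ANSI escape sequences (e.g. color codes) that could alter how
# a log file renders in a terminal-based viewer.
ANSI_ESCAPE = re.compile(r"\x1b\[[0-9;]*[A-Za-z]")

def sanitize_for_log(value: str) -> str:
    """Escape newlines and strip ANSI escape sequences from untrusted input
    so an attacker cannot forge additional log lines or inject terminal
    control codes into the log viewer's session."""
    value = value.replace("\r", "\\r").replace("\n", "\\n")
    return ANSI_ESCAPE.sub("", value)

# Example: logging an attacker-controlled model name that tries to forge
# a second log entry and emit a color-changing escape sequence.
untrusted_model_name = "resnet50\n2024-06-01 00:00:00 INFO fake entry \x1b[31m"
logger.info("model requested: %s", sanitize_for_log(untrusted_model_name))
```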


SQL Injection in Neural Solution Server

https://sightline.protectai.com/vulnerabilities/1d6ccb37-f83d-4726-ac7d-06f922792879/assess

Impact: Enables attackers to manipulate database entries and download arbitrary files from the host system.

The Neural Solution Server’s task submission API is vulnerable to SQL injection, allowing attackers to alter database records and download files from the server without authorization. This compromises both the integrity of the database and the confidentiality of the server’s files.
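
The standard defense is to pass untrusted values as bound parameters rather than interpolating them into the SQL string. The sketch below is a generic Python illustration using sqlite3; the `task` table and the `submit_task_*` functions are hypothetical and do not reflect the Neural Solution Server’s actual schema or API.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE task (id TEXT PRIMARY KEY, script_url TEXT)")

def submit_task_unsafe(task_id: str, script_url: str) -> None:
    # VULNERABLE pattern: untrusted values are interpolated directly into
    # the SQL string, so a crafted task_id can change the statement itself.
    conn.execute(
        f"INSERT INTO task (id, script_url) VALUES ('{task_id}', '{script_url}')"
    )

def submit_task_safe(task_id: str, script_url: str) -> None:
    # Parameterized query: the driver treats the values purely as data,
    # so injected quotes or SQL fragments cannot alter the statement.
    conn.execute(
        "INSERT INTO task (id, script_url) VALUES (?, ?)",
        (task_id, script_url),
    )

submit_task_safe("task-123", "https://example.com/train.py")
```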


Unauthorized Memory Access in Triton Inference Server

https://sightline.protectai.com/vulnerabilities/70f44145-9c74-4ee8-9934-034616e8fbcd/assess

Impact: Permits unauthorized read and write operations on memory, potentially leading to a crash or arbitrary code execution.

The Triton Inference Server improperly validates parameters for shared memory operations, allowing attackers to specify illegal memory offsets. This can lead to unauthorized memory access, causing segmentation faults or enabling arbitrary code execution through crafted requests.
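
Issues of this kind generally come down to missing bounds checks: the offset and byte count supplied by the client must be validated against the size of the registered region before any read or write occurs. The sketch below is a hypothetical Python illustration of that check, not Triton’s shared-memory implementation.

```python
def read_shared_region(buffer: memoryview, offset: int, byte_size: int) -> bytes:
    """Return byte_size bytes starting at offset, rejecting requests that
    fall outside the registered region instead of trusting the caller."""
    region_size = len(buffer)
    if offset < 0 or byte_size < 0:
        raise ValueError("offset and byte_size must be non-negative")
    if offset + byte_size > region_size:
        raise ValueError(
            f"request [{offset}, {offset + byte_size}) exceeds region of "
            f"{region_size} bytes"
        )
    return bytes(buffer[offset:offset + byte_size])

# Example: a 1 KiB region; an out-of-range request is rejected up front
# rather than reading past the end of the buffer.
region = memoryview(bytearray(1024))
read_shared_region(region, offset=0, byte_size=128)       # OK
# read_shared_region(region, offset=1000, byte_size=512)  # raises ValueError
```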

Source: https://protectai.com/threat-research/june-vulnerability-report
