[Cyware] Vanna AI prompt injection vulnerability enables RCE

Summary: The Vanna AI library is vulnerable to remote code execution (RCE) due to a prompt injection vulnerability.

Threat Actor: Unknown | Vanna AI
Victim: Users of Vanna AI | Vanna AI

Key Points:

  • The Vanna AI library has a vulnerability in its chart generation process that allows for remote code execution (RCE) through a payload delivered via the user’s prompt.
  • The vulnerability was discovered by researchers at JFrog and independently reported by Tong Liu, a PhD student at the Chinese Academy of Sciences Institute of Information Engineering.
  • Vanna AI translates natural language prompts into SQL queries and uses the Plotly library to present the query results as visual charts.
  • The vulnerability is tracked as CVE-2024-5565 and enables attackers to execute arbitrary code on systems running Vanna AI.

The Vanna AI library could be exploited for remote code execution (RCE) due to a prompt injection vulnerability.

Researchers at JFrog discovered a vulnerability in the library’s chart generation process that allows for RCE of a payload delivered through the user’s prompt. The vulnerability was also independently discovered by Tong Liu, a PhD student at the Chinese Academy of Sciences Institute of Information Engineering, who reported it through the Huntr bug bounty platform.

Vanna AI enables natural language prompts to be translated into SQL queries and uses a Python-based graphical library called Plotly to present the query results to the user as a visual chart. Vanna AI implementations can be interfaced with databases through programs such as Jupyter Notebook and made available to end users through a Slackbot, Streamlit apps or other custom web apps.
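
For context, a bare-bones Vanna setup looks roughly like the sketch below. The mixin class names, configuration keys and connection call follow common Vanna examples and may differ between versions and deployments; this is illustrative, not a canonical configuration.

```python
# Rough sketch of a typical Vanna setup (illustrative; module paths, config
# keys and method signatures follow common Vanna examples and may vary by version).
from vanna.openai import OpenAI_Chat
from vanna.chromadb import ChromaDB_VectorStore

class MyVanna(ChromaDB_VectorStore, OpenAI_Chat):
    def __init__(self, config=None):
        ChromaDB_VectorStore.__init__(self, config=config)
        OpenAI_Chat.__init__(self, config=config)

vn = MyVanna(config={"api_key": "sk-...", "model": "gpt-4"})
vn.connect_to_sqlite("chinook.sqlite")

# ask() translates the question into SQL, runs it against the database and, by
# default, generates and executes Plotly code to chart the result -- the step
# at issue in CVE-2024-5565.
vn.ask("What are the top 10 customers by total sales?")
```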

In a blog post published Thursday, JFrog researchers detailed how the vulnerability, tracked as CVE-2024-5565, enables RCE.

The user’s prompt is first sent to the LLM with pre-prompting instructions telling the LLM to generate a SQL query based on the prompt. The generated SQL query is then sent to the database, and both the original prompt and query results are then passed to Plotly for chart creation.

The Python Plotly code is dynamically generated by the LLM based on the prompt and then run through Python’s built-in exec function to generate a visualization of the queried data. Due to the dynamic nature of the Python code executed at the end of this chain, users can manipulate the process to execute their own malicious code by crafting their prompt in a way that ensures their payload ends up in the final Plotly script.
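
In simplified form, the vulnerable pattern looks like the sketch below; the function and variable names are placeholders rather than Vanna’s actual internals.

```python
# Simplified sketch of the pattern described above; names are placeholders,
# not Vanna's actual internal API.
def render_chart(llm, user_prompt: str, sql: str, query_results) -> None:
    # The LLM is asked to write Plotly code for the query results. Because the
    # user's original prompt is part of the context, an attacker can steer
    # what code the LLM writes.
    plotly_code = llm.generate(
        "Write Python Plotly code to chart these results.\n"
        f"Question: {user_prompt}\nSQL: {sql}\nResults: {query_results}"
    )
    # The generated code is executed as-is, so anything the attacker convinced
    # the LLM to include -- not just chart code -- runs with the app's privileges.
    exec(plotly_code)
```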

The catch is that the user prompt will only be accepted if it can be used to produce a valid SQL query. However, the JFrog researchers demonstrated a proof-of-concept prompt injection that includes a valid SQL query for printing a simple string, along with instructions for the LLM to prepend the payload code to the generated Plotly code.

Using their PoC exploit, the researchers showed how prompting Vanna AI’s text-to-SQL interface can cause the AI to output a list of files in the target machine’s current directory.

Vanna AI users urged to harden implementations against Plotly manipulation

JFrog reported the issue to Vanna AI, which subsequently published guidance for developers using the library to harden their implementations against potential RCE.  

“Running vn.generate_plotly_code can generate any arbitrary Python code which may be necessary for chart creation. If you expose this function to end users, you should use a sandboxed environment,” the guidance states.

Vanna AI also advises that the vn.generate_plotly_code function can be overridden and forced to return an empty string rather than arbitrary code. Users will then receive their query results in a default format rather than a dynamically generated visualization.
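
In practice that override can be as small as the sketch below; the mixin class names and the exact method signature follow common Vanna setups and may differ in a given installation.

```python
# Hedged sketch of the mitigation Vanna describes: override generate_plotly_code
# so that no dynamic chart code is ever produced or executed. Class names and
# the exact method signature may differ across Vanna versions.
from vanna.openai import OpenAI_Chat
from vanna.chromadb import ChromaDB_VectorStore

class HardenedVanna(ChromaDB_VectorStore, OpenAI_Chat):
    def __init__(self, config=None):
        ChromaDB_VectorStore.__init__(self, config=config)
        OpenAI_Chat.__init__(self, config=config)

    def generate_plotly_code(self, question=None, sql=None, df_metadata=None, **kwargs) -> str:
        # Returning an empty string leaves nothing for exec() to run; users get
        # query results in the default tabular format instead of a chart.
        return ""
```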

JFrog recommends that developers working with any LLM sandbox execution environments, use prompt injection tracing models, and check output integrity to prevent malicious RCE.
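
One way to implement the output-integrity part of that advice is to statically inspect the generated code before executing it. The allowlist check below is an illustrative assumption, not JFrog’s or Vanna’s code, and it complements rather than replaces sandboxing.

```python
# Illustrative output-integrity check (an example, not JFrog's or Vanna's code):
# parse the LLM-generated chart code and reject imports or calls outside a
# small allowlist before it is ever passed to exec().
import ast

ALLOWED_IMPORTS = {"plotly", "plotly.express", "plotly.graph_objects", "pandas"}
FORBIDDEN_CALLS = {"exec", "eval", "open", "__import__", "compile"}

def check_generated_code(code: str) -> None:
    tree = ast.parse(code)
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                if alias.name not in ALLOWED_IMPORTS:
                    raise ValueError(f"disallowed import: {alias.name}")
        elif isinstance(node, ast.ImportFrom):
            if node.module not in ALLOWED_IMPORTS:
                raise ValueError(f"disallowed import: {node.module}")
        elif isinstance(node, ast.Call):
            if isinstance(node.func, ast.Name) and node.func.id in FORBIDDEN_CALLS:
                raise ValueError(f"disallowed call: {node.func.id}")

# Usage: validate first, then still run the code in a sandboxed environment.
# check_generated_code(plotly_code)
# exec(plotly_code)
```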

“LLMs often struggle to distinguish user inputs from their predefined guidelines, making them vulnerable to manipulation, a.k.a Prompt injection attacks. Therefore – LLM implementers should not rely on pre-prompting as an infallible defense mechanism and should employ more robust mechanisms,” the researchers wrote.  

Source: https://www.scmagazine.com/news/vanna-ai-prompt-injection-vulnerability-enables-rce

