API security firm Salt Security has conducted an analysis of ChatGPT plugins and found several types of vulnerabilities that could have been exploited to obtain potentially sensitive data and take over accounts on third-party websites.
ChatGPT plugins enable users to access up-to-date information (rather than the relatively old data the chatbot was trained on), as well as to integrate ChatGPT with third-party services. For instance, plugins can allow users to interact with their GitHub and Google Drive accounts.
However, when a plugin is used, ChatGPT needs permission to send the user’s data to a website associated with the plugin, and the plugin may need access to the user’s account on the service it’s interacting with.
The first vulnerability identified by Salt Security impacted ChatGPT itself and was related to OAuth authentication. An attacker who tricked a victim into clicking on a specially crafted link could install a malicious plugin, with the attacker's own credentials, on the victim's account, and the victim would not need to confirm the installation.
As a result, any message typed by the victim, including messages containing credentials and other sensitive data, would be sent to the plugin, and implicitly to the attacker.
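Salt has not published OpenAI's server-side code, but the flaw belongs to a well-known OAuth pattern: a callback endpoint that accepts whatever authorization code arrives in the link instead of binding the flow to the user's own browser session. The Flask sketch below is a hypothetical illustration of that pattern and of the standard `state`-parameter defense; the route names, URLs and parameters are illustrative, not OpenAI's.

```python
import secrets
from urllib.parse import urlencode

from flask import Flask, abort, redirect, request, session

app = Flask(__name__)
app.secret_key = secrets.token_hex(32)


@app.route("/plugin/oauth/start")
def oauth_start():
    # Bind the OAuth flow to this browser session with an unguessable value.
    state = secrets.token_urlsafe(32)
    session["oauth_state"] = state
    # Hypothetical provider URL; real plugins register their own endpoints.
    return redirect(
        "https://auth.example-provider.test/authorize?"
        + urlencode({"client_id": "demo-plugin", "state": state})
    )


@app.route("/plugin/oauth/callback")
def oauth_callback():
    # VULNERABLE pattern: trusting request.args["code"] as-is lets an attacker
    # send a victim a link carrying the ATTACKER's authorization code, so the
    # plugin is silently installed with the attacker's credentials.
    #
    # Mitigation: reject the callback unless the state value round-trips.
    if request.args.get("state") != session.pop("oauth_state", None):
        abort(403)  # this code was not requested by this user's session
    code = request.args.get("code")
    # ... exchange `code` for tokens and complete the plugin installation ...
    return "plugin installed"
```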
The second vulnerability was found in AskTheCode, a plugin built with PluginLab.AI that enables users to interact with their GitHub repositories. The security flaw could have allowed an attacker to take over the victim's account on the plugin and gain access to their GitHub code repositories through a zero-click exploit.
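A zero-click takeover of this kind typically stems from a broken-authorization bug: a token endpoint that honors a client-supplied member identifier rather than the identity proven by the caller's session. The sketch below illustrates that generic pattern under that assumption; the endpoint, store and identifiers are hypothetical, not PluginLab.AI's actual code.

```python
import secrets

from flask import Flask, abort, jsonify, session

app = Flask(__name__)
app.secret_key = secrets.token_hex(32)

# Illustrative in-memory store mapping member IDs to third-party tokens.
TOKENS = {"member-123": "github-token-for-member-123"}


@app.route("/oauth/authorized")
def issue_token():
    # VULNERABLE pattern: reading the member ID from the request, e.g.
    #   member_id = request.args.get("member_id")   # attacker-controlled
    # lets anyone who can guess or derive a victim's identifier fetch the
    # victim's token with no action from the victim (a "zero-click" takeover).
    #
    # Mitigation: derive the identity from the authenticated session only.
    member_id = session.get("member_id")
    if member_id is None:
        abort(401)
    token = TOKENS.get(member_id)
    if token is None:
        abort(404)
    return jsonify({"access_token": token})
```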
The third vulnerability, also related to OAuth, was found to impact several plugins, but Salt demonstrated its findings on a plugin called Charts by Kesem AI. Here too, an attacker who tricked a user into clicking on a specially crafted link could have taken over the victim's account associated with the plugin.
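Account takeover through a crafted OAuth link usually means the authorization endpoint forwards the victim's code to whatever redirect_uri the link specifies. The sketch below shows that generic unvalidated-redirect pattern and its standard exact-match allow-list fix; the endpoint and URIs are hypothetical, not taken from Charts or Kesem AI.

```python
from urllib.parse import urlencode

from flask import Flask, abort, redirect, request

app = Flask(__name__)

# Mitigation: an exact-match allow-list of redirect URIs registered by the client.
REGISTERED_REDIRECT_URIS = {"https://example-plugin.test/oauth/callback"}


@app.route("/oauth/authorize")
def authorize():
    redirect_uri = request.args.get("redirect_uri", "")
    # VULNERABLE pattern: skipping this check and redirecting wherever the
    # incoming link says sends the victim's authorization code, and with it
    # their plugin account, to a site the attacker controls.
    if redirect_uri not in REGISTERED_REDIRECT_URIS:
        abort(400)
    code = "illustrative-authorization-code"  # issued after user consent
    return redirect(f"{redirect_uri}?{urlencode({'code': code})}")
```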
The vulnerabilities were reported to OpenAI, PluginLab.AI and Kesem AI shortly after their discovery in the summer of 2023, and the vendors rolled out patches in the months that followed.
When Salt Security conducted its research, plugins were the primary means of adding functionality and features to ChatGPT. In November 2023, OpenAI announced that paying customers would be able to create their own GPTs, versions of the chatbot customized for specific topics or tasks. These GPTs are expected to replace plugins.
Salt Security said it has also found vulnerabilities in GPTs, and it plans to detail them in an upcoming blog post. Others have also found ways to abuse GPTs to obtain potentially valuable data.
Related: Simple Attack Allowed Extraction of ChatGPT Training Data
Related: Cyber Insights 2024: Artificial Intelligence
Related: Microsoft Catches APTs Using ChatGPT for Vuln Research, Malware Scripting