ChatGPT and Cybersecurity: Exploring the Dual-Edged Sword of AI Innovation

The rapid evolution of AI, particularly tools like ChatGPT, has revolutionized industries — but it has also opened Pandora’s box of cybersecurity risks. From malicious chatbots to regulatory rollbacks, the intersection of AI and cybersecurity demands urgent attention. Here’s a breakdown of the latest threats and how to mitigate them.

1. The Rise of Malicious AI Chatbots

Cybercriminals are weaponizing AI with tools like GhostGPT, a ChatGPT knockoff sold on Telegram that simplifies malware creation, phishing campaigns, and code generation for low-skilled attackers. Similarly, WormGPT (a predecessor) highlights a trend: AI democratizing cybercrime.

  • Key Takeaway: Tools like GhostGPT enable even novice hackers to craft convincing phishing emails or malicious code in minutes.

2. Shadow AI: The Silent Corporate Threat

Employees are increasingly using unauthorized AI tools (“shadow AI”) outside corporate oversight. A staggering 74% of ChatGPT usage occurs via personal accounts, exposing sensitive data (e.g., healthcare records, financial info) to third-party platforms.

  • Key Takeaway: Unregulated AI use risks data leaks, compliance violations, and breaches.

3. LLMs in the Cyber Attack Lifecycle

Large Language Models (LLMs) are reshaping cyberattacks by automating stages of the Cyber Kill Chain:

  • Reconnaissance: Scraping public data for vulnerabilities.
  • Weaponization: Generating malware code or phishing scripts.
  • Command & Control (C2): Managing payloads via AI-driven infrastructure.
  • Example: Attackers use jailbroken LLMs (like modified ChatGPT) to bypass ethical safeguards and create malicious tools.

4. Regulatory Rollbacks: A Security Vacuum

The recent reversal of Biden’s 2023 AI executive order — which mandated safety standards for LLM developers — has left critical infrastructure vulnerable. Private companies like OpenAI now face fewer accountability measures, potentially weakening defenses against AI-driven cyberattacks.

  • Key Takeaway: Policy shifts could delay protections against AI-powered threats like bio-weapons or infrastructure attacks.

5. ChatGPT’s Technical Vulnerabilities

Researchers uncovered flaws in ChatGPT’s API that enable DDoS attacks and prompt injection:

  • A single HTTP request can trigger thousands of requests to a target site, overwhelming it.
  • Poor URL deduplication and lax request limits expose websites to abuse.

6. Supply Chain Attacks: Compromised Browser Extensions

A December 2024 phishing campaign targeted Chrome extension developers, injecting malware into legitimate tools like Cyberhaven’s extension. Attackers harvested:

  • API keys, session cookies, and authentication tokens.
  • Data from ChatGPT, Facebook for Business, and other platforms.
  • Key Takeaway: Even trusted extensions can become attack vectors.

Mitigation Strategies: Staying Ahead of AI Threats

  1. Adopt AI Governance: Enforce strict policies for corporate AI use, including approved tools and data encryption.
  2. Educate Employees: Train teams on risks of shadow AI and data leakage.
  3. Leverage AI Defensively: Use AI-powered threat detection to counter malicious LLM activity.
  4. Conduct Risk Assessments: Tools like the free GenAI Risk Assessment (mentioned in Source 6) identify vulnerabilities in browsing, SaaS, and identity security.
  5. Patch & Monitor: Regularly update software and audit browser extensions.

Conclusion: Balancing Innovation and Caution

AI’s potential is undeniable, but its misuse poses existential risks. By prioritizing security frameworks, advocating for robust regulations, and fostering a culture of awareness, organizations can harness AI’s power without falling victim to its darker applications.

Sources: Compiled from cybersecurity reports and threat analyses (January 2025).

>> https://www.hendryadrian.com/new-ghostgpt-ai-chatbot-facilitates-malware-creation-and-phishing/
>> https://www.hendryadrian.com/the-security-risk-of-rampant-shadow-ai/
>> https://www.hendryadrian.com/beyond-flesh-and-code-building-an-llm-based-attack-lifecycle-with-a-self-guided-malware-agent/
>> https://www.hendryadrian.com/trump-overturns-biden-rules-on-ai-development-security/
>> https://www.hendryadrian.com/chatgpt-crawler-vulnerability-can-enable-ddos-attacks-via-http-requests/
>> https://www.hendryadrian.com/discover-hidden-browsing-threats-free-risk-assessment-for-genai-identity-web-and-saas-risks/
>> https://www.hendryadrian.com/?p=43002
>> https://www.hendryadrian.com/?p=42730
>> https://www.hendryadrian.com/employees-enter-sensitive-data-into-genai-prompts-far-too-often/