Finding a Use for GenAI in AppSec – Keith Hoodlet – ASW #323

Summary: The video discusses the integration and application of large language models (LLMs) in application security (AppSec) and their effectiveness compared to traditional security tools such as fuzzers. With a focus on real-world examples and case studies, the hosts highlight the capabilities and limitations of LLMs, particularly in identifying security vulnerabilities and providing design recommendations. The conversation also touches on the importance of human oversight, security practices, and the challenges presented by evolving technologies and frameworks.

Keypoints:

  • Introduction of LLMs in AppSec alongside traditional methods like fuzzing.
  • Discussion of Keith Hoodlet’s work on AI bias bounty hunting and LLM capabilities in security.
  • The historical context of interactive fiction and its influence on coding and security practices.
  • LLMs can assist in identifying security flaws but may struggle with novel problems not present in their training data.
  • Current limitations of LLMs include difficulty understanding context and logic in complex codebases.
  • The potential for LLMs to generate erroneous code or introduce new bugs while attempting to fix existing ones.
  • An overview of the Ruby XML library vulnerability that enabled an authentication bypass through an improperly implemented parser.
  • Critical examination of the “build vs. buy” philosophy in open-source software development and the consequences of maintaining custom libraries.
  • Discussion of memory safety in programming languages and the implications of using high-level languages like Ruby for architectural decisions.
  • Exploration of the usefulness of detailed security write-ups in understanding vulnerabilities and their context.
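The parser discussion above boils down to a general hazard: hand-rolled XML handling can disagree with a standards-compliant parser about what a document "says," and an attacker can exploit that gap. The sketch below is a hypothetical illustration (not the actual Ruby library code from the episode): a naive regex-based extractor is fooled by a value hidden in an XML comment, while a real parser ignores the comment.

```python
import re
import xml.etree.ElementTree as ET

# Hypothetical assertion document; the attacker hides a decoy
# <user> element inside an XML comment.
payload = "<assertion><!-- <user>admin</user> --><user>guest</user></assertion>"

# Naive hand-rolled "parser": a regex that grabs the first
# <user>...</user> it sees -- including the one in the comment.
naive = re.search(r"<user>(.*?)</user>", payload).group(1)

# A conforming XML parser skips comments and returns the real element.
proper = ET.fromstring(payload).findtext("user")

print(naive)   # "admin"  -- attacker-controlled decoy
print(proper)  # "guest"  -- the genuine value
```

Two components that disagree like this (one validating a signature, the other extracting the identity) are exactly the kind of mismatch that turns into an authentication bypass, which is why "buy" (a maintained, well-tested parser) usually beats "build" for XML.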
  • YouTube Video: https://www.youtube.com/watch?v=zn3LT4BqOJo
    YouTube Channel: Security Weekly – A CRA Resource
    Video Published: Tue, 25 Mar 2025 09:01:14 +0000