[Cyware] Google’s Zero-Day Hunters Test AI for Security Research

Summary: This article discusses Google’s Project Zero framework, which aims to improve AI-driven bug detection in the field of cybersecurity.

Threat Actor: None mentioned.

Victim: None mentioned.

Key Point:

  • Google’s Project Zero framework focuses on improving AI-driven bug detection in the cybersecurity domain.
  • The framework aims to enhance the capabilities of large language models to perform basic vulnerability research.

Artificial Intelligence & Machine Learning, Next-Generation Technologies & Secure Development, Security Operations

Project Zero Framework Aims to Boost AI Bug Detection Skills

Google's Zero-Day Hunters Test AI for Security Research
Google zero-day researchers say large language models could perform basic vulnerability research. (Image: Shutterstock)

Google’s team of zero-day hunters said artificial intelligence can improve automated threat identification and analysis, and detect vulnerabilities that current tools miss.


Researchers from the Project Zero team said Thursday that they are exploring how large language models can replicate the systematic methods of human security researchers, such as manual code audits and reverse engineering.

“We hope that in the future, this can close some of the blind spots of current automated vulnerability discovery approaches, and enable automated detection of ‘unfuzzable’ vulnerabilities,” the team said. Fuzzing uses random inputs to find vulnerabilities.
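
As a rough illustration of the technique, the minimal fuzzer below throws random bytes at a deliberately buggy parser until it triggers an unexpected failure. The parse_record target is hypothetical, written only for this sketch:

    import random

    def parse_record(data: bytes) -> None:
        # Hypothetical parser under test: rejects records that are too short.
        if len(data) < 4:
            raise ValueError("record too short")
        declared_length = data[0]
        payload = data[1:1 + declared_length]
        # Deliberate bug: trusts the declared length without bounds checking.
        assert len(payload) == declared_length, "truncated payload"

    def fuzz(iterations: int = 10_000) -> None:
        # Feed random byte strings to the parser; expected rejections are
        # ignored, anything else is reported as a potential bug.
        for i in range(iterations):
            data = bytes(random.randrange(256) for _ in range(random.randrange(32)))
            try:
                parse_record(data)
            except ValueError:
                pass  # expected rejection, not a bug
            except AssertionError as exc:
                print(f"iteration {i}: crash on {data!r}: {exc}")
                return

    if __name__ == "__main__":
        fuzz()

Vulnerabilities that random inputs cannot realistically reach are what the researchers mean by "unfuzzable."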

The Google researchers said they’ve been testing LLMs’ potential for vulnerability discovery, adding that with refined techniques, AI could potentially outperform traditional methods.

Meta released CyberSecEval 2 in April, a benchmark suite designed to test how capable LLMs are in finding and exploiting memory safety issues. Initial findings showed that LLMs struggled with these tasks.

Project Zero’s latest research shows that with better testing methodologies, LLMs could improve. A Google framework boosted scores on the CyberSecEval 2 benchmarks by up to 20 times, especially in the buffer overflow and advanced memory corruption tests.

To put these improvements into practice, Project Zero developed a framework called Naptime. Researchers say that Naptime allows an LLM to perform vulnerability research that closely mimics the iterative, hypothesis-driven approach of human security experts. A key element is that it equips the LLM with task-specific tools to enable automatic verification of output, which researchers call a “critical” feature.
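
The article does not include Naptime’s implementation, but the architecture it describes, an LLM that iteratively drives task-specific tools and observes their output, can be sketched roughly as below. Everything here is an assumption for illustration: query_llm stands in for a real model API, and run_script is one example of a task-specific tool:

    import subprocess
    from typing import Callable

    def run_script(source: str) -> str:
        # Task-specific tool: execute a candidate proof-of-concept script and
        # capture its output so the model can observe real program behavior.
        proc = subprocess.run(["python3", "-c", source],
                              capture_output=True, text=True, timeout=10)
        return proc.stdout + proc.stderr

    def research_loop(task: str, query_llm: Callable[[list], dict],
                      max_steps: int = 20):
        # Iterative, hypothesis-driven loop: the model proposes an action,
        # a tool executes it, and the observation feeds the next step.
        history = [{"role": "user", "content": task}]
        for _ in range(max_steps):
            step = query_llm(history)      # stand-in for a real model call
            if "answer" in step:
                return step["answer"]      # to be verified before acceptance
            observation = run_script(step["script"])
            history.append({"role": "tool", "content": observation})
        return None

The design point is that the model’s claims can be checked mechanically by the tools rather than taken on trust.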

“This project has been called Naptime because of the potential for allowing us to take regular naps while it helps us out with our jobs,” the researchers said. “Please don’t tell our manager.”

For tasks where an expert human would rely on multiple iterative steps of reasoning, hypothesis formation and validation, the researchers said, AI models must be allowed similar flexibility. They suggested several principles:

  • Allow LLMs to think through problems in detail, which improves accuracy.
  • Permit models to interact with the program environment so they can refine their analyses, akin to how human researchers work.
  • Provide LLMs with tools such as debuggers and scripting environments to help them perform the tasks that human experts do.
  • Structure tasks for automatic, clear verification to ensure reliable results.
  • Give models a sampling strategy, in which multiple, independent attempts at a problem improve exploration and discovery (sketched below).
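
The sampling strategy from the last point can be sketched as a thin wrapper around a loop like the one above; attempt and verify are hypothetical callables representing one research run and an automatic checker:

    def solve_with_sampling(task, attempt, verify, n_samples=16):
        # Run several independent attempts and keep the first candidate that
        # passes automatic verification; independent samples explore different
        # hypotheses, and verification filters out the incorrect ones.
        for seed in range(n_samples):
            candidate = attempt(task, seed)
            if candidate is not None and verify(task, candidate):
                return candidate
        return None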

Project Zero tested Naptime with CyberSecEval 2 benchmarks. In the buffer overflow and advanced memory corruption categories, Naptime significantly outperformed baseline models. In the advanced memory corruption tests, Naptime helped reproduce crashes and improve precision in finding vulnerabilities.
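
Crash reproduction is a natural fit for the automatic verification the researchers emphasize: a proposed input either crashes the target or it does not. A minimal check, assuming a POSIX target binary, might look like:

    import subprocess

    def reproduces_crash(binary: str, candidate: bytes) -> bool:
        # Run the target on a model-proposed input; on POSIX, a negative
        # return code means the process was killed by a signal, such as
        # SIGSEGV from memory corruption.
        proc = subprocess.run([binary], input=candidate,
                              capture_output=True, timeout=10)
        return proc.returncode < 0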

Despite demonstrating improved security performance with LLMs, Project Zero said there is more work to be done before these tools can be widely used in daily security research.

“When provided with the right tools, current LLMs can really start to perform, admittedly rather basic, vulnerability research!” the researchers said.

Source: https://www.bankinfosecurity.com/googles-zero-day-hunters-test-ai-for-security-research-a-25592

