Google Uses AI To Detect 20-Year-Old Software Bug In OpenSSL.

Google recently announced that it used AI to uncover a 20-year-old software bug in OpenSSL, a widely used library for encryption and server authentication. This discovery is part of an effort that identified 26 vulnerabilities across various open-source projects.

The process relied on “fuzz testing,” a technique that feeds malformed or random data into a program to flush out crashes and memory errors. Traditionally, the fuzz targets — the small harness programs that route fuzzer-generated input into a library’s API — had to be written by hand, but Google leveraged large language models (LLMs) to generate them automatically. The AI system mimics a developer’s workflow: writing fuzz target code, testing and iterating on it, and analyzing any crashes it produces.
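To make the idea concrete, here is a minimal sketch of what such a fuzz target looks like in the LibFuzzer style used by Google's OSS-Fuzz infrastructure. The `parse_record` function is a hypothetical stand-in for whatever library API is under test, not anything from OpenSSL; in Google's pipeline, the LLM's job is to write harnesses like this against real project APIs that existing fuzzers do not yet cover.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for the library API under test; in Google's
 * pipeline the LLM writes harnesses like this against real APIs. */
static int parse_record(const uint8_t *buf, size_t len) {
  /* Accept records of the form: [type byte][payload...] */
  if (len < 1)
    return -1;
  return buf[0];
}

/* LibFuzzer entry point: the fuzzing engine (built with e.g.
 * `clang -fsanitize=fuzzer,address harness.c`) calls this function
 * repeatedly with mutated inputs, watching for crashes and
 * sanitizer-detected memory errors. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
  parse_record(data, size);
  return 0; /* non-zero return values are reserved by LibFuzzer */
}
```

The harness itself is trivial; the value of automating it is coverage. Every new target like this exposes another slice of a library's API surface to the fuzzer's mutation engine.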

The OpenSSL vulnerability, tracked as CVE-2024-9143, was reported in September and fixed in October. The flaw is an out-of-bounds memory access: the code reads or writes past the end of a buffer, which typically crashes the program and can, in rare configurations, open the door to arbitrary code execution. While the bug is considered low risk, the fact that it survived for two decades underscores a limitation of traditional testing: certain code paths and configurations simply never get exercised.
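For readers unfamiliar with the bug class, the following schematic example shows how an out-of-bounds access typically arises. This is an illustration of the pattern, not OpenSSL's actual code: a length field taken from untrusted input is trusted when copying into a fixed-size buffer.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Schematic illustration of the out-of-bounds bug class (not the
 * actual CVE-2024-9143 code): an attacker-controlled length byte
 * drives a copy into a fixed-size stack buffer. */
int read_field(const uint8_t *msg, size_t msg_len) {
  uint8_t field[16];

  if (msg_len < 1)
    return -1;

  size_t claimed = msg[0]; /* attacker-controlled length byte */

  /* BUG: `claimed` is never checked against sizeof(field) or against
   * msg_len - 1, so memcpy can read and write past valid bounds.
   * Under fuzzing with AddressSanitizer, a large length byte triggers
   * an immediate stack-buffer-overflow report. */
  memcpy(field, msg + 1, claimed);

  return field[0];
}
```

Bugs like this stay latent as long as no test ever supplies an input with an oversized length field, which is exactly the kind of input a fuzzer generates by the thousands.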

Google applied its AI-driven fuzz testing across 272 software projects, surfacing vulnerabilities that would likely have escaped conventional methods. The company’s researchers noted that long-standing bugs often persist because the code is assumed to be well tested and vetted, and they highlighted the importance of generating new fuzz targets to probe seemingly robust code for hidden flaws.

Looking ahead, Google’s Open Source Security Team aims to refine the AI tools to suggest fixes for vulnerabilities automatically and eventually reduce the need for human review. This would allow new vulnerabilities to be reported directly to project maintainers with minimal delay.

This effort is part of a broader initiative by Google to improve software security using AI. Another project, “Big Sleep,” uses LLMs to simulate the workflow of security researchers. Earlier this month, Big Sleep successfully identified a previously unknown bug in SQLite, a popular open-source database engine.

By integrating AI into software security processes, Google hopes to enhance the speed and efficiency of identifying and resolving vulnerabilities, contributing to the ongoing effort to secure open-source projects.