We are excited to announce our investment in Dreadnode, an AI security platform that enables cybersecurity teams to build and deploy offensive AI capabilities; attack models with the most advanced red team tooling; and practice adversarial ML techniques in a safe environment. Dreadnode has rapidly become the trusted partner to enterprises, governments, and frontier AI model providers, equipping practitioners with the tools required to explore what’s possible when AI is applied at scale to offensive security.
Dreadnode was founded by Will Pearce and Nick Landers, who previously built and augmented the AI red teams at Microsoft, NVIDIA, and numerous Fortune 500 companies. We spoke to them about their vision for building offensive AI solutions in our Q&A:
We both came from modest upbringings and had similar paths to discovering hacking and cybersecurity. I grew up in rural Kentucky, and Nick grew up in a small town in Utah. Both places instilled strong values that aligned our moral compasses from an early age. We are both great learners, but maybe not great students. If you ask security researchers how they got into the field, many have a story that starts with needing to get access to a Wi-Fi network. We both believe our professional success is rooted in our innate curiosity and our drive to pursue our work to the bitter end. We are grateful to have found a career that lets us be creative and curious as cybersecurity researchers.
We were both lucky to work together at one of the best offensive consulting organizations more than five years ago. We were known for custom tooling and for breaking into hardened environments that others could not. It was really fun, but also intense. We started seeing machine learning systems pop up in networks more often. Being a responsible red team, we launched some research projects to see what they might mean for our ops and tooling. What we found was both an attack surface and a new capability.
Adversarial machine learning had largely existed in academic settings, and we combined those early techniques with our operational expertise in the “Proof Pudding” research that was widely recognized in 2019. In summary, we used adversarial techniques to reverse the scoring model for Proofpoint’s email security filters so we could phish accounts protected by their solution. It lacked any of the normal academic rigor, but it worked. This was a big moment for the industry, as it was one of the first tangible examples of how an adversarial ML attack could subvert a widely used defensive product. Even today it is not clear that testing the robustness of ML classifiers in hostile settings is common practice.
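For readers unfamiliar with this class of attack, the core idea is surrogate modeling: query the target scorer, record its outputs, and train a local stand-in that can be probed freely offline. The sketch below is purely illustrative of that general idea, not the Proof Pudding code; the `black_box_score` stub and the toy corpus are hypothetical.

```python
# Illustrative surrogate-model sketch (hypothetical, not the Proof Pudding
# code): query a black-box scorer, fit a local copy, then probe it offline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def black_box_score(text: str) -> int:
    """Stand-in for the target's remote scoring model (hypothetical)."""
    return 1 if "offer" in text.lower() else 0

# 1. Collect (input, score) pairs by querying the target.
corpus = [
    "quarterly results attached",
    "free offer, click now",
    "lunch on friday?",
    "limited offer expires today",
]
labels = [black_box_score(t) for t in corpus]

# 2. Train a surrogate on the observed scores.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(corpus)
surrogate = LogisticRegression().fit(X, labels)

# 3. Probe the surrogate offline to estimate how the target would score
#    a new message, without sending further queries to the target.
candidate = "team offsite moved to noon"
print(surrogate.predict_proba(vectorizer.transform([candidate]))[0])
```

The value of the surrogate is that it can be queried without limit, so candidate inputs can be screened locally before any of them ever touch the real system.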
We also started doing some statistical analysis of our session logs to see how we could make tooling and operators more efficient. We did a lot of little things like next-command prediction, detecting sandboxes with process lists, and reinforcement learning to find paths through networks. It was all cool, but it lacked the “so what” moment. Finally, when GPT-1 dropped and GPT-2 followed shortly thereafter, the trajectory of what AI would be able to do for offensive security was immediately obvious. Our perspective was widened again when NVIDIA told us to go solve the hardest problem we could think of with seemingly unlimited compute: suddenly what had seemed impossible was possible. Advancing offensive security with AI has been our obsession ever since.
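As one small example of the session-log work described above, next-command prediction can start as simply as counting which commands tend to follow which. The bigram model below is a hypothetical sketch of that idea, not Dreadnode’s actual tooling, and the toy logs are invented.

```python
# Minimal bigram model for next-command prediction over operator session
# logs (a hypothetical sketch, not Dreadnode's tooling).
from collections import Counter, defaultdict

def train(sessions: list[list[str]]) -> dict[str, Counter]:
    """Count how often each command follows another across sessions."""
    follows: dict[str, Counter] = defaultdict(Counter)
    for session in sessions:
        for prev, nxt in zip(session, session[1:]):
            follows[prev][nxt] += 1
    return follows

def suggest(follows: dict[str, Counter], last_cmd: str, k: int = 3) -> list[str]:
    """Return the k commands most likely to follow last_cmd."""
    return [cmd for cmd, _ in follows[last_cmd].most_common(k)]

# Toy session logs (hypothetical).
logs = [
    ["whoami", "hostname", "ipconfig"],
    ["whoami", "net user", "net localgroup administrators"],
    ["whoami", "hostname", "net user"],
]
model = train(logs)
print(suggest(model, "whoami"))  # ['hostname', 'net user']
```

In practice you would want longer context and a real sequence model, but even frequency counts like these can surface the next logical enumeration step to an operator.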
We had seen firsthand the increasing demands on companies like Microsoft, NVIDIA, and Google to push the boundaries of AI in what was becoming a rapidly evolving arms race. Every exciting new breakthrough in AI created exponentially more demand to test and integrate each system’s real-world offensive capabilities. Technology is often dual use: the Win32 API can be used to write software that enables hospitals to be more efficient and see more patients, but it can also be used to write ransomware. AI will be the same.
After years of hacking, building red team tooling, and some early research, we already knew AI would impact meaningful swaths of offensive security work. The continuous improvement in both models and technology meant we could reliably start tackling the problem. We founded Dreadnode with the core belief that if we don’t push AI’s limits on how it can be used to attack our digital ecosystem, someone else will, and probably not with good intentions. That has been our mission and inspiration from day one.
The future of all offense and defense in our digital world will be autonomous, and we want to ensure that those who want to do good have these capabilities. It won’t be long before AI agents can spin up an end-to-end attack: creating a campaign, generating a phishing site, sending thousands of emails, compromising credentials, and escalating privileges, all at machine speed. Defenses will need similarly automated responses, and we’ll be right there helping them.
Ultimately, we want to provide teams with the full spectrum of products and services they need, from elite-level education to human-assisted tooling to fully autonomous operations. We aim to be the trusted partner that leads our industry through this transition.
We have both grown up in the cybersecurity industry, and our community has embraced the philosophy that “offense drives defense.” Our first platform, “Crucible,” has become one of the more popular platforms for teaching cybersecurity researchers how to “Hack AI.” To date, we have had thousands of participants from around the world and have collected roughly 10 million data points on how AI can be used and abused in adversarial settings.
We are now launching two complementary products, “Strikes” and “Spyglass,” which enable research teams to hack both “with” AI and “on” AI. These new products are foundational components for operationalizing offensive AI capabilities.
Every cybersecurity practitioner will tell you that the faster we can understand an exploit in the wild, the more rapidly we can create methods to detect and respond to it. Putting our products in the hands of companies and governments will allow us to shape the trajectory of AI’s adoption before others do, and to plan for its inevitable use and abuse.