Are We Just Training Future AI Attackers for Free?

02 Jul 2025

There’s never been more cybersecurity knowledge out in the open than there is today. Blog posts, whitepapers, GitHub repos, open-source tooling, step-by-step videos, detailed writeups of zero-days and attack paths - the industry thrives on transparency, collaboration, and knowledge-sharing.

But as LLMs and generative AI become more capable, and more widely accessible, we’re starting to wonder… who’s actually benefiting the most from this free, open pool of cyber intel?

Because while we think we’re educating the next wave of defenders, it might be the attackers who are quietly learning faster than anyone… And we’re feeding them everything they need.

Let’s talk about it.

So Much Knowledge. So Little Control.

We’ve all seen ChatGPT or other LLMs summarising a TTP, writing a basic phishing script, crafting an exploit, or walking through how to run a port scan.

Most of the time, these models aren’t “hacking” in the traditional sense. They’re pulling from publicly available data (CVEs, technical blogs, forum threads, MITRE ATT&CK entries, GitHub tools, conference talks) and stitching together what’s already out there.

But attackers don’t need new ideas. They need scale, speed, and plausible deniability. And LLMs are giving them exactly that.

The Threat Is in the Accessibility, Not the Complexity

Most seasoned threat actors already know how to breach a target. But AI makes it easier for the less experienced (or less skilled) to enter the game.

Script kiddies don’t need to browse ten different StackOverflow threads to find the right syntax anymore. They can just ask the model.

Language barriers? Gone. Regional threat groups can now get English-language documentation explained instantly and fluently in their own language.

Phishing emails? AI can now write convincing, grammatically perfect messages tailored to specific industries, roles, and even tone of voice. No more awkward "Dear Sir, please open the attachment for kindly confirmation."

In short, the knowledge gap between seasoned attackers and casual ones is shrinking because the barrier to entry is being bulldozed by publicly trained AI.

Is Our Own Transparency Turning Into a Weapon?

Here’s where it gets uncomfortable.

Most AI models are trained on massive volumes of internet content. That includes all the “how-to” cyber blogs, all the writeups breaking down APT techniques, and all the code we’ve open-sourced in the spirit of knowledge sharing.

We’re not saying we should stop contributing to the community; that openness is a huge part of what makes cyber strong.

But we do need to be aware that every time we document a red team engagement, upload a detection script, or break down a ransomware strain, it’s not just the defenders reading it. We might be handing over the playbook.

Worse still, many AI models don’t distinguish between ethical and malicious use. They don’t care about intent. They just generate. And with enough clever prompting (or model tuning), even “safe” models can be manipulated to output something harmful.

So, What Do We Do?

We’re not calling for less transparency or fewer contributions; that would be a loss for defenders, too. But we are suggesting a shift in awareness and approach.

Here’s where to start:

  • Think like a threat actor when you publish. Could this post be directly abused? Can you redact or reframe without losing the core lesson?
  • Push for context-aware AI tools. There’s a huge opportunity in building defence-focused models that flag misuse or can’t easily be jailbroken (see the sketch after this list).
  • Invest in education that teaches real-world critical thinking. Because no matter how much attackers learn, the people who can out-think, out-analyse, and out-defend will still win.
  • Understand that AI won’t replace skilled attackers, but it will enable more bad actors to act faster, more cheaply, and at greater scale.
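To make the “context-aware AI tools” point concrete, here’s a minimal sketch of a guardrail that screens prompts before they ever reach a model. Everything in it is illustrative: the pattern list, the function names, and the stubbed model call are our own assumptions, and a production system would use a trained intent classifier rather than regex heuristics.

    # A minimal, illustrative guardrail: flag obviously risky prompts
    # before handing them to a model. All names here are hypothetical.
    import re

    # Illustrative (not exhaustive) deny-list of high-risk request patterns.
    HIGH_RISK_PATTERNS = [
        r"\bwrite\b.*\bphishing\b",
        r"\bbypass\b.*\b(edr|antivirus|av)\b",
        r"\bexploit\b.*\bcve-\d{4}-\d{4,7}\b",
    ]

    def flag_misuse(prompt: str) -> bool:
        """Return True if the prompt matches a known high-risk pattern."""
        lowered = prompt.lower()
        return any(re.search(p, lowered) for p in HIGH_RISK_PATTERNS)

    def guarded_generate(prompt: str) -> str:
        """Refuse flagged prompts; otherwise pass through to the model."""
        if flag_misuse(prompt):
            return "Refused: this request matches a misuse pattern."
        return model_generate(prompt)

    def model_generate(prompt: str) -> str:
        # Stand-in for a real LLM API call; swap in your provider's SDK.
        return f"[model output for: {prompt!r}]"

    print(guarded_generate("Summarise MITRE ATT&CK technique T1059"))
    print(guarded_generate("Write a phishing email aimed at a finance team"))

The regexes aren’t the point; the design is. A refusal layer that sits between the prompt and the model, and that understands the context of a request, is exactly the kind of defence-focused tooling worth pushing for.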

Final Thoughts

AI isn’t inventing cyber threats out of nowhere. It’s remixing and redistributing what we’ve already made public. The irony is that, in trying to educate the good guys, we may have accidentally supercharged the bad ones.

The answer isn’t panic. It’s perspective. And maybe a bit more intentionality about what we share, how we share it, and who might be listening.

Because the next breach might not come from a genius hacker; it might come from someone who just knew how to prompt better.
