The launch of DeepSeek R1 has stunned Silicon Valley, triggered global counter-intelligence initiatives and sent tech shares tumbling on Wall Street. But that’s not what CISOs should be worrying about.

By Pax Zoega, Head of AI Readiness, Ideal


Barely two weeks after launch, a little-known 200-person company, DeepSeek, founded in 2023 in Hangzhou, China, has turned heads across the world’s technology industry.

You will undoubtedly have seen the hullabaloo in the world’s media:

  • How has it produced such a capable tool so quickly? Was it illegally trained on OpenAI’s proprietary IP?
  • Is it a Chinese Trojan horse with in-built capability to steal the West’s commercial secrets?
  • If it doesn’t need the West’s most advanced chips, what are the ramifications for companies like Nvidia, which saw almost $600bn wiped off its market value – the biggest single-day fall in US stock market history?
  • Has DeepSeek quickly become the most popular free application on Apple’s App Store across the US and UK because people are just curious to play with the next shiny new thing (like me), or is it set to unseat the likes of ChatGPT and Midjourney?

I would argue that, as a corporate CISO, whilst these questions are interesting, none of them is the one you need to be primarily concerned with.

The question you need to consider is: what might bad actors start doing with it?

A readily usable tool of immense power for cyber attackers 

Up until this point, in the brief history of GenAI-based coding assistants, the most capable models have always been closed source and available only through the APIs of frontier model developers like OpenAI and Anthropic. These closed-source models ship with guardrails designed to stop cyber attackers and other bad actors from using them to generate malicious code.

DeepSeek R1, by contrast, has been released open source with open weights, so anyone with a modicum of coding knowledge and the hardware required can run the models privately, without the safeguards that apply when running the model via DeepSeek’s API.
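
To illustrate how low that bar is, here is a minimal sketch of what running one of the published distilled R1 checkpoints locally looks like, using the open-source Hugging Face transformers library (the model identifier, settings and prompt are illustrative; the larger variants need far more memory):

```python
# Minimal sketch: running a distilled DeepSeek R1 checkpoint locally.
# Assumes the transformers and accelerate packages and a suitable GPU;
# the identifier below is one of the published distilled variants.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # keep the dtype stored in the checkpoint
    device_map="auto",   # spread layers across available hardware
)

prompt = "Write a Python function that scans a log file for failed logins."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The point is not the snippet itself but that nothing in it requires specialist infrastructure: once the weights are on local disk, no API, account or guardrail stands between the operator and the model.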

Furthermore, once a model is running privately, the user has full freedom to apply jailbreaking techniques that strip away any remaining restrictions. Indeed, multiple jailbreaking attacks have already been documented as effective against DeepSeek R1 (including by Palo Alto Networks’ Unit 42).

This means that for the first time in history, as of a few days ago, the bad-actor hacking community has access to a fully usable model at the very frontier of code generation capability.

How capable is it?

To set the scene on R1’s coding capabilities: it matches or outperforms the benchmark performance of the two most capable coding models in public release, OpenAI’s o1 and Anthropic’s Claude 3.5 Sonnet.

On Codeforces, a competitive coding benchmark, R1 outperforms 96.3% of human competitors, placing it in the top 3.7% of competitive coders.

At the same time, its ability to run on less technically advanced chips makes it lower cost and easily accessible. You could build an immensely powerful hacking tool from a stack of Mac minis in the corner of a teenager’s bedroom.

Imagine what this could do for the hacker-sphere

Does all of this mean that DeepSeek will be used by bad actors to supercharge their cyber attacking capabilities?

Let’s reason this through. For DeepSeek R1 to be an effective tool for nefarious code generation, three things would have to hold true:

1. It must be true that GenAI code generators can produce code usable in cyber-attacks.

This has been proven time and time again. Leading cybersecurity vendors are already defending against a growing number of AI-generated, autonomous malware attacks.

Recently, the AI pen-testing startup XBOW, founded by Oege de Moor, the creator of GitHub Copilot (the world’s most widely used AI code generator), announced that its AI penetration testers outperformed the average human pen tester in a number of tests; the data, along with examples of the ingenious hacks conducted by its AI “hackers”, is published on XBOW’s website.

2. R1 must be usable for the purpose.

In other words, the model must be accessible in a jailbroken form so that it can be used to perform nefarious tasks that would normally be prohibited.

Given that the model is open source and open weights and has already been jailbroken, this condition has also been satisfied.

3. A bad actor must be able to run the model on their own system in a practical and economically viable manner, so as to avoid the restrictions that apply when accessing the model via DeepSeek’s guard-railed API.

This condition, too, has been satisfied. The smaller and mid-sized distilled models can be run on a powerful home computer setup.

Even the most powerful 671-billion-parameter version can be run on 18 Nvidia A100s with a capital outlay of approximately $300k. This might sound like a chunky investment, but given that there are multiple recorded ransomware payouts in the $1M-plus range (the highest ever disclosed was $70M), a single successful attack on a reasonably sized enterprise would put the bad actors comfortably in profit.
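
That hardware claim passes a quick back-of-envelope test. Here is a sketch of the arithmetic, assuming 16-bit weights at 2 bytes per parameter and roughly $17k per A100 80GB (both my illustrative figures, not quoted prices):

```python
# Back-of-envelope check on the "18 A100s for ~$300k" claim.
# Assumptions (illustrative): FP16 weights, no KV-cache or activation
# overhead, ~$17,000 per Nvidia A100 80GB.
params = 671e9                 # R1 parameter count
weight_tb = params * 2 / 1e12  # FP16 footprint: ~1.34 TB
vram_tb = 18 * 80e9 / 1e12     # pooled VRAM across 18 A100 80GB: 1.44 TB
cost = 18 * 17_000             # ~$306,000

print(f"Weights: {weight_tb:.2f} TB, pooled VRAM: {vram_tb:.2f} TB")
print(f"Hardware outlay: ${cost:,}")
```

The weights only just fit in the pooled VRAM, which is why real deployments typically quantise further, but the order of magnitude of the $300k figure holds.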

In summary, as of 20 January 2025, cybersecurity professionals now live in a world where a bad actor can deploy a coder ranking in the top 3.7% of human competitive programmers, for little more than the cost of electricity, to run large-scale, continuous cyber-attacks across multiple targets simultaneously.

We are effectively witnessing the democratisation of cybercrime: a world where smaller criminal groups can run sophisticated, large-scale operations previously restricted to groups able to fund teams with this level of advanced technical expertise.

That is why, as you read these words, multiple bad actors will be testing and deploying R1 (having downloaded it for free from DeepSeek’s GitHub repo).

How can you defend your business against real-time autonomous malware attacks?

Now for the good news. Impressive though R1 is, for the time being at least, bad actors don’t have access to the most powerful frontier models. For instance, OpenAI’s o3 reasoning model, already trained and tested but yet to be publicly released, scored better than 99.95% of coders in Codeforces’ all-time rankings; to put that in perspective, only around 175 human competitive coders on the planet can outperform o3. Fortunately, the top model developers (including OpenAI and Google) are already involved in cybersecurity initiatives where non-guard-railed instances of their cutting-edge models are being used to push the frontier of offensive and predictive security.
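
As a quick sanity check on that 175 figure, the two numbers are only consistent if the all-time ranking covers roughly 350,000 competitors, a population size I am inferring here rather than quoting:

```python
# Sanity check: 175 coders above the 99.95th percentile implies a ranked
# population of about 350,000 (an inferred, illustrative figure).
above_o3 = 175
percentile = 0.9995
population = above_o3 / (1 - percentile)
print(f"Implied ranked population: {population:,.0f}")  # -> 350,000
```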

Of course, to be of any use, you need those capabilities on your side. Whether we’re specifically talking about DeepSeek or the flurry of rivals/spinoffs that will inevitably follow, now is the time to deploy real-time AI-enabled autonomous detection, prevention and remediation solutions.

If upgrading your cyber defences was near the top of your 2025 IT to-do list (it’s no. 2 in our Tech 2025 Predictions, ironically right behind AI), it’s time to move it right to the top.

In my opinion, the open-source, open-weights DeepSeek R1 is a “drop everything” moment. It certainly is for your opponent.

Pax Zoega, Head of AI Readiness, Ideal
