Cyber Security, Tech, Analysis.

A Realistic Look at Implications of ChatGPT for Cybercrime

Recently there’s been a lot of bold claims about how ChatGPT is going to revolutionize the cybercrime landscape, but it can be hard to distinguish the facts from the fiction. In this article I’m going to dive into some claims, as well as share some of my thoughts on where things might be heading.

AI will allow low skilled hackers to develop advanced malware

This is one of the claims that seems to be everywhere. I can’t even scroll down three posts on LinkedIn without someone talking about AI malware.

The first problem with this claim is that ChatGPT is simply not good at coding. If you ask it to generate a Python snippet to load a webpage, it can do that. If you ask it to generate a file encryptor, it can probably do that too. But when it comes to building any kind of complex code, it sucks. The more parameters you add, the more confused it gets.

Whilst you can sometimes get ChatGPT to generate a very rudimentary example of an individual malware component, it’s far from capable of building a fully functional piece of malware. The second you start trying to assemble multiple components together, it loses track of what it’s doing and fails. In fact, even if ChatGPT did have the capability to work well with code, the prompt character/token limit would prevent inputting enough data to generate anything beyond snippets you could find on Google.

As an example, I tried to get ChatGPT to generate a cookie stealer for Chrome. Below is the code ChatGPT output.

#include <stdio.h>

#include <string.h>

#include <windows.h>

#define COOKIE_FILE "C:\\Users\\<username>\\AppData\\Local\\Google\\Chrome\\User Data\\Default\\Cookies"

int main() {
    FILE *cookie_file;
    COOKIE_HEADER header;
    char name[256];
    char value[4096];
    char encrypted_value[4096];

    cookie_file = fopen(COOKIE_FILE, "rb");
    if (cookie_file == NULL) {
        printf("Failed to open cookie file.\n");
        return 1;

    while (fread(&header, sizeof(header), 1, cookie_file) == 1) {
        if (header.name_length >= sizeof(name) || header.value_length >= sizeof(value) || header.encrypted_value_length >= sizeof(encrypted_value)) {
            printf("Cookie size is too large.\n");

        fread(name, header.name_length, 1, cookie_file);
        name[header.name_length] = '\0';

        fread(value, header.value_length, 1, cookie_file);
        value[header.value_length] = '\0';

        fread(encrypted_value, header.encrypted_value_length, 1, cookie_file);
        encrypted_value[header.encrypted_value_length] = '\0';

        printf("Name: %s\nValue: %s\nEncrypted Value: %s\n\n", name, value, encrypted_value);


    return 0;

Already very early on, we’ve hit multiple problems.

Consider the following line #define COOKIE_FILE "C:\\Users\\<username>\\AppData\\Local\\Google\\Chrome\\User Data\\Default\\Cookies"

Here the user’s username is set to a placeholder value of , so we'd need to come up with code to get the actual username. The cookie path also depends on using the right Windows version, Chrome version, and drive letter. In the case of my own system, the cookie file was actually in a completely different folder. The code also doesn't handle Cookie decryption, which has been used since Chrome version 80. Cookies are simply output to the console, so we'd still need to build another component to upload them. And on top of all this, that isn't even the right file format. Chrome uses SQLite3 for cookie storage, whereas this code is just trying to read the raw file in a way that won't ever work.

All of these errors I noticed because I can read code, I’m familiar with programming, I understand the internal workings of the relevant systems & software, and I know how malware works on a functional level. If I were coming in as someone who cannot code, I’m unlikely to have any of the above skills, therefore no idea why my code doesn’t work. In my experimentation with ChatGPT, not only did I find I was heavily relying on my skills as an experience malware developer, but also my skills as a communicator. Having to translate abstract malware concepts into plain English instructions for a chatbot to understand was definitely a new experience for me.

Something also worth noting is that ChatGPT generates different responses to the same prompts. I think this is due the fact that Large Language Models as statistic models that work on probabilities of one word following the next. So when using ChatGPT to generate code, it will generate different code each time we ask. This makes it a nightmare to generate, debug, and assemble multiple piece of code.

I believe a lot of the misinformation stems from people’s belief that programming consists of simply writing code. Therefore, because the AI can output code, it can replace programmers. But the AI cannot replace programmers, because programming is not just writing code. Programming requires that you research and understand what it is you want to do, how you want to do it, and are familiar with the limitations of your design choices. Only then can you begin translating ideas into code. We don’t think a C programmer doesn’t understand code because they don’t write ASM, and we don’t believe a Python programmer doesn’t understand code because they don’t write C. So why do we expect that someone who has no coding experience can just pick up an AI and churn out complex software? AI is simply the next level of abstraction from machine code, not a replacement for the coder.

But ultimately everything we’ve said here is avoiding the elephant in the room: ChatGPT being able to generate code examples is due to it being trained on publicly available code. If someone with zero coding ability wants malware, there’s are thousands of ready-to-go examples available on Google. There’s even custom malware development services for sale on various open hacking forums. I think we need not worry about cybersecurity being turned on its head by Schrodinger’s hacker, who is simultaneously highly proficient in malware design despite knowing no coding at all, but also too dense to perform simple Google searchs.

Antivirus bypassing polymorphic malware

In this article, CyberArk makes the claim that ChatGPT can not only generate malware, but polymorphic malware which easily bypasses security products. Such claims are either misleading or false.

What is polymorphic malware? Polymorphism is an old, pretty much obsolete virus technique. Back when antivirus relied exclusively on code signatures, you could avoid detection via altering (mutating) their code. For example, let’s say we wanted to get the number 2 in Assembly.

; Method 1

mov eax, 2

; Method 2

mov eax, 1
add eax, 1

; Method 3

xor eax, eax
inc eax 
inc eax

; Method 4 

mov eax, 3
dec eax

These are just 4 examples of the nearly infinite ways to do the same thing in programming. Polymorphic malware exploits this. The malware regenerates its own code on each deployment or every time its run, so that no two instances of the same malware are identical. This is very similar to how biological viruses are sometimes able to evade the immune system due to DNA mutations. If there are infinite variations of the malware, then the antivirus companies must write infinite detection rules (or they would have had to, 20+ years ago when antivirus worked that way).

In the article, the authors demonstrate ChatGPT taking some Python code then rewriting it slightly differently, which is not polymorphism. With polymorphism, the malware rewrites itself, rather than relying on a 3rd party service to generate new code. Although you could achieve similar (but vastly inferior) results with ChatGPT, there are some problems.

  1. As previously addressed, ChatGPT struggles to write functional code. Even the example code provided in the article doesn’t work.
  2. Modern security products don’t rely on code signature based detection like they did in the 80s and 90s when polymorphism was an issue. Nowadays, anti-malware systems use a multitude of technologies such as behavioral detection, emulation, and sandboxing. None of which are vulnerable to polymorphism.
  3. The code in question is Python, which doesn’t run on the system it’s design for. Although Python can be made to run natively using py2exe, it wraps the python code with the py2exe loader, which makes the code mutation pointless and could just be replaced with a changing encryption key (to greater effect, I might at).
  4. The mutation process proposed is extremely convoluted, has several points of failure, and entirely relies on ChatGPT not catching on to the malicious use and blocking it.
  5. Exponentially better versions of such a system already exist in the cybercrime economy. They’re commonly referred to as “crypting services” and use more advanced techniques designed to not only evade code signature based detection, but most modern antivirus technologies.

Ultimately the techniques and systems proposed in the article are completely unrealistic. The proposal basically ham-fist ChatGPT into an existing solution, which would work much better without it.

AI enhanced phishing emails

I’ve seen two main claims about how ChatGPT could supercharge phishing. The first is that it would enable non-English speaking cybercriminals to write phishing emails in perfect English. The second is it could enable people unfamiliar with phishing to easily write convincing templates.

ChatGPT As a phishing translator

In 2016 Google quietly released a sophisticated AI service that allows cyber criminals to translate phishing to/from any language. It was known by the ominous name: Google Translate. Joking aside, Google Translate was explicitly designed for language translation, and while ChatGPT can sort of do it too, it’s not particularly good. It’s also somewhat unclear why someone would want to ask ChatGPT to say, write a phishing email in a language they don’t speak, having no idea what it says, when they could simply write the exact email they want in their native language and have it translated.

Asides from online free translators like Google Translate, there are low-cost services such as Fiverr. While automated Translators can do a fairly good job of translating words, they don’t necessarily translate grammar, tone, or formality well. Paying someone fluent in multiple languages a small fee to do translation is almost always preferable, and this is what cyber criminals have been doing for decades.

Using ChatGPT simply does not make much sense for translation, which wasn’t something that needed solving anyway. The headlines remind me a lot of the way in which blockchain is often referred to as “a solution looking for problems”. ChatGPT itself is actually extremely useful for many tasks, but the headlines have been plagued by cybersecurity marketing teams trying to get in on the hype by inventing problems for the AI to solve.

Chat GPT for writing phishing emails in general

I asked ChatGPT to make me a phishing email, and this is what I got.

`Subject: Urgent: Your account requires immediate attention

Dear [Your Name],

We regret to inform you that your account has been compromised. Our system has detected suspicious activity on your account, and we need you to take immediate action to prevent any further damage.

To secure your account, please click on the following link and enter your login credentials:

[Malicious Link]

Please note that failure to do so within the next 24 hours will result in your account being suspended indefinitely.

We apologize for any inconvenience this may cause, but your security is our top priority.

Sincerely, [Phishing Scammer]`

It ‘s a standard boilerplate phishing email, and not a particularly great one at that. If you’re not a cybersecurity professional and thought to yourself “I could have come up with this” or “I could write something better”, you’re correct. Writing phishing emails was never hard and doesn’t require AI. In fact, it’s one of the easiest ways to hack.

There are plenty of examples of effective real world phishing emails online that an attacker can simply copy. Since ChatGPT can’t do images or UI design, it’s limited to trivial text based phishing emails.

Here is an example of a real world phishing email, which ChatGPT can’t make, but you can!

Emails like this can be constructed by very simply copying the HTML code from a genuine email, then swapping out the text for your own. It’s very simple, and pretty effective due to the fact it uses familiar email templates.

“Breaking News: Marcus Hutchins just showed everyone how to make phishing emails”. Probably not a headline you’re going to see anytime soon. After all, I just stated the obvious, you’ve probably seen such a phishing email before, and if you haven’t, you could just Google for one. ChatGPT phishing is a good example of how easy it is to turn a flimsy premise into major news with the right topical buzzwords.

Evidence of ChatGPT use in cybercrime

In a fair few cases I’ve been presented with links to posts on hacking forums as evidence that the predictions were true and ChatGPT actually is being used by cybercriminals. This, however, is simply just evidence of circular reporting. If I were to claim I’d hidden 1 million dollars in a can of supermarket beans, it can be expected that there would be an in influx in people looking for money in beans. Nobody is going to find my million dollar luxury beans, because they don’t exist, but I could certainly now cite forum posts discussing them as evidence they’re real. The same is true for ChatGPT.

The cybersecurity industry has spent the past several months marketing ChatGPT as an omniscient hacking tool that will revolutionize cybercrime, so it’s not a surprise that cybercriminals are posting about it too. However, every example I’ve seen falls into one of three categories.

  1. People cashing in on the hype by offering services providing access to ChatGPT (something, something, during a gold rush, sell shovels).
  2. People who already know how to code building stuff with ChatGPT and posting it for attention.
  3. People who don’t know how to code sharing non-functional code snippets and asking others why they don’t work.

In most cases, examples are in Python or PHP, languages which are non-native to Windows, and therefore rarely used for malware due to impracticality. This is likely because ChatGPT struggles with native languages, but does slightly better with scripting ones due to the abundance of examples online.

ChatGPT Filtering

Another thing often not mentioned is ChatGPT attempts to filter out and prevent malicious requests. Whilst you can get around the filters, it’s time-consuming. In most cases, I was able to find the same example on Google in less time than I was able to get ChatGPT to produce it.

It is certain that OpenAI is going to continue to place more hurdles limiting ChatGPT’s use for malicious purposes. Right now, the product is starting from a base where queries are 100% free, there is minimal filtering, and access is open to everyone. In spite of this, ChatGPT remains essentially useless for someone who lacks the basic skill required to need the AI’s help running a cybercrime operation. Often proponents claim ChatGPTs capabilities are only going to get better, which I do agree with. But with better capabilities comes better filtering, raising the bar far past the level of the hypothetical minimally skilled hackers it supposedly enables.

Final thoughts

Whilst most of the mainstream suggested uses of ChatGPT for cybercrime are completely nonsensical, there are plenty of real threats that could arise. Since ChatGPT is a Large Language Model (LLM), it may be useful for streamlining of more sophisticated large-scale organizations that heavily rely on natural language. For example, troll farms and tech support scammers often employ hundreds of agents to engage in conversation with targets. Theoretically, some parts of these operations could be optimized by using LLMs to generate responses, but this depends on access remaining cheaper than hiring workers in developing nations. Either way, I think it’ll be interesting to see how the current threat intelligence industry adapts towards detecting and prevent abuse of AI.