Threat Modeling Example with ChatGPT

A quick demonstration of a threat modeling process using ChatGPT and STRIDE.


I'm presuming you've heard of ChatGPT and also threat modeling. If you don't know much about threat modeling but are interested in learning, then you are in the right place. This is not a deep dive into threat modeling. Instead, this is a fun way to learn quickly by messing around.
One caveat is that plenty of existing threat modeling tools are out there. Some are complex, but we have a new AI friend, so let's see how ChatGPT can help us simulate a quick threat modeling process.

Overview and Choosing a Threat Modeling Method

First, a quick recap of what threat modeling is:

  • Threat modeling is a process used to identify potential security risks in a system, application, or network.
  • It involves thinking like a potential attacker and examining what assets are at risk, who the potential attackers might be, and what methods they might use to compromise the system.
  • By identifying relevant threats, organizations can take proactive steps to address them before they can exploit vulnerabilities, improving the overall security posture of the system.

In short, it is the practice of taking time to think about what may go wrong in a given system or situation, so that we can be prepared if it does. In a deeper sense, it lets us design a system that is resilient to the bad things that could happen.

There are many available threat modeling methods. Nataliya Shevchenko with CMU lists 12 threat modeling methods on the SEI blog. We will take a common one, STRIDE, and jump right into it. You may run into STRIDE in many technical fields, and it is commonly used in cybersecurity.

STRIDE Threat Modeling with ChatGPT

You can read the Wikipedia page on STRIDE for detail, but the basic idea is that STRIDE is a mnemonic for six common threat categories we can use to reason about "what could go wrong" with a particular element or interaction of a system.

  • Spoofing
  • Tampering
  • Repudiation
  • Information Disclosure
  • Denial of Service
  • Elevation of Privilege
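As a quick reference, each STRIDE category maps to a security property it violates. A minimal Python sketch of that standard mapping:

```python
# The six STRIDE categories, each paired with the security
# property it violates (a standard mapping from the STRIDE literature).
STRIDE = {
    "Spoofing": "Authentication",
    "Tampering": "Integrity",
    "Repudiation": "Non-repudiation",
    "Information Disclosure": "Confidentiality",
    "Denial of Service": "Availability",
    "Elevation of Privilege": "Authorization",
}

for threat, prop in STRIDE.items():
    print(f"{threat}: violates {prop}")
```

Keeping the mapping handy helps when you later ask "which property of this element am I worried about?"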

Typically you would apply STRIDE against a system you are designing or an existing system you are analyzing for the purpose of reducing risk. Let's use ChatGPT to generate a plausible, oversimplified system concept that we can apply STRIDE against.

Generate a System Description With Elements

Straightforward, right? I asked it to conversationally describe a common three-tier architecture with a bit of AWS in there. It isn't very concrete or entirely accurate, but it will do for a general example. If you wanted to do something like this in real life, I found a whitepaper that describes scaling WordPress on AWS. Moving on: let's ask ChatGPT to detail some of the data flow among the tiers.
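If you find yourself repeating this kind of prompting, it can help to template it. Below is a minimal, hypothetical sketch; the function name and wording are my own invention, not the exact prompts used above:

```python
# A hypothetical helper for templating threat-modeling prompts.
# The function and wording are illustrative, not the article's exact prompts.
def build_prompt(task: str, system_description: str) -> str:
    """Combine a threat-modeling task with the system context."""
    return (
        "You are helping with a threat modeling exercise.\n"
        f"System description:\n{system_description}\n\n"
        f"Task: {task}"
    )

prompt = build_prompt(
    task="Conversationally describe a common three-tier web architecture on AWS.",
    system_description="A three-tier WordPress deployment on AWS.",
)
print(prompt)
```

The same template works for every later step: just swap in a new task while keeping the system description constant, so the model stays anchored to the same system.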

Generate a Data Flow Description

Sounds pretty plausible, albeit with some weirdness here and there.

Okay, how do we use STRIDE against the system design? When applying STRIDE, you typically choose to run the process against one of a few kinds of targets:

  • An element in the system
  • An interaction or event in the system
  • A boundary or trust boundary

Let's choose an element. I know data is important, so let's look at potential threats to the entire data storage tier. If you wanted to be more specific, you could choose a smaller element than the tier itself, such as a particular device, piece of software, or construct. This is where the real meat of threat modeling begins.
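Mechanically, this step is just crossing a chosen element with each STRIDE category and asking "what could go wrong?" A tiny sketch of that enumeration (the element name is from our example; everything else is illustrative):

```python
# Enumerate "what could go wrong" questions by crossing a chosen
# system element with each STRIDE category.
STRIDE = [
    "Spoofing", "Tampering", "Repudiation",
    "Information Disclosure", "Denial of Service", "Elevation of Privilege",
]

def stride_questions(element: str) -> list:
    """One brainstorming question per STRIDE category for the element."""
    return [f"How could {category} affect {element}?" for category in STRIDE]

for question in stride_questions("the data storage tier"):
    print(question)
```

Each question becomes a prompt, whether you pose it to ChatGPT or to the humans in the room.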

Generate a List of Threats Using STRIDE Against the System

Here you see where ChatGPT lists each category in STRIDE and provides an example of how the threat could materialize against the database tier. However, it is still sort of high-level. Let's see about drilling in a bit.

Generate Specific Threats to Technology Specific Elements

Great! We got more detail this time. We used a very explicit prompt to describe the technology piece within the database tier we wanted to drill into, and we chose a specific category, tampering, to focus on.

You can see how this process could become very detailed. In real life you will want to scope your threat modeling, involve experts from various perspectives, and use diagrams. It is essential that the architects of the system be involved!

Now, remember that ChatGPT is far from perfect and will get many details wrong. However, it isn't bad at giving you ideas and sending you in a lot of good directions. It is useful for brainstorming! Here we used ChatGPT to show you an example of threat modeling so that you can understand the concept.

Identify Mitigations to Threats

ChatGPT got ahead of me: in the previous response it started generating some mitigations.

After you model threats and have a list of them, mapped to elements, events/interactions, or boundaries in your system, the next step is to identify mitigations for those threats. That is a whole other story, but by now you likely realize you can ask ChatGPT to help brainstorm some mitigations if you want.
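However you gather them, it pays to record each threat alongside its candidate mitigations in one structure. A minimal sketch; the example threat and mitigations are illustrative, not an authoritative list:

```python
# Sketch of recording a threat alongside candidate mitigations.
# The threat/mitigation text here is illustrative only.
from dataclasses import dataclass, field

@dataclass
class Threat:
    category: str      # STRIDE category
    element: str       # system element the threat applies to
    description: str
    mitigations: list = field(default_factory=list)

t = Threat(
    category="Tampering",
    element="database tier",
    description="Unauthorized modification of records via SQL injection.",
)
t.mitigations.append("Use parameterized queries and input validation.")
t.mitigations.append("Restrict database accounts to least privilege.")

print(f"[{t.category}] {t.element}: {t.description}")
for m in t.mitigations:
    print(f"  mitigation: {m}")
```

A flat structure like this exports cleanly to a spreadsheet or ticket tracker, which is where most real threat models end up living.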


For a quick recap, we:

  • Generated a fictitious system with plausible elements
  • Applied STRIDE against a chosen element to brainstorm what threats could look like to that element
  • Drilled into a particular category of threat to find useful details
  • Generated mitigations to threats identified via the threat modeling process

You should know this was a simplified version of threat modeling; however, even in the real world, simple threat modeling is better than no threat modeling. Due to time constraints you may only be able to run a simple threat model, but even a simple one can identify very relevant threats the system designers may have overlooked. Once you have threats and mitigations, you can work with the system architects to put appropriate controls in place, depending on the business's appetite for risk and the time, resources, and money available.

I hope you enjoyed this little exercise! Til next time!