
Threat Modeling for the Rest of Us: Part 1 - The Playbook I Wish Existed

Introduction

If you’re reading this because an auditor, a regulation, or a board member just asked about your threat model – you’re in the right place. And I’m not going to start by telling you that compliance is a terrible reason to do threat modeling. That would be dishonest. Compliance is how most organizations get here, and there’s no shame in it. The audit finding, the SOC 2 gap analysis, the PCI requirement that suddenly has teeth – these are legitimate catalysts. The goal is to take that catalyst and build something that actually protects your organization, not just something that satisfies a checkbox.

Here’s what I’ve seen happen too many times. A mid-sized SaaS company passed their annual pen test with flying colors two years running. Third year, a new testing firm comes in and finds a chained privilege escalation: an unauthenticated API endpoint leaking internal service names, combined with a misconfigured identity provider that trusted those service names implicitly. It was not sophisticated. It was not a zero-day. It was the kind of architectural blind spot that a single threat modeling session – one whiteboard, one hour, the right people in the room – would have surfaced before a line of code shipped. Instead, it cost them a quarter spent on incident response, a delayed product launch, and a very uncomfortable board conversation.

I’ve been asked to stand up threat modeling programs multiple times over the past fifteen years, across defense contractors, financial services firms, healthcare organizations, and startups that suddenly found themselves handling data they never expected to handle. This is the guide I wish someone had handed me the first time a CISO said, “We need threat modeling. Make it happen.”

This is Part 1 of a three-part series. Here, we cover the what and the why – what threat modeling actually is once you strip away the formality, which methodology fits your organization’s size and maturity, and how to run your first real session in 60 minutes. In Part 2, we get into tooling and automation – how to make threat modeling repeatable without making it miserable, and how to integrate it into the pipelines your engineering teams already use. In Part 3, we tackle the hardest problem: communicating threat model outputs to three audiences that speak entirely different languages – engineers who want specifics, executives who want risk posture, and board members who want assurance.

If you’ve tried threat modeling before and it felt like an academic exercise that produced a diagram nobody looked at twice – stick around. If you’ve never done it and the whole concept feels like one more security theater obligation – definitely stick around. This is the playbook that has worked in the field, refined by getting it wrong more times than I’d like to admit.

What Threat Modeling Actually Is (And Isn’t)

Let me cut through the formality. Threat modeling is a structured way of thinking about what could go wrong with something you’re building or operating, and deciding what to do about it before an attacker decides for you. That’s it. No special certification required. No proprietary software. No twelve-week engagement from a Big Four consultancy.

The best framing I’ve encountered comes from Adam Shostack, who distilled the entire practice down to four questions:

  1. What are we building? – Get the right people in a room and make sure everyone agrees on what the system actually looks like. Not what the architecture diagram from two years ago says. What’s actually running, who talks to what, and where data flows.
  2. What can go wrong? – This is where you systematically identify threats. Not “everything” – that’s useless. Specific, concrete scenarios tied to the system you just described.
  3. What are we going to do about it? – For each threat, you decide: mitigate it, accept the risk, transfer it, or eliminate the component that creates it. Every threat gets a disposition. No orphans.
  4. Did we do a good enough job? – You look back at the analysis itself. Did we miss anything obvious? Did we have the right people in the room? Is the model still accurate after last month’s architecture changes?

Those four questions work whether you’re protecting a three-tier web application, an IoT fleet, a data pipeline, or an entire business process. They scale from a startup’s first production deployment to a federal agency’s zero-trust migration. The methodology you wrap around them varies – and we’ll get into that – but the core logic stays the same.
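One practical way to honor the “no orphans” rule from question 3 is to make disposition a required field in whatever tracker you use, so an unanswered threat is visible by construction. A minimal sketch in Python – the component names, threats, and dispositions below are illustrative, not from any real system:

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional

class Disposition(Enum):
    MITIGATE = "mitigate"
    ACCEPT = "accept"
    TRANSFER = "transfer"
    ELIMINATE = "eliminate"

@dataclass
class Threat:
    component: str
    description: str
    disposition: Optional[Disposition] = None  # None = orphan, no decision yet

def orphans(threats: List[Threat]) -> List[Threat]:
    """Question 3 isn't done until this list is empty."""
    return [t for t in threats if t.disposition is None]

threats = [
    Threat("API gateway", "Forged OAuth tokens accepted", Disposition.MITIGATE),
    Threat("Order DB", "Backups unencrypted at rest"),  # still an orphan
]
print([t.component for t in orphans(threats)])
```

The enum is the whole point: mitigate, accept, transfer, or eliminate are the only legal answers, and “we haven’t decided” is a state you can query for.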

What Threat Modeling Is Not

This is where I see organizations burn time and money. They confuse threat modeling with adjacent security activities, try to make one practice do the job of another, and end up with something that satisfies nobody.

Threat modeling is not a penetration test. A pen test tells you what’s exploitable in what you’ve already built. Threat modeling tells you what might be exploitable in what you’re about to build – or what you’re running right now but haven’t examined architecturally. One is reactive validation. The other is proactive analysis.

Threat modeling is not a vulnerability scan. Scanners find known weaknesses in known software – missing patches, misconfigured headers, outdated libraries. They don’t find architectural flaws, business logic gaps, or trust boundary violations. A scanner will never tell you that your microservice trusts an upstream caller it shouldn’t.

Threat modeling is not a risk register. Risk registers track organizational risks at a portfolio level – “we might lose a key vendor,” “regulatory changes could affect revenue.” Threat modeling operates at the system level, identifying specific technical and process threats against specific assets. The outputs can feed a risk register, but they are not the same artifact.

Threat modeling is not a one-time exercise. This might be the most damaging misconception. I’ve walked into organizations that proudly showed me a threat model from their initial product launch – three years and forty deploys ago. A threat model that doesn’t evolve with the system it describes is a historical document, not a security tool.

These activities are complementary. Each one covers ground the others can’t. Here’s how they relate:

Activity | When It Happens | What It Finds | Threat Modeling’s Relationship
Threat Modeling | Design phase and ongoing | Architectural flaws, trust boundary issues, business logic threats | (the practice itself)
Penetration Test | After deployment, periodically | Exploitable vulnerabilities in running systems | Threat models identify what pen testers should target first
Vulnerability Scan | Continuously or on schedule | Known CVEs, misconfigurations, missing patches | Threat models reveal which scan findings matter most given your architecture
Risk Register | Ongoing at organizational level | Portfolio-level business and operational risks | Threat model outputs feed risk registers with system-specific technical risks
Security Audit | Periodically, often compliance-driven | Gaps against a standard or control framework | Threat models provide evidence that design-level controls were considered

The point is not that threat modeling is superior to these other practices. The point is that it fills a gap none of them cover: understanding your system’s exposure before the code ships, and keeping that understanding current as the system evolves. Skip it, and you’re relying entirely on after-the-fact discovery – pen tests, scans, and incident response – to catch what upfront analysis would have prevented.

Why It Matters Beyond the Audit

Let me be direct: if compliance is the reason you’re starting threat modeling, that’s a perfectly good reason. I’ve never looked down on a team that got here because an auditor told them to. SOC 2 Type II, PCI-DSS, HIPAA, FedRAMP – they all either require or strongly imply that you’re doing some form of threat analysis at the design level. Meeting that requirement is legitimate work, and the artifact you produce has real value in that context.

But if compliance is the only value you extract from threat modeling, you’re leaving most of the benefit on the table. The audit checkbox is the floor, not the ceiling. Once you have the practice running – even in its most basic form – three things start happening that pay dividends well beyond your next assessment cycle.

Finds Design Flaws Before Code Exists

This is the highest-leverage benefit of threat modeling, and it’s the one that wins over skeptical engineering leads faster than anything else I’ve seen.

When you threat model during design – before a single line of production code ships – you catch architectural mistakes at the point where fixing them costs hours instead of weeks. The earlier you find a flaw, the cheaper and less disruptive the fix. This isn’t theory. I watched a platform team at a mid-sized fintech catch an authentication bypass during an architecture review for a new service-to-service integration. Their proposed design had an internal API that accepted a caller-asserted identity token without independent verification – the downstream service simply trusted that the upstream caller had already authenticated the user. A forty-five-minute threat modeling session surfaced the trust boundary violation. The fix was a design change on a whiteboard. Had that flaw shipped, it would have required rearchitecting the auth flow across three services, invalidating two months of integration testing, and delaying a launch that was already on a tight timeline. The threat model cost them one meeting. The alternative would have cost them a quarter.

Every organization I’ve worked with that sustains threat modeling past the first cycle reports the same thing: the bugs they stop finding in production are the ones they started catching in design. Pen test findings shift from “critical architectural flaw” to “implementation-level issue” – which is exactly where you want your pen testers spending their time.

Creates Shared Language Across Teams

One of the less obvious but more durable benefits of threat modeling is that it forces security, engineering, and product teams to agree on what words mean.

I worked with a healthcare SaaS company where the security team rated a data exposure path as “high risk,” the engineering team called the same issue “medium,” and the product owner didn’t think it was a risk at all because the affected data wasn’t covered by HIPAA. They were all looking at the same system. The disconnect wasn’t about competence – it was about language. “Risk” meant something different to each group because they’d never sat in the same room and built a shared model of what the system actually did, what the threats actually were, and what “high” actually meant for their specific context. One structured threat modeling session – walking through data flows together, identifying threats together, and rating severity using a common scale – gave them a shared vocabulary that persisted well beyond that single exercise. Six months later, their cross-team security conversations were faster and produced fewer arguments, because everyone was working from the same map.

Threat modeling doesn’t just produce a document. It produces alignment. And alignment across teams reduces the friction that slows down every security initiative you’ll ever run.

Makes Security Spending Defensible

Every security leader I know has fought the budget battle. You know the threat is real, you know the investment is justified, but translating that into language that gets a finance committee to release funds is a different skill entirely. Threat modeling gives you the raw material to make that translation.

When your threat model identifies a specific attack path – say, a lateral movement risk from a compromised contractor VPN into a segment hosting customer payment data – you have something concrete to point to. You’re not saying “we need to invest in network segmentation because it’s a best practice.” You’re saying “here is a documented threat against a specific asset, here is the potential impact, and here is what it costs to mitigate versus what it costs if we don’t.” That’s a fundamentally different conversation, and it’s one that budget holders can actually engage with. I saw a regional bank use threat model outputs to justify a segmentation project that had been stuck in budget limbo for over a year. The threat model didn’t change the technical reality – the risk was always there. It changed the conversation by making the risk visible and specific.

We’ll go much deeper into communicating threat model outputs to different audiences in Part 3 of this series, including how to frame findings for engineers, executives, and board members. For now, the key point is this: threat modeling doesn’t just improve your security posture. It gives you the evidence to explain why you’re spending what you’re spending – and that evidence holds up under scrutiny in ways that gut instinct and industry benchmarks never will.

Choosing Your Approach: A Use-Case Guide

Here’s where most guides go wrong. They hand you a list of frameworks sorted alphabetically and say “pick one.” That’s like walking into a hardware store, being handed a catalog of every tool they sell, and being told to figure out which one drives screws. You don’t start with the tool. You start with the job.

Over the past fifteen years I’ve deployed every major threat modeling methodology at least once, and the single most reliable predictor of success isn’t which framework you pick – it’s whether you picked the one that matches your actual situation. A startup trying to use OCTAVE will drown in process. An enterprise trying to scale with ad-hoc STRIDE sessions will end up with a hundred inconsistent threat models that nobody can aggregate. The framework has to fit the organization, the team, and the problem you’re solving right now.

What follows is organized by your situation, not by framework name. Find the scenario that sounds like yours, and that’s your starting point.

“We’re Shipping a Product and Need to Secure the Design”

Recommended approach: STRIDE

If you’re an engineering team building software and you need to identify security threats at the component level, STRIDE is where you start. Microsoft developed it in the late 1990s, and it has survived for a reason: it maps directly to the way developers already think about systems.

STRIDE is a mnemonic for six threat categories: Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege. You take each component in your system – every service, data store, external entity, and data flow – and systematically ask which of those six threat categories apply. It’s exhaustive without being overwhelming, and it produces output that engineers can act on immediately.
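A useful shortcut for the “which categories apply” question is the commonly cited STRIDE-per-element chart, which pairs each diagram element type with the categories that typically matter for it. A sketch of that chart as data – treat the exact assignments as a starting convention you can adjust, not a rule:

```python
# Which STRIDE categories are typically worth probing for each DFD element type.
# This follows the commonly cited per-element chart; teams adapt it to their context.
STRIDE_PER_ELEMENT = {
    "external_entity": {"Spoofing", "Repudiation"},
    "process": {"Spoofing", "Tampering", "Repudiation", "Information Disclosure",
                "Denial of Service", "Elevation of Privilege"},
    "data_flow": {"Tampering", "Information Disclosure", "Denial of Service"},
    # Repudiation appears for stores that hold audit logs.
    "data_store": {"Tampering", "Repudiation", "Information Disclosure",
                   "Denial of Service"},
}

def prompts_for(element_type: str) -> list:
    """Sorted category prompts for one element type."""
    return sorted(STRIDE_PER_ELEMENT[element_type])

print(prompts_for("data_flow"))
```

In a session, this means you don’t waste time asking whether a data flow can be spoofed – you ask that about the endpoints on either side of it.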

Why it fits product teams specifically: STRIDE is developer-friendly in a way most other frameworks aren’t. It integrates naturally into your SDLC. You can run a STRIDE session on a single microservice in a sprint planning meeting and walk out with a concrete list of threats mapped to your backlog. I’ve seen teams embed STRIDE into their architecture review process so that no design document gets approved without a threat table attached. That’s the kind of lightweight, repeatable integration that actually sticks.

The output looks like this: a per-component threat list where each entry identifies the component, the STRIDE category, a description of the specific threat, and the proposed mitigation. For a login service, that might be “Spoofing – attacker submits forged OAuth tokens; mitigate with signature verification and issuer validation.” Concrete, actionable, and traceable back to the design.

“We Need to Assess Risk Across the Enterprise”

Recommended approach: OCTAVE

If you’re a security leader trying to understand risk across multiple systems, business units, or operational domains, OCTAVE (Operationally Critical Threat, Asset, and Vulnerability Evaluation) is built for exactly that problem. Carnegie Mellon’s Software Engineering Institute developed it, and it takes an asset-centric view that ties directly to business objectives rather than technical components.

Where STRIDE zooms into individual systems, OCTAVE zooms out. It starts by identifying your organization’s critical assets – not just servers and databases, but the information and capabilities your business depends on. Then it maps threats against those assets and evaluates risk in terms of business impact. That distinction matters. When your CISO needs to explain risk to the board, “we have 347 high-severity findings across our application portfolio” lands very differently than “our three most critical customer data assets face unmitigated lateral movement risk, and here’s the business impact if any of them are compromised.”

OCTAVE works well for organizations that have outgrown single-system threat modeling but haven’t yet built a formal enterprise risk management program. It bridges the gap between technical threat analysis and business risk management. The trade-off is process weight – OCTAVE requires workshops with stakeholders across the organization, and it’s not something you knock out in a single sprint.

The output: risk profiles tied to critical assets, with each profile documenting the asset, its value to the business, the threats it faces, existing controls, and residual risk expressed in business terms.

“We’re in a Regulated Industry and Need to Demonstrate Due Diligence”

Recommended approach: PASTA

PASTA – Process for Attack Simulation and Threat Analysis – is a seven-stage, risk-centric methodology that produces the kind of documentation trail that makes auditors and regulators genuinely happy. If you’re in financial services, healthcare, government, or any sector where you need to demonstrate not just that you identified threats but that you followed a rigorous, repeatable process to do so, PASTA earns its overhead.

The seven stages walk you from defining business objectives through technical scope, application decomposition, threat analysis, vulnerability analysis, attack modeling, and finally risk and impact analysis. Each stage produces a defined artifact. That traceability – from business objective to specific attack tree to risk rating to remediation plan – is what regulated organizations need. When an examiner asks “how did you determine this risk rating?” you can trace it back through every analytical step.

I won’t pretend PASTA is lightweight. It’s the most process-heavy approach on this list, and for a ten-person startup it would be absurd overkill. But for a regional bank preparing for an OCC examination or a health system navigating HIPAA security rule requirements, that process weight is a feature, not a bug. The artifacts PASTA produces aren’t busywork – they’re the evidence that your threat analysis was systematic, comprehensive, and tied to business context.

The output: attack trees showing specific attack paths against your systems, risk ratings with documented rationale, and remediation plans with full traceability from business objective through threat identification to mitigation. Hand that package to your examiner and watch their eyes light up.

“We Need to Scale Threat Modeling Across Many Teams”

Recommended approach: VAST

VAST – Visual, Agile, and Simple Threat modeling – was designed from the ground up for organizations that need threat modeling to work at scale across dozens or hundreds of development teams, integrated into agile and DevOps workflows without becoming a bottleneck.

The key differentiator is that VAST separates application threat models from infrastructure threat models. Application threat models use process-flow diagrams that developers create as part of their normal design work. Infrastructure threat models use a different format optimized for operations and platform teams. This separation means you’re not forcing a single modeling format on teams with fundamentally different perspectives on the system.

VAST is the approach I recommend when a CISO comes to me and says “we did STRIDE on one team and it worked great, now we need to roll it out to forty teams.” That’s a different problem than securing a single application. At scale, you need consistency across models, aggregation of findings, and integration with CI/CD pipelines. VAST’s design anticipates those requirements in a way that the other frameworks on this list don’t – they were designed for depth on individual systems, not breadth across an organization.

The output: process-flow diagrams with threat overlays for application models, and Data Flow Diagram (DFD)-based models for infrastructure, both designed for automation and integration with ticketing systems, risk dashboards, and pipeline tooling.

“We Care Specifically About Privacy Threats”

Recommended approach: LINDDUN

If your primary concern is privacy rather than security – or more accurately, if you need a methodology that treats privacy as a first-class concern rather than a subset of confidentiality – LINDDUN is the purpose-built answer. Developed by researchers at KU Leuven, it’s structured specifically around privacy threat categories and maps directly to data protection regulations like GDPR.

LINDDUN is a mnemonic covering seven privacy threat types: Linkability, Identifiability, Non-repudiation (as a privacy threat – when users can’t deny actions they should be able to deny), Detectability, Disclosure of information, Unawareness, and Non-compliance. Several of those categories – linkability, detectability, unawareness – simply don’t appear in security-focused frameworks like STRIDE. A STRIDE analysis might tell you that a data store is vulnerable to information disclosure, but it won’t tell you that your system’s query patterns make it possible to link anonymized records back to individuals. LINDDUN will.
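If it helps to keep the seven categories straight in a session, one option is a single prompting question per category. The wording below is my paraphrase of the LINDDUN categories, not the framework’s official text:

```python
# One session prompt per LINDDUN category -- paraphrased, not official wording.
LINDDUN_PROMPTS = {
    "Linkability": "Can two data items or actions be tied to the same person?",
    "Identifiability": "Can a subject be singled out from the data set?",
    "Non-repudiation": "Are users unable to deny actions they should be able to deny?",
    "Detectability": "Can an observer tell that data about someone exists at all?",
    "Disclosure of information": "Is personal data exposed beyond those who need it?",
    "Unawareness": "Do data subjects know what is collected and how it is used?",
    "Non-compliance": "Does processing deviate from policy, consent, or regulation?",
}

# The first letters of the keys spell the mnemonic.
print("".join(key[0] for key in LINDDUN_PROMPTS))
```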

This is the framework I point organizations toward when they’re working through GDPR Data Protection Impact Assessments, building systems that handle sensitive personal data, or operating in jurisdictions where privacy regulations carry real enforcement teeth. It’s not a replacement for security-focused threat modeling – it’s a complement. Run STRIDE for your security threats and LINDDUN for your privacy threats, and you’ve covered ground that either one alone would miss.

The output: privacy threat trees mapped to your system’s data processing activities, with each threat tied to specific LINDDUN categories and linked to relevant regulatory requirements.

“We Need to Model a Specific Adversary’s Capabilities”

Recommended approach: Attack Trees

Sometimes you don’t need to catalog every possible threat against a system. You need to understand how a specific adversary – a nation-state actor, an insider threat, a financially motivated criminal group – would actually attack you. Attack trees are the tool for that job.

Attack trees are adversary-centric and goal-oriented. You start with the attacker’s objective at the root of the tree – “exfiltrate customer payment data,” “disrupt manufacturing operations,” “gain persistent access to the executive mail system” – and then decompose that goal into the specific paths an attacker could take to achieve it. Each path breaks down into sub-goals, and each sub-goal has associated costs, required capabilities, and likelihood estimates.

This approach integrates naturally with red team operations and threat intelligence programs. If your threat intel team has identified a specific threat actor targeting your industry, you can build an attack tree that models that actor’s known TTPs (tactics, techniques, and procedures) against your specific architecture. The result isn’t a theoretical catalog of what could go wrong – it’s a model of what a real adversary would actually do, given what you know about their capabilities and your system’s exposure.

I use attack trees most often alongside one of the broader frameworks. STRIDE tells you what categories of threat each component faces. An attack tree tells you exactly how a motivated adversary chains those threats together into a viable attack path. They answer different questions, and the most mature programs I’ve worked with use both.

The output: tree structures showing attack paths from the adversary’s goal down through the specific steps required, annotated with cost estimates, capability requirements, and probabilities of success at each node.
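One thing the tree structure buys you is cheap computation: with a cost estimate on each leaf, the minimum attacker cost rolls up mechanically – an OR node costs as much as its cheapest child, an AND node costs the sum of its children. A minimal sketch, with an illustrative goal and made-up cost numbers:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    name: str
    gate: str = "OR"            # "OR": any child path suffices; "AND": all required
    cost: float = 0.0           # leaf cost, e.g. estimated attacker effort in $k
    children: List["Node"] = field(default_factory=list)

def cheapest(node: Node) -> float:
    """Minimum attacker cost to achieve this node's goal."""
    if not node.children:
        return node.cost
    child_costs = [cheapest(c) for c in node.children]
    return min(child_costs) if node.gate == "OR" else sum(child_costs)

root = Node("Exfiltrate payment data", "OR", children=[
    Node("Phish an engineer with prod access", cost=5),
    Node("Chain an external foothold", "AND", children=[
        Node("Exploit exposed admin panel", cost=20),
        Node("Pivot to the payment segment", cost=15),
    ]),
])
print(cheapest(root))  # phishing (5) beats the 35-cost exploit chain
```

The same recursion works for probability of success or required capability level – swap the leaf annotation and the aggregation rule, and the tree tells you where a defensive investment raises the adversary’s price the most.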

Decision Matrix: Where to Start

The six use cases above give you depth. The table below gives you speed. If you need a quick starting recommendation based on your organization’s size and primary driver, start here – then read the relevant use case above for the full picture.

Org Size | Compliance Driver | Product Security Driver | Enterprise Risk Driver
Startup (< 50 people) | STRIDE + document rigorously | STRIDE | STRIDE (single-product focus)
Mid-Market (50–500) | PASTA | STRIDE, graduating to VAST | OCTAVE
Enterprise (500+) | PASTA | VAST | OCTAVE + Attack Trees

A note on maturity: this matrix assumes you’re standing up a threat modeling practice for the first time or formalizing an ad-hoc one. If you have no program at all, start with STRIDE regardless of your size – it has the lowest barrier to entry and produces useful output from day one. If you have ad-hoc threat modeling happening in pockets across the organization, the matrix above tells you what to formalize around. If you already have a formalized program and you’re reading this to evaluate whether you chose the right framework – look at where your current approach falls short and use the use cases above to identify what fills the gap. Most mature programs end up using two or three of these methodologies for different purposes, and that’s exactly right. No single framework covers every situation, and anyone who tells you otherwise is selling something.

The 60-Minute Whiteboard: Your Minimum Viable Threat Model

Before you buy any tool or adopt any framework formally, do this once. One whiteboard, one hour, the right people in the room. That’s all it takes to produce a real, usable threat model – not a perfect one, but a real one. I’ve facilitated hundreds of these sessions, and the single biggest barrier to threat modeling isn’t complexity or tooling or expertise. It’s starting. So let’s fix that right now.

Block sixty minutes on the calendar. Pull in two or three people who know the system – at least one developer who built it and one person who operates it. If you have a security person available, great, bring them too. If you don’t, that’s fine. The people who know how data moves through the system are more valuable for this exercise than someone who knows threat taxonomy but not your architecture. Grab a whiteboard, a marker, and something to capture notes. Here’s exactly how you spend that hour.

Step 1: Scope It (5 min)

Pick one system. Not your entire infrastructure. Not your whole product. One system – a single application, a single service, a single data pipeline. Draw a boundary around it. What’s inside the scope of this session, and what’s outside?

This matters more than it sounds. I’ve watched threat modeling sessions collapse twenty minutes in because the scope was “our platform” – which turned out to mean forty-seven microservices, three data stores, two third-party integrations, and an authentication layer nobody fully understood. The conversation spiraled, nobody could agree on which threats mattered because nobody could agree on what they were even modeling, and the session ended with a whiteboard full of half-finished diagrams and no actionable output.

Go too narrow and you have the opposite problem – you model a single API endpoint in isolation and miss the systemic risks that emerge from how it interacts with the rest of the system. The sweet spot is a system that has a clear purpose, defined interfaces with other systems, and data flows you can trace from entry to exit. An e-commerce checkout flow. A customer onboarding pipeline. An internal reporting dashboard and its backing data stores. Something you can hold in your head and draw on a whiteboard in under fifteen minutes.

State the scope out loud: “We are threat modeling the customer checkout flow – from the user’s browser through the API gateway, the order service, the payment processor integration, and the order database. Everything else is out of scope for today.” Write that on the whiteboard. It keeps the session honest.

Step 2: Diagram It (15 min)

Now draw the data flow. This doesn’t need to be a formal Data Flow Diagram with precise notation. Boxes and arrows on a whiteboard work fine. You need to capture four things:

  • Where data enters the system – user inputs, API calls, file uploads, webhook payloads, anything that crosses the boundary from outside to inside.
  • Where data is stored – databases, caches, file systems, message queues, session stores.
  • Where data exits the system – responses to users, API calls to third parties, log outputs, messages to downstream services.
  • Who or what touches the data along the way – services, functions, human operators, automated processes.

Draw each of these as a box. Draw arrows showing how data flows between them. Label the arrows with what’s moving – “user credentials,” “payment token,” “order details,” “API response.” It doesn’t need to be pretty. It needs to be accurate.

Here’s what a simple diagram might look like for a web application with a checkout flow:

+-----------+       HTTPS        +-------------+      REST/TLS      +---------------+
|           | -----------------> |             | -----------------> |               |
|  Browser  |    credentials,    |   API       |   order details,   |   Order       |
|  (User)   |    order data      |   Gateway   |   payment token    |   Service     |
|           | <----------------- |             | <----------------- |               |
+-----------+    order confirm   +-------------+   order status     +-------+-------+
                                                                           |
                                                              SQL/TLS      |      HTTPS/TLS
                                                   +----------+            |    +------------+
                                                   |          | <----------+    |            |
                                                   |  Order   |                 |  Payment   |
                                                   |  DB      | <-- - - - - - - |  Processor |
                                                   |          |   (no direct    |  (Stripe)  |
                                                   +----------+    connection)  +------------+
                                                                        ^
                                                                        |
                                                              Order Service calls
                                                              Payment Processor
                                                              to charge card,
                                                              stores result in DB

That took maybe ten minutes to draw. It’s not architecturally complete – it doesn’t show logging, monitoring, the CDN in front of the browser, or the seventeen other things that exist in a real system. It doesn’t need to. It shows the data flows that matter for the checkout function, and that’s what we’re threat modeling today.

Walk the group through the diagram. Make sure everyone agrees it reflects reality. You’ll be surprised how often the developer says “actually, the order service also writes to a Redis cache that the API gateway reads directly” – and suddenly you have a data flow nobody drew that might matter a lot.

Step 3: Ask “What Can Go Wrong?” (20 min)

This is the core of the exercise, and it’s where STRIDE earns its keep as a thinking tool. Walk each data flow – each arrow on your diagram – and each component, and ask the group: what could an attacker do here? Use the six STRIDE categories as prompts to jog your thinking:

  • Spoofing – Can someone pretend to be something or someone they’re not? Can an attacker forge a request that looks like it came from a legitimate user? Can a malicious service impersonate the payment processor?
  • Tampering – Can someone modify data they shouldn’t be able to modify? Can an attacker change the order amount in transit between the browser and the API? Can they alter records in the database?
  • Repudiation – Can someone perform an action and then deny they did it? If a user places an order and later claims they didn’t, do you have the logs and audit trail to prove otherwise?
  • Information Disclosure – Can someone access data they shouldn’t see? Could an error message leak database schema details? Could the API return more fields than the caller needs?
  • Denial of Service – Can someone make the system unavailable for legitimate users? What happens if an attacker floods the checkout endpoint? What if the payment processor goes down?
  • Elevation of Privilege – Can someone gain access beyond what they’re authorized for? Could a regular user manipulate their session to gain admin privileges? Could the order service be tricked into executing commands on the database server?

You don’t need to be exhaustive at every flow. Spend your time where your instincts and experience tell you the risk is highest. The connection between the browser and the API gateway? That’s internet-facing and handles credentials – spend extra time there. The connection between the order service and the database on an internal network? Still worth examining, but probably lower risk if your network segmentation is solid.

Write down every threat the group identifies. Don’t filter during brainstorming – capture everything. You’ll prioritize in the next step. A typical session on a system this size produces eight to fifteen threats. If you’re getting fewer than five, push harder on the STRIDE categories. If you’re getting more than twenty, your scope might be too broad.

Step 4: Prioritize (10 min)

Take your list of threats and rate each one high, medium, or low based on two factors: how likely is it to happen, and how bad is it if it does? Don’t overthink this. You’re not building a quantitative risk model. You’re making rough judgment calls with the people in the room who know the system best.

A threat that’s easy to exploit and leads to customer data exposure? High. A threat that requires physical access to a data center and results in a minor service degradation? Low. When the group disagrees – and they will – have a thirty-second conversation and make a call. Perfection is the enemy of progress here. You can always revisit priorities later as you learn more.
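If you want a tie-breaker for those thirty-second arguments, a two-factor lookup is about as much rigor as this step deserves. A sketch, assuming a low/medium/high scale for both likelihood and impact — the thresholds are one reasonable convention, not a standard:

```python
# Sketch: rough two-factor priority call (likelihood x impact).
# The numeric thresholds are a convention, not a quantitative model.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def priority(likelihood: str, impact: str) -> str:
    """Combine likelihood and impact into a High/Medium/Low call."""
    score = LEVELS[likelihood] * LEVELS[impact]
    if score >= 6:   # medium x high, or worse
        return "High"
    if score >= 3:   # low x high, medium x medium
        return "Medium"
    return "Low"

print(priority("high", "high"))   # High
print(priority("low", "high"))    # Medium
print(priority("low", "medium"))  # Low
```

The value isn't the arithmetic — it's that disagreements collapse into two smaller questions ("how likely?" and "how bad?") that the room can usually answer quickly.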

Step 5: Decide (10 min)

For each high-priority threat, the group makes a decision: mitigate, accept, or transfer. Every threat gets a disposition. No orphans.

  • Mitigate means you’re going to do something to reduce the risk – add input validation, implement mutual TLS, add rate limiting, fix the access control logic. Be specific about what the mitigation is, even if it’s just a sentence.
  • Accept means you’ve looked at the risk, you understand it, and you’re choosing to live with it for now. Maybe the likelihood is low enough, or the cost of mitigation outweighs the impact. Write down why you’re accepting it, so future-you doesn’t have to relitigate the decision.
  • Transfer means you’re shifting the risk to someone else – buying insurance, using a managed service that takes on the liability, or contractually requiring a third party to handle it.

For medium- and low-priority threats, note them but don’t burn your remaining time on them. They go on the backlog for the next session.
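The "no orphans" rule is easy to enforce if each captured threat is a record with a required disposition field. A sketch of what a minimal threat register entry might look like — the field names are illustrative, not a standard schema:

```python
# Sketch: a minimal threat register entry. Every threat must carry a
# disposition (mitigate / accept / transfer) -- no orphans.
from dataclasses import dataclass

DISPOSITIONS = {"mitigate", "accept", "transfer"}

@dataclass
class Threat:
    description: str
    stride_category: str
    priority: str        # "High" / "Medium" / "Low"
    disposition: str     # required -- no orphans
    rationale: str = ""  # why you chose this disposition (future-you will ask)

    def __post_init__(self):
        if self.disposition not in DISPOSITIONS:
            raise ValueError(f"unknown disposition: {self.disposition}")

t = Threat(
    description="Attacker modifies order amount in transit",
    stride_category="Tampering",
    priority="High",
    disposition="mitigate",
    rationale="Server-side price validation; reject client-supplied totals",
)
```

Whether this lives in a spreadsheet, a wiki table, or actual code matters far less than the constraint it encodes: a threat without a disposition is an unfinished decision.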

Here’s what the output of your session might look like for the top threats identified against our checkout flow:

Threat                                  | STRIDE Category        | Priority | Disposition
--------------------------------------- | ---------------------- | -------- | ----------------------------------
Attacker modifies order amount          | Tampering              | High     | Mitigate: server-side price
in transit between browser and API      |                        |          | validation, reject client-supplied
                                        |                        |          | totals
                                        |                        |          |
Stolen credentials used to place        | Spoofing               | High     | Mitigate: enforce MFA on checkout,
fraudulent orders                       |                        |          | implement device fingerprinting
                                        |                        |          |
API error responses leak internal       | Information Disclosure | Medium   | Mitigate: standardize error
service names and DB schema details     |                        |          | responses, strip debug info in prod
                                        |                        |          |
Order Service compromise allows         | Elevation of Privilege | High     | Mitigate: least-privilege DB
direct SQL execution on Order DB        |                        |          | credentials, parameterized queries,
                                        |                        |          | network-level DB access controls

That’s your threat model. Print it, save it, put it in your wiki – wherever your team will actually look at it again.
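To make the first mitigation in that table concrete: the fix for order-amount tampering is to never trust a total the client sends. A hedged sketch of server-side price validation — the catalog, order shape, and function names are invented for illustration:

```python
# Sketch: server-side price validation. The server recomputes the total
# from its own price catalog and ignores any client-supplied amount.
# The catalog and order structure are illustrative, not a real API.
CATALOG = {"sku-001": 1999, "sku-002": 4999}  # prices in cents

def compute_total(items: list[dict]) -> int:
    """Recompute the order total from server-side prices only."""
    total = 0
    for item in items:
        if item["sku"] not in CATALOG:
            raise ValueError(f"unknown SKU: {item['sku']}")
        if item["qty"] < 1:
            raise ValueError("quantity must be positive")
        total += CATALOG[item["sku"]] * item["qty"]
    return total

def place_order(order: dict) -> int:
    # Deliberately ignore any "total" field in the request body --
    # a client-supplied figure is untrusted input, not a source of truth.
    return compute_total(order["items"])

charged = place_order({
    "items": [{"sku": "sku-001", "qty": 2}],
    "total": 1,  # tampered client value -- ignored by the server
})
print(charged)  # 3998
```

The same principle generalizes: any value the server can derive itself (prices, roles, discounts) should be derived server-side, with the client's copy treated as a display hint at best.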

You now have a threat model. It’s not perfect. It doesn’t need to be. What matters is that you’ve started thinking structurally about what can go wrong.

What’s Next

This article gave you the foundations – what threat modeling is, which methodology fits your situation, and how to run your first session in 60 minutes. But a whiteboard session that happens once is a workshop, not a program.

In Part 2: Tools and Automation, we tackle the tooling question: how to make threat modeling repeatable without making it miserable, how to integrate it into CI/CD pipelines your teams already use, and how to maintain living threat models that evolve with your architecture instead of gathering dust in a wiki.

In Part 3: Presenting to Leadership, we solve the communication problem – translating threat model outputs into language that works for engineers who want technical specifics, executives who want risk posture, and board members who want assurance that the organization is managing threats responsibly.

Moose is a Chief Information Security Officer specializing in cloud security, infrastructure automation, and regulatory compliance. With 15+ years in cybersecurity and 25+ years in hacking and signal intelligence, he leads cloud migration initiatives and DevSecOps for fintech platforms.