Threat Modeling for Privacy and Security: A Practical Framework
Threat modeling is the systematic process of identifying what you need to protect, who you need to protect it from, and what measures are proportionate and effective given your specific circumstances. It is the foundational step that should precede every security and privacy decision, yet it is frequently skipped by people who jump straight to implementing tools without understanding whether those tools address their actual risks. A journalist protecting sources in an authoritarian country faces fundamentally different threats than a corporate employee defending against industrial espionage or an ordinary citizen seeking basic privacy from data brokers. Without a clear threat model, security efforts are unfocused at best and counterproductive at worst: users may invest significant effort into protections that do not address their real vulnerabilities while ignoring threats that could have catastrophic consequences. This guide provides a structured approach to threat modeling that can be adapted to any situation, from the general internet user to individuals facing sophisticated state-level adversaries.
The Five Fundamental Questions
Every threat model begins by answering five questions. These questions, adapted from the Electronic Frontier Foundation's Surveillance Self-Defense guide, provide the framework for all subsequent analysis:
1. What do I want to protect? These are your assets. Assets include digital data (files, communications, browsing history, location data, metadata, social graphs), physical objects (devices, storage media, documents), and intangible assets (reputation, relationships, freedom, physical safety). Make a comprehensive list of everything that would cause harm if compromised, exposed, or destroyed.
2. Who do I want to protect it from? These are your adversaries. Adversaries vary enormously in capability, motivation, and resources. They might include advertising companies tracking your behavior, an abusive partner monitoring your communications, a corporate competitor seeking trade secrets, a criminal organization, law enforcement agencies, or a nation-state intelligence service. Different adversaries employ different attack methods and have different levels of persistence.
3. How likely is it that I will need to protect it? This is risk assessment. Not every theoretical threat is a practical concern. A software developer in a stable democracy faces a low probability of state-sponsored surveillance but a high probability of credential theft or phishing. A dissident journalist in an authoritarian regime faces the opposite risk profile. Probability assessment must be realistic and based on your actual circumstances, not hypothetical worst-case scenarios.
4. How bad are the consequences if I fail? This is impact assessment. The consequences of a security failure range from minor inconvenience (a spam email) to life-threatening danger (physical violence against an identified source). The severity of potential consequences determines how much effort and inconvenience is justified in implementing protections. A threat with low probability but catastrophic consequences may justify significant protective measures.
5. How much trouble am I willing to go through to prevent this? This is the usability constraint. Every security measure has a cost in time, money, convenience, or functionality. Encrypting all communications is more secure but slower. Using Tor for everything is more private but significantly impacts browsing speed and website compatibility. The most secure solution is worthless if it is so inconvenient that you abandon it or implement it inconsistently.
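The five questions can be captured as a simple record, so that every asset-adversary pair gets an explicit, comparable answer rather than a vague impression. A minimal sketch (the class, field names, and example entries are illustrative, not part of any standard):

```python
from dataclasses import dataclass

@dataclass
class ThreatModelEntry:
    """One asset/adversary pair, answering the five questions."""
    asset: str            # 1. What do I want to protect?
    adversary: str        # 2. Who do I want to protect it from?
    likelihood: int       # 3. How likely is the threat? (1 = rare, 10 = near-certain)
    impact: int           # 4. How bad are the consequences? (1 = nuisance, 10 = catastrophic)
    effort_budget: int    # 5. How much trouble will I tolerate? (1 = none, 10 = any)

    def priority(self) -> int:
        """Simple likelihood x impact score for ranking threats."""
        return self.likelihood * self.impact

entries = [
    ThreatModelEntry("browsing history", "ad networks", likelihood=9, impact=3, effort_budget=4),
    ThreatModelEntry("source identities", "state agency", likelihood=4, impact=10, effort_budget=9),
]

# Rank: a rare but catastrophic threat can outrank a common nuisance.
for e in sorted(entries, key=ThreatModelEntry.priority, reverse=True):
    print(f"{e.asset} vs {e.adversary}: priority {e.priority()}")
```

Note how the second entry (low likelihood, catastrophic impact) outranks the first, matching the point in question 4 that severe consequences can justify significant measures even for unlikely threats.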
Defining and Categorizing Adversaries
Understanding your adversaries is the most critical component of threat modeling because it determines the level of sophistication your defenses must achieve. Adversaries can be categorized into tiers based on their capabilities, resources, and persistence.
Tier 1: Passive Commercial Surveillance
This tier includes advertising networks, data brokers, social media companies, and website operators who collect user data for commercial purposes. Their primary tools are cookies, tracking pixels, browser fingerprinting, cross-site tracking, and data aggregation from multiple sources. They are typically not targeting specific individuals but rather building profiles on all users. Defense against this tier focuses on blocking trackers, using privacy-preserving browsers and search engines, limiting data shared with services, and using disposable email addresses. Most internet users should defend against this tier as a baseline.
Tier 2: Targeted Non-State Actors
This tier includes stalkers, abusive partners, corporate competitors, hackers-for-hire, and organized criminal groups. These adversaries may target specific individuals and can employ social engineering, phishing, malware, physical surveillance, and exploitation of known software vulnerabilities. They may also abuse legal processes (subpoenas, court orders) to obtain data from service providers. Defense requires strong authentication (hardware security keys), encrypted communications, careful operational security, physical security awareness, and hardened computing environments.
Tier 3: Law Enforcement
Law enforcement agencies have the legal authority to compel service providers to hand over data, can install wiretaps, and in some jurisdictions can use malware for surveillance. They can obtain metadata from telecommunications providers, email providers, and social media companies through court orders or administrative subpoenas. In some cases they can compel individuals to decrypt data or provide passwords. Defense against this tier requires end-to-end encrypted communications, encrypted DNS, full disk encryption, minimal data retention by service providers, and services that are technically unable to comply with data requests (zero-knowledge architecture).
Tier 4: Nation-State Intelligence Agencies
The most capable adversaries have access to signals intelligence (SIGINT) infrastructure that can monitor internet backbone traffic, exploit zero-day vulnerabilities, compromise hardware supply chains, and deploy advanced persistent threats. They can correlate traffic patterns across global observation points, decrypt or circumvent commercial encryption through various means, and bring essentially unlimited computational resources to bear on specific targets. Defense against this tier requires Tor or equivalent anonymity networks, air-gapped systems for the most sensitive work, hardware verification, extremely disciplined operational security, and acceptance that perfect defense may not be achievable. The goal shifts from preventing surveillance to increasing the cost and risk of surveillance to the point where the adversary may choose not to invest resources against you.
Assets and Attack Vectors
After identifying adversaries, you must map your assets and the specific attack vectors through which each adversary might compromise them. An attack vector is the method or pathway an adversary uses to reach an asset. For each asset, consider all the ways it could be compromised, exposed, or destroyed.
Digital Communication Assets
Your communications include the content of messages, the metadata about those messages (who you communicate with, when, how often, from where), and the social graph implied by your communication patterns. Attack vectors include intercepting unencrypted traffic, compelling service providers to provide data, compromising endpoint devices to read messages before or after encryption, and analyzing metadata patterns even when content is encrypted. The OWASP Threat Modeling methodology provides a structured approach to identifying these vectors, documented at owasp.org.
Location and Movement Assets
Your physical location is continuously tracked through your mobile phone's connections to cell towers, Wi-Fi access points, and GPS. Additionally, license plate readers, surveillance cameras with facial recognition, payment card transactions, and public transit cards create a detailed record of your movements. Attack vectors include accessing cell tower records (available to law enforcement with a court order), compromising location-sharing apps, installing GPS tracking devices, and correlating payment records with physical locations.
Identity and Authentication Assets
Passwords, encryption keys, hardware tokens, and biometric data are the keys to your digital life. Attack vectors include phishing, credential stuffing (using passwords leaked from other services), keyloggers, SIM swapping to intercept SMS two-factor codes, shoulder surfing, legally compelled disclosure, and rubber-hose cryptanalysis (extracting keys or passwords through physical coercion). Protecting these assets requires strong unique passwords with a password manager, hardware security keys (FIDO2/WebAuthn) for two-factor authentication, and in some threat models, duress mechanisms such as hidden encrypted volumes.
Risk Assessment Frameworks
Once you have identified your assets, adversaries, and attack vectors, you need a systematic way to prioritize which threats to address first. Risk assessment frameworks provide this structure by scoring each threat based on its likelihood and impact.
The DREAD Model
DREAD is a risk rating system originally developed at Microsoft that scores threats on five dimensions, each rated from 1 to 10:
DREAD Risk Assessment Framework:
D - Damage Potential: How severe would a successful attack be?
1 = Minor inconvenience
5 = Significant data loss or financial harm
10 = Complete compromise, physical danger, or life-threatening
R - Reproducibility: How easy is it to reproduce the attack?
1 = Very difficult, requires rare conditions
5 = Moderately reproducible
10 = Trivially reproducible, can be automated
E - Exploitability: How much skill/resources are needed?
1 = Requires nation-state resources
5 = Requires moderate technical skill
10 = No technical skill needed, tools freely available
A - Affected Users: How many people are affected?
1 = Only the specific targeted individual
5 = Some users in specific conditions
10 = All users by default
D - Discoverability: How easy is the vulnerability to find?
1 = Requires extensive insider knowledge
5 = Discoverable through focused research
10 = Publicly known, widely documented
Overall Risk Score = (D + R + E + A + D) / 5
Score 1-3: Low priority
Score 4-6: Medium priority, address when resources allow
Score 7-10: High priority, address immediately
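The scoring scheme above is easy to mechanize, which helps when comparing many threats consistently. A minimal sketch implementing the average and the priority bands; the example ratings for a phishing threat are illustrative:

```python
def dread_score(damage, reproducibility, exploitability, affected, discoverability):
    """Average the five DREAD dimensions (each rated 1-10) into one risk score."""
    ratings = (damage, reproducibility, exploitability, affected, discoverability)
    for v in ratings:
        if not 1 <= v <= 10:
            raise ValueError("each DREAD dimension must be rated 1-10")
    return sum(ratings) / 5

def dread_priority(score):
    """Map a score onto the low/medium/high bands defined above."""
    if score <= 3:
        return "low"
    if score <= 6:
        return "medium"
    return "high"

# Example: phishing against a general user -- damaging if it succeeds,
# cheap and repeatable to attempt, widely documented. Ratings are illustrative.
phishing = dread_score(damage=6, reproducibility=9, exploitability=9,
                       affected=8, discoverability=10)
print(phishing, dread_priority(phishing))   # 8.4 high
```

The value of the exercise is less the number itself than the forced, dimension-by-dimension comparison between threats.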
The STRIDE Model
STRIDE categorizes threats into six types, which can be applied to each component of your digital life:
STRIDE Threat Categories:
S - Spoofing: An adversary pretends to be someone or something else
Example: Phishing email impersonating your bank
Defense: Strong authentication, certificate verification
T - Tampering: Unauthorized modification of data
Example: Man-in-the-middle modifying downloaded software
Defense: Cryptographic signatures, HTTPS, hash verification
R - Repudiation: Denying having performed an action
Example: An adversary claims they did not send a threatening message
Defense: Digital signatures, audit logs, non-repudiation protocols
I - Information Disclosure: Unauthorized access to data
Example: DNS queries revealing browsing history to ISP
Defense: Encryption (DoH/DoT), VPN, Tor
D - Denial of Service: Making a resource unavailable
Example: Blocking access to communication channels during a protest
Defense: Redundant communication methods, mesh networks, offline tools
E - Elevation of Privilege: Gaining unauthorized higher access
Example: Malware exploiting a browser vulnerability to gain root
Defense: Sandboxing, mandatory access controls, system hardening
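In practice, STRIDE works best as an exhaustive checklist: for every component of your digital life, ask whether each of the six categories applies. A minimal sketch of that enumeration (the component names are illustrative examples, not a complete inventory):

```python
# Walk the STRIDE checklist against each component of a personal setup.
STRIDE = [
    "Spoofing",
    "Tampering",
    "Repudiation",
    "Information Disclosure",
    "Denial of Service",
    "Elevation of Privilege",
]

components = ["email account", "home router", "phone"]

def stride_checklist(components):
    """Yield every (component, category) pair that needs a yes/no answer."""
    for component in components:
        for category in STRIDE:
            yield component, category

for component, category in stride_checklist(components):
    print(f"Could an adversary achieve {category} against my {component}?")
```

Three components times six categories yields eighteen questions; the systematic sweep is what surfaces the non-obvious threats.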
Practical Threat Model Examples
Abstract frameworks become concrete when applied to real-world scenarios. The following examples demonstrate how threat modeling translates into specific security decisions for different user profiles.
Example 1: Investigative Journalist
An investigative journalist working on corruption stories involving powerful political figures faces a complex threat landscape. Their primary assets are source identities, unpublished story materials, communication records, and their own safety. Their adversaries include the subjects of their investigations (who may have access to state resources), government intelligence agencies, and potentially organized crime. The journalist's threat model leads to the following specific protections:
- All source communications use Signal with disappearing messages enabled, or SecureDrop for initial anonymous contact
- Work is conducted on a dedicated device running Tails or Qubes OS, never connected to personal accounts
- All internet traffic routes through Tor to prevent network-level identification
- Story materials are encrypted with VeraCrypt with a hidden volume providing plausible deniability
- The journalist maintains physical security awareness, varying routines and checking for surveillance
- Meeting sources in person involves leaving all electronic devices at home or in a Faraday bag
- Published stories are timed and structured to avoid revealing source-identifying information
The Privacy Guides tools section provides current recommendations for the specific software tools that support this type of threat model.
Example 2: Political Activist in a Restrictive Country
A political activist organizing peaceful protests in a country with limited press freedom and active government surveillance faces threats to their physical liberty. Their primary assets are their identity as an organizer, the identities of fellow activists, communication channels used for coordination, and their physical safety. Their adversaries are state security services with access to telecommunications infrastructure, ISP-level monitoring, and potentially IMSI catchers for mobile phone surveillance. This threat model demands:
- Compartmentalized identities: online activist identity is completely separated from real identity, with no overlapping accounts, devices, or network connections
- Communication through Tor-based channels (OnionShare, Briar for in-person mesh networking) rather than commercial platforms subject to government requests
- Devices used for activism contain no identifying personal information and use full disk encryption with strong passphrases
- Regular assessment of whether communication patterns could be correlated through timing analysis, even if content is encrypted
- Physical operational security: awareness of CCTV locations, avoidance of patterns in physical meetings, counter-surveillance techniques
- Emergency protocols: remote wipe capability, memorized emergency contacts, legal support arrangements
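The timing-analysis concern in the activist's model can be made concrete with a toy example: an observer who sees only when two parties are active, never what they say, can still link them by counting how often one party's activity is followed closely by the other's. A minimal sketch with made-up timestamps (the function and thresholds are illustrative, not a real attack tool):

```python
# Why timing correlation works even against encrypted traffic:
# matching WHEN two parties are active links them without reading content.
def correlated_events(sent, received, window=2.0):
    """Count events in `sent` followed by an event in `received` within `window` seconds."""
    hits = 0
    for t in sent:
        if any(0 <= r - t <= window for r in received):
            hits += 1
    return hits

alice_sends    = [100.0, 205.0, 330.0, 415.0]
bob_receives   = [100.8, 205.6, 330.9, 415.4]   # consistently ~1s after Alice
carol_receives = [130.0, 250.0, 470.0]          # unrelated activity

print(correlated_events(alice_sends, bob_receives))    # 4 -- every event lines up
print(correlated_events(alice_sends, carol_receives))  # 0 -- no consistent pattern
```

This is why the defenses above emphasize breaking patterns (varied timing, compartmentalized channels) and not just encrypting content.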
Example 3: General Privacy-Conscious User
A typical internet user who values privacy but does not face targeted threats has a simpler but still important threat model. Their primary assets are personal data, financial information, browsing habits, and communications. Their adversaries are primarily commercial trackers, data brokers, opportunistic cybercriminals, and potentially their ISP selling browsing data. This threat model is adequately addressed by:
- A password manager (Bitwarden or KeePassXC) with unique strong passwords for every account
- Two-factor authentication on all important accounts, preferably using TOTP or hardware keys rather than SMS
- A privacy-focused browser (Firefox with arkenfox user.js or Mullvad Browser) with uBlock Origin
- Encrypted DNS through DoH or DoT to prevent ISP snooping on browsing activity
- A trustworthy VPN for general browsing to prevent ISP metadata collection
- Full disk encryption enabled on all devices
- Regular software updates to patch known vulnerabilities
- Awareness of phishing techniques and social engineering tactics
This user does not need Tor for everyday browsing, does not need air-gapped computers, and does not need to use Tails. Implementing those measures would add significant inconvenience without meaningfully reducing their actual risk, which is the key insight of threat modeling: security measures should be proportionate to the actual threat.
Tools for Threat Modeling
Several tools and methodologies exist to formalize the threat modeling process, particularly for those responsible for the security of organizations or software systems.
Microsoft Threat Modeling Tool
Microsoft provides a free Threat Modeling Tool that allows users to create data flow diagrams, automatically identifies potential threats using the STRIDE methodology, and generates a prioritized list of security issues. While primarily designed for software development, the systematic approach can be adapted for personal threat modeling by mapping your personal data flows, identifying trust boundaries (where data moves between your control and a third party's control), and analyzing each crossing point for potential threats.
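The "map data flows, then examine each trust-boundary crossing" step can be sketched for personal data without the tool itself. In this illustrative example (the flows, labels, and trust judgments are invented for demonstration), each flow records where data goes and whether the destination is under your control or trusted:

```python
# Sketch of trust-boundary analysis applied to personal data flows.
flows = [
    # (data, source, destination, destination_trusted)
    ("browsing history", "browser", "ISP resolver", False),
    ("message content",  "phone",   "E2EE messaging server", True),   # content stays encrypted
    ("message metadata", "phone",   "E2EE messaging server", False),  # metadata is visible
    ("photo backups",    "phone",   "cloud provider", False),
    ("password vault",   "laptop",  "local encrypted disk", True),
]

def boundary_crossings(flows):
    """Return the flows where data leaves your control for an untrusted party."""
    return [(data, dst) for data, src, dst, trusted in flows if not trusted]

for data, dst in boundary_crossings(flows):
    print(f"Analyze: {data} crosses a trust boundary to {dst}")
```

Note the messaging rows: the same service can sit on both sides of the boundary, trusted for content it cannot read but untrusted for the metadata it necessarily sees.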
LINDDUN Privacy Threat Modeling
LINDDUN is a privacy-specific threat modeling framework developed at KU Leuven that targets privacy threats rather than general security threats. It categorizes threats into seven types: Linkability, Identifiability, Non-repudiation (in a privacy context, where non-repudiation is undesirable), Detectability, Disclosure of information, Unawareness, and Non-compliance. This framework is particularly useful for analyzing privacy risks in systems that process personal data and can help identify threats that general security frameworks miss.
Attack Trees
Attack trees are a graphical method for modeling threats where the root node represents the adversary's goal (such as "identify an anonymous source") and child nodes represent the different methods to achieve that goal. Each branch can be further decomposed into sub-steps. Attack trees are valuable because they force you to think systematically about all possible attack paths, not just the most obvious ones. They also help identify which defenses provide the most protection by showing which branches of the tree a single countermeasure can cut:
Attack Tree Example: De-anonymize a Tor user
Root: Identify real IP address of Tor user
|
+-- Traffic correlation attack
| +-- Observe entry node traffic (requires ISP/backbone access)
| +-- Observe destination traffic (requires access at destination)
| +-- Correlate timing and volume patterns
|
+-- Browser exploitation
| +-- Deliver exploit via compromised website
| +-- Exploit causes browser to connect outside Tor
| +-- Real IP revealed to attacker's server
|
+-- DNS leak
| +-- Application bypasses Tor SOCKS proxy for DNS
| +-- DNS query reveals destination to ISP
| +-- Correlation with Tor usage timing
|
+-- Operational security failure
| +-- User logs into real identity account over Tor
| +-- User reveals identifying information in content
| +-- Device fingerprint correlates Tor and non-Tor sessions
|
+-- Physical surveillance
+-- Observe user's physical location when Tor traffic detected
+-- Correlate Wi-Fi MAC address with known identity
+-- Surveillance camera captures user at known Tor usage time
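A tree like the one above can also be evaluated quantitatively: assign each leaf an estimated attacker cost, take the minimum at OR nodes (any branch suffices) and the sum at AND nodes (all steps are required), and the root value is the cheapest known attack. A minimal sketch of this evaluation over a simplified version of the tree; all node names and costs are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str = "OR"              # "OR": any child suffices; "AND": all children needed
    cost: float = 0.0             # leaf cost estimate (ignored for inner nodes)
    children: list = field(default_factory=list)

    def cheapest(self) -> float:
        """Cost of the cheapest attack reaching this node's goal."""
        if not self.children:
            return self.cost
        costs = [child.cheapest() for child in self.children]
        return min(costs) if self.kind == "OR" else sum(costs)

root = Node("Identify real IP of Tor user", "OR", children=[
    Node("Traffic correlation", "AND", children=[
        Node("Observe entry traffic", cost=9),
        Node("Observe destination traffic", cost=6),
        Node("Correlate timing/volume", cost=3),
    ]),
    Node("Browser exploitation", "AND", children=[
        Node("Deliver exploit via website", cost=7),
        Node("Exploit escapes Tor proxying", cost=5),
    ]),
    Node("Operational security failure", cost=1),   # user error: the cheapest branch
])

print(root.cheapest())   # 1
```

The result makes the defensive lesson explicit: the operational-security leaf dominates the root cost, so a countermeasure there (discipline, compartmentalization) raises the attacker's price more than hardening any single technical branch.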
Common Threat Modeling Mistakes
Several recurring mistakes undermine the effectiveness of threat modeling. The first and most common is failing to do it at all, jumping straight to implementing security tools without understanding whether they address actual risks. The second is threat inflation, where users assume a threat model appropriate for a dissident in an authoritarian state when their actual risk profile is that of a general privacy-conscious user. This leads to burnout, abandoned security practices, and ultimately worse security than a simpler but sustainable approach would provide.
The third mistake is ignoring the human element. The most common real-world security failures are not technical but operational: using the same device for anonymous and identified activities, revealing information through writing style or habits, trusting people who should not be trusted, or failing to keep software updated. The fourth mistake is treating threat modeling as a one-time exercise rather than an ongoing process. Your threat landscape changes as your activities, location, adversaries, and available tools evolve. A threat model should be reviewed and updated regularly, particularly after significant life changes or when new attack techniques become public.
Effective threat modeling is the prerequisite for every other security decision. Before configuring your DNS privacy settings, before hardening your Linux installation, and before defending against browser fingerprinting, you must understand who your adversaries are, what they are capable of, and what the consequences of failure look like. Only then can you make informed decisions about which protections are worth implementing and which represent unnecessary complexity. The goal of threat modeling is not to achieve perfect security, which is impossible, but to achieve proportionate security that addresses your real risks within the constraints of your resources and tolerance for inconvenience.