Ground rules
Introductions
Who are you? Why are you here?
This isn’t some shiny new tech.
What brought you to this session?
Who am I?
Why listen to me? Don’t.
Trust but verify
Do. Your. Homework.
Who works with me?
Teams who are struggling with AuthN/AuthZ problems.
Companies fed up with the fear-mongering and scare-tactic sales in the security industry who just want practical, business-justified advice
People who are scared of some new reality about their security profile.
Why am I here?
Who, What, Why, How
Risk analysis – identifying, quantifying, and ranking risk.
Not just for tech
General -> specific
“map” of assets, threats, and protections
Set of plans for protection of an asset from a specific threat
Informs risk assessment to prioritize action
the cyclical process of creating/evaluating these.
You! You already understand this intrinsically
You do already understand this intrinsically – changing locks, setting a PIN on your phone
The decisions that went into choosing to have a PIN might have been subconscious, but that’s threat modeling.
Helps Risk analysis
You can’t protect against everything.
Find the point of diminishing return on your security investment
Defensive – informs risk analysis
Offensive – informs efficient effort investment proactively
Cheaper than failing
Justify the expense - time, money, priority
Threat model failures affect you too…
Move the needle.
Make better, conscious choices personally and professionally
Direct and indirect
Why are you asked to rotate your passwords regularly?
Why do you change your locks when you move in somewhere new?
Why do you change them when you break up with someone?
Why is 2FA a good idea?
Why do racecar drivers use 5-point harnesses, but we only use a 3-point seatbelt?
Why do some gas stations have thick glass and others don’t?
Why don’t bank tellers sit behind bars/glass anymore?
Three Components
Assets
Threats
Protections
What do you want to protect? (The data, communications, and other things that could cause problems for you if misused.)
Who do you want to protect it from? (The people, organizations, and criminal actors who might seek access to that stuff.)
How bad are the consequences if you fail?
How likely is it that you will need to protect it? (Your personal level of exposure to those threats.)
How much trouble are you willing to go through in order to try to prevent those? (The money, time and convenience you're willing to dispense with to protect those things.)
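The five questions above can be captured as a simple worksheet. Here is a minimal sketch; the field names and the 1–5 scales are my own assumptions for illustration, not part of any standard:

```python
from dataclasses import dataclass

@dataclass
class ThreatModelEntry:
    """One row of a personal threat-model worksheet (illustrative only)."""
    asset: str          # What do you want to protect?
    adversary: str      # Who do you want to protect it from?
    consequence: int    # How bad if you fail? (1 = minor .. 5 = catastrophic)
    likelihood: int     # How likely will you need to protect it? (1 = rare .. 5 = expected)
    effort_budget: str  # How much trouble are you willing to go through?

entry = ThreatModelEntry(
    asset="phone contact list",
    adversary="opportunistic thief",
    consequence=3,
    likelihood=2,
    effort_budget="set a PIN; enable remote wipe",
)
# A crude priority signal: consequence times likelihood.
print(entry.consequence * entry.likelihood)
```

Even this crude consequence-times-likelihood product forces the rubric discipline mentioned later: it guards against amplifying perceived risk.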
Warning –
Depending on your situation, your threat model artifacts might be something you want to destroy or protect as an asset.
Assets – something you value and want to protect
Contact info
Financial info
Locations
Affiliations
Adversaries – someone acting contrary to your security goals
Enemies are people
Can be person or organization
Can be an active adversary or passive adversary (nosey neighbor)
Can be targeted or dragnet adversary
Often hypothetical
Have capabilities and motivations
Capability and motivation matters
If cost of obtaining your asset exceeds value of asset, less likely to pursue, no ROI
Worst case scenarios
Consider capability/audience
Your cell phone company has more capability than a hacker on an open WiFi AP
What will your adversary do with your data
Assess Risk – likelihood that a particular threat to a particular asset will occur.
Threat != risk. A threat is something that can happen. Risk is the probability it WILL happen.
Guard against amplification of perceived risk – use a rubric
Requires the risk assessment
Increased security is always an inherent tradeoff
Accessibility/convenience
Money
Time
This World of Ours - James Mickens
https://www.usenix.org/system/files/1401_08-12_mickens.pdf
Requires practice
Add to your normal software dev practices
Add to your regular best practices
Rinse and Repeat.
You’re never done, just done enough for now.
System changes
Topology change
Vulnerability bulletins
Time
Changes in your “profile” (CEO bragging, other hacks, scandal, press release, politics)
We take an architecture-centric, data flow approach
Identify Assets
Find threats with STRIDE
Use DREAD-D/DREAD+D to get risk score
Find/recommend defenses
Application Overview – Enumerate Components/data flows/trust boundaries
Decompose the application – not every component has the same threat model; auth components rank higher than most.
Microsoft developed this mnemonic to help you think through the categories of threats
Not exhaustive/not scientific, just a memory tool
Spoofing Identity – Users cannot become other users.
Tampering with Data – Never trust the user
Repudiation – Ability to deny you did something. Can’t prove who did what
Information Disclosure – people find out things they shouldn’t know
Denial of Service – can you withstand? How do you mitigate?
Elevation of Privilege – allows users to do more than they should be able to do
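One lightweight way to apply the mnemonic: walk every component against every STRIDE category and treat each pairing as a prompt. A sketch, where the component names are hypothetical and the question wording is my paraphrase of the notes above:

```python
# STRIDE categories and the question each one prompts (paraphrased).
STRIDE = {
    "Spoofing Identity": "Can a user become another user?",
    "Tampering with Data": "Can data be modified in transit or at rest?",
    "Repudiation": "Can someone deny an action because we can't prove who did what?",
    "Information Disclosure": "Can people find out things they shouldn't know?",
    "Denial of Service": "Can the component be made unavailable? Can you withstand it?",
    "Elevation of Privilege": "Can users do more than they should be able to?",
}

components = ["login service", "payments API", "audit log"]  # hypothetical system

# Enumerate candidate threats: every (component, category) pair gets asked.
candidates = [(c, cat, q) for c in components for cat, q in STRIDE.items()]
print(len(candidates))  # 3 components x 6 categories = 18 prompts to work through
```

The point is not the data structure but the discipline: the mnemonic isn't exhaustive or scientific, it just guarantees you asked each question of each component.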
Quantify with DREAD-D/DREAD+D
DREAD minus Discoverability and DREAD plus Detectability
Damage Potential
Reproducibility – how easy to reproduce exploit?
Exploitability – what is needed to exploit?
Affected Users – how many users will be affected?
Discoverability – how easy is it to find? does it matter?
Detectability – how easy is it to detect if you’ve been exploited?
Rank according to score
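As a sketch, here is one way to turn the five DREAD+D factors into a sortable score. The 1–10 rating scale, the equal weighting, and the example threats and ratings are all my assumptions for illustration; the notes above don't prescribe a formula:

```python
def dread_score(damage, reproducibility, exploitability, affected_users, detectability):
    """Average five DREAD+D factors, each rated 1 (low risk) to 10 (high risk).

    Per DREAD-D, Discoverability is dropped; per DREAD+D, Detectability
    is added. Hard-to-detect exploitation is worse, so callers should
    rate "hard to detect" with a HIGH number.
    """
    factors = [damage, reproducibility, exploitability, affected_users, detectability]
    return sum(factors) / len(factors)

# Hypothetical threats with hypothetical ratings.
threats = {
    "SQL injection in login form": dread_score(9, 8, 7, 10, 6),
    "Verbose error pages": dread_score(3, 10, 9, 4, 2),
}

# Rank according to score, highest risk first, to prioritize defenses.
for name, score in sorted(threats.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:4.1f}  {name}")
```

However you weight the factors, the output is a ranked list, which is exactly what the next step (researching and proposing defenses) needs as input.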
Research defense mechanisms, make proposals.
Zoom in/ zoom out
repeat
Jan Schaumann
Help your grandmothers/parents/kids/friends.
Curse of knowledge. You can’t imagine not knowing.
Encourage vulnerable people to seek help