TSSG paper for International Symposium on Integrated Network Management (IM)
1. An experimental testbed to predict the performance of XACML Policy Decision Points
Bernard Butler, Brendan Jennings and Dmitri Botvich
Telecommunication Software and Systems Group,
Waterford Institute of Technology, Ireland
IFIP/IEEE IM 2011 at TCD, Dublin
May 2011
2. Outline
Introduction
  Access Control basics
  The Problem
  Response of other researchers
XACML performance testbed
  Overview
  Initial experiments
  Measurement-based simulation
Summary and Future Work
3. Definitions - 1
What is Access Control?
Generally, Subjects apply Actions to Resources.
Access control is a system which enables an Authority to limit these interactions.
Constraints are binary-valued decisions: Permit/Deny.
Decisions are made by searching business Rules.
4. Definitions - 2
What is XACML?
XACML is an industry-standard (OASIS) XML language for specifying Access Control rules.
The XACML standard also defines an architecture for Access Control.
Rules roll up into policies and thence into policy sets.
5. Architecture description
P*P
PAP Policy Administration Point - Editing policies
PDP Policy Decision Point - Deciding requests
PEP Policy Enforcement Point - Handling requests
PIP Policy Information Point - Looking up other sources
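The division of labour above can be sketched in a few lines of Python. This is a minimal illustration only, assuming a flat rule table searched in order; the names, rule format and data are hypothetical, not the XACML standard's normative interfaces.

```python
# Minimal sketch of the P*P roles (all names and data hypothetical):
# the PEP intercepts a request, the PDP decides it by searching rules.

RULES = [
    # (subject, action, resource) -> effect; searched in order
    (("alice", "read",  "report"), "Permit"),
    (("alice", "write", "report"), "Deny"),
]

def pdp_decide(subject, action, resource):
    """PDP: search the business Rules for a matching decision."""
    for (s, a, r), effect in RULES:
        if (s, a, r) == (subject, action, resource):
            return effect
    return "NotApplicable"   # no rule matched the request

def pep_enforce(subject, action, resource):
    """PEP: forward the request to the PDP and enforce its decision."""
    return pdp_decide(subject, action, resource) == "Permit"

print(pep_enforce("alice", "read", "report"))   # True
print(pep_enforce("bob", "read", "report"))     # False (NotApplicable)
```

A PAP would edit `RULES`; a PIP would supply extra attributes at decision time.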
6. Functionality versus Safety: Initial
[Figure: initial decision boundary between Permit (Functionality) and Deny (Safety)]
7. Functionality versus Safety: After refinement
[Figure: refined decision boundary between Permit (Functionality) and Deny (Safety)]
8. Fine-Grained Access Control
Refining the decision boundary causes slower evaluation:
1. More complex conditions and rules
2. More need to evaluate policies
. . . with the following results:
1. Longer evaluation times per request
2. More requests
3. PDP(s) become the bottleneck
4. Scalability problems!
9. Is caching a viable solution?
Issue 1: Dynamic policy updates
  Subjects S and Resources R are added and removed
  Entitlements S × R need to be managed
Issue 2: XACML suitability
  Non-local, with complex rule and policy combining algorithms
  Missing support for change impact analysis
  Verbose
Generally, other approaches are used, notably brute force.
10. Better XACML PDP Performance
Better PDP
  Better distributed software engineering - Heras-AF
Better XACML policies
  recombination (Miseldine (2004))
  clustering and reordering (Marouf et al (2009))
  indexing (Gryb)
Better policy representation
  recoding (Xie and Lu (2008)) - XEngine
  reformulation using description logic - Kolovski (2006)
11. Critique
Each researcher presents evidence in their favour.
They generally compare their approach with a reference PDP, but:
  No common published test suite of policies and requests
  Experimental conditions differ
So improvements cannot be compared!
Our approach
  Create a testbed to measure service times under controlled experimental conditions
12. Schematic of the measurement testbed
[Figure: testbed schematic. An XACML Request Generator (MODE 1/2/3) draws on a Policy Set, Observed XACML Requests and a Domain Model to produce Generated XACML Requests; a Scheduler feeds requests through a Universal Request Adapter and PEP to the PDP. Measurement Data (Performance Measurements) feeds a Clustering Algorithm (Performance Abstraction), which feeds a Queueing Model and Simulator (Performance Predictions). Components are labelled XTS, XTC, XTA and XTP.]
13. Example service time measurements for given PDP and policy set
[Figure: histogram of service times for the 'single' request set on host 'bear' using 'SunXacmlPDP'; density scaled so that total histogram area = 1; service times span roughly 0.002-0.004 seconds]
14. Preliminary analysis
Preprocessing, Comparison and Clustering
  Let t = t(S, P, R, q) ∈ ℝ^(|S|×|P|×|R|×q) be the set of measured service times.
  Assume t is subject to nonnegative error, so t̂ = t̂(S, P, R) = min_q t is a reduced-error estimate of the service time for that PDP, policy set, request set combination.
  Comparison: perform ANOVA on the t̂ with its associated context.
  Derive the service time clusters.
  Assume each service time cluster represents a different request cluster.
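The preprocessing step above reduces to a minimum over the q repetitions of each measurement. A minimal sketch, assuming the timings are held in a dictionary keyed by (PDP, policy set, request); the data layout and values are illustrative, not the testbed's actual storage format.

```python
# Sketch of the reduced-error estimate t_hat = min_q t: measurement noise
# is assumed nonnegative (GC pauses and scheduling jitter only ever add
# time), so the minimum over repetitions estimates the true service time.
# Keys and timings below are made up for illustration.

def reduced_error_estimate(measurements):
    """measurements: dict mapping (pdp, policy_set, request) to a list of
    q repeated timings in seconds. Returns t_hat, the per-key minimum."""
    return {key: min(reps) for key, reps in measurements.items()}

timings = {
    ("SunXacmlPDP", "single", "req1"): [0.0023, 0.0021, 0.0029],
    ("SunXacmlPDP", "single", "req2"): [0.0041, 0.0038, 0.0040],
}
t_hat = reduced_error_estimate(timings)
print(t_hat[("SunXacmlPDP", "single", "req1")])   # 0.0021
```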
15. Comparison: ANOVA study

                 SunXacmlPDP   EnterpriseXacmlPDP
  mean time (s)  1.5e-03       1.2e-03
  rep            800           800

Table: Comparison of service times for PDPs 'SunXacmlPDP' and 'EnterpriseXacmlPDP'.

                 Deny      NotApplicable   Permit
  mean time (s)  1.3e-03   2.1e-03         1.1e-03
  rep            1244      136             220

Table: Comparison of service times for Decisions 'Deny', 'NotApplicable' and 'Permit'.
16. Operation of Clustering algorithm 1: Histogram
[Figure: histogram of service times for the 'single' request set on host 'bear' using 'SunXacmlPDP'; density scaled so that total histogram area = 1]
17. Operation of Clustering algorithm 2: Histogram and Fit
[Figure: the same histogram with a fitted density curve overlaid]
18. Operation of Clustering algorithm 3: Fit and Peaks
[Figure: cluster centres (peaks of the fitted density) of service times for the 'single' request set on host 'bear' using 'SunXacmlPDP']
19. Operation of Clustering algorithm 4: Cluster Peaks and Endpoints
[Figure: cluster endpoints of service times for the 'single' request set on host 'bear' using 'SunXacmlPDP']
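The four clustering steps above (histogram, fit, peaks, endpoints) can be sketched with a plain histogram and local maxima standing in for the paper's actual density fit. This is an illustrative simplification only; the bin count and sample data are made up.

```python
# Sketch of service time clustering: bin the times, then take local
# maxima of the histogram as cluster centres. A real fit (e.g. kernel
# density estimation) would smooth the counts first; omitted here.

def histogram(samples, nbins):
    """Return (bin centres, bin counts) over the range of the samples."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / nbins
    counts = [0] * nbins
    for x in samples:
        counts[min(int((x - lo) / width), nbins - 1)] += 1
    centres = [lo + (i + 0.5) * width for i in range(nbins)]
    return centres, counts

def cluster_peaks(centres, counts):
    """Cluster centres = local maxima of the histogram (zero-padded so
    peaks at either end of the range are also detected)."""
    padded = [0] + counts + [0]
    return [centres[i] for i in range(len(counts))
            if padded[i + 1] > padded[i] and padded[i + 1] >= padded[i + 2]]

# Two well-separated service time clusters near 0.002 s and 0.004 s
samples = ([0.002 + 0.0001 * (i % 5) for i in range(50)] +
           [0.004 + 0.0001 * (i % 5) for i in range(30)])
centres, counts = histogram(samples, 10)
peaks = cluster_peaks(centres, counts)
print(len(peaks))   # 2
```

Cluster endpoints would then fall at the local minima between successive peaks.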
20. Compare service time clusters
Scenario: different PDPs; all other controllable conditions are identical.
Observation: qualitatively different service time distributions.
[Figure: two panels, "Service time intervals define request clusters for 'single' request set on host 'bear'", using 'SunXacmlPDP' (left, roughly 0.001-0.005 seconds) and 'EnterpriseXacmlPDP' (right, roughly 0.0014-0.0024 seconds); density scaled so that total histogram area = 1]
21. Further analysis
Queueing and Simulation
  Parametrise each request cluster (height, location, width)
  Compute explicit queue length and waiting time
  Simulate requests having that service time profile
Prediction: examine overload performance for different request mixes.
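The simulation step above can be sketched as an M/G/1 FIFO queue driven by the Lindley recursion, with service times drawn from a clustered profile. The two-cluster mix, rates and sample size below are made up for illustration; this is not the testbed's actual simulator.

```python
# Sketch of measurement-based simulation: Poisson arrivals, service times
# drawn from a (hypothetical) two-cluster hyperexponential profile, and
# waiting times tracked via the Lindley recursion
#   W_{n+1} = max(0, W_n + S_n - A_{n+1}).
import random

def simulate_mg1(lam, clusters, n=100_000, seed=1):
    """clusters: list of (weight, mean_service_time) pairs.
    Returns the mean waiting time (excluding service) over n requests."""
    rng = random.Random(seed)
    weights = [w for w, _ in clusters]
    means = [m for _, m in clusters]
    wait, total = 0.0, 0.0
    for _ in range(n):
        service = rng.expovariate(1.0 / rng.choices(means, weights)[0])
        interarrival = rng.expovariate(lam)
        total += wait
        wait = max(0.0, wait + service - interarrival)   # Lindley recursion
    return total / n

# Fast (0.002 s) and slow (0.004 s) request clusters in a 70/30 mix
profile = [(0.7, 0.002), (0.3, 0.004)]
mean_service = sum(w * m for w, m in profile)   # 0.0026 s
rho = 0.5
mean_wait = simulate_mg1(rho / mean_service, profile)
print(mean_wait)
```

For this profile the Pollaczek-Khinchin waiting time λE[X²]/(2(1−ρ)) is about 0.0029 s, so the simulated mean should land nearby.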
22. Prediction: Compute explicit queue length
Queueing Model
Assume M/G/1 with FIFO scheduling and infinite buffer size. For hyperexponentially-distributed service times, the service time density function is

  b(x) = \sum_{i=1}^{p} \alpha_i \mu_i e^{-\mu_i x},
  where \sum_{i=1}^{p} \alpha_i = 1 = \int_0^\infty b(x)\,dx    (1)

Mean Queue Length
From the Pollaczek-Khinchin formula, we derive

  \bar{q} = \rho + \rho^2 \frac{(1 + C_b^2)}{2(1 - \rho)},    (2)

where \bar{q} is the mean queue length, \rho = \lambda\bar{x}, \bar{x} is the mean service time and C_b is the coefficient of variation of the service times. This formula is explicit.
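Formulas (1) and (2) combine into a few lines: the hyperexponential moments give C_b, which feeds the Pollaczek-Khinchin expression. The two-phase parameters in the example are illustrative only.

```python
# Sketch of the explicit mean queue length (2) for M/G/1 with
# hyperexponential service times (1).

def pk_mean_queue_length(lam, alphas, mus):
    """q_bar = rho + rho^2 (1 + Cb^2) / (2 (1 - rho)) for service density
    b(x) = sum_i alpha_i mu_i exp(-mu_i x) and arrival rate lam."""
    assert abs(sum(alphas) - 1.0) < 1e-12               # weights sum to 1
    ex  = sum(a / m for a, m in zip(alphas, mus))        # E[X] = mean x_bar
    ex2 = sum(2 * a / m**2 for a, m in zip(alphas, mus)) # E[X^2]
    cb2 = ex2 / ex**2 - 1                                # Cb^2 = Var/Mean^2
    rho = lam * ex
    return rho + rho**2 * (1 + cb2) / (2 * (1 - rho))

# Sanity check: a single phase reduces to M/M/1, where Cb = 1 and
# q_bar = rho / (1 - rho); here rho = 0.5, so q_bar = 1.
print(pk_mean_queue_length(0.5, [1.0], [1.0]))   # 1.0
```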
23. Prediction using discrete event simulation
Suppose a steady state has been reached and ρ = 0.5.
Suddenly requests increase in frequency so that ρ would be 0.8 if the request service time distribution remained the same.
Now consider favourable and unfavourable overload distributions instead of the original distribution. Let

  \alpha_j^{(overload:lo)} = \frac{n - j + 1}{\sum_{i=1}^{n} i}
  \alpha_j^{(overload:hi)} = \frac{j}{\sum_{i=1}^{n} i}

See next slide for explicit and simulated server loadings, representing step changes in access requests such as might happen when a deadline occurs.
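The two overload weightings above can be checked numerically: with clusters ordered from fastest to slowest, 'lo' shifts probability toward fast clusters and 'hi' toward slow ones, so 'hi' yields the larger mean service time. The cluster means below are made up for illustration.

```python
# Sketch of the overload weight distributions for n service time
# clusters ordered fastest to slowest (illustrative cluster means).

def overload_weights(n, kind):
    total = n * (n + 1) // 2                      # sum_{i=1}^{n} i
    if kind == "lo":                              # favourable: favours fast
        return [(n - j + 1) / total for j in range(1, n + 1)]
    return [j / total for j in range(1, n + 1)]   # "hi": favours slow

cluster_means = [0.001, 0.002, 0.003, 0.004]      # seconds, fast -> slow
lo = overload_weights(4, "lo")                    # [0.4, 0.3, 0.2, 0.1]
hi = overload_weights(4, "hi")                    # [0.1, 0.2, 0.3, 0.4]
mean_lo = sum(a * m for a, m in zip(lo, cluster_means))
mean_hi = sum(a * m for a, m in zip(hi, cluster_means))
print(mean_lo, mean_hi)   # the unfavourable mix has the larger mean
```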
25. Summary
(XACML) PDP performance is a real problem.
Our approach...
Compared to other authors, it is
  More than isolated performance improvement proposals
  Repeatable and reproducible
Compared to our earlier work, it offers
  Greatly improved clustering algorithm
  Derived explicit model for special (validation) cases
  Prediction using discrete event simulation
26. Future work
Use a flexible domain model for policies and requests
  richer policies, explicitly implementing security models
  generalised request profiles
Generalise to a distributed PDP implementation
  Multiprocessing / multithreading
  Additional queueing disciplines (such as processor sharing)