Architecture descriptions greatly contribute to the understanding, evaluation and evolution of software, but despite this, up-to-date software architecture views are rarely available. Typically, only initial descriptions of the static view are created,
and during development and evolution the software drifts away from its description. Methods and corresponding tool support for reconstructing and evaluating the current architecture views have been developed and proposed, but they usually
address the reconstruction of static and dynamic views separately. The dynamic views in particular are usually bloated with low-level information (e.g., object interactions), making the understanding and evaluation of the behavior very intricate. To overcome this,
we presented ARAMIS, a general architecture for building tool-based approaches that support the architecture-centric evolution and evaluation of software systems with a strong focus on their behavior. This work presents ARAMIS-CICE, an instantiation
of ARAMIS. Its goal is to automatically test whether the run-time interactions between architecture units match the architecture description. Furthermore, ARAMIS-CICE characterizes the intercepted behavior using two newly defined architecture metrics.
We present the fundamental concepts of ARAMIS-CICE: its meta-model, metrics and implementation. We then discuss the results of a two-fold evaluation, which shows very promising results.
1. Run-time Monitoring-based Evaluation and Communication Integrity Validation of Software Architectures
Ana Dragomir
02.12.2014
Motivation
State of the art vs. state of the practice
Goals
Approach
Evaluation
Summary and outlook
2. The ARAMIS Project: Motivation
“You don’t need architecture to build a dog kennel, but you’d better have some for a skyscraper”
Software architecture (SA) description is useful to understand and meaningfully evolve a software system but…
“Is it just a shared hallucination?”
…the architecture drifts from its description!
The description:
is no longer as useful
can lead to misunderstandings
can trigger a domino effect
4. Stakeholders
Architects
Software Landscape Architects
Are the systems of the landscape interacting as prescribed?
Which systems are the hubs and sinks?
Software Architects
Is the system built according to its description?
Which architecture units are the hubs and sinks?
Which architecture units are “too complex”?
Software Developers
If I need to change a requirement, how can I quickly find out how it is currently implemented?
What architecture/software units are interacting & how?
7. State of the Practice: Our Experience at Company 1
CMMI Level 3
More than 1000 IT Employees
Large, heterogeneous systems (Java EE, Cobol, …)
System architecture
Decisions and architecture are documented, though separately
Code-quality monitors (e.g., SonarQube) are used
Enterprise architecture
Inter-system information flow diagrams
Manual process
No architecture reconstruction tools!
“Read the documentation, then start making phone calls”
Reconstruction is cumbersome and must be supplemented with support for evolution
8. State of the Practice: Our Experience at Company 2
Medium Enterprise
More than 500 IT Employees
Mainly Java-based systems
System architecture
“The Developer Handbook”
Low-level descriptions
Abstract view of the system missing
Joint attempt with SWC to reconstruct the architecture
Purchased Sonargraph Architect
Encountered terminology differences
9. State of the Art: Flaws
The reconstruction occurs on very different abstraction levels
Structural view: layers, modules, subsystems, etc.
Behavioral view: objects, methods, etc.
Heterogeneity of terminology is not addressed
Reconstruction tools have rigid meta-models
The architects expect results that conform to their terminology
Heterogeneity of systems is not properly addressed
The interplay of heterogeneous systems is important
Reconstruction is not a goal in itself
Are the units behaving as expected? What are the deviations?
Evolution support is needed!
10. Goals
Improve the traceability between usage scenarios, implementation and architecture documentation
Develop/use a minimally intrusive technical solution for run-time monitoring and real-time visualization (a sketch of such interception follows this list)
Provide a means to evaluate if the predetermined architecture rules have been respected
Easily extendable solution
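As a rough illustration of how minimally intrusive run-time monitoring can be realized on a Java system, the sketch below uses an AspectJ around-advice to intercept method executions and record a (from, to, operation, parameters) tuple for each call. The package pattern `com.monitored..` and the `record` sink are hypothetical placeholders; this is a minimal sketch, not ARAMIS's actual implementation.

```java
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;

// Sketch of a minimally intrusive monitoring aspect: every method execution
// in the monitored packages is intercepted and recorded, then runs unmodified.
@Aspect
public class MonitoringAspect {

    // Per-thread stack of active callees, used to derive the caller of each call.
    private static final ThreadLocal<Deque<String>> CALL_STACK =
            ThreadLocal.withInitial(ArrayDeque::new);

    // "com.monitored.." is a hypothetical root package of the system under analysis.
    @Around("execution(* com.monitored..*.*(..))")
    public Object recordCall(ProceedingJoinPoint jp) throws Throwable {
        Deque<String> stack = CALL_STACK.get();
        String to = jp.getSignature().getDeclaringTypeName();           // callee class
        String from = stack.isEmpty() ? "<entry point>" : stack.peek(); // enclosing monitored callee
        String operation = jp.getSignature().getName();                 // operation name
        String parameters = Arrays.toString(jp.getArgs());              // actual parameters

        record(from, to, operation, parameters);
        stack.push(to);
        try {
            return jp.proceed(); // execute the original call, unmodified
        } finally {
            stack.pop();
        }
    }

    // Hypothetical sink; a real deployment would forward the tuple to the analysis component.
    private static void record(String from, String to, String operation, String parameters) {
        System.out.printf("%s -> %s : %s(%s)%n", from, to, operation, parameters);
    }
}
```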
11. Multi-level Behavior Monitoring
[Figure: the monitored systems consist of unit AA (containing A and B), unit XX (containing X and Y) and unit MM (containing M); the calls m1..m10 between them are intercepted, each recorded as a (From, To, Operation Name, Parameters) tuple. The analysis then filters the intercepted calls per level of interest: communication between AA, XX and MM (m2, m5, m6, m8, m9), communication within AA (m1, m7, m10), and high-level violations (m8, m9, between AA and MM).]
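The figure's level-of-interest filtering can be approximated in a few lines: low-level class-to-class calls are lifted to architecture-unit level through a mapping table, and intra-unit calls are dropped when only inter-unit communication matters. The mapping below mirrors the units of the figure; the API itself is a hypothetical sketch (assuming a recent JDK with records).

```java
import java.util.List;
import java.util.Map;

// Sketch: lift monitored class-level calls to unit-level calls and keep only
// those relevant for the chosen level of interest (here: inter-unit communication).
public class LevelOfInterestFilter {

    // Hypothetical mapping from code elements to architecture units,
    // mirroring the figure (A and B belong to AA, X and Y to XX, M to MM).
    private static final Map<String, String> UNIT_OF = Map.of(
            "A", "AA", "B", "AA",
            "X", "XX", "Y", "XX",
            "M", "MM");

    record Call(String fromClass, String toClass, String operation) {}
    record UnitCall(String fromUnit, String toUnit, String operation) {}

    // Keep only calls that cross unit boundaries at the chosen abstraction level.
    public static List<UnitCall> interUnitCalls(List<Call> monitored) {
        return monitored.stream()
                .map(c -> new UnitCall(
                        UNIT_OF.getOrDefault(c.fromClass(), "<unmapped>"),
                        UNIT_OF.getOrDefault(c.toClass(), "<unmapped>"),
                        c.operation()))
                .filter(uc -> !uc.fromUnit().equals(uc.toUnit())) // drop intra-unit calls
                .toList();
    }
}
```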
14. Metrics @ ARAMIS
Goal: find weak points in the analyzed behavior
Step 1: Characterize behavior
Step 2: Compare the resulting characterization with the architects’ expectations
Metrics Categories
Behavioral coupling and cohesion metrics
Behavior hotspots
Violations-based metrics
15. ARAMIS: Coupling and Cohesion Metrics
Behavioral Coupling (BCo)
Behavioral Cohesion (BCh)
Scenario-based Unit Behavior Metric (SUB)
Cohesion vs. Coupling of an Architecture Unit
SUB = BCh / (BCh + BCo); see the sketch after this list
SUB Characterization (SUBC)
High coupling/low cohesion; SUB ∈ [0, 0.5)
Mid coupling/mid cohesion; SUB ∈ [0.5, 0.66)
Low coupling/high cohesion; SUB ∈ [0.66, 1]
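The SUB formula and its characterization translate directly into code. The sketch below assumes that BCh counts a unit's intra-unit interactions and BCo its inter-unit interactions; this is one plausible reading, not necessarily the exact ARAMIS definitions.

```java
// Sketch of the Scenario-based Unit Behavior metric (SUB) and its
// characterization (SUBC), assuming BCh = intra-unit and BCo = inter-unit calls.
public class SubMetric {

    // SUB = BCh / (BCh + BCo), defined only for units that interacted at all.
    public static double sub(long bch, long bco) {
        if (bch + bco == 0) throw new IllegalArgumentException("unit shows no behavior");
        return (double) bch / (bch + bco);
    }

    public static String subc(double sub) {
        if (sub < 0.5)  return "high coupling / low cohesion";  // SUB in [0, 0.5)
        if (sub < 0.66) return "mid coupling / mid cohesion";   // SUB in [0.5, 0.66)
        return "low coupling / high cohesion";                  // SUB in [0.66, 1]
    }
}
```

For example, a unit with 40 intra-unit and 10 inter-unit calls yields SUB = 40/50 = 0.8, i.e. low coupling/high cohesion.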
16. Evaluation
Phase 1: validation of communication rules
The MosAIC software system
Incentive: an initial architecture description was available and there was interest in evaluating its conformance
111753 LOC
116 classes, 10 packages
Defined 4 top-level units & 16 inner units
20 allowed rules
> 100,000 monitored calls
3 distinct architecture violations (frequencies: 1, 2 and 22, respectively); the rule check is sketched below
The result was used to improve the architecture
Increased confidence in MosAIC’s quality
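To illustrate the rule check used in this phase, the sketch below validates each monitored unit-level call against a whitelist of allowed (from, to) rules and counts the frequency of each distinct violation; the rule representation and names are hypothetical, not MosAIC's actual rule set.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch: communication integrity validation. Every monitored inter-unit call
// not covered by an explicitly allowed rule is reported as a violation.
public class CommunicationIntegrityChecker {

    record Rule(String fromUnit, String toUnit) {}
    record UnitCall(String fromUnit, String toUnit, String operation) {}

    // Returns each distinct violating (from, to) pair with its observed frequency,
    // matching the kind of result reported above (e.g., frequencies 1, 2 and 22).
    public static Map<Rule, Long> violations(List<UnitCall> calls, Set<Rule> allowed) {
        Map<Rule, Long> result = new HashMap<>();
        for (UnitCall call : calls) {
            Rule observed = new Rule(call.fromUnit(), call.toUnit());
            if (!allowed.contains(observed)) {
                result.merge(observed, 1L, Long::sum); // count frequency per violation
            }
        }
        return result;
    }
}
```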
17. Evaluation
Phase 2: relevance of the SUB and SUBC metrics
The JHotDraw framework
Incentive: a presumably well-designed architecture
126068 LOC
529 classes, 38 packages
Defined 12 top-level units but only 7 were used at run-time
No rules
> 150,000 monitored calls
19. Conclusions
ARAMIS aims to:
Support the understanding of software systems
Validate their communication integrity
Assess their behavior
The current results are promising
Evaluated two projects
MosAIC
JHotDraw
20. Outlook
Develop relevant multi-level visualizations
Experiment with monitoring the interplay of heterogeneous systems
Model evolution scenarios
Simulation & Impact analysis
Explore further relevant metrics to depict the quality of the architecture
Flexible quality model, tailorable according to the specific needs of the architects
21. Summary
ARAMIS
Flexible approach towards
Monitoring a system’s architecture
Validating its communication integrity according to specified rules
Assessing its quality
Supporting a reasonable evolution