Slides of my keynote at QUATIC 2019.
Abstract: Uncertainty is the quality or state that involves a lack of information or insufficient knowledge. Uncertainty can be due to different reasons, including incomplete or inaccurate information, inexact data or measurements, imprecise human judgments, or approximate estimations. The explicit representation of uncertainty is gaining attention among software engineers as a means to provide more faithful system representations, more accurate design methods, and better estimations of the development processes. However, incorporating uncertainty into our system models is not enough. Uncertainty also affects many aspects related to the quality of systems, products, processes, and data, including how uncertainty is taken into account when designing our systems, measured when evaluating their quality, and perceived by customers and users. In fact, uncertainty – and, more specifically, the lack of knowledge about the system, our measuring tools, and our potential users – should be incorporated into our quality models, too. This talk identifies several kinds of uncertainties that have a direct impact on quality, and discusses some challenges regarding how quality needs to be planned, modeled, designed, measured and ensured in the presence of uncertainty.
Modeling and Evaluating Quality in the Presence of Uncertainty
1. Modeling and Evaluating Quality in the
Presence of Uncertainty
QUATIC 2019
Ciudad Real, September 13, 2019
Antonio Vallecillo
Universidad de Málaga, Spain
2. Uncertainty
It applies to: predictions of future events,
estimations,
physical measurements, or
properties of a system, its elements or its environment
due to:
Underspecification of the problem or solution domains
Lack of knowledge of the system, its environment, or its underlying physics
Lack of precision in measurements
Imperfect, incorrect, or missing information
Numerical approximations
Indeterminacy of values and parameters
Different interpretations of the same evidence by separate parties
Uncertainty: Quality or state that involves imperfect and/or unknown information
“There is nothing certain, but the uncertain” (proverb)
3. Uncertainty in Software Engineering
Ziv’s Uncertainty Principle: “Uncertainty is inherent and inevitable in software
development processes and products” (1996)
All projects, no matter the domain, processes, or technology, operate in the presence
of uncertainty – reducible (epistemic) and irreducible (aleatory)
Humphrey’s Requirements Uncertainty Principle: “For a new software system, the
requirements will not be completely known until after the users have used it.”
The true role of design is thus to create a workable solution to an ill-defined problem.
Software engineering variables affected by Uncertainty:
Cost
Schedule
Performance
Capacity for work
Productivity
Quality of results
4. Software development methodologies and uncertainty
“The Uncertainty Principle OR How to Choose the Right Methodology”. https://kosmothink.wordpress.com/2010/12/31/the-uncertainty-principal-or-how-to-choose-the-right-methodology
6. Different kinds of uncertainty in Complex Event Processing (CEP) systems
Selection phase:
Uncertain events in the stream: Missing events (false negatives, FN); or wrongly
inserted (false positives, FP).
Uncertainty in the values of the attributes (including their timestamps!) due to
imprecision of the measuring methods or tools (measurement uncertainty, MU).
Matching phase:
Uncertainty of comparison operators (=, <, >, ->,...) between uncertain values.
Uncertainty of logical composition operators (or, and, not) between uncertain statements (illustrated in the sketch below).
Production phase:
Lack of precision in the values of the attributes of derived events, due to the
propagation of uncertainty in their calculation.
Lack of confidence in the derived event, due to incomplete or erroneous
assumptions about the environment in which the system operates, which may
influence the rule’s confidence.
Nathalie Moreno, Manuel F. Bertoa, Loli Burgueño, Antonio Vallecillo: “Managing Measurement and Occurrence Uncertainty
in Complex Event Processing Systems.” IEEE Access 7: 88026-88048 (2019)
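To make the matching-phase uncertainty concrete, here is a minimal Python sketch (my own illustration, not code from the cited paper) that combines the confidences of uncertain statements with logical operators, assuming the statements are independent:

# Minimal sketch: combining confidences of uncertain Boolean statements,
# assuming independence between the statements (an assumption, not a given).

def u_and(p: float, q: float) -> float:
    """Confidence that both statements hold."""
    return p * q

def u_or(p: float, q: float) -> float:
    """Confidence that at least one of the statements holds."""
    return p + q - p * q

def u_not(p: float) -> float:
    """Confidence that the statement does not hold."""
    return 1.0 - p

# Hypothetical confidences: "temperature is high" (0.9), "smoke detected" (0.7).
p_hot, p_smoke = 0.9, 0.7
print(u_and(p_hot, p_smoke))  # 0.63 -> confidence in the composite condition
print(u_or(p_hot, p_smoke))   # 0.97
print(u_not(p_smoke))         # 0.30 (up to floating-point rounding)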
7. Many different formalisms and theories to quantify uncertainty
Bayesian Belief Networks (BBN)
Monte Carlo simulations
Decision theory/trees
Probabilities
Fuzzy Logic
…
11. A classification of uncertainty (according to its nature)
Aleatory Uncertainty – A kind of uncertainty that refers to the inherent
uncertainty due to the probabilistic variability or randomness of a
phenomenon
Examples: measuring the speed of a car, or the duration of a software
development process
This type of uncertainty is irreducible, in that there will always be variability in
the underlying variables.
Epistemic Uncertainty – A kind of uncertainty that refers to the lack of
knowledge we may have about the system (modeled or real).
Examples: Ambiguous or imprecise requirements about the expected system
functionality, its envisioned operating environment, etc.
This type of uncertainty is reducible, in that additional information or knowledge
may reduce it.
A. Der Kiureghian and O. Ditlevsen: "Aleatory or epistemic? Does it matter?" Structural Safety 31(2):105-112, 2009
12. Reducing the uncertainty
1. Certainty: There is no reducible uncertainty and information is
complete
2. Fully reducible imprecision: There is no full certainty, but
uncertainty can be reduced by collecting additional information
until achieving full certainty (no irreducible uncertainty present)
3. Partially reducible imprecision: There is no full certainty, but
uncertainty can be reduced by collecting additional information.
However, there is still irreducible uncertainty
4. Irreducible imprecision: There is no full certainty, and it cannot
be reduced (only margins can be used)
[Slide figure: brackets relating these cases to epistemic (reducible) and aleatory (irreducible) uncertainty]
13. Uncertainty and Knowledge…
Borrowed from “Introduction to Uncertainty Modeling” presentation at the OMG by T. Yue, S. Ali, B. Selic and A. Watson, 2016
14. Knowledge vs. Belief
Each “knowledge” statement here is based on real evidence!
When dealing with uncertainty, perhaps it is best to avoid the notion of “knowledge” altogether!
Borrowed from “Introduction to Uncertainty Modeling” presentation at the OMG by T. Yue, S. Ali, B. Selic and A. Watson, 2016
15. Belief
Belief: An implicit or explicit opinion or conviction held by a belief agent about a
topic, expressed by one or more belief statements
Belief agent: An entity (human, institution, even a machine) that holds one or
more beliefs
Topic: a possible phenomenon or notion belonging to a given subject area.
Belief Statement: An explicit specification of some belief held by a belief agent.
It represents a belief, and therefore it is a subjective concept
It may not always be possible to determine whether or not a belief statement is valid.
A belief statement may not necessarily correspond to objective reality.
This means that it could be completely false, or only partially true, or completely true.
The validity of a statement may only be meaningfully defined within a given context
or purpose.
Thus, the statement that “the Earth can be represented as a perfect sphere” may be perfectly
valid for some purposes but invalid or only partly valid for others.
OMG. “Precise Semantics for Uncertainty Modeling” Request For Proposals. OMG Document: ad/2017-12-01, 2017.
16. The OMG PSUM initiative (Precise Semantics for Uncertainty Modeling)
17. Related concepts
Risk – The effect of uncertainty on objectives [ISO 31000].
An uncertainty may have an associated risk; when the difficulty or danger
associated with that uncertainty is high, it deserves special attention.
“Risk does not exist by itself. Risk is created when there is uncertainty.”
Evidence – Objective information that may be used to justify a belief
It can be an observation, a record of a real-world event occurrence or,
alternatively, the conclusion of some formalized chain of logical inference that
provides information that can contribute to determining the validity
(truthfulness) of a belief statement
OMG. “Precise Semantics for Uncertainty Modeling” Request For Proposals. OMG Document: ad/2017-12-01, 2017.
18. Types of uncertainty (according to their sources)
Measurement uncertainty: A kind of aleatory uncertainty that refers to a set of possible
states or outcomes of a measurement, where probabilities are assigned to each possible
state or outcome
Occurrence uncertainty: a kind of epistemic uncertainty that refers to the degree of belief
that we have on the actual existence of an entity, i.e., the real entity that a model element
represents
Belief uncertainty: A kind of epistemic uncertainty in which a belief agent is uncertain about
any of the statements made about the system or its environment.
Design uncertainty: A kind of epistemic uncertainty that refers to a set of possible design
decisions or options, where probabilities are assigned to each decision or option
Environment uncertainty: lack of certainty about the surroundings, boundaries and usages
of a system and of its elements
Location uncertainty: lack of certainty about the geographical or physical location of a
system, its elements or its environment
Time uncertainty: lack of certainty about the time properties expressed in a statement
about the system or its environment
Based on M. Zhang, B. Selic, S. Ali, T. Yue, O. Okariz, and R. Norgren, "Understanding Uncertainty in Cyber-Physical Systems: A
Conceptual Model" In Proc. of ECMFA 2016, LNCS vol. 9764, pp. 247-264. Springer, 2016.
20. Measurement uncertainty
Engineers naturally think about uncertainty
associated with measured values
Uncertainty is explicitly defined in their models and
considered in model-based simulations
Precise notations permit representing and
operating with uncertain values and confidences
21. Measurement uncertainty
Measurement uncertainty: A kind of aleatory uncertainty that refers to a set
of possible states or outcomes of a measurement
Normally expressed by a parameter, associated with the result of a measurement x,
that characterizes the dispersion of the values that could reasonably be attributed to
the measurand: the standard deviation u of the possible variation of the values of x
Representation: x ± u, or (x, u)
Examples:
JCGM 100:2008. Evaluation of measurement data – Guide to the expression of uncertainty in measurement (GUM).
http://www.bipm.org/utils/common/documents/jcgm/JCGM_100_2008_E.pdf
• Normal distribution: (x, σ) with mean x and standard deviation σ
• Interval [a, b]: a uniform distribution is assumed, giving (x, u) with x = (a+b)/2 and u = (b−a)/(2√3)
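As a quick check of the interval case, a small Python sketch (illustrative only, not tied to any particular tool) derives the (x, u) pair from an interval [a, b] under the GUM’s uniform-distribution assumption:

import math

def interval_to_xu(a: float, b: float) -> tuple[float, float]:
    # Uniform distribution over [a, b]: mean (a+b)/2, standard uncertainty (b-a)/(2*sqrt(3)).
    x = (a + b) / 2.0
    u = (b - a) / (2.0 * math.sqrt(3))
    return x, u

# A quantity only known to lie between 17.5 and 18.1 (made-up numbers):
x, u = interval_to_xu(17.5, 18.1)
print(f"{x:.3f} ± {u:.3f}")   # 17.800 ± 0.173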
24. Some problems with Measurement Uncertainty
Computations with uncertain values have to respect the propagation of
uncertainty (uncertainty analysis)
In general, this is a complex problem that cannot be managed manually
Comparison of uncertain values is no longer a Boolean property!
How to compare 17.7 ± 0.2 with 17.8 ± 0.2? (see the sketch below)
Other primitive datatypes are also affected by uncertainty
Strings (OCR)
Enumerations
Collections
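One way to make such a comparison meaningful is to treat both measurements as independent normal variables and estimate the probability that one is smaller than the other. The following Python sketch (a simple Monte Carlo illustration, not the approach prescribed by any of the cited works) does exactly that:

import random

def prob_less_than(x1: float, u1: float, x2: float, u2: float,
                   samples: int = 100_000) -> float:
    # Estimate P(X1 < X2) for two independent normal measurements (x, u).
    hits = sum(1 for _ in range(samples)
               if random.gauss(x1, u1) < random.gauss(x2, u2))
    return hits / samples

# "Is 17.7 ± 0.2 smaller than 17.8 ± 0.2?" The answer is a degree of confidence:
print(prob_less_than(17.7, 0.2, 17.8, 0.2))   # roughly 0.64, not plain True/False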
25. Primitive datatypes extended with Uncertainty
Extended primitive datatypes
Real -> UReal UReal(17.8,0.2) ≡ 17.8 ± 0.2
Boolean -> UBoolean UBoolean(true, 0.8)
String -> UString UString(“Implementaci6n”,0.93)
Enum -> UEnum UColor{ (#red,.9), (#orange,0.09), (#purple,0.01) }
An algebra of operations on uncertain datatypes extending OCL/UML types
Operations are closed in this algebra and automatically propagate uncertainty
M. F. Bertoa, N. Moreno, L. Burgueño, A. Vallecillo. “Incorporating Measurement Uncertainty into OCL/UML Primitive
Datatypes.” Software and Systems Modeling (Sosym), 2019. https://doi.org/10.1007/s10270-019-00741-0
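The flavour of such an algebra can be sketched in a few lines of Python. The names mirror the UReal type above, but this is only an illustrative approximation (first-order propagation for independent values), not the authors’ implementation:

import math
from dataclasses import dataclass

@dataclass
class UReal:
    x: float   # estimated value
    u: float   # standard uncertainty

    def __add__(self, other: "UReal") -> "UReal":
        # Uncertainties of independent values add in quadrature.
        return UReal(self.x + other.x, math.hypot(self.u, other.u))

    def __sub__(self, other: "UReal") -> "UReal":
        return UReal(self.x - other.x, math.hypot(self.u, other.u))

    def __mul__(self, other: "UReal") -> "UReal":
        # First-order (linear) propagation for a product of independent values.
        return UReal(self.x * other.x, math.hypot(other.x * self.u, self.x * other.u))

a = UReal(17.8, 0.2)
b = UReal(2.0, 0.1)
print(a + b)   # x=19.8, u≈0.224: uncertainty is propagated automatically
print(a * b)   # x=35.6, u≈1.824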
27. Occurrence uncertainty
Occurrence uncertainty: a kind of epistemic uncertainty that refers to the
degree of belief (confidence) that we have on the actual existence of an
entity, i.e., the real entity that a model element represents
Assigned to individual objects
Permit dealing with false positives (elements in the model that do not exist in
the real system) and false negatives (elements in the real system not
captured in the model)
Normally measured by (Bayesian) probabilities
L. Burgueño, M. F. Bertoa, N. Moreno, A. Vallecillo: “Expressing Confidence in Models and in Model Transformation Elements.” In Proc. of MODELS 2018, pp. 57-66, 2018.
28. Uncertainty related to OCL invariants (system integrity constraints)
Degree of fulfilment of an OCL invariant
Occurrence uncertainty of the elements of the system (confidence)
[Image borrowed from Mihai Lica Pura “Ad Hoc Networks and Their Security: A Survey”, 2012]
Constraints
inv EnoughSensors: Sensor.allInstances()->size() >= 3000
inv Surrounded: Enemy.allInstances()->select(e|e.distanceTo(self))->size() < 50
M. Gogolla, A. Vallecillo: “On Softening OCL invariants” Journal of Object Technology 18(2): 6:1-22 (2019).
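As an illustration of how occurrence uncertainty can soften an invariant such as EnoughSensors (this is my own sketch of the idea, not the encoding used in the cited paper), assume each Sensor object in the model carries a confidence of actually existing; the degree of fulfilment of the invariant can then be estimated by sampling:

import random

def degree_of_fulfilment(confidences: list[float], required: int,
                         samples: int = 2_000) -> float:
    # Estimate P(number of actually existing sensors >= required), treating each
    # model element's confidence as an independent probability of existence.
    hits = 0
    for _ in range(samples):
        existing = sum(1 for c in confidences if random.random() < c)
        if existing >= required:
            hits += 1
    return hits / samples

# 3200 Sensor objects in the model, each believed to exist with confidence 0.95;
# EnoughSensors requires at least 3000 real sensors (numbers chosen for illustration).
print(degree_of_fulfilment([0.95] * 3200, 3000))   # close to 1.0 in this scenario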
32. Belief uncertainty
Belief uncertainty: A kind of epistemic uncertainty in which the modeler, or
any other belief agent, is uncertain about any of the statements made about
the system or its environment.
By nature, it is always subjective
Belief agent: An entity (human, institution, even a machine) that holds one or
more beliefs
Belief statement: Statement qualified by a degree of belief
Degree of belief: Confidence assigned to a statement by a belief agent.
Normally expressed by quantitative or qualitative means (e.g., a grade, or a
probability called “credence”)
Loli Burgueño, Robert Clarisó, Jordi Cabot, Sébastien Gérard, Antonio Vallecillo. “Belief uncertainty in software models.” Proc.
of MiSE 2019@ICSE, pp. 19-26. ACM, 2019. https://dl.acm.org/citation.cfm?id=3340709
33. A simple example of a hotel room
[Slide figure: a hotel room equipped with a temperature sensor, a smoke detector, a CO detector, and an alarm center]
Loli Burgueño, Robert Clarisó, Jordi Cabot, Sébastien Gérard, Antonio Vallecillo. “Belief uncertainty in software models.” Proc.
of MiSE 2019@ICSE, pp. 19-26. ACM, 2019. https://dl.acm.org/citation.cfm?id=3340709
36. Some Belief Statements about the (model of the) system
The CO and smoke detectors that we bought have a reliability of 90% (i.e., 10% of
their readings are not meaningful)
We cannot be completely sure that the precision of the Temperature sensor is ± 0.5°,
as indicated in its datasheet
We are only 95% confident that the presence of high temperature, high CO level and
smoke really means that there is a fire in the room
Bob is from the South, so he only assigns a credibility of 50% to the operations that
indicate if the room is hot or cold. In contrast, Mary thinks they are mostly accurate
Room #3 is close to the kitchen and frequently emits alarms. Everybody thinks that
most of them are false positives
Joe the modeler doubts that the type of attribute “number” of class “Room” is Integer.
He thinks it may contain characters different from digits.
Lucy the modeler is unsure if an “AlarmCenter” has to be attached to only one single
Room. She thinks they can also be attached to several.
[Slide annotations classify these statements as being about: the credibility of the values, beliefs of individual agents, individual instances, the model itself (relations and types), the behavioral rules, and the uncertainty of the values]
>> How to represent these uncertainties in the system specifications?
>> How to incorporate them into the system structural and behavioral models?
38. Operationalization
A list of pairs (BeliefAgent,credence) for every model statement subject to
Belief Uncertainty
Operations to add and remove pairs from the list of pairs
Query operation to know the credence of a statement
isHot_Beliefs : Set(Tuple(beliefAgent : BeliefAgent, degreeOfBelief : Real))
isHot_BeliefsAdd(ba : BeliefAgent, d : Real)
post: self.isHot_Beliefs = self.isHot_Beliefs@pre->reject(t|t.beliefAgent=ba)->
including(Tuple{beliefAgent:ba,degreeOfBelief:d})
isHot_credence(a : BeliefAgent) : Real =
  let baBoD : … = self.isHot_Beliefs->select(t | t.beliefAgent = a) in
  let baBoDnull : … = self.isHot_Beliefs->select(t | t.beliefAgent = null) in
  if baBoD->isEmpty() then            -- no explicit credence given by “a”
    if baBoDnull->notEmpty() then     -- but a default value exists
      baBoDnull->collect(degreeOfBelief)->any(true)
    else 1.0 endif
  else baBoD->collect(degreeOfBelief)->any(true) endif
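The same bookkeeping can be written in a few lines of plain Python (a sketch that parallels the OCL above; the names are hypothetical): each uncertain statement keeps a map from belief agents to their credence, with the entry for None acting as the default:

class UncertainStatement:
    # Belief-uncertainty bookkeeping for one model statement (e.g. "the room is hot").

    def __init__(self) -> None:
        # Maps a belief agent (None = default) to its degree of belief.
        self._beliefs: dict[object, float] = {}

    def add_belief(self, agent: object, degree_of_belief: float) -> None:
        # Replaces any credence previously recorded for this agent.
        self._beliefs[agent] = degree_of_belief

    def remove_belief(self, agent: object) -> None:
        self._beliefs.pop(agent, None)

    def credence(self, agent: object) -> float:
        # The agent's own credence, else the default one, else full confidence.
        if agent in self._beliefs:
            return self._beliefs[agent]
        return self._beliefs.get(None, 1.0)

is_hot = UncertainStatement()
is_hot.add_belief(None, 0.95)   # default credence
is_hot.add_belief("Bob", 0.5)   # Bob trusts the statement less
print(is_hot.credence("Bob"))   # 0.5
print(is_hot.credence("Mary"))  # 0.95 (falls back to the default)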
42. Design uncertainty
Design uncertainty: A kind of epistemic uncertainty that refers to a set of
possible design decisions about the system
It refers to the uncertainty that the developer has about what the system should
be like, rather than about what conditions it may face during its operation
(environment uncertainty).
M. Famelis, M. Chechik: “Managing design-time uncertainty.” Software and Systems Modeling 18(2): 1249-1284 (2019)
43. The Design-Time Uncertainty Management (DeTUM) model
M. Famelis, M. Chechik: “Managing design-time uncertainty.” Software and Systems Modeling 18(2): 1249-1284 (2019)
44. This is similar to the “Cone of Uncertainty” (CoU)
It represents the best-case uncertainty needed to inform decision makers
about the probability of project success at specific phases of the project
45. Further types of uncertainty: Environment
Environment uncertainty: lack of certainty about the surroundings,
boundaries and usages of a system and of its elements
Tackled by approaches such as self-adaptation, probabilistic behavior, or
identifying and explicating operational assumptions.
“Uncertainty-aware” software
47. Further types of uncertainty: Location
Location uncertainty: lack of certainty about the geographical or physical
location of a system, its elements or its environment
The submarine can now be somewhere in the Mediterranean Sea
Cyber-attacks can come from anywhere
48. Further types of uncertainty: Time
Time uncertainty: lack of certainty about the time properties expressed in a
statement about the system or its environment
Mañana (i.e., “not today” )
“We will call you soon”
“A man with a watch knows what time it is. A man with two watches is never sure.” (Segal’s law)
51. Quality evaluation – Prediction Models
1. Identify your target entities and your target stakeholders
Examples of entities: COTS components, Data stored in DBs, Internet Delivery Service.
Examples of stakeholders: Developers, Advanced Users, Novice Users, …
2. Choose a Quality Model for evaluating your entities
E.g., ISO/IEC 25010 “Product” QM
3. Customize the Quality Model
Select the Characteristics and Subcharacteristics relevant to these entities and
stakeholders
4. Select the Measurable Attributes of the entities relevant to the Quality Model
5. Select the appropriate measures for those measurable attributes
6. Run experiments with samples of entities and groups of stakeholders to:
Empirically evaluate the “perceived” (subjective) quality subcharacteristic of these
entities
Empirically evaluate the “objective” quality subcharacteristic of these entities
7. Run regression analyses to identify the set of measures that better explain each
quality subcharacteristic, and define appropriate quality indicators
52. Quality evaluation – Prediction Models
1. Identify your target entities and your target stakeholders
Examples of entities: COTS components, Data stored in DBs, Internet Delivery Service.
Examples of stakeholders: Developers, Advanced Users, Novice Users, Any kinds of users.
Evaluate the quality of software components that are candidates to
be integrated in a software system
Our target stakeholders are system developers and maintainers, who
need to select the best candidate components to form part of their
systems
Manuel F. Bertoa, José M. Troya, Antonio Vallecillo: “Measuring the usability of software components”, Journal of Systems and
Software, 79(3):427-439, March 2006
53. Quality evaluation – Prediction Models
2. Choose a Quality Model for evaluating your entities
ISO/IEC 9126
54. Quality evaluation – Prediction Models
3. Customize the Quality Model
Select the relevant characteristics and subcharacteristics
55. Quality evaluation – Prediction Models
4. Select the relevant Measurable Attributes of the entities w.r.t. the Quality
model
56. Quality evaluation – Prediction Models
5. Select the appropriate measures for those measurable attributes
Measures related to “Quality of Documentation”
57. Quality evaluation – Prediction Models
5. Select the appropriate measures for those measurable attributes
Measures related to “Design Complexity”
58. Quality evaluation – Prediction Models
6. Run experiments with samples of entities and groups of stakeholders to
empirically evaluate the “perceived” (subjective) and “objective” quality
59. Quality evaluation – Prediction Models
6. Run experiments with samples of entities and groups of stakeholders to
empirically evaluate the “perceived” (subjective) and “objective” quality
60. Quality evaluation – Prediction Models
7. Run regression analyses to identify the set of measures that better explain
each quality subcharacteristic, and define appropriate quality indicators
61. Quality evaluation – Prediction Models
7. Run regression analyses to identify the set of measures that better explain
each quality subcharacteristic, and define appropriate quality indicators
62. Quality evaluation – Prediction Models
7. Run regression analyses to identify the set of measures that better explain
each quality subcharacteristic, and define appropriate quality indicators
Maint = α·Und + β·Learn + γ·Oper
Maintainability = high if Maint > 0.8; low if Maint < 0.4; medium otherwise
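A toy Python version of this last step (purely illustrative: the data, coefficients and thresholds below are made up, not those obtained in the cited studies) could look like this:

import numpy as np

# Made-up measures for a handful of systems: understandability, learnability, operability.
X = np.array([[0.9, 0.8, 0.7],
              [0.4, 0.5, 0.3],
              [0.6, 0.7, 0.6],
              [0.2, 0.3, 0.4]])
maint = np.array([0.85, 0.40, 0.65, 0.30])   # observed maintainability scores

# Least-squares fit of Maint = alpha*Und + beta*Learn + gamma*Oper
(alpha, beta, gamma), *_ = np.linalg.lstsq(X, maint, rcond=None)

def maintainability_indicator(und: float, learn: float, oper: float) -> str:
    # Turn the predicted score into the high/medium/low indicator of the slide.
    m = alpha * und + beta * learn + gamma * oper
    if m > 0.8:
        return "high"
    if m < 0.4:
        return "low"
    return "medium"

print(maintainability_indicator(0.9, 0.9, 0.9))   # e.g. "high" for a well-scoring system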
63. Maintainability of models
F. Basciani, J. Di Rocco, D. Di Ruscio, L. Iovino, A. Pierantonio: “A tool-supported approach for assessing the quality of modeling artifacts.” Journal of Computer Languages 51:173-192, April 2019.
M. Genero, M. Piattini. “Empirical validation of measures for class diagram structural complexity through controlled
experiments.” Proc. of QAOOSE WS at ECOOP 2001.
64. Maintainability of model transformations
F. Basciani, J. Di Rocco, D. Di Ruscio, L. Iovino, A. Pierantonio: “A tool-supported approach for assessing the quality of modeling artifacts.” Journal of Computer Languages 51:173-192, April 2019.
66. Quality evaluation – Sources of uncertainty
1. Identify your target entities and your target stakeholders
Examples of entities: COTS components, Data stored in DBs, Internet Delivery Service.
Examples of stakeholders: Developers, Advanced Users, Novice Users, …
2. Choose a Quality Model for evaluating your entities
E.g., ISO/IEC 25010 “Product” QM
3. Customize the Quality Model
Select the Characteristics and Subcharacteristics relevant to these entities and
stakeholders
4. Select the Measurable Attributes of the entities relevant to the Quality Model
5. Select the appropriate measures for those measurable attributes
6. Run experiments with samples of entities and groups of stakeholders to:
Empirically evaluate the “perceived” (subjective) quality subcharacteristic of these
entities
Empirically evaluate the “objective” quality subcharacteristic of these entities
7. Run regression analyses to identify the set of measures that better explain each
quality subcharacteristic, and define appropriate quality indicators
67. Sources of uncertainty
Selection of the subset of the quality model
Selection of incorrect, inappropriate or missing quality subcharacteristics
Selection of quality measures
Selection of incorrect or inappropriate quality measures
Imprecise measurements of quality measures
Empirical experiments
Confidence in the entity samples
Confidence in the selected groups of stakeholders
Evaluation of perceived and objective quality
Incorrect or imprecise experiment results
Statistical analyses and regression tests
Confidence of estimation models
Definition of quality indicators
Confidence in thresholds
Propagation of measurement uncertainty in decision models
68. Maintainability of models
M. Genero, M. Piattini, E. Manso, G. Cantone. “Building UML Class Diagram Maintainability Prediction Models based on Early
Metrics.” Proc. of IEEE METRICS 2003.
70. Estimating quality with uncertainty
Maintainability = { (low, 0.8), (medium, 0.18), (high, 0.02) }
with a credence of 0.95
71. Estimating quality with uncertainty
Use > ?system1.maintainability()
-> ULevel((#low, 0.8), (#medium, 0.18), (#high, 0.02)) : ULevel
Use > ?system1.maintainability_credence(agent1)
-> 0.5 : Real
Use > ?r1.maintainability_credence(agent2)
-> 0.99 : Real
Use > ?r1.maintainability_credence(null)
-> 0.95 : Real
72. Summary (on Evaluating Quality in the presence of Uncertainty)
Identify the kinds of uncertainty (and their nature) that affect
Your entities and their attributes
The quality characteristics you need to evaluate
Your target stakeholders’ particular needs and backgrounds
Your quality (base and derived) measures
Your quality indicators
Model uncertainty
Include uncertainty in your quality models and measures as first-class elements
(measurement uncertainty, degrees of belief, credence, etc.)
Evaluate uncertainty
Use tools for quantifying and propagating uncertainty
Document uncertainty
Produce estimates of the magnitude and impact of these uncertainties
Manage your quality considering uncertainty
Make sure decision processes take into account the estimated uncertainties
73. Uncertainty as a first class concept in quality modeling and evaluation
From “correctness” to “utility”
Useful, beneficial and profitable to users, instead of objectively correct
Utility permits accommodating trade-offs between different dimensions
From “precise” to “approximate”
Need to evaluate possible deviations and estimate margins
“How accurate are my models and estimations, and how confident am I in
them?”
From “open-loop” to “closed-loop”
Need to (self-)adapt as new information is available, or conditions change
“How do I change when the level of uncertainty changes?”
David Garlan “Software Engineering in an Uncertain World.” In Proc. of FoSER 2010: 125-128.
74. Takeaways (on Uncertainty)
“Uncertainty” is not a single concept, it encompasses many different types of
uncertainties (measurement, belief, environment, …)
Each type of uncertainty requires its own notations, underlying logics and
propagation mechanisms
Uncertainty can be aleatory or epistemic (irreducible or reducible)
Uncertainty does not depend so much on knowledge, but on belief
It is mainly subjective, and different people may hold different degrees of belief
about the same statement
Learn to manage in the presence of uncertainty; it cannot be eliminated.
You can try to reduce it (for epistemic) with testing, verification, validation,
redundancy and other knowledge acquisition processes.
Aleatory uncertainty and its risks cannot be reduced. It needs to be calculated,
and its values and risks bounded. Margins and bounds can be used to handle it.
75. Open problems for Quatic
From the QUATIC 2019 Call for Papers:
Quality Aspects in Requirements Engineering
Quality Aspects in Model-Driven Engineering
Quality Aspects in DevOps Development
Quality Aspects in Process Improvement and Assessment
Quality Aspects in Verification and Validation
Quality Aspects in Evidence-Based Software Engineering
Quality Aspects in Security & Privacy
Quality Aspects in Cloud-based Platforms and Services
Quality Aspects in Business Processes
Quality Aspects in Data Science & Artificial Intelligence
Quality Aspects in Software Maintenance and Comprehension
78. Modeling and Evaluating Quality in the
Presence of Uncertainty
QUATIC 2019
Ciudad Real, September 13, 2019
Antonio Vallecillo
Universidad de Málaga, Spain