Once precisely defined so as to include just the explanator's act, the notion of explanation should be regarded as central in the engineering of intelligent systems—not just as an add-on making them understandable to humans. Based on symbolic AI techniques so as to match both intuitive and rational cognition, explanation should be exploited as a fundamental tool for communication among heterogeneous agents in open multi-agent systems. More generally, explanation-ready agents should work as the basic components in the engineering of intelligent systems integrating symbolic and sub-/non-symbolic AI techniques.
1. Not just for humans
Explanation for agent-to-agent communication
Andrea Omicini
andrea.omicini@unibo.it
Dipartimento di Informatica – Scienza e Ingegneria (DISI)
Alma Mater Studiorum – Università di Bologna a Cesena
AI*IA 2020 Discussion Papers
27 November 2020
2. Premises
Next in Line...
1 Premises
2 Discussion
3 Conclusion
3. Premises
Landscape
explanation is a very popular notion nowadays in AI
and quite a muddled one, indeed
mostly as an add-on to intelligent systems—to make them “socially
acceptable”
as a relevant yet not central notion to (artificial) intelligence and
intelligent systems
symbolic techniques are proving themselves to be essential for
explainability
yet, against the aforementioned landscape, mostly as handmaids of
sub-/non-symbolic ones
as if they could not do the real work
4. Premises
Theses
1 explanation needs to be defined quite precisely as an explanator’s act
as a premise for the possible explainee’s understanding, not including it
2 explanation should be an essential tool for any intelligent component
in particular, agents in multi-agent systems
3 intelligent agents should be able to explicitly represent their
cognitive process and its results, and manipulate those representations (see the sketch after this list)
so that rational explanation would properly complement their ability to
reason and communicate
4 intelligent agents should explain themselves first of all to other agents
not just the system as a whole explaining itself to humans
5 symbolic techniques are to be used for explanations—representing and
manipulating cognitive processes and their results
so, as first-class citizens in both agent modelling and intelligent
systems engineering
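
To make thesis 3 concrete, here is a minimal, purely illustrative sketch – not part of the paper – of what an explanation-ready agent interface could look like in Python: the agent exposes a rational, sign-based representation of its cognitive process and results, and can manipulate it into the register expected by an explainee. All names (ExplanationReadyAgent, SymbolicRepresentation, the registers) are assumptions of this sketch.

from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass(frozen=True)
class SymbolicRepresentation:
    """A sign-based representation of a cognitive result (e.g. a set of logic clauses)."""
    register: str    # the semiotic register, e.g. "first-order-logic" or "natural-language"
    content: tuple   # the signs themselves, e.g. a tuple of clauses or sentences


class ExplanationReadyAgent(ABC):
    """An agent able to rationally represent and explain its own cognition (thesis 3)."""

    @abstractmethod
    def represent(self) -> SymbolicRepresentation:
        """Return a rational representation of the agent's cognitive process and results."""

    @abstractmethod
    def explain(self, target_register: str) -> SymbolicRepresentation:
        """Transform that representation into the register expected by the explainee."""
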
5. Discussion
Next in Line...
1 Premises
2 Discussion
3 Conclusion
6. Discussion
On the Meaning of Terms
in CS & AI we have huge problems with the definitions of terms
and concepts
the AI community struggled for decades around the meaning of
intelligence—and gave up just when money came back to AI
nowadays, we are specifically struggling with the new concept of
explanation
7. Discussion
Rationality vs. Intuition
two sorts of cognitive processes
esprit de finesse vs. esprit de géométrie—rationality has limits
[Pascal, 1669]
cognitivism against behaviourism in psychology [Skinner, 1985]
two families of AI techniques
symbolic vs. sub-/non-symbolic AI techniques
8. Discussion
Sharing
reproducibility and refutability in the scientific process
[Popper, 2002]—human science as a social construct
more generally, sharing is a peculiar trait of humanity: human culture
is a cumulative one
sharing knowledge is what mostly separates humans from other
primates [Dean et al., 2012]
9. Discussion
Sharing is Rational
there is intelligence without representation [Brooks, 1991b] and reason
[Brooks, 1991a]
yet, human cumulative culture is based on representation
tools—language, writing, books, the Web
so, repeatable, systematic sharing requires rational representation
Sharing intuitive / implicit knowledge
? how do we share the results of our intuition?
! we do explain
? so now: what is an explanation?
10. Discussion
Explanation Everywhere
e.g., GDPR [Voigt and von dem Bussche, 2017] recognises “the citizens’ right
to explanation” [Goodman and Flaxman, 2017]
thus encompassing under the same acceptation of the term ‘explanation’
both the explanator’s and the explainee’s acts—as commonly done in
both common-sense and scientific definitions
! yet, keeping things separated and distinct is always the best choice in
science
→ explanation as an explanator’s act
11. Discussion
Noetics & Semiotics
unsurprisingly, a contribution comes from mathematics teaching [D’Amore, 2005]
noetics — as the conceptual acquisition of an object
semiotics — as the acquisition of a representation built out of
signs
different semiotic representations for the same concept, used to
explain
transformation of treatment — changing representation within the
same register of semiotics
transformation of conversion — changing register of semiotics for the
representation
explanation as a transformation of semiotic register
by the explanator
? yet, who is the explanator, who the explainee now?
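
As a toy illustration – an assumption of this rewrite, not an example taken from [D’Amore, 2005] – take a single concept, a liability rule, first represented in a logic-programming register: a transformation of treatment rewrites it within that register, while a transformation of conversion moves it to a different register (here, a natural-language gloss).

# Toy concept: a liability rule in a logic-programming register.
rule_logic = "liable(X) :- negligent(X), caused_damage(X)."

def treatment(clause: str) -> str:
    """Transformation of treatment: change the representation within the same
    (logic) register, e.g. by renaming the variable."""
    return clause.replace("X", "Person")

def conversion(clause: str) -> str:
    """Transformation of conversion: change the semiotic register, from a logic
    clause to a natural-language gloss."""
    head, body = clause.rstrip(".").split(":-")
    conditions = " and ".join(c.strip() for c in body.split(","))
    return f"{head.strip()} holds whenever {conditions}."

print(treatment(rule_logic))   # liable(Person) :- negligent(Person), caused_damage(Person).
print(conversion(rule_logic))  # liable(X) holds whenever negligent(X) and caused_damage(X).
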
12. Discussion
Agents Explaining Themselves to Humans
almost all works on explainability aim at making intelligent systems
understandable to humans
explanators are software components – e.g., agents in a MAS –,
explainees are humans
[Rosenfeld and Richardson, 2019, Anjomshoae et al., 2019, Guidotti et al., 2019]
yet, teaching means humans explaining to humans
in socio-technical systems [Whitworth, 2006] this means that just one
dimension is missing
what about agents explaining to agents?
13. Discussion
Fake Disclaimer
science by analogy is not always the best way to proceed in multi- and
inter-disciplinary contexts
! it should be feared in scientific practice for how easily it misleads
researchers
yet, it has worked well and often for MAS, e.g.
intentional stance in agent reasoning [Dennett, 1971]
speech acts in agent communication [Searle, 1969]
activity theory for agent coordination [Vygotskiĭ, 1978]
14. Discussion
Agent Sharing in MAS
by default, communication among intelligent agents in open and
heterogeneous MAS mandates knowledge sharing among agents
in the same way as humans share knowledge and cognition in
cooperative contexts
explanation – in the general acceptation of transformation of semiotic
register – should work as the main tool for sharing
having the potential to work as an actual enabler of effective
agent-to-agent communication
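
A minimal sketch, under assumed message and agent names and with no real agent platform behind it, of how explanation could enable agent-to-agent communication: the explainee asks for an explanation in a register it can process, and the explanator replies with a representation in that register (here simply looked up; a real agent would compute the transformation).

from dataclasses import dataclass

@dataclass
class ExplainRequest:
    sender: str
    target_register: str   # the register the explainee is able to process

@dataclass
class ExplainReply:
    sender: str
    register: str
    content: str

class ExplanatorAgent:
    """Answers explain requests by rendering its result in the requested register."""

    def __init__(self, name: str, representations: dict):
        # representations maps semiotic registers to renderings of the same result
        self.name = name
        self.representations = representations

    def on_explain(self, request: ExplainRequest) -> ExplainReply:
        content = self.representations.get(request.target_register,
                                           self.representations["default"])
        return ExplainReply(self.name, request.target_register, content)

agent = ExplanatorAgent("diagnoser-1", {
    "default": "fault(pump3).",
    "prolog": "fault(pump3) :- pressure_drop(pump3), vibration(pump3, high).",
    "english": "Pump 3 is faulty because its pressure dropped and its vibration is high.",
})
print(agent.on_explain(ExplainRequest(sender="planner-7", target_register="english")).content)

The target_register field plays a role analogous to the language and ontology slots of standard agent communication languages.
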
15. Discussion
Case Study: Decision Support System for Legal Process
MAS with heterogeneous legal agents
some legal agents could be deep learning agents trained over diverse
sets of existing legal databases
others might be logic-based agents, rationally elaborating over some
symbolic representation of some legal corpus
each one not just providing its own arguments supporting its
suggestions
not just providing humans with explanations
instead, cooperatively building a shared proposal to be presented to
human decision makers in a potentially-understandable way
! this requires first of all that agents – symbolic, sub-symbolic, or
hybrid – are capable of explaining themselves in a rational form that
can be effectively shared not just with humans, but also with other
agents
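
A hypothetical sketch of the case study under stated assumptions: the deep-learning agent is replaced by a stub, and both agents return a suggestion plus a rational explanation expressed in one shared symbolic register, so that the proposal handed to human decision makers is built cooperatively. Every name, predicate, and threshold below is illustrative.

def neural_legal_agent(case: dict) -> dict:
    """Stand-in for a deep-learning agent trained over legal databases: its numeric
    output is rendered post hoc into the shared symbolic register."""
    score = 0.87 if case["precedents_against"] > case["precedents_for"] else 0.20
    return {"suggestion": "liable" if score > 0.5 else "not_liable",
            "explanation": f"similar_precedents({case['id']}, against, {score:.2f})"}

def logic_legal_agent(case: dict) -> dict:
    """Stand-in for a logic-based agent elaborating over a symbolic legal corpus."""
    if case["negligent"] and case["damage"]:
        return {"suggestion": "liable",
                "explanation": f"rule(liability), negligent({case['id']}), damage({case['id']})"}
    return {"suggestion": "not_liable", "explanation": f"no_applicable_rule({case['id']})"}

case = {"id": "c42", "negligent": True, "damage": True,
        "precedents_for": 3, "precedents_against": 11}
opinions = [neural_legal_agent(case), logic_legal_agent(case)]

# Shared proposal: the agreed suggestion plus every agent's explanation, in one register.
agreed = all(o["suggestion"] == opinions[0]["suggestion"] for o in opinions)
proposal = {"suggestion": opinions[0]["suggestion"] if agreed else "disagreement",
            "arguments": [o["explanation"] for o in opinions]}
print(proposal)
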
16. Discussion
Future of Agents & MAS
intelligent systems built as MAS made of explanation-ready agents
as agents of any sort equipped with their own specific rational
explanation capability
capable of providing a rational, sharable representation of their own
specific cognitive process and results
as well as of manipulating such a representation as a transformation
of the semiotic register in order to build one or more explanations
17. Conclusion
Next in Line...
1 Premises
2 Discussion
3 Conclusion
18. Conclusion
Overall
explanation as the rational transformation of the semiotic register of
the results of a cognitive process of any sort
agent-to-agent explanation as the essential support to agent-to-agent
communication in open and heterogeneous cooperative MAS
as well as a powerful tool to make both rational and non-rational
intelligent agents fruitfully coexist in complex intelligent systems
with rational explanation as a general tool for the engineering of
intelligent systems as MAS
based on symbolic techniques
20. References
References I
[Anjomshoae et al., 2019] Anjomshoae, S., Najjar, A., Calvaresi, D., and Främling, K. (2019).
Explainable agents and robots: Results from a systematic literature review.
In 18th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS’19), pages 1078–1088. IFAAMAS.
[Brooks, 1991a] Brooks, R. A. (1991a).
Intelligence without reason.
In Mylopoulos, J. and Reiter, R., editors, 12th International Joint Conference on Artificial Intelligence (IJCAI 1991), volume 1, pages 569–595, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.
[Brooks, 1991b] Brooks, R. A. (1991b).
Intelligence without representation.
Artificial Intelligence, 47:139–159.
[D’Amore, 2005] D’Amore, B. (2005).
Noetica e semiotica nell’apprendimento della matematica.
In Laura, A. R., Eleonora, F., Antonella, M., and Rosa, P., editors, Insegnare la matematica nella scuola di tutti e di ciascuno, Milano, Italy. Ghisetti & Corvi Editore.
[Dean et al., 2012] Dean, L. G., Kendal, R. L., Schapiro, S. J., Thierry, B., and Laland, K. N. (2012).
Identification of the social and cognitive processes underlying human cumulative culture.
Science, 335(6072):1114–1118.
[Dennett, 1971] Dennett, D. (1971).
Intentional systems.
Journal of Philosophy, 68:87–106.
21. References
References II
[Goodman and Flaxman, 2017] Goodman, B. and Flaxman, S. (2017).
European Union regulations on algorithmic decision-making and a “right to explanation”.
AI Magazine, 38(3):50–57.
[Guidotti et al., 2019] Guidotti, R., Monreale, A., Turini, F., Pedreschi, D., and Giannotti, F. (2019).
A survey of methods for explaining black box models.
ACM Computing Surveys, 51(5):1–42.
[Gunning, 2016] Gunning, D. (2016).
Explainable artificial intelligence (XAI).
Funding Program DARPA-BAA-16-53, Defense Advanced Research Projects Agency (DARPA).
[Pascal, 1669] Pascal, B. (1669).
Pensées.
Guillaume Desprez, Paris, France.
[Popper, 2002] Popper, K. R. (2002).
The Logic of Scientific Discovery.
Routledge.
1st English edition: 1959.
[Rosenfeld and Richardson, 2019] Rosenfeld, A. and Richardson, A. (2019).
Explainability in human-agent systems.
Autonomous Agents and Multi-Agent Systems, 33(6):673–705.
[Searle, 1969] Searle, J. (1969).
Speech Acts: An Essay in the Philosophy of Language.
Cambridge University Press.
22. References
References III
[Skinner, 1985] Skinner, B. F. (1985).
Cognitive science and behaviourism.
British Journal of Psychology, 76(3):291–301.
[Voigt and von dem Bussche, 2017] Voigt, P. and von dem Bussche, A. (2017).
The EU General Data Protection Regulation (GDPR). A Practical Guide.
Springer.
[Vygotskiĭ, 1978] Vygotskiĭ, L. S. (1978).
Mind in Society: Development of Higher Psychological Processes.
Harvard University Press, Cambridge, MA, USA.
[Whitworth, 2006] Whitworth, B. (2006).
Socio-technical systems.
In Ghaoui, C., editor, Encyclopedia of Human Computer Interaction, pages 533–541. IGI Global.
[Zambonelli and Omicini, 2004] Zambonelli, F. and Omicini, A. (2004).
Challenges and research directions in agent-oriented software engineering.
Autonomous Agents and Multi-Agent Systems, 9(3):253–283.
Special Issue: Challenges for Agent-Based Computing.