Expectation: Personalized Explainable Artificial Intelligence for Decentralized Agents with Heterogeneous Knowledge

by Davide Calvaresi, Giovanni Ciatto, Amro Najjar, Reyhan Aydoğan, Leon van der Torre, Andrea Omicini, and Michael I. Schumacher

Abstract: Explainable AI (XAI) has emerged in recent years as a set of techniques and methodologies to interpret and explain machine learning (ML) predictors. To date, many initiatives have been proposed. Nevertheless, current research efforts mainly focus on methods tailored to specific ML tasks and algorithms, such as image classification and sentiment analysis.
Read full post

Towards Explainable Visionary Agents: License to Dare and Imagine

by Giovanni Ciatto, Amro Najjar, Jean-Paul Calbimonte, and Davide Calvaresi

Abstract: Since their appearance, computer programs have embodied discipline and structured approaches and methodologies. Yet, to this day, equipping machines with imaginative and creative capabilities remains one of the most challenging and fascinating goals we pursue. Intelligent software agents can behave intelligently in well-defined scenarios, relying on Machine Learning (ML), symbolic reasoning, and their developers' ability to tailor smart behaviors to the particular application domain(s).
Read full post

Shallow2Deep: Restraining Neural Networks Opacity through Neural Architecture Search

by Andrea Agiollo, Giovanni Ciatto, and Andrea Omicini

Abstract: Recently, the Deep Learning (DL) research community has focused on developing efficient and highly performing Neural Networks (NN). Meanwhile, the eXplainable AI (XAI) research community has focused on making Machine Learning (ML) and Deep Learning methods interpretable and transparent, seeking explainability. This work is a preliminary study on the applicability of Neural Architecture Search (NAS), a sub-field of DL concerned with the automatic design of NN structures, to XAI.
Read full post

Logic Programming library for Machine Learning: API design and prototype

by Giovanni Ciatto, Matteo Castigliò, and Roberta Calegari

Abstract: In this paper we address the problem of hybridising symbolic and sub-symbolic approaches in artificial intelligence, with the aim of creating flexible, data-driven systems that are simultaneously comprehensible and capable of automated learning. In particular, we propose a logic API for supervised machine learning, enabling logic programmers to exploit neural networks, among others, in their programs.
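As a rough illustration of the intent, here is a minimal Python sketch (the actual library exposes logic predicates, and every name below is hypothetical): a thin bridge object backs fit/predict goals with a scikit-learn neural network, which is the kind of mapping such a logic API would perform under the hood.

```python
# Hypothetical sketch, not the paper's actual API: a thin bridge exposing
# supervised learning so that logic goals such as fit/3 and predict/2
# could be mapped onto an underlying neural predictor.
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier

class LogicMLBridge:
    """Backs hypothetical logic predicates with a neural network."""

    def __init__(self):
        self.model = MLPClassifier(max_iter=1000)

    def fit(self, examples, labels):
        # counterpart of a hypothetical goal: fit(Predictor, Examples, Labels)
        self.model.fit(examples, labels)

    def predict(self, example):
        # counterpart of a hypothetical goal: predict(Predictor, Example, Class)
        return int(self.model.predict([example])[0])

X, y = load_iris(return_X_y=True)
bridge = LogicMLBridge()
bridge.fit(X, y)
print(bridge.predict(X[0]))  # class resolved for a single example
```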
Read full post

GridEx: An Algorithm for Knowledge Extraction from Black-Box Regressors

by Federico Sabbatini, Giovanni Ciatto, and Andrea Omicini

Abstract: Knowledge extraction methods are applied to ML-based predictors to attain explainable representations of their functioning when the lack of interpretable results constitutes a problem. Several algorithms have been proposed for knowledge extraction, mostly focusing on the extraction of either lists or trees of rules. Yet, most of them only support supervised learning – and, in particular, classification – tasks.
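To give a flavour of the technique, the following is a deliberately simplified sketch of grid-based rule extraction from a black-box regressor: unlike the real GridEx, it uses a single uniform grid with no adaptive refinement, and all data and names are illustrative.

```python
# Minimal sketch in the spirit of GridEx (simplified: one uniform grid,
# no adaptive splitting): one constant-output rule per grid cell.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 2))
y = 3 * X[:, 0] + np.sin(6 * X[:, 1])          # unknown ground truth
black_box = RandomForestRegressor().fit(X, y)   # the opaque predictor

bins = 3                                        # grid resolution per feature
edges = np.linspace(0, 1, bins + 1)
for i in range(bins):
    for j in range(bins):
        mask = ((X[:, 0] >= edges[i]) & (X[:, 0] < edges[i + 1]) &
                (X[:, 1] >= edges[j]) & (X[:, 1] < edges[j + 1]))
        if mask.any():
            value = black_box.predict(X[mask]).mean()  # constant output per cell
            print(f"if x1 in [{edges[i]:.2f},{edges[i + 1]:.2f}) and "
                  f"x2 in [{edges[j]:.2f},{edges[j + 1]:.2f}) then y = {value:.2f}")
```

The printed rules form an interpretable, regression-capable surrogate of the black box, which is exactly the gap the paper addresses for extraction methods that only target classification.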
Read full post

Graph Neural Networks as the Copula Mundi between Logic and Machine Learning: A Roadmap

by Andrea Agiollo, Giovanni Ciatto, and Andrea Omicini

Abstract: Combining machine learning (ML) and computational logic (CL) is hard, mostly because of the inherently different ways they represent knowledge. In fact, while ML relies on fixed-size numeric representations leveraging vectors, matrices, or tensors of real numbers, CL relies on logic terms and clauses—which are unlimited in size and structure. Graph neural networks (GNN) are a novelty in the ML world introduced for dealing with graph-structured data in a sub-symbolic way.
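The crux of the roadmap is representational: logic terms are unbounded in size, but any term can be encoded as a graph that a GNN can consume. A minimal sketch of one such encoding (illustrative only, not the paper's construction):

```python
# Encode a nested logic term, here plus(times(x, 2), 1), as node/edge lists:
# every functor, constant, or variable becomes a node; edges link each
# functor to its arguments. The tuple encoding is an illustrative choice.
def term_to_graph(term, nodes, edges):
    """Flatten a nested (functor, arg, ...) tuple into node and edge lists."""
    idx = len(nodes)
    if isinstance(term, tuple):           # compound term: functor + arguments
        nodes.append(term[0])
        for arg in term[1:]:
            edges.append((idx, term_to_graph(arg, nodes, edges)))
        return idx
    nodes.append(term)                    # constant or variable leaf
    return idx

nodes, edges = [], []
term_to_graph(("plus", ("times", "x", "2"), "1"), nodes, edges)
print(nodes)  # ['plus', 'times', 'x', '2', '1']
print(edges)  # [(1, 2), (1, 3), (0, 1), (0, 4)]
```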
Read full post

On the Design of PSyKE: A Platform for Symbolic Knowledge Extraction

by Federico Sabbatini, Giovanni Ciatto, Roberta Calegari, and Andrea Omicini

How to access: http://ceur-ws.org/Vol-2963/paper14.pdf

Abstract: A common practice in modern explainable AI is to post-hoc explain black-box machine learning (ML) predictors – such as neural networks – by extracting symbolic knowledge out of them, in the form of either rule lists or decision trees. By acting as a surrogate model, the extracted knowledge aims at revealing the inner workings of the black box, thus enabling its inspection, representation, and explanation.
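The sketch below shows the general practice the platform systematises, not PSyKE's actual API: a shallow decision tree is trained to mimic an opaque predictor, and its rules are then read off as an explanation, with fidelity measured against the black box rather than the true labels.

```python
# Generic post-hoc surrogate extraction (not PSyKE's API): fit a shallow
# decision tree to the black box's own predictions, then print its rules.
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
black_box = MLPClassifier(max_iter=1000).fit(X, y)   # opaque predictor

surrogate = DecisionTreeClassifier(max_depth=2)
surrogate.fit(X, black_box.predict(X))               # mimic the black box, not y

print(export_text(surrogate))                        # human-readable rule tree
print("fidelity:", (surrogate.predict(X) == black_box.predict(X)).mean())
```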
Read full post

2P-Kt: A Logic-Based Ecosystem for Symbolic AI

by Giovanni Ciatto, Roberta Calegari, and Andrea Omicini

Abstract: To date, logic-based technologies are built either on top of, or as extensions of, the Prolog language, mostly working as monolithic solutions tailored upon specific inference procedures, unification mechanisms, or knowledge representation techniques. Instead, to maximise their impact, logic-based technologies should support and enable the general-purpose exploitation of all the manifold contributions from logic programming. Accordingly, we present 2P-Kt, a reboot of the tuProlog project offering a general, extensible, and interoperable ecosystem for logic programming and symbolic AI.
Read full post

Towards cooperative argumentation for MAS: an Actor-based approach

by Giuseppe Pisano, Roberta Calegari, and Andrea Omicini

Abstract: We discuss the problem of cooperative argumentation in multi-agent systems, focusing on the computational model. An actor-based model is proposed as a first step towards cooperative argumentation in multi-agent systems to tackle distribution issues—illustrating a preliminary fully-distributed version of the argumentation process completely based on message passing.

How to access: http://ceur-ws.org/Vol-2963/paper17.pdf

How to cite (BibTeX):
@inproceedings{distributedarg-woa2021,
  articleno = {12},
  author    = {Pisano, Giuseppe and Calegari, Roberta and Omicini, Andrea},
  booktitle = {WOA 2021 -- 22nd Workshop ``From Objects to Agents''},
  editor    = {Calegari, Roberta and Ciatto, Giovanni and Denti, Enrico and Omicini, Andrea and Sartor, Giovanni},
  issn      = {1613-0073},
  keywords  = {Argumentation, MAS, cooperative argumentation, distributed argumentation process},
  location  = {Bologna, Italy},
  month     = oct,
  note      = {22nd Workshop ``From Objects to Agents'' (WOA 2021), Bologna, Italy, 1--3~} # sep # {~2021},
  title     = {Towards cooperative argumentation for {MAS}: an Actor-based approach},
  url       = {http://ceur-ws.org/Vol-2963/paper17.pdf},
  year      = {2021}
}
Read full post

On Explainable Negotiations via Argumentation

by Victor Contreras, Reyhan Aydoğan, Amro Najjar, and Davide Calvaresi

Abstract: TBD

How to access: http://publications.hevs.ch/index.php/publications/show/2883

How to cite (BibTeX):
@incollection{canc-bnaic-2021-explanable-negotiations,
  author    = {Contreras, Victor and Aydoğan, Reyhan and Najjar, Amro and Calvaresi, Davide},
  booktitle = {Proceedings of BNAIC 2021},
  keywords  = {explainable negotiation},
  publisher = {ACM},
  title     = {On Explainable Negotiations via Argumentation},
  url       = {http://publications.hevs.ch/index.php/publications/show/2883},
  year      = {2021}
}
Read full post

A global taxonomy of interpretable AI: unifying the terminology for the technical and social sciences

by Mara Graziani, Lidia Dutkiewicz, Davide Calvaresi, José Pereira Amorim, Katerina Yordanova, Mor Vered, Rahul Nair, Pedro Henriques Abreu, Tobias Blanke, Valeria Pulignano, John O. Prior, Lode Lauwaert, Wessel Reijers, Adrien Depeursinge, Vincent Andrearczyk, and Henning Müller

Abstract: Since its emergence in the 1960s, Artificial Intelligence (AI) has grown to conquer many technology products and their fields of application.
Read full post

Human-Social Robots Interaction: the blurred line between necessary anthropomorphization and manipulation

by Rachele Carli, Amro Najjar, and Davide Calvaresi

Abstract: TBD

How to access: http://publications.hevs.ch/index.php/publications/show/2932
DOI: https://doi.org/10.1145/3527188.3563941

How to cite (BibTeX):
@inproceedings{CarliNC22,
  author    = {Rachele Carli and Amro Najjar and Davide Calvaresi},
  editor    = {Christoph Bartneck and Takayuki Kanda and Mohammad Obaid and Wafa Johal},
  title     = {Human-Social Robots Interaction: The Blurred Line between Necessary Anthropomorphization and Manipulation},
  booktitle = {International Conference on Human-Agent Interaction, {HAI} 2022, Christchurch, New Zealand, December 5-8, 2022},
  pages     = {321--323},
  publisher = {{ACM}},
  year      = {2022},
  url       = {https://doi.org/10.1145/3527188.3563941}
}
Read full post

The quest of parsimonious XAI: A human-agent architecture for explanation formulation

by Yazan Mualla, Igor Tchappi, Timotheus Kampik, Amro Najjar, Davide Calvaresi, Abdeljalil Abbas-Turki, Stéphane Galland, and Christophe Nicolle

Abstract: With the widespread use of Artificial Intelligence (AI), understanding the behavior of intelligent agents and robots is crucial to guarantee successful human-agent collaboration, since it is not straightforward for humans to understand an agent’s state of mind. Recent empirical studies have confirmed that explaining a system’s behavior to human users fosters the latter’s acceptance of the system.
Read full post

Explanation-Based Negotiation Protocol for Nutrition Virtual Coaching

by Berk Buzcu, Vanitha Varadhajaran, Igor Tchappi, Amro Najjar, Davide Calvaresi, and Reyhan Aydoğan

Abstract: People’s awareness of the importance of healthy lifestyles is rising. This opens new possibilities for personalized intelligent health and coaching applications. In particular, there is a need for more than simple recommendations and mechanistic interactions. Recent studies have identified nutrition virtual coaching systems (NVC) as a technological solution, possibly bridging technologies such as recommender, informative, persuasive, and argumentation systems.
Read full post

Integration of Local and Global Features Explanation with Global Rules Extraction and Generation Tools

by Victor Hugo Contreras Ordoñez, Davide Calvaresi, and Michael I. Schumacher

Abstract: Widely used in a growing number of domains, Deep Learning predictors are achieving remarkable results. However, the lack of transparency (i.e., opacity) of their inner mechanisms has raised trust and employability concerns. Nevertheless, several approaches fostering model interpretability and explainability have been developed in the last decade. This paper combines approaches for local feature explanation…
Read full post

Bidding Support by the Pocket Negotiator Improves Negotiation Outcomes

by Reyhan Aydoğan and Catholijn M. Jonker

Abstract: This paper presents the negotiation support mechanisms provided by the Pocket Negotiator (PN) and an elaborate empirical evaluation of the economic decision support (EDS) mechanisms during the bidding phase of negotiations as provided by the PN. Some of these support mechanisms are offered actively, some passively. By passive support we mean that the user only gets that support by clicking a button, whereas active support is provided without prompting.
Read full post

A Survey of Decision Support Mechanisms for Negotiation

by Reyhan Aydoğan and Catholijn M. Jonker

Abstract: This paper introduces a dependency analysis and a categorization of conceptualized and existing economic decision support mechanisms for negotiation. The focus of our survey is on economic decision support mechanisms, although some behavioural support mechanisms were included, to recognize the important work in that area. We categorize support mechanisms from four different aspects: (i) economic versus behavioral decision support, (ii) analytical versus strategical support, (iii) active versus passive support, and (iv) implicit versus explicit support.
Read full post

A DEXiRE for Extracting Propositional Rules from Neural Networks via Binarization

by Victor Hugo Contreras Ordoñez, Niccolò Marini, Lora Fanda, Gaetano Manzo, Yazan Mualla, Jean-Paul Calbimonte, Michael I. Schumacher, and Davide Calvaresi

Abstract: Background: Despite the advancement of eXplainable Artificial Intelligence, the explanations provided by model-agnostic predictors still call for improvements (i.e., they lack accurate descriptions of predictors’ behaviors). Contribution: We present a tool for Deep Explanations and Rule Extraction (DEXiRE) to approximate rules for Deep Learning models with any number of hidden layers.
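A much-simplified rendition of the binarization idea (hypothetical code, not the DEXiRE tool itself): hidden activations are thresholded into booleans, and rules over those booleans are induced so as to reproduce the network's own predictions.

```python
# Simplified binarization-based rule extraction (not the actual DEXiRE code):
# recompute the hidden layer, binarize it, and induce boolean rules over the
# fired/not-fired literals that mimic the network's predictions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000).fit(X, y)

# Recompute the (ReLU) hidden layer by hand and binarize: neuron fired or not.
hidden = np.maximum(0, X @ net.coefs_[0] + net.intercepts_[0])
binary = (hidden > 0).astype(int)

# Boolean rules over fired-neuron literals, targeting the net's own predictions.
rules = DecisionTreeClassifier(max_depth=3).fit(binary, net.predict(X))
print(export_text(rules, feature_names=[f"neuron_{i}" for i in range(8)]))
```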
Read full post

Ethical and legal considerations for nutrition virtual coaches

by Davide Calvaresi, Rachele Carli, Jean-Gabriel Piguet, Victor Hugo Contreras Ordoñez, Gloria Luzzani, Amro Najjar, Jean-Paul Calbimonte, and Michael I. Schumacher

Abstract: Choices and preferences of individuals are nowadays increasingly influenced by countless inputs and recommendations provided by artificial intelligence-based systems. The accuracy of recommender systems (RS) has achieved remarkable results in several domains, from infotainment to marketing and lifestyle. However, in sensitive use-cases, such as nutrition, there is a need for more complex dynamics and responsibilities beyond conventional RS frameworks.
Read full post

Risk and Exposure of XAI in Persuasion and Argumentation: The case of Manipulation

by Rachele Carli, Amro Najjar, and Davide Calvaresi

Abstract: In the last decades, Artificial Intelligence (AI) systems have been increasingly adopted in assistive (possibly collaborative) decision-making tools. In particular, AI-based persuasive technologies are designed to steer/influence users’ behaviour, habits, and choices to facilitate the achievement of their own (predetermined) goals. Nowadays, the inputs received by assistive systems leverage heavily on data-driven AI approaches. Thus, it is imperative that both the process leading to the recommendations and the recommendations themselves be transparent and understandable to the user.
Read full post

Hypercube-Based Methods for Symbolic Knowledge Extraction: Towards a Unified Model

by Federico Sabbatini, Giovanni Ciatto, Roberta Calegari, and Andrea Omicini

Abstract: Symbolic knowledge-extraction (SKE) algorithms, proposed by the XAI community to obtain human-intelligible explanations for opaque machine learning predictors, are currently being studied and developed with growing interest, also in order to achieve believability in interactions. However, choosing the most adequate extraction procedure amongst the many existing in the literature is becoming more and more challenging, as the number of available methods increases.
Read full post

On the Design of PSyKI: a Platform for Symbolic Knowledge Injection into Sub-Symbolic Predictors

by Matteo Magnini, Giovanni Ciatto, and Andrea Omicini

Abstract: A long-standing ambition in artificial intelligence is to integrate predictors' inductive features (i.e., learning from examples) with deductive capabilities (i.e., drawing inferences from prior symbolic knowledge). Many methods in the literature support the injection of prior symbolic knowledge into predictors, generally with the purpose of attaining better (i.e., more effective or efficient w.r.t. predictive performance) predictors. However, to the best of our knowledge, running implementations of these algorithms are, in most cases, either proofs of concept or unavailable.
Read full post

KINS: Knowledge Injection via Network Structuring

by Matteo Magnini, Giovanni Ciatto, and Andrea Omicini

Abstract: We propose a novel method, called KINS (Knowledge Injection via Network Structuring), to inject symbolic knowledge in the form of Datalog formulæ into neural networks (NN). The idea behind our method is to extend the NN's internal structure with ad-hoc layers built out of the injected symbolic knowledge. KINS neither constrains the NN to any specific architecture, nor requires logic formulæ to be ground.
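By way of illustration only (this is not the paper's construction), a structuring-style injection might look as follows in Keras: a hand-built layer evaluates a fuzzified rule over the raw inputs, and its output is concatenated to the learned hidden representation. The rule, feature index, and layer sizes are all assumptions.

```python
# Illustrative structuring-style injection (not KINS itself): a layer
# computing the fuzzy truth of an assumed rule is grafted into the network.
import tensorflow as tf

inputs = tf.keras.Input(shape=(4,))
hidden = tf.keras.layers.Dense(16, activation="relu")(inputs)

# Fuzzy truth in [0, 1] of an assumed rule "class_0 :- feature_2 < 2.5".
rule_truth = tf.keras.layers.Lambda(
    lambda x: tf.sigmoid(2.5 - x[:, 2:3]))(inputs)

# The symbolic signal joins the learned representation before the output.
merged = tf.keras.layers.Concatenate()([hidden, rule_truth])
outputs = tf.keras.layers.Dense(3, activation="softmax")(merged)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```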
Read full post

Semantic Web-Based Interoperability for Intelligent Agents with PSyKE

by Federico Sabbatini, Giovanni Ciatto, and Andrea Omicini

Abstract: Modern distributed systems require communicating agents to agree on a shared, formal semantics for the data they exchange and operate upon. The Semantic Web offers tools to encode semantics in the form of ontologies, where data is represented in the form of knowledge graphs (KG). Applying such tools to intelligent agents equipped with machine learning (ML) capabilities is of particular interest, as it may enable a higher degree of interoperability among heterogeneous agents.
Read full post

A view to a KILL: Knowledge Injection via Lambda Layer

by Matteo Magnini, Giovanni Ciatto, and Andrea Omicini

Abstract: We propose KILL (Knowledge Injection via Lambda Layer) as a novel method for the injection of symbolic knowledge into neural networks (NN), allowing data scientists to control what the network should (not) learn. Unlike other similar approaches, our method does not (i) require ground input formulae, (ii) impose any constraint on the NN undergoing injection, or (iii) affect the loss function of the NN.
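Purely as a hedged sketch (again, not the paper's actual construction): a final Lambda layer can down-weight classes that are incompatible with an assumed symbolic constraint, leaving both the base architecture and the loss function untouched.

```python
# Illustrative lambda-layer injection (not KILL itself): the output layer is
# wrapped so that knowledge-violating classes are down-weighted.
import tensorflow as tf

def apply_knowledge(args):
    probs, x = args
    # Assumed rule: class 2 is implausible whenever feature 0 is negative.
    penalty = tf.sigmoid(x[:, 0:1])              # ~0 when x0 << 0, ~1 otherwise
    mask = tf.concat([tf.ones_like(probs[:, :2]), penalty], axis=1)
    weighted = probs * mask
    return weighted / tf.reduce_sum(weighted, axis=1, keepdims=True)

inputs = tf.keras.Input(shape=(4,))
hidden = tf.keras.layers.Dense(16, activation="relu")(inputs)
probs = tf.keras.layers.Dense(3, activation="softmax")(hidden)
outputs = tf.keras.layers.Lambda(apply_knowledge)([probs, inputs])

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```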
Read full post

Symbolic Knowledge Extraction for Explainable Nutritional Recommenders

by Matteo Magnini, Giovanni Ciatto, Furkan Canturk, Reyhan Aydoğan, and Andrea Omicini

Abstract: Background and objective: This paper focuses on nutritional recommendation systems (RS), i.e., AI-powered automatic systems providing users with suggestions about what to eat to pursue their weight/body-shape goals. A trade-off among (potentially) conflicting requirements must be taken into account when designing these kinds of systems, including: (i) adherence to experts’ prescriptions, (ii) adherence to users’ tastes and preferences, and (iii) explainability of the whole recommendation process.
Read full post

A General-Purpose Protocol for Multi-Agent based Explanations

by Giovanni Ciatto, Matteo Magnini, Berk Buzcu, Reyhan Aydoğan, and Andrea Omicini

Abstract: Building on prior works on explanation negotiation protocols, this paper proposes a general-purpose protocol for multi-agent systems where recommender agents may need to provide explanations for their recommendations. The protocol specifies the roles and responsibilities of the explainee and the explainer agent and the types of information that should be exchanged between them to ensure a clear and effective explanation.
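As a toy rendition of the kind of exchange such a protocol standardises (all performatives, message contents, and names below are invented for illustration), the explainee challenges a recommendation until satisfied:

```python
# Toy explainee/explainer dialogue; not the paper's actual protocol.
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    performative: str   # e.g. "request", "recommend", "why", "explain", "accept"
    content: str

def explainer(inbox: Message) -> Message:
    """Answers challenges with explanations, everything else with a recommendation."""
    if inbox.performative == "why":
        return Message("explainer", "explain",
                       "high protein intake matches your stated fitness goal")
    return Message("explainer", "recommend", "grilled chicken salad")

dialogue = [Message("explainee", "request", "dinner suggestion")]
dialogue.append(explainer(dialogue[-1]))                   # recommendation
dialogue.append(Message("explainee", "why", dialogue[-1].content))
dialogue.append(explainer(dialogue[-1]))                   # explanation
dialogue.append(Message("explainee", "accept", ""))
for m in dialogue:
    print(f"{m.sender:>9} | {m.performative:>9} | {m.content}")
```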
Read full post

Metrics for Evaluating Explainable Recommender Systems

by Joris Hulstijn, Igor Tchappi, Amro Najjar, and Reyhan Aydoğan

Abstract: Recommender systems aim to support their users by reducing information overload so that they can make better decisions. Recommender systems must be transparent, so that users can form mental models of the system’s goals, internal state, and capabilities that are in line with its actual design. Explanations and transparent behaviour of the system should inspire trust and, ultimately, lead to more persuasive recommendations.
Read full post

Computational Accountability

by Joris Hulstijn

Abstract: Automated decision-making systems take decisions that matter. Some human or legal person remains responsible. Looking back, that person is accountable for the decisions made by the system, and may even be liable in case of damages. That puts constraints on the way in which decision-making systems are designed, and how they are deployed in organizations. In this paper, we analyze computational accountability in three steps.
Read full post

Conflict-based negotiation strategy for human-agent negotiation

by Mehmet Onur Keskin, Berk Buzcu, and Reyhan Aydoğan

Abstract: Day by day, human-agent negotiation becomes more and more vital for reaching a socially beneficial agreement when stakeholders need to make a joint decision. Developing agents that understand not only human preferences but also attitudes is a significant prerequisite for this kind of interaction. Studies on opponent modeling are predominantly based on automated negotiation and may yield good predictions after exchanging hundreds of offers.
Read full post

Towards Interactive Explanation-based Nutrition Virtual Coaching Systems

by Berk Buzcu, Melissa Tessa, Igor Tchappi, Amro Najjar, Joris Hulstijn, Davide Calvaresi, and Reyhan Aydoğan

Abstract: Awareness of healthy lifestyles is increasing, opening the door to personalized intelligent health coaching applications. A demand for more than mere suggestions and mechanistic interactions has driven attention to nutrition virtual coaching systems (NVC) as a bridge between human–machine interaction and recommender, informative, persuasive, and argumentation systems. NVC can rely on data-driven opaque mechanisms.
Read full post

Symbolic Knowledge Extraction and Injection with Sub-symbolic Predictors: A Systematic Literature Review

by Giovanni Ciatto, Federico Sabbatini, Andrea Agiollo, Matteo Magnini, and Andrea Omicini

Abstract: In this paper we focus on the issue of opacity of sub-symbolic machine-learning predictors by promoting two complementary activities—namely, symbolic knowledge extraction (SKE) and injection (SKI) from and into sub-symbolic predictors. We consider as symbolic any language that is intelligible and interpretable for both humans and computers. Accordingly, we propose general meta-models for both SKE and SKI, along with two taxonomies for the classification of SKE/SKI methods.
Read full post
