Expectation: Personalized Explainable Artificial Intelligence for Decentralized Agents with Heterogeneous Knowledge

by Davide Calvaresi, Giovanni Ciatto, Amro Najjar, Reyhan Aydoğan, Leon Van der Torre, Andrea Omicini, and Michael I. Schumacher Abstract Explainable AI (XAI) has emerged in recent years as a set of techniques and methodologies to interpret and explain machine learning (ML) predictors. To date, many initiatives have been proposed. Nevertheless, current research efforts mainly focus on methods tailored to specific ML tasks and algorithms, such as image classification and sentiment analysis.
Read full post

Shallow2Deep: Restraining Neural Networks Opacity through Neural Architecture Search

by Andrea Agiollo, Giovanni Ciatto, and Andrea Omicini Abstract Recently, the Deep Learning (DL) research community has focused on developing efficient and high-performing Neural Networks (NN). Meanwhile, the eXplainable AI (XAI) research community has focused on making Machine Learning (ML) and Deep Learning methods interpretable and transparent, seeking explainability. This work is a preliminary study on the applicability of Neural Architecture Search (NAS) – a sub-field of DL aiming at the automatic design of NN structures – to XAI.
Read full post

GridEx: An Algorithm for Knowledge Extraction from Black-Box Regressors

by Federico Sabbatini, Giovanni Ciatto, and Andrea Omicini Abstract Knowledge extraction methods are applied to ML-based predictors to attain explainable representations of their functioning when the lack of interpretable results constitutes a problem. Several algorithms have been proposed for knowledge extraction, mostly focusing on the extraction of either lists or trees of rules. Yet, most of them only support supervised learning – and, in particular, classification – tasks.
Read full post
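To make the general idea concrete: the sketch below is a naive, hypothetical illustration of hypercube-based rule extraction from a regressor, not the actual GridEx algorithm. It partitions the input space into a regular grid and attaches a constant output to each cell; function names and the sampling strategy are assumptions for illustration only.

```python
import numpy as np

def extract_grid_rules(black_box, lows, highs, splits=2, samples_per_cell=32):
    """Naive grid-based rule extraction (illustration only): partition the
    input space into `splits` intervals per dimension and approximate the
    black-box regressor with one constant-output rule per hypercube."""
    edges = [np.linspace(lo, hi, splits + 1) for lo, hi in zip(lows, highs)]
    rules = []
    for idx in np.ndindex(*([splits] * len(lows))):
        cell = [(edges[d][i], edges[d][i + 1]) for d, i in enumerate(idx)]
        # sample the cell and use the mean prediction as the rule's output
        pts = np.array([[np.random.uniform(a, b) for a, b in cell]
                        for _ in range(samples_per_cell)])
        rules.append((cell, float(black_box(pts).mean())))
    return rules

np.random.seed(0)
# toy black box: f(x) = 2 * x0 on the unit interval
rules = extract_grid_rules(lambda X: 2 * X[:, 0], lows=[0.0], highs=[1.0])
for cell, out in rules:
    print(cell, round(out, 2))
```

Each rule reads as "if the input falls in this hypercube, predict this constant"; GridEx itself refines such partitions adaptively, which this sketch does not attempt.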

Graph Neural Networks as the Copula Mundi between Logic and Machine Learning: A Roadmap

by Andrea Agiollo, Giovanni Ciatto, and Andrea Omicini Abstract Combining machine learning (ML) and computational logic (CL) is hard, mostly because of the inherently different ways in which they represent knowledge. In fact, while ML relies on fixed-size numeric representations leveraging vectors, matrices, or tensors of real numbers, CL relies on logic terms and clauses, which are unlimited in size and structure. Graph neural networks (GNN) are a recent addition to the ML toolbox, introduced to deal with graph-structured data in a sub-symbolic way.
Read full post
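A toy sketch of the bridge the roadmap envisions: assuming nested tuples as a stand-in for logic terms, a term can be flattened into the node and edge lists a GNN library would consume. The representation (one node per functor, constant, or variable; one edge per parent-argument relation) is an illustrative choice, not the paper's encoding.

```python
def term_to_graph(term, nodes=None, edges=None, parent=None):
    """Encode a nested logic term, e.g. ('f', ('g', 'X'), 'a') for the
    term f(g(X), a), as a labelled graph: one node per symbol and one
    directed edge from each functor to each of its arguments."""
    if nodes is None:
        nodes, edges = [], []
    label = term[0] if isinstance(term, tuple) else term
    node_id = len(nodes)
    nodes.append(label)
    if parent is not None:
        edges.append((parent, node_id))
    if isinstance(term, tuple):
        for arg in term[1:]:
            term_to_graph(arg, nodes, edges, node_id)
    return nodes, edges

nodes, edges = term_to_graph(('f', ('g', 'X'), 'a'))
print(nodes)  # ['f', 'g', 'X', 'a']
print(edges)  # [(0, 1), (1, 2), (0, 3)]
```

Unlike fixed-size tensors, this graph grows with the term, which is precisely why GNNs are an appealing fit for logic-structured inputs.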

On the Design of PSyKE: A Platform for Symbolic Knowledge Extraction

by Federico Sabbatini, Giovanni Ciatto, Roberta Calegari, and Andrea Omicini How to access: http://ceur-ws.org/Vol-2963/paper14.pdf Abstract A common practice in modern explainable AI is to post-hoc explain black-box machine learning (ML) predictors – such as neural networks – by extracting symbolic knowledge out of them, in the form of either rule lists or decision trees. By acting as a surrogate model, the extracted knowledge aims at revealing the inner workings of the black box, thus enabling its inspection, representation, and explanation.
Read full post
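PSyKE's actual extractors are described in the paper; as a minimal stand-in for the surrogate-model idea, one can search for the single threshold rule that best mimics a toy black-box classifier on sampled data. The function name, the depth-1 rule language, and the fidelity measure are all illustrative assumptions.

```python
import numpy as np

def extract_threshold_rule(black_box, X):
    """Post-hoc extraction of a one-split surrogate (a depth-1 'decision
    tree'): find the feature/threshold pair whose rule best reproduces
    the black box's predictions on the sample X (fidelity = agreement)."""
    y = black_box(X)
    best = None
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            pred = (X[:, f] > t).astype(int)
            acc = (pred == y).mean()
            if best is None or acc > best[0]:
                best = (acc, f, float(t))
    acc, f, t = best
    return f"if x[{f}] > {t:.3f} then class 1 else class 0 (fidelity={acc:.2f})"

rng = np.random.default_rng(0)
X = rng.uniform(size=(100, 2))
# toy black box whose decision actually depends only on the first feature
rule = extract_threshold_rule(lambda X: (X[:, 0] > 0.5).astype(int), X)
print(rule)
```

Real extractors such as those in PSyKE produce richer rule lists and trees, but the workflow is the same: query the black box, then fit an interpretable surrogate to its answers.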

2P-Kt: A Logic-Based Ecosystem for Symbolic AI

by Giovanni Ciatto, Roberta Calegari, and Andrea Omicini Abstract To date, logic-based technologies are built either on top of, or as extensions of, the Prolog language, mostly working as monolithic solutions tailored to specific inference procedures, unification mechanisms, or knowledge-representation techniques. Instead, to maximise their impact, logic-based technologies should support and enable the general-purpose exploitation of all the manifold contributions from logic programming. Accordingly, we present 2P-Kt, a reboot of the tuProlog project offering a general, extensible, and interoperable ecosystem for logic programming and symbolic AI.
Read full post

Towards cooperative argumentation for MAS: An actor-based approach

by Giuseppe Pisano, Roberta Calegari, and Andrea Omicini Abstract We discuss the problem of cooperative argumentation in multi-agent systems, focusing on the computational model. An actor-based model is proposed as a first step towards cooperative argumentation in multi-agent systems to tackle distribution issues—illustrating a preliminary fully-distributed version of the argumentation process completely based on message passing.
How to access: http://ceur-ws.org/Vol-2963/paper17.pdf
How to cite (BibTeX):
@inproceedings{distributedarg-woa2021,
  articleno = 12,
  author    = {Pisano, Giuseppe and Calegari, Roberta and Omicini, Andrea},
  booktitle = {WOA 2021 -- 22nd Workshop ``From Objects to Agents''},
  dblp      = {conf/woa/PisanoCO21},
  editor    = {Calegari, Roberta and Ciatto, Giovanni and Denti, Enrico and Omicini, Andrea and Sartor, Giovanni},
  iris      = {11585/834366},
  issn      = {1613-0073},
  keywords  = {Argumentation, MAS, cooperative argumentation, distributed argumentation process},
  location  = {Bologna, Italy},
  month     = oct,
  note      = {22nd Workshop ``From Objects to Agents'' (WOA 2021), Bologna, Italy, 1--3~} # sep # {~2021}
}
Read full post

Hypercube-Based Methods for Symbolic Knowledge Extraction: Towards a Unified Model

by Federico Sabbatini, Giovanni Ciatto, Roberta Calegari, and Andrea Omicini Abstract Symbolic knowledge-extraction (SKE) algorithms, proposed by the XAI community to obtain human-intelligible explanations for opaque machine learning predictors, are being studied and developed with growing interest, not least as a means to achieve believable interactions. However, choosing the most adequate extraction procedure is becoming more and more challenging as the number of methods available in the literature increases.
Read full post

On the Design of PSyKI: a Platform for Symbolic Knowledge Injection into Sub-Symbolic Predictors

by Matteo Magnini, Giovanni Ciatto, and Andrea Omicini Abstract A long-standing ambition in artificial intelligence is to integrate predictors' inductive features (i.e., learning from examples) with deductive capabilities (i.e., drawing inferences from prior symbolic knowledge). Many methods in the literature support the injection of prior symbolic knowledge into predictors, generally with the aim of attaining better (i.e., more effective or efficient w.r.t. predictive performance) predictors. However, to the best of our knowledge, runnable implementations of these algorithms are, in most cases, either mere proofs of concept or simply unavailable.
Read full post

KINS: Knowledge Injection via Network Structuring

by Matteo Magnini, Giovanni Ciatto, and Andrea Omicini Abstract We propose a novel method to inject symbolic knowledge in the form of Datalog formulæ into neural networks (NN), called KINS (Knowledge Injection via Network Structuring). The idea behind our method is to extend the NN's internal structure with ad-hoc layers built out of the injected symbolic knowledge. KINS neither constrains the NN to any specific architecture nor requires the logic formulæ to be ground.
Read full post
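A minimal numpy sketch of the network-structuring idea, heavily simplified and not the actual KINS implementation: a mock rule is compiled into an extra layer whose fuzzy truth value is appended to the network's inputs, so the injected knowledge becomes one more feature the weights can attend to. The rule, the fuzzy encoding, and the hand-picked weights are all assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def knowledge_layer(x):
    """Hypothetical ad-hoc layer encoding the mock rule
    'class(X) :- x1 > 0, x2 > 0' as a fuzzy truth value in [0, 1]:
    conjunction is approximated by the product of per-literal sigmoids."""
    return sigmoid(x[:, 0]) * sigmoid(x[:, 1])

def forward(x, w):
    """Toy one-layer network whose input is extended with the knowledge
    layer's output before the linear map is applied."""
    extended = np.column_stack([x, knowledge_layer(x)])
    return extended @ w

x = np.array([[2.0, 3.0], [-2.0, -3.0]])
w = np.array([0.1, 0.1, 1.0])  # last weight attends to the injected rule
print(forward(x, w))
```

Because the rule enters as structure rather than as a loss term, a row satisfying the rule scores markedly higher than one violating it, even with tiny weights on the raw features.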

Semantic Web-Based Interoperability for Intelligent Agents with PSyKE

by Federico Sabbatini, Giovanni Ciatto, and Andrea Omicini Abstract Modern distributed systems require communicating agents to agree on a shared, formal semantics for the data they exchange and operate upon. The Semantic Web offers tools to encode semantics in the form of ontologies, where data is represented in the form of knowledge graphs (KG). Applying such tools to intelligent agents equipped with machine learning (ML) capabilities is of particular interest, as it may enable a higher degree of interoperability among heterogeneous agents.
Read full post

A view to a KILL: Knowledge Injection via Lambda Layer

by Matteo Magnini, Giovanni Ciatto, and Andrea Omicini Abstract We propose KILL (Knowledge Injection via Lambda Layer) as a novel method for the injection of symbolic knowledge into neural networks (NN), allowing data scientists to control what the network should (not) learn. Unlike other similar approaches, our method does not (i) require ground input formulae, (ii) impose any constraint on the NN undergoing injection, or (iii) affect the loss function of the NN.
Read full post
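To give a rough flavour of the lambda-layer idea (heavily simplified, and not the paper's actual formulation): a training-time layer rescales the network's outputs by how much they violate the injected knowledge, so the standard, untouched loss function naturally penalises violations. The rule, the violation measure, and the rescaling are illustrative assumptions.

```python
import numpy as np

def violation(x, y_pred):
    """Mock penalty: degree to which predictions violate the toy rule
    'if x[0] > 0 then the positive class should be predicted'."""
    return np.where(x[:, 0] > 0, 1 - y_pred, 0.0)

def kill_lambda_layer(x, y_pred):
    """Sketch of a lambda layer appended only during training: outputs
    that violate the injected knowledge are inflated, steering learning
    without modifying the loss function itself (KILL idea, simplified)."""
    return y_pred * (1 + violation(x, y_pred))

x = np.array([[1.0], [-1.0]])
y_pred = np.array([0.2, 0.9])  # first prediction violates the rule
print(kill_lambda_layer(x, y_pred))
```

At inference time the lambda layer is simply dropped, leaving the underlying network architecture untouched, which is consistent with constraints (ii) and (iii) above.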

Symbolic Knowledge Extraction for Explainable Nutritional Recommenders

by Matteo Magnini, Giovanni Ciatto, Furkan Canturk, Reyhan Aydoğan, and Andrea Omicini Abstract Background and objective: This paper focuses on nutritional recommendation systems (RS), i.e. AI-powered automatic systems providing users with suggestions about what to eat to pursue their weight/body-shape goals. A trade-off among (potentially) conflicting requirements must be taken into account when designing these kinds of systems, including: (i) adherence to experts’ prescriptions, (ii) adherence to users’ tastes and preferences, and (iii) explainability of the whole recommendation process.
Read full post

A General-Purpose Protocol for Multi-Agent based Explanations

by Giovanni Ciatto, Matteo Magnini, Berk Buzcu, Reyhan Aydoğan, and Andrea Omicini Abstract Building on prior work on explanation negotiation protocols, this paper proposes a general-purpose protocol for multi-agent systems in which recommender agents may need to provide explanations for their recommendations. The protocol specifies the roles and responsibilities of the explainee and the explainer agents, as well as the types of information that should be exchanged between them to ensure a clear and effective explanation.
Read full post
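The explainer/explainee exchange can be sketched as a message loop; the sketch below is a bare-bones illustration of that interaction pattern, not the protocol from the paper. The message types (`recommend`, `explain`, `accept`), the food-recommendation content, and the satisfaction criterion are all hypothetical.

```python
def explainer(query):
    """Recommender agent: yields a recommendation, then progressively
    more detailed explanations on demand (contents are mock data)."""
    explanations = ["it matches your dietary goals",
                    "it is rich in the nutrients you lack"]
    yield {"type": "recommend", "item": "pasta"}
    for why in explanations:
        yield {"type": "explain", "content": why}

def dialogue():
    """Explainee agent: receives the recommendation, keeps requesting
    explanations until one satisfies it, then accepts."""
    transcript = []
    channel = explainer("what should I eat?")
    transcript.append(next(channel))        # receive the recommendation
    for msg in channel:                     # implicitly ask "why?" again
        transcript.append(msg)
        if "nutrients" in msg["content"]:   # mock satisfaction criterion
            transcript.append({"type": "accept"})
            break
    return transcript

for msg in dialogue():
    print(msg)
```

Generators stand in here for asynchronous message passing; in a real MAS each role would be a separate agent exchanging speech acts over a channel.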

Symbolic Knowledge Extraction and Injection with Sub-symbolic Predictors: A Systematic Literature Review

by Giovanni Ciatto, Federico Sabbatini, Andrea Agiollo, Matteo Magnini, and Andrea Omicini Abstract In this paper we address the opacity of sub-symbolic machine-learning predictors by promoting two complementary activities, namely symbolic knowledge extraction (SKE) and symbolic knowledge injection (SKI) from and into sub-symbolic predictors. We consider as symbolic any language that is intelligible and interpretable for both humans and computers. Accordingly, we propose general meta-models for both SKE and SKI, along with two taxonomies for the classification of SKE/SKI methods.
Read full post