Expectation: Personalized Explainable Artificial Intelligence for Decentralized Agents with Heterogeneous Knowledge

by Davide Calvaresi, Giovanni Ciatto, Amro Najjar, Reyhan Aydoğan, Leon Van der Torre, Andrea Omicini, and Michael I. Schumacher Abstract Explainable AI (XAI) has emerged in recent years as a set of techniques and methodologies to interpret and explain machine learning (ML) predictors. To date, many initiatives have been proposed. Nevertheless, current research efforts mainly focus on methods tailored to specific ML tasks and algorithms, such as image classification and sentiment analysis.
Read full post

Shallow2Deep: Restraining Neural Networks Opacity through Neural Architecture Search

by Andrea Agiollo, Giovanni Ciatto, and Andrea Omicini Abstract Recently, the Deep Learning (DL) research community has focused on developing efficient and highly performing Neural Networks (NN). Meanwhile, the eXplainable AI (XAI) research community has focused on making Machine Learning (ML) and Deep Learning methods interpretable and transparent, seeking explainability. This work is a preliminary study on the applicability of Neural Architecture Search (NAS) (a sub-field of DL concerned with the automatic design of NN structures) in XAI.
Read full post

GridEx: An Algorithm for Knowledge Extraction from Black-Box Regressors

by Federico Sabbatini, Giovanni Ciatto, and Andrea Omicini Abstract Knowledge extraction methods are applied to ML-based predictors to attain explainable representations of their functioning when the lack of interpretable results constitutes a problem. Several algorithms have been proposed for knowledge extraction, mostly focusing on the extraction of either lists or trees of rules. Yet, most of them only support supervised learning – and, in particular, classification – tasks.
Read full post
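
As a flavour of what grid-based extraction can look like, here is a minimal Python sketch: a black-box regressor's input space is split into equal-width cells, and each non-empty cell yields one constant-output rule fitted on the black box's own predictions. It is a deliberate simplification under assumed settings (two features in [0, 1], a fixed number of partitions), not the actual GridEx algorithm or its implementation.

```python
# Illustrative grid-based rule extraction from a black-box regressor
# (hypothetical simplification, not the actual GridEx code).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2

black_box = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                         random_state=0).fit(X, y)

# Partition each input dimension into equal-width intervals and emit
# one constant-output rule per non-empty cell of the resulting grid.
bins = 3
edges = np.linspace(0, 1, bins + 1)
rules = []
for i in range(bins):
    for j in range(bins):
        mask = ((X[:, 0] >= edges[i]) & (X[:, 0] <= edges[i + 1]) &
                (X[:, 1] >= edges[j]) & (X[:, 1] <= edges[j + 1]))
        if mask.any():
            # The rule's output is the black box's mean prediction in the cell.
            output = black_box.predict(X[mask]).mean()
            rules.append(((edges[i], edges[i + 1]),
                          (edges[j], edges[j + 1]), output))

for (lo0, hi0), (lo1, hi1), out in rules:
    print(f"if x0 in [{lo0:.2f}, {hi0:.2f}] and x1 in [{lo1:.2f}, {hi1:.2f}] "
          f"then y = {out:.3f}")
```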

Graph Neural Networks as the Copula Mundi between Logic and Machine Learning: A Roadmap

by Andrea Agiollo, Giovanni Ciatto, and Andrea Omicini Abstract Combining machine learning (ML) and computational logic (CL) is hard, mostly because of the inherently different ways in which they represent knowledge. In fact, while ML relies on fixed-size numeric representations leveraging vectors, matrices, or tensors of real numbers, CL relies on logic terms and clauses—which are unlimited in size and structure. Graph neural networks (GNN) are a novelty in the ML world, introduced for dealing with graph-structured data in a sub-symbolic way.
Read full post

On the Design of PSyKE: A Platform for Symbolic Knowledge Extraction

by Federico Sabbatini, Giovanni Ciatto, Roberta Calegari, and Andrea Omicini How to access URL: http://ceur-ws.org/Vol-2963/paper14.pdf Abstract A common practice in modern explainable AI is to post-hoc explain black-box machine learning (ML) predictors – such as neural networks – by extracting symbolic knowledge out of them, in the form of either rule lists or decision trees. By acting as a surrogate model, the extracted knowledge aims at revealing the inner workings of the black box, thus enabling its inspection, representation, and explanation.
Read full post
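
The surrogate-model idea can be illustrated with a few lines of scikit-learn: a decision tree is trained to mimic a neural classifier's predictions and then read back as rules. This is a generic pedagogical-extraction sketch, not PSyKE's actual API.

```python
# Minimal post-hoc extraction sketch: a transparent surrogate mimics
# an opaque classifier (illustrative, not the PSyKE platform itself).
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
X, y = iris.data, iris.target

# The black box we want to explain.
black_box = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                          random_state=0).fit(X, y)

# Pedagogical extraction: the surrogate tree is fitted on the black
# box's labels, so its rules describe the predictor, not the data.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print(export_text(surrogate, feature_names=list(iris.feature_names)))
print("fidelity:", (surrogate.predict(X) == black_box.predict(X)).mean())
```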

2P-Kt: A Logic-Based Ecosystem for Symbolic AI

by Giovanni Ciatto, Roberta Calegari, and Andrea Omicini Abstract To date, logic-based technologies are built either on top of or as extensions of the Prolog language, mostly working as monolithic solutions tailored upon specific inference procedures, unification mechanisms, or knowledge representation techniques. Instead, to maximise their impact, logic-based technologies should support and enable the general-purpose exploitation of all the manifold contributions from logic programming. Accordingly, we present 2P-Kt, a reboot of the tuProlog project offering a general, extensible, and interoperable ecosystem for logic programming and symbolic AI.
Read full post
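
2P-Kt itself is a Kotlin ecosystem; purely as a taste of the core mechanism it generalises, the following hypothetical Python sketch implements term unification, the engine at the heart of any Prolog-like solver (occurs-check omitted for brevity).

```python
# Toy first-order unification: terms are tuples (functor, args...),
# variables are capitalised strings, constants are lowercase strings.
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Follow variable bindings until a free variable or non-variable term.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    subst = {} if subst is None else subst
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None  # clash between distinct constants or functors

# unify f(X, b) with f(a, Y)  =>  {'X': 'a', 'Y': 'b'}
print(unify(("f", "X", "b"), ("f", "a", "Y")))
```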

Towards cooperative argumentation for MAS: an Actor-based approach

by Giuseppe Pisano, Roberta Calegari, and Andrea Omicini Abstract We discuss the problem of cooperative argumentation in multi-agent systems, focusing on the computational model. An actor-based model is proposed as a first step towards cooperative argumentation in multi-agent systems to tackle distribution issues—illustrating a preliminary fully-distributed version of the argumentation process completely based on message passing. How to access URL: http://ceur-ws.org/Vol-2963/paper17.pdf
Read full post

Hypercube-Based Methods for Symbolic Knowledge Extraction: Towards a Unified Model

by Federico Sabbatini, Giovanni Ciatto, Roberta Calegari, and Andrea Omicini Abstract Symbolic knowledge-extraction (SKE) algorithms proposed by the XAI community to obtain human-intelligible explanations for opaque machine learning predictors are currently being studied and developed with growing interest, also as a means to achieve believability in human-machine interaction. However, choosing the most adequate extraction procedure amongst the many existing in the literature is becoming more and more challenging, as the amount of available methods increases.
Read full post

On the Design of PSyKI: a Platform for Symbolic Knowledge Injection into Sub-Symbolic Predictors

by Matteo Magnini, Giovanni Ciatto, and Andrea Omicini Abstract A long-standing ambition in artificial intelligence is to integrate predictors' inductive features (i.e., learning from examples) with deductive capabilities (i.e., drawing inferences from prior symbolic knowledge). Many methods in the literature support the injection of prior symbolic knowledge into predictors, generally with the purpose of attaining better (i.e., more effective or efficient w.r.t. predictive performance) predictors. However, to the best of our knowledge, running implementations of these algorithms are currently, in most cases, either proofs of concept or unavailable.
Read full post

KINS: Knowledge Injection via Network Structuring

by Matteo Magnini, Giovanni Ciatto, and Andrea Omicini Abstract We propose a novel method to inject symbolic knowledge in the form of Datalog formulæ into neural networks (NN), called KINS (Knowledge Injection via Network Structuring). The idea behind our method is to extend the NN internal structure with ad-hoc layers built out of the injected symbolic knowledge. KINS does not constrain the NN to any specific architecture, nor does it require logic formulæ to be ground.
Read full post
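
A hypothetical PyTorch sketch of the structuring idea: a hand-written module evaluates a fuzzified rule on the input, and its activation is concatenated to the learned hidden features before the output layer. The rule, thresholds, and layer sizes below are invented for illustration; this is not the published KINS implementation.

```python
# Structure-based knowledge injection, sketched: rule activations are
# wired into the network as extra features (illustrative only).
import torch
import torch.nn as nn

class RuleLayer(nn.Module):
    """Fuzzy evaluation of an invented rule: class_a :- x0 > 0.5, x1 < 0.2."""
    def forward(self, x):
        truth = (torch.sigmoid(10 * (x[:, 0] - 0.5)) *
                 torch.sigmoid(10 * (0.2 - x[:, 1])))
        return truth.unsqueeze(1)  # shape (batch, 1)

class KinsLikeNet(nn.Module):
    def __init__(self, in_features=4, hidden=16, classes=3):
        super().__init__()
        self.rule = RuleLayer()
        self.body = nn.Sequential(nn.Linear(in_features, hidden), nn.ReLU())
        # The head sees both learned features and rule activations.
        self.head = nn.Linear(hidden + 1, classes)

    def forward(self, x):
        return self.head(torch.cat([self.body(x), self.rule(x)], dim=1))

net = KinsLikeNet()
print(net(torch.rand(8, 4)).shape)  # torch.Size([8, 3])
```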

Semantic Web-Based Interoperability for Intelligent Agents with PSyKE

by Federico Sabbatini, Giovanni Ciatto, and Andrea Omicini Abstract Modern distributed systems require communicating agents to agree on a shared, formal semantics for the data they exchange and operate upon. The Semantic Web offers tools to encode semantics in the form of ontologies, where data is represented in the form of knowledge graphs (KG). Applying such tools to intelligent agents equipped with machine learning (ML) capabilities is of particular interest, as it may enable a higher degree of interoperability among heterogeneous agents.
Read full post
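
A minimal rdflib sketch of the kind of grounding the paper advocates: an agent publishes one extracted rule as triples in a shared vocabulary, so that peers relying on the same ontology can parse and reuse it. The `ex` namespace and its property names are assumptions for illustration, not an API from the paper.

```python
# Encoding an extracted rule as a tiny knowledge graph (illustrative).
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/agents#")  # hypothetical shared vocabulary
g = Graph()
g.bind("ex", EX)

# One classification rule, published as triples a peer agent can consume.
rule = EX.rule1
g.add((rule, RDF.type, EX.ClassificationRule))
g.add((rule, EX.premise, Literal("petal_length <= 2.45")))
g.add((rule, EX.conclusion, Literal("setosa")))

print(g.serialize(format="turtle"))
```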

A view to a KILL: Knowledge Injection via Lambda Layer

by Matteo Magnini, Giovanni Ciatto, and Andrea Omicini Abstract We propose KILL (Knowledge Injection via Lambda Layer) as a novel method for the injection of symbolic knowledge into neural networks (NN), allowing data scientists to control what the network should (not) learn. Unlike other similar approaches, our method does not (i) require ground input formulae, (ii) impose any constraint on the NN undergoing injection, or (iii) affect the loss function of the NN.
Read full post
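
To make the lambda-layer idea concrete, here is a hypothetical PyTorch sketch: during training an extra layer perturbs the logits of predictions that violate a symbolic constraint, so the standard loss grows on violations while its definition stays untouched; at inference the layer is bypassed. The constraint and its scaling are invented for illustration and do not reproduce the published KILL method.

```python
# Loss-preserving knowledge injection via an output-side layer (sketch).
import torch
import torch.nn as nn

def knowledge_penalty(x, logits):
    # Invented constraint: inputs with x0 > 0.5 should not get class 0.
    violation = torch.sigmoid(10 * (x[:, 0] - 0.5))  # truth degree in (0, 1)
    penalty = torch.zeros_like(logits)
    # Raise the forbidden class's logit on violating inputs, so the usual
    # cross-entropy grows and gradients push the body away from class 0.
    penalty[:, 0] = violation
    return logits + penalty

class KillLikeNet(nn.Module):
    def __init__(self, training_mode=True):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
        self.inject = training_mode  # drop the extra layer at inference

    def forward(self, x):
        logits = self.body(x)
        return knowledge_penalty(x, logits) if self.inject else logits

net = KillLikeNet()
print(net(torch.rand(8, 4)).shape)  # torch.Size([8, 3])
```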

Symbolic Knowledge Extraction for Explainable Nutritional Recommenders

by Matteo Magnini, Giovanni Ciatto, Furkan Canturk, Reyhan Aydoğan, and Andrea Omicini Abstract Background and objective This paper focuses on nutritional recommendation systems (RS), i.e. AI-powered automatic systems providing users with suggestions about what to eat to pursue their weight/body shape goals. A trade-off among (potentially) conflicting requirements must be taken into account when designing these kinds of systems, including: (i) adherence to experts’ prescriptions, (ii) adherence to users’ tastes and preferences, and (iii) explainability of the whole recommendation process.
Read full post
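
The trade-off among points (i)-(iii) can be made concrete with a toy scorer: each candidate meal receives a convex combination of expert adherence and user preference, so the weights themselves explain the compromise. All numbers and the weighting policy below are invented for illustration.

```python
# Toy trade-off scorer for a nutritional recommender (illustrative).
meals = {
    "salad":  {"expert": 0.9, "preference": 0.3},
    "pizza":  {"expert": 0.2, "preference": 0.9},
    "salmon": {"expert": 0.8, "preference": 0.6},
}
w_expert, w_pref = 0.6, 0.4  # assumed policy: health weighs more than taste

def score(meal):
    # Convex combination: the weights make the compromise explicit.
    return w_expert * meal["expert"] + w_pref * meal["preference"]

best = max(meals, key=lambda name: score(meals[name]))
print(best, {name: round(score(m), 2) for name, m in meals.items()})
# salmon {'salad': 0.66, 'pizza': 0.48, 'salmon': 0.72}
```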

A General-Purpose Protocol for Multi-Agent based Explanations

by Giovanni Ciatto, Matteo Magnini, Berk Buzcu, Reyhan Aydoğan, and Andrea Omicini Abstract Building on prior works on explanation negotiation protocols, this paper proposes a general-purpose protocol for multi-agent systems where recommender agents may need to provide explanations for their recommendations. The protocol specifies the roles and responsibilities of the explainee and explainer agents, as well as the types of information that should be exchanged between them to ensure a clear and effective explanation.
Read full post
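
As a sketch of the kind of exchange such a protocol prescribes (the actual performatives in the paper may differ), the following hypothetical Python messages let an explainee ask the explainer why an item was recommended:

```python
# Toy explainee/explainer exchange (illustrative message types only).
from dataclasses import dataclass

@dataclass
class Recommend:        # explainer -> explainee
    item: str

@dataclass
class WhyQuery:         # explainee -> explainer
    item: str

@dataclass
class Explain:          # explainer -> explainee
    item: str
    reason: str

class ExplainerAgent:
    def recommend(self):
        return Recommend("salmon")

    def handle(self, msg):
        if isinstance(msg, WhyQuery):
            return Explain(msg.item,
                           "meets dietary constraints and matches past preferences")

explainer = ExplainerAgent()
rec = explainer.recommend()
print(explainer.handle(WhyQuery(rec.item)))
```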

Symbolic Knowledge Extraction and Injection with Sub-symbolic Predictors: A Systematic Literature Review

by Giovanni Ciatto, Federico Sabbatini, Andrea Agiollo, Matteo Magnini, and Andrea Omicini Abstract In this paper we focus on the issue of opacity of sub-symbolic machine-learning predictors by promoting two complementary activities—namely, symbolic knowledge extraction (SKE) and injection (SKI) from and into sub-symbolic predictors. We consider as symbolic any language that is intelligible and interpretable for both humans and computers. Accordingly, we propose general meta-models for both SKE and SKI, along with two taxonomies for the classification of SKE/SKI methods.
Read full post

GNN2GNN: Graph Neural Networks to Generate Neural Networks

by Andrea Agiollo and Andrea Omicini Abstract The success of neural networks (NNs) is tightly linked with their architectural design—a complex problem by itself. We here introduce a novel framework leveraging Graph Neural Networks to Generate Neural Networks (GNN2GNN), where powerful NN architectures can be learned out of a set of available architecture-performance pairs. GNN2GNN relies on a three-way adversarial training of GNNs to optimise a generator model capable of producing predictions about powerful NN architectures.
Read full post

Position Paper: On the Role of Abductive Reasoning in Semantic Image Segmentation

by Andrea Rafanelli, Stefania Costantini, and Andrea Omicini Abstract This position paper provides insights aiming at resolving the most pressing needs and issues of computer vision algorithms. Specifically, these problems relate to the scarcity of data, the inability of such algorithms to adapt to never-seen-before conditions, and the challenge of developing explainable and trustworthy algorithms. This work proposes the incorporation of reasoning systems, and in particular of abductive reasoning, into image segmentation algorithms as a potential solution to the aforementioned issues.
Read full post

Measuring Trustworthiness in Neuro-Symbolic Integration

by Andrea Agiollo and Andrea Omicini Abstract Neuro-symbolic integration of symbolic and subsymbolic techniques represents a fast-growing AI trend aimed at mitigating the issues of neural networks in terms of decision processes, reasoning, and interpretability. Several state-of-the-art neuro-symbolic approaches aim at improving performance, most of them focusing on proving their effectiveness in terms of raw predictive performance and/or reasoning capabilities. Meanwhile, few efforts have been devoted to increasing model trustworthiness, interpretability, and efficiency – mostly due to the complexity of effectively measuring improvements in terms of trustworthiness and interpretability.
Read full post

The Quarrel of Local Post-hoc Explainers for Moral Values Classification in Natural Language Processing

by Andrea Agiollo, Luciano C. Siebert, Pradeep K. Murukannaiah, and Andrea Omicini Abstract Although popular and effective, large language models (LLM) are characterised by a performance vs. transparency trade-off that hinders their applicability to sensitive scenarios. This is the main reason behind many approaches focusing on local post-hoc explanations recently proposed by the XAI community. However, to the best of our knowledge, a thorough comparison among available explainability techniques is currently missing, mainly due to the lack of a general metric to measure their benefits.
Read full post

Peer-Reviewed Federated Learning

by Mattia Passeri, Andrea Agiollo, and Andrea Omicini Abstract While representing the de-facto framework for enabling distributed training of Machine Learning models, Federated Learning (FL) still suffers convergence issues when non-Independent and Identically Distributed (non-IID) data are considered. In this context, local model optimisation on different data distributions generates dissimilar updates, which are difficult to aggregate and translate into sub-optimal convergence. To tackle these issues, we propose Peer-Reviewed Federated Learning (PRFL), an extension of the traditional FL training process inspired by the peer-review procedure common in the academic field, where model updates are reviewed by several other clients in the federation before being aggregated at the server side.
Read full post
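
A minimal NumPy sketch of review-weighted aggregation in the spirit of PRFL (a simplification, not the paper's exact scheme): each client's update is scored by its peers, and the server weights updates by their average review score instead of plainly averaging them.

```python
# Review-weighted aggregation of client updates (illustrative).
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 4, 5
updates = rng.normal(size=(n_clients, dim))  # local model deltas

# reviews[i, j]: score client j assigns to client i's update, e.g. the
# loss reduction observed on j's own validation data (higher = better).
reviews = rng.uniform(0, 1, size=(n_clients, n_clients))
np.fill_diagonal(reviews, np.nan)            # no self-review

scores = np.nanmean(reviews, axis=1)         # average peer score per client
weights = scores / scores.sum()
aggregated = (weights[:, None] * updates).sum(axis=0)
print("weights:", np.round(weights, 2))
print("aggregated update:", np.round(aggregated, 3))
```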

Towards a Unified Model for Symbolic Knowledge Extraction with Hypercube-Based Methods

by Federico Sabbatini, Giovanni Ciatto, Roberta Calegari, and Andrea Omicini Abstract The XAI community is currently studying and developing symbolic knowledge-extraction (SKE) algorithms as a means to produce human-intelligible explanations for black-box machine learning predictors, so as to achieve believability in human-machine interaction. However, many extraction procedures exist in the literature, and choosing the most adequate one is increasingly cumbersome, as novel methods keep on emerging. Challenges arise from the fact that SKE algorithms are commonly defined based on theoretical assumptions that typically hinder practical applicability.
Read full post

Knowledge injection of Datalog rules via Neural Network Structuring with KINS

by Matteo Magnini, Giovanni Ciatto, and Andrea Omicini Abstract We propose a novel method to inject symbolic knowledge in the form of Datalog formulæ into neural networks (NN), called Knowledge Injection via Network Structuring (KINS). The idea behind our method is to extend the NN internal structure with ad-hoc layers built out of the injected symbolic knowledge. KINS does not constrain the NN to any specific architecture, nor does it require logic formulæ to be ground.
Read full post

EneA-FL: Energy-aware Orchestration for Serverless Federated Learning

by Andrea Agiollo, Paolo Bellavista, Matteo Mendula, and Andrea Omicini Abstract Federated Learning (FL) represents the de-facto standard paradigm for enabling distributed learning over multiple clients in real-world scenarios. Despite the great strides reached in terms of accuracy and privacy awareness, the real adoption of FL in real-world scenarios, and in particular in industrial deployment environments, is still an open challenge. This is mainly due to privacy constraints and to the additional complexity stemming from the set of hyperparameters to tune when employing AI techniques on bandwidth-, computing-, and energy-constrained nodes.
Read full post
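
One ingredient of energy-aware orchestration can be sketched as a greedy, knapsack-style client selection: pick the workers maximising expected contribution per joule under an energy budget. The utilities, energy costs, and budget below are invented for illustration; the actual EneA-FL orchestrator is considerably richer.

```python
# Greedy energy-aware client selection (illustrative, invented numbers).
clients = [
    {"id": "raspberry", "utility": 0.4, "energy_j": 5.0},
    {"id": "jetson",    "utility": 0.8, "energy_j": 12.0},
    {"id": "server",    "utility": 0.9, "energy_j": 40.0},
    {"id": "phone",     "utility": 0.5, "energy_j": 4.0},
]
budget_j = 20.0

# Rank by utility-per-joule, then pick greedily under the energy budget.
chosen, spent = [], 0.0
for c in sorted(clients, key=lambda c: c["utility"] / c["energy_j"],
                reverse=True):
    if spent + c["energy_j"] <= budget_j:
        chosen.append(c["id"])
        spent += c["energy_j"]
print(chosen, spent)  # ['phone', 'raspberry'] 9.0
```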

From Large Language Models to Small Logic Programs: Building Global Explanations from Disagreeing Local Post-hoc Explainers

by Andrea Agiollo, Luciano Cavalcante Siebert, Pradeep Kumar Murukannaiah, and Andrea Omicini Abstract The expressive power and effectiveness of large language models (LLMs) is going to increasingly push intelligent agents towards sub-symbolic models for natural language processing (NLP) tasks in human–agent interaction. However, LLMs are characterised by a performance vs. transparency trade-off that hinders their applicability to such sensitive scenarios. This is the main reason behind many approaches focusing on local post-hoc explanations, recently proposed by the XAI community in the NLP realm.
Read full post