Explainable AI (XAI) has recently emerged as a set of techniques for explaining machine learning (ML) models. The intended recipients (explainees) are humans or other intelligent virtual entities. Transparency, trust, and debugging are the main needs calling for XAI. However, in real-world settings, systems are distributed, data are heterogeneous, the “system” knowledge is bounded, and privacy concerns are subject to variable constraints. Current XAI approaches cannot cope with such requirements.
HES-SO: University of Applied Sciences and Arts Western Switzerland
UNIBO: Alma Mater Studiorum Università di Bologna
UNILU: University of Luxembourg
OZU: Özyeğin University
LIST: Luxembourg Institute of Science and Technology
The team has multidisciplinary competences, with Multi-Agent Systems as the common thread.
HES-SO People (from Switzerland)
Prof. Michael I. Schumacher, Full Professor at HES-SO
Dr. Davide Calvaresi, Senior Researcher at HES-SO
Dr. Jean-Paul Calbimonte, Senior Researcher at HES-SO
Victor Hugo Contreras Ordonez
Software Tools
PSyKE: a Python library for the extraction of symbolic knowledge from ML predictors (the general extraction idea is sketched right after this list).
PSyKI: a Python library for the injection of symbolic knowledge into ML predictors.
DEXiRE: a Python library for rule extraction from Deep Learning models.
SBG: the Synthetic Behavioral Generator, a tool simulating user context and recipe interactions.
Pro-DEXiRE: a Python library that complements DEXiRE’s rule-based explanations with probabilistic reasoning.
Datasets
Recipes dataset: a dataset of 7000 recipes collected by querying the GPT API.
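As a rough, generic illustration of the extraction idea behind PSyKE and DEXiRE (this sketch uses scikit-learn only and is not the actual API of either library), one can query an opaque predictor and fit a shallow, readable surrogate on its answers:

```python
# Generic sketch of symbolic knowledge extraction via a surrogate model.
# NOTE: this is NOT the PSyKE/DEXiRE API; it only illustrates the idea.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. Train an opaque ("black-box") predictor.
black_box = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
black_box.fit(X_train, y_train)

# 2. Query the black box and fit an interpretable surrogate on its answers.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# 3. Read the surrogate as symbolic (rule-like) knowledge.
print(export_text(surrogate, feature_names=list(X.columns)))

# Fidelity: how often the surrogate mimics the black box on unseen data.
print("fidelity:", (surrogate.predict(X_test) == black_box.predict(X_test)).mean())
```

The printed tree can be read as a rule list, and the fidelity score indicates how faithfully the surrogate mimics the black box.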
Deliverables
[D1.4] Data Management Plan (DMP)
[D2.1] Tech report on symbolic knowledge extraction and injection
[D2.2] Scientific paper on symbolic knowledge extraction and injection
[D2.3] Software libraries supporting extraction and injection
[D3.1] Technical report detailing the developed models and data integration
[D3.2a] Scientific paper focusing on heterogeneous data integration
[D3.2b] Scientific papers focusing on conflict resolution
[D4.1] Technical report detailing the developed user model and agent-based profiling
[D4.
by Davide Calvaresi, Giovanni Ciatto, Amro Najjar, Reyhan Aydoğan, Leon Van der Torre, Andrea Omicini, and Michael I. Schumacher
Abstract Explainable AI (XAI) has emerged in recent years as a set of techniques and methodologies to interpret and explain machine learning (ML) predictors. To date, many initiatives have been proposed. Nevertheless, current research efforts mainly focus on methods tailored to specific ML tasks and algorithms, such as image classification and sentiment analysis.
by Giovanni Ciatto, Amro Najjar, Jean-Paul Calbimonte, and Davide Calvaresi
Abstract Since their appearance, computer programs have embodied discipline and structured approaches and methodologies. Yet, to this day, equipping machines with imaginative and creative capabilities remains one of the most challenging and fascinating goals we pursue. Intelligent software agents can behave intelligently in well-defined scenarios, relying on Machine Learning (ML), symbolic reasoning, and their developers' capability of tailoring smart behaviors on the particular application domain(s).
by Andrea Agiollo, Giovanni Ciatto, and Andrea Omicini
Abstract Recently, the Deep Learning (DL) research community has focused on developing efficient and highly performing Neural Networks (NN). Meanwhile, the eXplainable AI (XAI) research community has focused on making Machine Learning (ML) and Deep Learning methods interpretable and transparent, seeking explainability. This work is a preliminary study on the applicability of Neural Architecture Search (NAS) (a sub-field of DL looking for automatic design of NN structures) in XAI.
by Giovanni Ciatto, Matteo Castigliò, and Roberta Calegari
Abstract In this paper we address the problem of hybridising symbolic and sub-symbolic approaches in artificial intelligence, with the purpose of creating flexible and data-driven systems which are simultaneously comprehensible and capable of automated learning. In particular, we propose a logic API for supervised machine learning, enabling logic programmers to exploit neural networks, among others, in their programs.
by Federico Sabbatini, Giovanni Ciatto, and Andrea Omicini
Abstract Knowledge extraction methods are applied to ML-based predictors to attain explainable representations of their functioning when the lack of interpretable results constitutes a problem. Several algorithms have been proposed for knowledge extraction, mostly focusing on the extraction of either lists or trees of rules. Yet, most of them only support supervised learning – and, in particular, classification – tasks.
by Andrea Agiollo, Giovanni Ciatto, and Andrea Omicini
Abstract Combining machine learning (ML) and computational logic (CL) is hard, mostly because of the inherently different ways in which they represent knowledge. In fact, while ML relies on fixed-size numeric representations leveraging vectors, matrices, or tensors of real numbers, CL relies on logic terms and clauses, which are unlimited in size and structure. Graph neural networks (GNN) are a novelty in the ML world introduced for dealing with graph-structured data in a sub-symbolic way.
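To make the gap concrete, the toy sketch below shows one hypothetical way a logic fact could be encoded as graph-structured data suitable for a GNN; the encoding is an illustrative assumption, not the scheme proposed in the paper.

```python
# Hypothetical encoding of a logic term as a directed graph,
# so that it can be fed to graph-based ML (e.g., a GNN).
# This is an illustrative assumption, not the encoding used in the paper.
import networkx as nx

def term_to_graph(functor, args):
    """Map functor(args...) to a tiny directed graph: functor node -> argument nodes."""
    g = nx.DiGraph()
    g.add_node(functor, kind="functor")
    for position, argument in enumerate(args):
        g.add_node(argument, kind="constant")
        g.add_edge(functor, argument, position=position)
    return g

# Example: the fact parent(alice, bob).
graph = term_to_graph("parent", ["alice", "bob"])
print(graph.nodes(data=True))
print(graph.edges(data=True))
```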
by Federico Sabbatini, Giovanni Ciatto, Roberta Calegari, and Andrea Omicini
How to access: http://ceur-ws.org/Vol-2963/paper14.pdf
Abstract A common practice in modern explainable AI is to post-hoc explain black-box machine learning (ML) predictors, such as neural networks, by extracting symbolic knowledge out of them, in the form of either rule lists or decision trees. By acting as a surrogate model, the extracted knowledge aims at revealing the inner working of the black box, thus enabling its inspection, representation, and explanation.
by Giovanni Ciatto, Roberta Calegari, and Andrea Omicini
Abstract To date, logic-based technologies are either built on top of, or as extensions of, the Prolog language, mostly working as monolithic solutions tailored upon specific inference procedures, unification mechanisms, or knowledge representation techniques. Instead, to maximise their impact, logic-based technologies should support and enable the general-purpose exploitation of all the manifold contributions from logic programming. Accordingly, we present 2P-Kt, a reboot of the tuProlog project offering a general, extensible, and interoperable ecosystem for logic programming and symbolic AI.
by Federico Sabbatini, Giovanni Ciatto, Roberta Calegari, and Andrea Omicini
Abstract Symbolic knowledge-extraction (SKE) algorithms proposed by the XAI community to obtain human-intelligible explanations for opaque machine learning predictors are currently being studied and developed with growing interest, also in order to achieve believability in interactions. However, choosing the most adequate extraction procedure amongst the many existing in the literature is becoming more and more challenging, as the amount of available methods increases.
by Matteo Magnini, Giovanni Ciatto, and Andrea Omicini
Abstract A long-standing ambition in artificial intelligence is to integrate predictors' inductive features (i.e., learning from examples) with deductive capabilities (i.e., drawing inferences from prior symbolic knowledge). Many methods in the literature support the injection of prior symbolic knowledge into predictors, generally with the purpose of attaining better (i.e., more effective or efficient w.r.t. predictive performance) predictors. However, to the best of our knowledge, running implementations of these methods are, in most cases, either proofs of concept or unavailable.
by Matteo Magnini, Giovanni Ciatto, and Andrea Omicini
Abstract We propose a novel method to inject symbolic knowledge in the form of Datalog formulæ into neural networks (NN), called KINS (Knowledge Injection via Network Structuring). The idea behind our method is to extend the NN's internal structure with ad-hoc layers built out of the injected symbolic knowledge. KINS does not constrain the NN to any specific architecture, nor does it require logic formulæ to be ground.
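The following is a loose sketch of the general idea of structuring a network around symbolic knowledge, assuming a hand-written fuzzy encoding of a single rule; it is written in PyTorch for illustration and is not the actual KINS construction.

```python
# Loose sketch of structure-based knowledge injection: an extra "rule layer"
# computes a fuzzy truth value for a symbolic rule and is concatenated to the
# learned hidden features. Illustrative only; NOT the actual KINS construction.
import torch
import torch.nn as nn

class RuleLayer(nn.Module):
    """Hypothetical fuzzy encoding of a rule such as: class_a :- feature_0 > feature_1."""
    def forward(self, x):
        # Truth degree in [0, 1]: high when feature_0 exceeds feature_1.
        return torch.sigmoid(x[:, 0:1] - x[:, 1:2])

class InjectedNet(nn.Module):
    def __init__(self, n_features, n_classes, hidden=16):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.rule = RuleLayer()
        # The output layer sees both learned features and the rule's truth value.
        self.head = nn.Linear(hidden + 1, n_classes)

    def forward(self, x):
        features = self.backbone(x)
        truth = self.rule(x)
        return self.head(torch.cat([features, truth], dim=1))

model = InjectedNet(n_features=4, n_classes=3)
logits = model(torch.randn(8, 4))  # batch of 8 examples
print(logits.shape)  # torch.Size([8, 3])
```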
by Federico Sabbatini, Giovanni Ciatto, and Andrea Omicini
Abstract Modern distributed systems require communicating agents to agree on a shared, formal semantics for the data they exchange and operate upon. The Semantic Web offers tools to encode semantics in the form of ontologies, where data is represented in the form of knowledge graphs (KG). Applying such tools to intelligent agents equipped with machine learning (ML) capabilities is of particular interest, as it may enable a higher degree of interoperability among heterogeneous agents.
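For readers unfamiliar with the Semantic Web stack, the minimal rdflib sketch below shows what "data represented in the form of knowledge graphs" amounts to in practice; the namespace and terms are invented for illustration.

```python
# Minimal knowledge-graph sketch with rdflib.
# The namespace and terms are invented for illustration purposes.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/agents#")

g = Graph()
g.bind("ex", EX)

# Triples: an ML-equipped agent and the predictor it exposes.
g.add((EX.agent1, RDF.type, EX.LearningAgent))
g.add((EX.agent1, EX.exposes, EX.dietClassifier))
g.add((EX.dietClassifier, EX.trainedOn, Literal("recipes dataset")))

print(g.serialize(format="turtle"))
```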
by Matteo Magnini, Giovanni Ciatto, and Andrea Omicini
Abstract We propose KILL (Knowledge Injection via Lambda Layer) as a novel method for the injection of symbolic knowledge into neural networks (NN), allowing data scientists to control what the network should (not) learn. Unlike other similar approaches, our method does not (i) require ground input formulae, (ii) impose any constraint on the NN undergoing injection, nor (iii) affect the loss function of the NN.
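The sketch below illustrates, under stated assumptions, the general flavour of output-side injection: a wrapper perturbs the network's output in proportion to a fuzzy rule-violation degree during training, so that the standard, unchanged loss indirectly penalises violations. It is not the actual KILL operator.

```python
# Loose sketch of injection via an output-modifying (lambda-style) layer:
# during training the output is perturbed by a rule-violation degree, so the
# ordinary, unchanged loss indirectly penalises violating the knowledge.
# Illustrative assumption only; NOT the actual KILL operator.
import torch
import torch.nn as nn

def violation(x, y_pred):
    """Hypothetical fuzzy degree of violating 'if feature_0 > 0 then class 0'."""
    premise = torch.sigmoid(x[:, 0:1])                        # how much the premise holds
    not_class0 = 1.0 - torch.softmax(y_pred, dim=1)[:, 0:1]   # how much class 0 is NOT predicted
    return premise * not_class0                               # high when the rule is broken

class KnowledgeAwareNet(nn.Module):
    def __init__(self, n_features=4, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, 16), nn.ReLU(), nn.Linear(16, n_classes))

    def forward(self, x):
        y = self.net(x)
        if self.training:
            # Lambda-style wrapper: dampen outputs proportionally to the violation,
            # leaving the loss function itself untouched.
            y = y * (1.0 - violation(x, y))
        return y

model = KnowledgeAwareNet()
model.train()
print(model(torch.randn(8, 4)).shape)  # torch.Size([8, 3])
```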
by Matteo Magnini, Giovanni Ciatto, Furkan Canturk, Reyhan Aydoğan, and Andrea Omicini
Abstract Background and objective This paper focuses on nutritional recommendation systems (RS), i.e. AI-powered automatic systems providing users with suggestions about what to eat to pursue their weight/body-shape goals. A trade-off among (potentially) conflicting requirements must be taken into account when designing these kinds of systems, including: (i) adherence to experts’ prescriptions, (ii) adherence to users’ tastes and preferences, and (iii) explainability of the whole recommendation process.
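As a toy illustration of such a trade-off (recipes, scores, and weights below are purely hypothetical), a recommender could rank candidate items by a weighted combination of expert adherence and user preference:

```python
# Toy weighted trade-off between expert prescriptions and user preferences.
# Recipes, scores, and weights are hypothetical; real systems learn/elicit them.
recipes = {
    "grilled vegetables": {"expert_adherence": 0.9, "user_preference": 0.4},
    "cheese pizza":       {"expert_adherence": 0.2, "user_preference": 0.9},
    "lentil soup":        {"expert_adherence": 0.8, "user_preference": 0.6},
}

W_EXPERT, W_USER = 0.6, 0.4  # relative importance of each requirement

def score(features):
    return W_EXPERT * features["expert_adherence"] + W_USER * features["user_preference"]

ranking = sorted(recipes, key=lambda name: score(recipes[name]), reverse=True)
print(ranking)  # ['lentil soup', 'grilled vegetables', 'cheese pizza']
```

Explainability then amounts to surfacing the per-requirement contributions behind each ranked item, rather than only the final score.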
by Giovanni Ciatto, Matteo Magnini, Berk Buzcu, Reyhan Aydoğan, and Andrea Omicini
Abstract Building on prior works on explanation negotiation protocols, this paper proposes a general-purpose protocol for multi-agent systems where recommender agents may need to provide explanations for their recommendations. The protocol specifies the roles and responsibilities of the explainee and the explainer agent, as well as the types of information that should be exchanged between them to ensure a clear and effective explanation.
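A minimal, assumption-based sketch of the kind of exchange such a protocol governs is given below; message names, contents, and the acceptance criterion are invented for illustration and are not part of the protocol's actual specification.

```python
# Minimal sketch of an explainer/explainee exchange about a recommendation.
# Message types, contents, and the acceptance criterion are invented for
# illustration; they are not the protocol's actual specification.
from dataclasses import dataclass

@dataclass
class Recommendation:
    item: str

@dataclass
class WhyRequest:
    item: str

@dataclass
class Explanation:
    item: str
    reasons: list

class ExplainerAgent:
    def recommend(self) -> Recommendation:
        return Recommendation("lentil soup")

    def explain(self, request: WhyRequest) -> Explanation:
        return Explanation(request.item, ["low calories", "matches your past ratings"])

class ExplaineeAgent:
    def accepts(self, explanation: Explanation) -> bool:
        return len(explanation.reasons) >= 2  # toy acceptance criterion

explainer, explainee = ExplainerAgent(), ExplaineeAgent()
rec = explainer.recommend()
exp = explainer.explain(WhyRequest(rec.item))
print("accepted" if explainee.accepts(exp) else "request more detail")
```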
by Giovanni Ciatto, Federico Sabbatini, Andrea Agiollo, Matteo Magnini, and Andrea Omicini
Abstract In this paper we focus on the issue of opacity of sub-symbolic machine-learning predictors by promoting two complementary activities—namely, symbolic knowledge extraction (SKE) and injection (SKI) from and into sub-symbolic predictors. We consider as symbolic any language being intelligible and interpretable for both humans and computers. Accordingly, we propose general meta-models for both SKE and SKI, along with two taxonomies for the classification of SKE/SKI methods.
by Federico Sabbatini, Giovanni Ciatto, Roberta Calegari, and Andrea Omicini
Abstract The XAI community is currently studying and developing symbolic knowledge-extraction (SKE) algorithms as a means to produce human-intelligible explanations for black-box machine learning predictors, so as to achieve believability in human-machine interaction. However, many extraction procedures exist in the literature, and choosing the most adequate one is increasingly cumbersome, as novel methods keep on emerging. Challenges arise from the fact that SKE algorithms are commonly defined based on theoretical assumptions that typically hinder practical applicability.
by Matteo Magnini, Giovanni Ciatto, and Andrea Omicini
Abstract We propose a novel method to inject symbolic knowledge in the form of Datalog formulæ into neural networks (NN), called Knowledge Injection via Network Structuring (KINS). The idea behind our method is to extend the NN's internal structure with ad-hoc layers built out of the injected symbolic knowledge. KINS does not constrain the NN to any specific architecture, nor does it require logic formulæ to be ground.