Explainable AI (XAI) has recently emerged as a set of techniques attempting to explain machine learning (ML) models. The intended recipients (explainees) are humans or other intelligent virtual entities. Transparency, trust, and debugging are the underlying needs calling for XAI. However, in real-world settings, systems are distributed, data are heterogeneous, the "system" knowledge is bounded, and privacy concerns are subject to variable constraints. Current XAI approaches cannot cope with such requirements.
HES-SO – University of Applied Sciences and Arts Western Switzerland
UNIBO – Alma Mater Studiorum Università di Bologna
UNILU – University of Luxembourg
OZU – Özyeğin University
LIST – Luxembourg Institute of Science and Technology
The team has multidisciplinary competences, with Multi-Agent Systems as the common thread.
HES-SO People (from Switzerland)
Prof. Michael I. Schumacher – Full Professor at HES-SO
Dr. Davide Calvaresi – Senior researcher at HES-SO
Dr. Jean-Paul Calbimonte – Senior researcher at HES-SO
Victor Hugo Contreras Ordonez
Software Tools
PSyKE: a Python library for the extraction of symbolic knowledge from ML predictors.
PSyKI: a Python library for the injection of symbolic knowledge into ML predictors.
DEXiRE: a Python library for rule extraction from Deep Learning models.
SBG: the Synthetic Behavioral Generator, a tool that simulates user context and recipe interactions.
Pro-DEXiRE: a Python library that complements DEXiRE's rule-based explanations with probabilistic reasoning.
Datasets
Recipes dataset: a dataset of 7000 recipes collected by querying the GPT API.
Deliverables
[D1.4] Data Management Plan (DMP)
[D2.1] Tech report on symbolic knowledge extraction and injection
[D2.2] Scientific paper on symbolic knowledge extraction and injection
[D2.3] Software libraries supporting extraction and injection
[D3.1] Technical report detailing the developed models and data integration
[D3.2a] Scientific paper focusing on heterogeneous data integration
[D3.2b] Scientific papers focusing on conflict resolution
[D4.1] Technical report detailing the developed user model and agent-based profiling
[D5.
by Davide Calvaresi, Giovanni Ciatto, Amro Najjar, Reyhan Aydoğan, Leon Van der Torre, Andrea Omicini, and Michael I. Schumacher
Abstract Explainable AI (XAI) has emerged in recent years as a set of techniques and methodologies to interpret and explain machine learning (ML) predictors. To date, many initiatives have been proposed. Nevertheless, current research efforts mainly focus on methods tailored to specific ML tasks and algorithms, such as image classification and sentiment analysis.
by Giovanni Ciatto, Amro Najjar, Jean-Paul Calbimonte, and Davide Calvaresi
Abstract Since their appearance, computer programs have embodied discipline and structured approaches and methodologies. Yet, to this day, equipping machines with imaginative and creative capabilities remains one of the most challenging and fascinating goals we pursue. Intelligent software agents can behave intelligently in well-defined scenarios, relying on Machine Learning (ML), symbolic reasoning, and their developers' capability of tailoring smart behaviors to the particular application domain(s).
by Andrea Agiollo, Giovanni Ciatto, and Andrea Omicini
Abstract Recently, the Deep Learning (DL) research community has focused on developing efficient and highly performing Neural Networks (NN). Meanwhile, the eXplainable AI (XAI) research community has focused on making Machine Learning (ML) and Deep Learning methods interpretable and transparent, seeking explainability. This work is a preliminary study on the applicability of Neural Architecture Search (NAS) (a sub-field of DL looking for automatic design of NN structures) in XAI.
by Giovanni Ciatto, Matteo Castigliò, and Roberta Calegari
Abstract In this paper we address the problem of hybridising symbolic and sub-symbolic approaches in artificial intelligence, following the purpose of creating flexible and data-driven systems, which are simultaneously comprehensible and capable of automated learning. In particular, we propose a logic API for supervised machine learning, enabling logic programmers to exploit neural networks – among others – in their programs.
by Federico Sabbatini, Giovanni Ciatto, and Andrea Omicini
Abstract Knowledge extraction methods are applied to ML-based predictors to attain explainable representations of their functioning when the lack of interpretable results constitutes a problem. Several algorithms have been proposed for knowledge extraction, mostly focusing on the extraction of either lists or trees of rules. Yet, most of them only support supervised learning – and, in particular, classification – tasks.
by Andrea Agiollo, Giovanni Ciatto, and Andrea Omicini
Abstract Combining machine learning (ML) and computational logic (CL) is hard, mostly because of the inherently different ways in which they represent knowledge. In fact, while ML relies on fixed-size numeric representations leveraging vectors, matrices, or tensors of real numbers, CL relies on logic terms and clauses—which are unlimited in size and structure. Graph neural networks (GNN) are a novelty in the ML world introduced for dealing with graph-structured data in a sub-symbolic way.
by Federico Sabbatini, Giovanni Ciatto, Roberta Calegari, and Andrea Omicini
How to access URL: http://ceur-ws.org/Vol-2963/paper14.pdf Abstract A common practice in modern explainable AI is to post-hoc explain black-box machine learning (ML) predictors – such as neural networks – by extracting symbolic knowledge out of them, in the form of either rule lists or decision trees. By acting as a surrogate model, the extracted knowledge aims at revealing the inner working of the black box, thus enabling its inspection, representation, and explanation.
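The surrogate-model idea described in this abstract can be illustrated with a minimal sketch (our own illustration, not the PSyKE or DEXiRE implementation): an opaque predictor is queried to label data, an interpretable decision tree is fitted on those labels, and the tree is exported as human-readable rules. It assumes scikit-learn; all variable names are ours.

```python
# Minimal sketch of post-hoc symbolic knowledge extraction via a surrogate
# decision tree (illustrative only; not the project's PSyKE/DEXiRE libraries).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# 1. Train an opaque predictor (the "black box").
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# 2. Query the black box: the surrogate learns to mimic its outputs.
y_bb = black_box.predict(X)

# 3. Fit an interpretable surrogate on the black box's predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y_bb)

# 4. Export the surrogate as human-readable rules (the symbolic explanation).
print(export_text(surrogate, feature_names=load_iris().feature_names))

# Fidelity: how often the surrogate agrees with the black box.
print("fidelity:", (surrogate.predict(X) == y_bb).mean())
```

Fidelity against the black box (rather than accuracy against the ground truth) is the usual yardstick for such extracted surrogates.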
by Giovanni Ciatto, Roberta Calegari, and Andrea Omicini
Abstract To date, logic-based technologies are built either on top of, or as extensions of, the Prolog language, mostly working as monolithic solutions tailored upon specific inference procedures, unification mechanisms, or knowledge representation techniques. Instead, to maximise their impact, logic-based technologies should support and enable the general-purpose exploitation of all the manifold contributions from logic programming. Accordingly, we present 2P-Kt, a reboot of the tuProlog project offering a general, extensible, and interoperable ecosystem for logic programming and symbolic AI.
by Giuseppe Pisano, Roberta Calegari, and Andrea Omicini
Abstract We discuss the problem of cooperative argumentation in multi-agent systems, focusing on the computational model. An actor-based model is proposed as a first step towards cooperative argumentation in multi-agent systems to tackle distribution issues—illustrating a preliminary fully-distributed version of the argumentation process completely based on message passing.
How to access URL: http://ceur-ws.org/Vol-2963/paper17.pdf
How to cite: Pisano, Giuseppe; Calegari, Roberta; Omicini, Andrea. In: WOA 2021 – 22nd Workshop "From Objects to Agents" (Bologna, Italy, 1–3 September 2021). CEUR Workshop Proceedings, ISSN 1613-0073.
by Contreras, Victor and Aydoğan, Reyhan and Najjar, Amro and Calvaresi, Davide
Abstract TBD
How to access URL: http://publications.hevs.ch/index.php/publications/show/2883
How to cite: Contreras, Victor; Aydoğan, Reyhan; Najjar, Amro; Calvaresi, Davide. "On Explainable Negotiations via Argumentation". In: Proceedings of BNAIC 2021. ACM.
by Graziani, Mara and Dutkiewicz, Lidia and Calvaresi, Davide and Amorim, José Pereira and Yordanova, Katerina and Vered, Mor and Nair, Rahul and Abreu, Pedro Henriques and Blanke, Tobias and Pulignano, Valeria and Prior, John O. and Lauwaert, Lode and Reijers, Wessel and Depeursinge, Adrien and Andrearczyk, Vincent and Müller, Henning
Abstract Since its emergence in the 1960s, Artificial Intelligence (AI) has grown to conquer many technology products and their fields of application.
by Rachele Carli and Amro Najjar and Davide Calvaresi
Abstract TBD
How to access URL: http://publications.hevs.ch/index.php/publications/show/2932 DOI: https://doi.org/10.1145/3527188.3563941
How to cite: Carli, Rachele; Najjar, Amro; Calvaresi, Davide. "Human-Social Robots Interaction: The Blurred Line between Necessary Anthropomorphization and Manipulation". In: International Conference on Human-Agent Interaction (HAI 2022), Christchurch, New Zealand, December 5-8, 2022, pp. 321-323. ACM, 2022.
by Yazan Mualla and Igor Tchappi and Timotheus Kampik and Amro Najjar and Davide Calvaresi and Abdeljalil Abbas-Turki and Stéphane Galland and Christophe Nicolle
Abstract With the widespread use of Artificial Intelligence (AI), understanding the behavior of intelligent agents and robots is crucial to guarantee successful human-agent collaboration since it is not straightforward for humans to understand an agent’s state of mind. Recent empirical studies have confirmed that explaining a system’s behavior to human users fosters the latter’s acceptance of the system.
by Berk Buzcu, Vanitha Varadhajaran, Igor Tchappi, Amro Najjar, Davide Calvaresi and Reyhan Aydoğan
Abstract People’s awareness about the importance of healthy lifestyles is rising. This opens new possibilities for personalized intelligent health and coaching applications. In particular, there is a need for more than simple recommendations and mechanistic interactions. Recent studies have identified nutrition virtual coaching systems (NVC) as a technological solution, possibly bridging technologies such as recommender, informative, persuasive, and argumentation systems.
by Contreras Ordoñez Victor Hugo, Davide Calvaresi, and Michael I. Schumacher
Abstract Widely used in a growing number of domains, Deep Learning predictors are achieving remarkable results. However, the lack of transparency (i.e., opacity) of their inner mechanisms has raised trust and employability concerns. Nevertheless, several approaches fostering models of interpretability and explainability have been developed in the last decade. This paper combines approaches for local feature explanation (i.
by Reyhan Aydoğan, and Catholijn M. Jonker
Abstract This paper presents the negotiation support mechanisms provided by the Pocket Negotiator (PN) and an elaborate empirical evaluation of the economic decision support (EDS) mechanisms during the bidding phase of negotiations as provided by the PN. Some of these support mechanisms are offered actively, some passively. With passive support we mean that the user only gets that support by clicking a button, whereas active support is provided without prompting.
by Reyhan Aydoğan, and Catholijn M. Jonker
Abstract This paper introduces a dependency analysis and a categorization of conceptualized and existing economic decision support mechanisms for negotiation. The focus of our survey is on economic decision support mechanisms, although some behavioural support mechanisms were included, to recognize the important work in that area. We categorize support mechanisms from four different aspects: (i) economic versus behavioral decision support, (ii) analytical versus strategical support, (iii) active versus passive support and (iv) implicit versus explicit support.
by Contreras Ordoñez Victor Hugo, Niccolo Marini, Lora Fanda, Gaetano Manzo, Yazan Mualla, Jean-Paul Calbimonte, Michael I. Schumacher, and Davide Calvaresi
Abstract Background: Despite the advancement in eXplainable Artificial Intelligence, the explanations provided by model-agnostic predictors still call for improvements (i.e., lack of accurate descriptions of predictors’ behaviors). Contribution: We present a tool for Deep Explanations and Rule Extraction (DEXiRE) to approximate rules for Deep Learning models with any number of hidden layers.
by Davide Calvaresi, Rachele Carli, Jean-Gabriel Piguet, Contreras Ordoñez Victor Hugo, Gloria Luzzani, Amro Najjar, Jean-Paul Calbimonte, and Michael I. Schumacher
Abstract Choices and preferences of individuals are nowadays increasingly influenced by countless inputs and recommendations provided by artificial intelligence-based systems. The accuracy of recommender systems (RS) has achieved remarkable results in several domains, from infotainment to marketing and lifestyle. However, in sensitive use-cases, such as nutrition, there is a need for more complex dynamics and responsibilities beyond conventional RS frameworks.
by Rachele Carli, Amro Najjar, and Davide Calvaresi
Abstract In the last decades, Artificial intelligence (AI) systems have been increasingly adopted in assistive (possibly collaborative) decision-making tools. In particular, AI-based persuasive technologies are designed to steer/influence users' behaviour, habits, and choices to facilitate the achievement of their own - predetermined - goals. Nowadays, the inputs received by the assistive systems heavily leverage AI data-driven approaches. Thus, it is imperative that both the process leading to the recommendations and the recommendations themselves are transparent and understandable to the user.
by Federico Sabbatini, Giovanni Ciatto, Roberta Calegari, and Andrea Omicini
Abstract Symbolic knowledge-extraction (SKE) algorithms proposed by the XAI community to obtain human-intelligible explanations for opaque machine learning predictors are currently being studied and developed with growing interest, also in order to achieve believability in interactions. However, choosing the most adequate extraction procedure amongst the many existing in the literature is becoming more and more challenging, as the amount of available methods increases.
by Matteo Magnini, Giovanni Ciatto, and Andrea Omicini
Abstract A long-standing ambition in artificial intelligence is to integrate predictors' inductive features (i.e., learning from examples) with deductive capabilities (i.e., drawing inferences from prior symbolic knowledge). Many methods in the literature support injection of prior symbolic knowledge into predictors, generally with the purpose of attaining better (i.e., more effective or efficient w.r.t. predictive performance) predictors. However, to the best of our knowledge, running implementations of these algorithms are, in most cases, either proofs of concept or unavailable.
by Matteo Magnini, Giovanni Ciatto, and Andrea Omicini
Abstract We propose a novel method to inject symbolic knowledge in the form of Datalog formulæ into neural networks (NN), called KINS (Knowledge Injection via Network Structuring). The idea behind our method is to extend the NN internal structure with ad-hoc layers built out of the injected symbolic knowledge. KINS does not constrain the NN to any specific architecture, nor does it require logic formulæ to be ground.
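As a rough conceptual illustration of the structural-injection idea (our own simplification, not the actual KINS algorithm), a symbolic rule can be compiled into an extra module whose fuzzy truth value is appended to the network's hidden representation, so the knowledge becomes part of the architecture itself. The sketch assumes PyTorch; the rule and all names are hypothetical.

```python
# Conceptual sketch of structure-based knowledge injection (not the actual
# KINS implementation): a symbolic rule is compiled into an extra module whose
# fuzzy truth value is appended to the network's hidden representation.
import torch
import torch.nn as nn

class RuleModule(nn.Module):
    """Hypothetical rule: 'setosa' if petal length (feature index 2) < 2.5."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Soft (differentiable) truth degree of the rule, in [0, 1].
        return torch.sigmoid(2.5 - x[:, 2]).unsqueeze(1)

class InjectedNet(nn.Module):
    def __init__(self, n_features: int, n_classes: int, hidden: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.rule = RuleModule()
        # The classifier sees both learned features and the rule's truth value.
        self.head = nn.Linear(hidden + 1, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.encoder(x)
        r = self.rule(x)                     # knowledge-derived feature
        return self.head(torch.cat([h, r], dim=1))

net = InjectedNet(n_features=4, n_classes=3)
logits = net(torch.randn(8, 4))              # shape: (8, 3)
```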
by Federico Sabbatini, Giovanni Ciatto, and Andrea Omicini
Abstract Modern distributed systems require communicating agents to agree on a shared, formal semantics for the data they exchange and operate upon. The Semantic Web offers tools to encode semantics in the form of ontologies, where data is represented in the form of knowledge graphs (KG). Applying such tools to intelligent agents equipped with machine learning (ML) capabilities is of particular interest, as it may enable a higher degree of interoperability among heterogeneous agents.
by Matteo Magnini, Giovanni Ciatto, and Andrea Omicini
Abstract We propose KILL (Knowledge Injection via Lambda Layer) as a novel method for the injection of symbolic knowledge into neural networks (NN), allowing data scientists to control what the network should (not) learn. Unlike other similar approaches, our method does not (i) require ground input formulae, (ii) impose any constraint on the NN undergoing injection, or (iii) affect the loss function of the NN.
by Matteo Magnini, Giovanni Ciatto, Furkan Canturk, Reyhan Aydoğan, and Andrea Omicini
Abstract Background and objective This paper focuses on nutritional recommendation systems (RS), i.e. AI-powered automatic systems providing users with suggestions about what to eat to pursue their weight/body shape goals. A trade-off among (potentially) conflicting requirements must be taken into account when designing these kinds of systems, including: (i) adherence to experts' prescriptions, (ii) adherence to users' tastes and preferences, and (iii) explainability of the whole recommendation process.
by Giovanni Ciatto, Matteo Magnini, Berk Buzcu, Reyhan Aydoğan, and Andrea Omicini
Abstract Building on prior works on explanation negotiation protocols, this paper proposes a general-purpose protocol for multi-agent systems where recommender agents may need to provide explanations for their recommendations. The protocol specifies the roles and responsibilities of the explainee and the explainer agent and the types of information that should be exchanged between them to ensure a clear and effective explanation.
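A much-simplified, hypothetical illustration of such an explainer/explainee exchange is sketched below; the message types, fields, and turn-taking are ours, not the protocol specified in the paper.

```python
# Hypothetical, simplified explainer/explainee exchange inspired by the
# protocol described above; message names and flow are illustrative only.
from dataclasses import dataclass

@dataclass
class Recommendation:
    item: str
    explanation: str

@dataclass
class Feedback:
    accepted: bool
    reason: str = ""

class ExplainerAgent:
    def recommend(self) -> Recommendation:
        return Recommendation("lentil soup", "matches your low-sodium goal")

    def refine(self, feedback: Feedback) -> Recommendation:
        # On rejection, propose an alternative with a revised explanation.
        return Recommendation("chickpea salad",
                              f"alternative addressing: {feedback.reason}")

class ExplaineeAgent:
    def evaluate(self, rec: Recommendation) -> Feedback:
        if "soup" in rec.item:
            return Feedback(False, "I dislike soups")
        return Feedback(True)

explainer, explainee = ExplainerAgent(), ExplaineeAgent()
rec = explainer.recommend()
fb = explainee.evaluate(rec)
while not fb.accepted:                  # negotiate until the explainee accepts
    rec = explainer.refine(fb)
    fb = explainee.evaluate(rec)
print("agreed on:", rec.item, "-", rec.explanation)
```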
by Contreras Ordoñez Victor Hugo, Andrea Bagante, Niccolò Marini, Michael I. Schumacher, Vincent Andrearczyk and Davide Calvaresi
Abstract TBD
How to access: TBD
How to cite: TBD
by Joris Hulstijn, Igor Tchappi, Amro Najjar, and Reyhan Aydoğan
Abstract Recommender systems aim to support their users by reducing information overload so that they can make better decisions. Recommender systems must be transparent, so users can form mental models about the system’s goals, internal state, and capabilities, that are in line with their actual design. Explanations and transparent behaviour of the system should inspire trust and, ultimately, lead to more persuasive recommendations.
by Joris Hulstijn
Abstract Automated decision making systems take decisions that matter. Some human or legal person remains responsible. Looking back, that person is accountable for the decisions made by the system, and may even be liable in case of damages. That puts constraints on the way in which decision making systems are designed, and how they are deployed in organizations. In this paper, we analyze computational accountability in three steps.
by Giovanni Ciatto, Federico Sabbatini, Andrea Agiollo, Matteo Magnini, and Andrea Omicini
Abstract In this paper we focus on the issue of opacity of sub-symbolic machine-learning predictors by promoting two complementary activities—namely, symbolic knowledge extraction (SKE) and injection (SKI) from and into sub-symbolic predictors. We consider as symbolic any language being intelligible and interpretable for both humans and computers. Accordingly, we propose general meta-models for both SKE and SKI, along with two taxonomies for the classification of SKE/SKI methods.
by Mehmet Onur Keskin, Berk Buzcu, and Reyhan Aydoğan
Abstract Day by day, human-agent negotiation becomes more and more vital to reach a socially beneficial agreement when stakeholders need to make a joint decision together. Developing agents who understand not only human preferences but also attitudes is a significant prerequisite for this kind of interaction. Studies on opponent modeling are predominantly based on automated negotiation and may yield good predictions after exchanging hundreds of offers.
by Berk Buzcu, Melissa Tessa, Igor Tchappi, Amro Najjar, Joris Hulstijn, Davide Calvaresi and Reyhan Aydoğan
Abstract The awareness about healthy lifestyles is increasing, opening to personalized intelligent health coaching applications. A demand for more than mere suggestions and mechanistic interactions has driven attention to nutrition virtual coaching systems (NVC) as a bridge between human–machine interaction and recommender, informative, persuasive, and argumentation systems. NVC can rely on data-driven opaque mechanisms.
by Berk Buzcu, Kuru Emre, and Reyhan Aydoğan
Abstract With the pervasive usage of recommendation systems across various domains, there is a growing need for transparent and convincing interactions to build a rapport with the system users. Incorporating explainability into recommendation systems has become a promising strategy to bolster user trust and sociability. This study centers on recommendation systems that leverage varying explainability techniques to cultivate trust by delivering comprehensible customized explanations for the given recommendations.
by Mehmet Onur Keskin, Selen Akay, Ayse Dogan, Berkecan Koçyigit, Junko Kanero and Reyhan Aydoğan
Abstract This report presents two experimental studies examining whether relatively subtle differences in the appearances of humanoid robots impact (1) the outcomes of human-robot negotiation (i.e., utility scores) and (2) the participant’s attitudes toward their robot negotiation partner. Study I compared Nao and Pepper, and Study II compared Nao and QT in identical negotiation settings.
by Joris Hulstijn and Leon Van der Torre
Abstract There is a lot of interest in explainable AI [2, 11]. When a system takes decisions that affect people, they can demand an explanation of how the decision was derived, or a justification of why the decision is justified. Note that explanation and justification are related, but not the same [1]. The need for explanation or justification is more pressing, when the system makes legal decisions [3], or when the decision is based on social or ethical norms [5].
by Albrecht Richard, Amro Najjar, Igor Tchappi and Joris Hulstijn
Abstract Due to recent development and improvements in the field of artificial intelligence (AI), methods of that field are increasingly adopted in various domains, including historical research. However, modern state-of-the-art machine learning (ML) models are black-boxes that lack transparency and interpretability. Therefore, explainable AI (XAI) methods are used to make black-box models more transparent and inspire user trust.
by Rachele Carli, and Davide Calvaresi
Abstract There has been a growing interest in Explainable Artificial Intelligence (henceforth XAI) models among researchers and AI programmers in recent years. Indeed, the development of highly interactive technologies that can collaborate closely with users has made explainability a necessity. This intends to reduce mistrust and the sense of unpredictability that AI can create, especially among non-experts. Moreover, the potential of XAI as a valuable resource has been recognized, considering that it can make intelligent systems more user-friendly and reduce the negative impact of black box systems.
by Simona Tiribelli, and Davide Calvaresi
Abstract Health Recommender Systems (HRS) are promising Artificial-Intelligence (AI)-based tools endowing healthy lifestyles and therapy adherence in healthcare and medicine. Among the most supported areas, it is worth mentioning active aging (AA). However, current HRS supporting AA raise ethical challenges that still need to be properly formalized and explored. This study proposes to rethink HRS for AA through an autonomy-based ethical analysis.
by Contreras Ordoñez Victor Hugo, Michael I. Schumacher and Davide Calvaresi
Abstract Deep Learning (DL) models are increasingly dealing with heterogeneous data (i.e., a mix of structured and unstructured data), calling for adequate eXplainable Artificial Intelligence (XAI) methods. Nevertheless, only some of the existing techniques consider the uncertainty inherent to the data. To this end, this study proposes a pipeline to explain heterogeneous data-based DL models by combining embedding analysis, rule extraction methods, and probabilistic models.
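A rough sketch of such a three-stage "embedding analysis → rule extraction → probabilistic reasoning" pipeline is given below. It is our own simplification (not the DEXiRE/Pro-DEXiRE implementation), uses scikit-learn, and all names are ours: a surrogate tree mimics the opaque model over a low-dimensional embedding, and each leaf's empirical class distribution is read as a confidence for the corresponding rule.

```python
# Rough sketch of an embedding-analysis -> rule-extraction -> probabilistic
# pipeline (illustrative simplification; not the project's Pro-DEXiRE tool).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)

# Opaque model whose behaviour we want to explain.
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                      random_state=0).fit(X, y)
y_model = model.predict(X)

# 1) Embedding analysis: project inputs into a low-dimensional space.
Z = PCA(n_components=5, random_state=0).fit_transform(X)

# 2) Rule extraction: surrogate tree over the embedding, mimicking the model.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(Z, y_model)
print(export_text(tree, feature_names=[f"pc{i}" for i in range(5)]))

# 3) Probabilistic layer: each leaf's empirical class distribution acts as a
#    confidence estimate for the corresponding extracted rule.
leaves = tree.apply(Z)
for leaf in np.unique(leaves):
    mask = leaves == leaf
    probs = np.bincount(y_model[mask], minlength=2) / mask.sum()
    print(f"leaf {leaf}: P(class) = {probs.round(2)}")
```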
by Berk Buzcu, Emre Kuru, Davide Calvaresi and Reyhan Aydoğan
Abstract As recommendation systems become increasingly prevalent in numerous fields, the need for clear and persuasive interactions with users is rising. Integrating explainability into these systems is emerging as an effective approach to enhance user trust and sociability. This research focuses on recommendation systems that utilize a range of explainability techniques to foster trust by providing understandable personalized explanations for the recommendations made.
by Berk Buzcu, Yvan Pannatier, Reyhan Aydoğan, Michael I. Schumacher, Jean-Paul Calbimonte, and Davide Calvaresi
Abstract Existing agent-based chatbot frameworks need seamless mechanisms to include explainable dialogic engines within the contextual flow. To this end, this paper presents a set of novel modules within the EREBOTS agent-based framework for chatbot development, including dialog-based plug-and-play custom algorithms, agnostic back/front ends, and embedded interactive explainable engines that can manage human feedback at run time.
by Andrea Agiollo and Andrea Omicini
Abstract The success of neural networks (NNs) is tightly linked with their architectural design—a complex problem by itself. We here introduce a novel framework leveraging Graph Neural Networks to Generate Neural Networks (GNN2GNN) where powerful NN architectures can be learned out of a set of available architecture-performance pairs. GNN2GNN relies on a three-way adversarial training of GNN, to optimise a generator model capable of producing predictions about powerful NN architectures.
by Andrea Rafanelli, Stefania Costantini and Andrea Omicini
Abstract This position paper provides insights aiming at resolving the most pressing needs and issues of computer vision algorithms. Specifically, these problems relate to the scarcity of data, the inability of such algorithms to adapt to never-seen-before conditions, and the challenge of developing explainable and trustworthy algorithms. This work proposes the incorporation of reasoning systems, and in particular of abductive reasoning, into image segmentation algorithms as a potential solution to the aforementioned issues.
by Andrea Agiollo and Andrea Omicini
Abstract Neuro-symbolic integration of symbolic and subsymbolic techniques represents a fast-growing AI trend aimed at mitigating the issues of neural networks in terms of decision processes, reasoning, and interpretability. Several state-of-the-art neuro-symbolic approaches aim at improving performance, most of them focusing on proving their effectiveness in terms of raw predictive performance and/or reasoning capabilities. Meanwhile, few efforts have been devoted to increasing model trustworthiness, interpretability, and efficiency – mostly due to the complexity of measuring effectively improvements in terms of trustworthiness and interpretability.
by Andrea Agiollo, Luciano C. Siebert, Pradeep K. Murukannaiah and Andrea Omicini
Abstract Although popular and effective, large language models (LLM) are characterised by a performance vs. transparency trade-off that hinders their applicability to sensitive scenarios. This is the main reason behind many approaches focusing on local post-hoc explanations recently proposed by the XAI community. However, to the best of our knowledge, a thorough comparison among available explainability techniques is currently missing, mainly for the lack of a general metric to measure their benefits.
by Mattia Passeri, Andrea Agiollo, and Andrea Omicini
Abstract While representing the de-facto framework for enabling distributed training of Machine Learning models, Federated Learning (FL) still suffers convergence issues when non-Independent and Identically Distributed (non-IID) data are considered. In this context, local model optimisation on different data distributions generates dissimilar updates, which are difficult to aggregate and translate into sub-optimal convergence. To tackle these issues, we propose Peer-Reviewed Federated Learning (PRFL), an extension of the traditional FL training process inspired by the peer-review procedure common in academia, where model updates are reviewed by several other clients in the federation before being aggregated at the server side.
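The peer-review idea can be conveyed with a toy numeric sketch (our illustration, not the actual PRFL algorithm): each client's update is scored by the other clients on their own local data, and the server aggregates updates weighted by those review scores. The data, scoring rule, and weighting scheme are assumptions made for the example.

```python
# Toy sketch of peer-reviewed aggregation for federated learning (illustrative
# only; not the actual PRFL algorithm). Clients "review" each other's updates
# by scoring them on local data; the server weights updates by average score.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def make_client(shift):
    """A client with (non-IID) local linear-regression data."""
    X = rng.normal(shift, 1.0, size=(50, 2))
    y = X @ true_w + rng.normal(0, 0.1, size=50)
    return X, y

clients = [make_client(s) for s in (-1.0, 0.0, 1.0)]

def local_update(w, X, y, lr=0.05, epochs=20):
    for _ in range(epochs):
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

def review(w, X, y):
    """Reviewer score: inverse of the update's loss on the reviewer's data."""
    return 1.0 / (np.mean((X @ w - y) ** 2) + 1e-8)

w_global = np.zeros(2)
for rnd in range(10):
    updates = [local_update(w_global, X, y) for X, y in clients]
    # Each update is reviewed by all *other* clients in the federation.
    scores = np.array([
        np.mean([review(u, X, y) for j, (X, y) in enumerate(clients) if j != i])
        for i, u in enumerate(updates)
    ])
    weights = scores / scores.sum()
    w_global = sum(wgt * u for wgt, u in zip(weights, updates))

print("aggregated weights:", w_global.round(2))  # approaches [2.0, -1.0]
```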
by Federico Sabbatini, Giovanni Ciatto, Roberta Calegari, and Andrea Omicini
Abstract The XAI community is currently studying and developing symbolic knowledge-extraction (SKE) algorithms as a means to produce human-intelligible explanations for black-box machine learning predictors, so as to achieve believability in human-machine interaction. However, many extraction procedures exist in the literature, and choosing the most adequate one is increasingly cumbersome, as novel methods keep on emerging. Challenges arise from the fact that SKE algorithms are commonly defined based on theoretical assumptions that typically hinder practical applicability.
by Matteo Magnini, Giovanni Ciatto, and Andrea Omicini
Abstract We propose a novel method to inject symbolic knowledge in the form of Datalog formulæ into neural networks (NN), called Knowledge Injection via Network Structuring (KINS). The idea behind our method is to extend the NN internal structure with ad-hoc layers built out of the injected symbolic knowledge. KINS does not constrain the NN to any specific architecture, nor does it require logic formulæ to be ground.
by Andrea Agiollo, Paolo Bellavista, Matteo Mendula and Andrea Omicini
Abstract Federated Learning (FL) represents the de-facto standard paradigm for enabling distributed learning over multiple clients in real-world scenarios. Despite the great strides reached in terms of accuracy and privacy awareness, the real adoption of FL in real-world scenarios, in particular in industrial deployment environments, is still an open thread. This is mainly due to privacy constraints and to the additional complexity stemming from the set of hyperparameters to tune when employing AI techniques on bandwidth-, computing-, and energy-constrained nodes.
by Andrea Agiollo, Luciano Siebert Cavalcante, Pradeep Kumar Murukannaiah and Andrea Omicini
Abstract The expressive power and effectiveness of large language models (LLMs) is going to increasingly push intelligent agents towards sub-symbolic models for natural language processing (NLP) tasks in human–agent interaction. However, LLMs are characterised by a performance vs. transparency trade-off that hinders their applicability to such sensitive scenarios. This is the main reason behind many approaches focusing on local post-hoc explanations, recently proposed by the XAI community in the NLP realm.