Expectation: Personalized Explainable Artificial Intelligence for Decentralized Agents with Heterogeneous Knowledge

by Davide Calvaresi, Giovanni Ciatto, Amro Najjar, Reyhan Aydoğan, Leon Van der Torre, Andrea Omicini, and Michael I. Schumacher
Abstract: Explainable AI (XAI) has emerged in recent years as a set of techniques and methodologies to interpret and explain machine learning (ML) predictors. To date, many initiatives have been proposed. Nevertheless, current research efforts mainly focus on methods tailored to specific ML tasks and algorithms, such as image classification and sentiment analysis.
Read full post

Towards Explainable Visionary Agents: License to Dare and Imagine

by Giovanni Ciatto, Amro Najjar, Jean-Paul Calbimonte, and Davide Calvaresi
Abstract: Since their appearance, computer programs have embodied discipline and structured approaches and methodologies. Yet, to this day, equipping machines with imaginative and creative capabilities remains one of the most challenging and fascinating goals we pursue. Intelligent software agents can behave intelligently in well-defined scenarios, relying on Machine Learning (ML), symbolic reasoning, and their developers' capability of tailoring smart behaviors to the particular application domain(s).
Read full post

On Explainable Negotiations via Argumentation

by Victor Contreras, Reyhan Aydoğan, Amro Najjar, and Davide Calvaresi
Abstract: TBD
How to access: http://publications.hevs.ch/index.php/publications/show/2883
How to cite (BibTeX): @incollection{canc-bnaic-2021-explanable-negotiations, author = {Contreras, Victor and Aydoğan, Reyhan and Najjar, Amro and Calvaresi, Davide}, booktitle = {Proceedings of BNAIC 2021}, keywords = {explainable negotiation}, publisher = {ACM}, title = {On Explainable Negotiations via Argumentation}, url = {http://publications.
Read full post

A global taxonomy of interpretable AI: unifying the terminology for the technical and social sciences

by Mara Graziani, Lidia Dutkiewicz, Davide Calvaresi, José Pereira Amorim, Katerina Yordanova, Mor Vered, Rahul Nair, Pedro Henriques Abreu, Tobias Blanke, Valeria Pulignano, John O. Prior, Lode Lauwaert, Wessel Reijers, Adrien Depeursinge, Vincent Andrearczyk, and Henning Müller
Abstract: Since its emergence in the 1960s, Artificial Intelligence (AI) has grown to conquer many technology products and their fields of application.
Read full post

Human-Social Robots Interaction: the blurred line between necessary anthropomorphization and manipulation

by Rachele Carli, Amro Najjar, and Davide Calvaresi
Abstract: TBD
How to access: http://publications.hevs.ch/index.php/publications/show/2932
DOI: https://doi.org/10.1145/3527188.3563941
How to cite (BibTeX): @inproceedings{CarliNC22, author = {Rachele Carli and Amro Najjar and Davide Calvaresi}, editor = {Christoph Bartneck and Takayuki Kanda and Mohammad Obaid and Wafa Johal}, title = {Human-Social Robots Interaction: The Blurred Line between Necessary Anthropomorphization and Manipulation}, booktitle = {International Conference on Human-Agent Interaction, {HAI} 2022, Christchurch, New Zealand, December 5-8, 2022}, pages = {321--323}, publisher = {{ACM}}, year = {2022}, url = {https://doi.
Read full post

The quest of parsimonious XAI: A human-agent architecture for explanation formulation

by Yazan Mualla, Igor Tchappi, Timotheus Kampik, Amro Najjar, Davide Calvaresi, Abdeljalil Abbas-Turki, Stéphane Galland, and Christophe Nicolle
Abstract: With the widespread use of Artificial Intelligence (AI), understanding the behavior of intelligent agents and robots is crucial to guarantee successful human-agent collaboration, since it is not straightforward for humans to understand an agent’s state of mind. Recent empirical studies have confirmed that explaining a system’s behavior to human users fosters the latter’s acceptance of the system.
Read full post

Explanation-Based Negotiation Protocol for Nutrition Virtual Coaching

by Berk Buzcu, Vanitha Varadhajaran, Igor Tchappi, Amro Najjar, Davide Calvaresi, and Reyhan Aydoğan
Abstract: People’s awareness of the importance of healthy lifestyles is rising. This opens new possibilities for personalized intelligent health and coaching applications. In particular, there is a need for more than simple recommendations and mechanistic interactions. Recent studies have identified nutrition virtual coaching systems (NVC) as a technological solution, possibly bridging technologies such as recommender, informative, persuasive, and argumentation systems.
Read full post

Integration of Local and Global Features Explanation with Global Rules Extraction and Generation Tools

by Victor Hugo Contreras Ordoñez, Davide Calvaresi, and Michael I. Schumacher
Abstract: Widely used in a growing number of domains, Deep Learning predictors are achieving remarkable results. However, the lack of transparency (i.e., opacity) of their inner mechanisms has raised trust and employability concerns. Nevertheless, several approaches fostering model interpretability and explainability have been developed in the last decade. This paper combines approaches for local feature explanation (i.
Read full post
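The idea of combining local feature explanations with globally extracted rules can be illustrated with a toy sketch. The model, feature names, sample grid, and helper functions below are all invented for illustration and are not the paper's actual pipeline: an occlusion-style local explanation scores each feature's influence on one prediction, and a crude global rule (a single threshold) is then fitted on the feature that is most often locally important.

```python
# Hedged sketch: local feature-importance explanation + global rule
# extraction for the same black-box model (illustrative only; not the
# paper's method — the model and all names here are made up).

def black_box(x):
    """Opaque binary classifier standing in for a Deep Learning predictor."""
    return 1 if 0.8 * x[0] + 0.6 * x[1] > 1.0 else 0

def local_importance(x):
    """Occlusion-style local explanation: how much does zeroing each
    feature change the prediction for this one input?"""
    base = black_box(x)
    return [abs(base - black_box([0 if j == i else v
                                  for j, v in enumerate(x)]))
            for i in range(len(x))]

def global_rule(samples):
    """Crude global rule extraction: pick the most often locally-important
    feature and the threshold on it that best matches the model."""
    votes = [0] * len(samples[0])
    for s in samples:
        imp = local_importance(s)
        votes[imp.index(max(imp))] += 1
    feat = votes.index(max(votes))
    best = max((sum((s[feat] > t) == black_box(s) for s in samples), t)
               for t in [0.25, 0.5, 0.75, 1.0, 1.25])
    return feat, best[1]  # e.g. the rule "IF x0 > thr THEN class 1"

samples = [[a / 4, b / 4] for a in range(6) for b in range(6)]
feat, thr = global_rule(samples)
```

The global rule is only an approximation of the model (which depends on both features), which is exactly the gap between local fidelity and global coverage that motivates combining the two views.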

A DEXiRE for Extracting Propositional Rules from Neural Networks via Binarization

by Victor Hugo Contreras Ordoñez, Niccolo Marini, Lora Fanda, Gaetano Manzo, Yazan Mualla, Jean-Paul Calbimonte, Michael I. Schumacher, and Davide Calvaresi
Abstract: Background: Despite the advancement in eXplainable Artificial Intelligence, the explanations provided by model-agnostic predictors still call for improvements (i.e., they lack accurate descriptions of predictors’ behaviors). Contribution: We present a tool for Deep Explanations and Rule Extraction (DEXiRE) to approximate rules for Deep Learning models with any number of hidden layers.
Read full post
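The binarization idea behind rule extractors of this kind can be sketched on a toy network (this is not the DEXiRE implementation; the hand-wired weights, thresholds, and function names below are invented for illustration): each hidden neuron is discretized into an on/off literal, and every observed binary hidden state is mapped to the model's prediction, yielding propositional rules over the hidden units.

```python
# Minimal sketch of binarization-based rule extraction (illustrative only).
# A hand-wired 2-unit network solves XOR; we binarize its hidden layer and
# enumerate propositional rules from binary hidden states to outputs.

import itertools

def relu(x):
    return max(0.0, x)

def hidden(x1, x2):
    # h1 fires for OR(x1, x2); h2 fires for AND(x1, x2).
    return relu(x1 + x2 - 0.5), relu(x1 + x2 - 1.5)

def predict(x1, x2):
    h1, h2 = hidden(x1, x2)
    return 1 if (h1 - 3.0 * h2) > 0.25 else 0  # OR AND NOT AND == XOR

def extract_rules():
    """Binarize hidden units (active iff > 0) and record one rule per
    observed binary hidden state, labelled with the network's prediction."""
    rules = {}
    for x1, x2 in itertools.product([0, 1], repeat=2):
        state = tuple(int(v > 0) for v in hidden(x1, x2))
        rules[state] = predict(x1, x2)
    return rules

rules = extract_rules()
# The state (1, 0) -> 1 reads as the rule "IF h1 AND NOT h2 THEN class 1".
```

A real extractor must additionally translate each hidden literal back into input-space conditions and prune redundant rules; this sketch only shows the binarization step.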

Ethical and legal considerations for nutrition virtual coaches

by Davide Calvaresi, Rachele Carli, Jean-Gabriel Piguet, Victor Hugo Contreras Ordoñez, Gloria Luzzani, Amro Najjar, Jean-Paul Calbimonte, and Michael I. Schumacher
Abstract: Choices and preferences of individuals are nowadays increasingly influenced by countless inputs and recommendations provided by artificial intelligence-based systems. The accuracy of recommender systems (RS) has achieved remarkable results in several domains, from infotainment to marketing and lifestyle. However, in sensitive use cases, such as nutrition, there is a need for more complex dynamics and responsibilities beyond conventional RS frameworks.
Read full post

Risk and Exposure of XAI in Persuasion and Argumentation: The case of Manipulation

by Rachele Carli, Amro Najjar, and Davide Calvaresi
Abstract: In the last decades, Artificial Intelligence (AI) systems have been increasingly adopted in assistive (possibly collaborative) decision-making tools. In particular, AI-based persuasive technologies are designed to steer/influence users’ behaviour, habits, and choices to facilitate the achievement of their own - predetermined - goals. Nowadays, the inputs received by the assistive systems rely heavily on data-driven AI approaches. Thus, it is imperative that both the process leading to the recommendations and the recommendations themselves be transparent and understandable to the user.
Read full post

Towards interactive explanation-based nutrition virtual coaching systems

by Berk Buzcu, Melissa Tessa, Igor Tchappi, Amro Najjar, Joris Hulstijn, Davide Calvaresi, and Reyhan Aydoğan
Abstract: Awareness of healthy lifestyles is increasing, opening the door to personalized intelligent health coaching applications. A demand for more than mere suggestions and mechanistic interactions has drawn attention to nutrition virtual coaching systems (NVC) as a bridge between human–machine interaction and recommender, informative, persuasive, and argumentation systems. NVC can rely on data-driven opaque mechanisms.
Read full post

The Wildcard XAI: from a Necessity, to a Resource, to a Dangerous Decoy

by Rachele Carli and Davide Calvaresi
Abstract: There has been a growing interest in Explainable Artificial Intelligence (henceforth XAI) models among researchers and AI programmers in recent years. Indeed, the development of highly interactive technologies that can collaborate closely with users has made explainability a necessity. This is intended to reduce mistrust and the sense of unpredictability that AI can create, especially among non-experts. Moreover, the potential of XAI as a valuable resource has been recognized, considering that it can make intelligent systems more user-friendly and reduce the negative impact of black-box systems.
Read full post

Rethinking Health Recommender Systems for Active Aging: An Autonomy-Based Ethical Analysis

by Simona Tiribelli and Davide Calvaresi
Abstract: Health Recommender Systems (HRS) are promising Artificial Intelligence (AI)-based tools fostering healthy lifestyles and therapy adherence in healthcare and medicine. Among the most supported areas, it is worth mentioning active aging (AA). However, current HRS supporting AA raise ethical challenges that still need to be properly formalized and explored. This study proposes to rethink HRS for AA through an autonomy-based ethical analysis.
Read full post

Explanation of Deep Learning Models via Logic Rules Enhanced by Embeddings Analysis, and Probabilistic Models

by Victor Hugo Contreras Ordoñez, Michael I. Schumacher, and Davide Calvaresi
Abstract: Deep Learning (DL) models are increasingly dealing with heterogeneous data (i.e., a mix of structured and unstructured data), calling for adequate eXplainable Artificial Intelligence (XAI) methods. Nevertheless, only some of the existing techniques consider the uncertainty inherent to the data. To this end, this study proposes a pipeline to explain heterogeneous data-based DL models by combining embedding analysis, rule extraction methods, and probabilistic models.
Read full post
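One way to see how probabilistic models can complement extracted logic rules is to attach a data-driven confidence to a rule. The sketch below is illustrative only and not the paper's pipeline; the model, the rule, and all names are invented: the rule's agreement with the black box is counted over a sample, with Laplace smoothing so that confidence reflects how much evidence supports the rule.

```python
# Hedged sketch (assumed names; not the paper's method): give an extracted
# symbolic rule a probabilistic confidence by counting agreement with the
# black-box model over data, with Laplace smoothing for uncertainty.

def model(x):
    """Stand-in black-box prediction."""
    return 1 if x > 0.6 else 0

def rule(x):
    """Extracted symbolic rule whose fidelity we want to quantify."""
    return 1 if x > 0.5 else 0

def rule_confidence(xs, alpha=1.0):
    """Estimate P(rule agrees with model), Laplace-smoothed with
    alpha pseudo-counts for each outcome (agree / disagree)."""
    agree = sum(rule(x) == model(x) for x in xs)
    return (agree + alpha) / (len(xs) + 2 * alpha)

xs = [i / 10 for i in range(11)]   # 0.0, 0.1, ..., 1.0
conf = rule_confidence(xs)         # rule and model disagree only at x = 0.6
```

With more data the smoothing term vanishes and the score converges to the rule's empirical fidelity; with little data it stays pulled toward 0.5, which is the uncertainty-awareness the abstract alludes to.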

Evaluation of the User-centric Explanation Strategies for Interactive Recommenders

by Berk Buzcu, Emre Kuru, Davide Calvaresi, and Reyhan Aydoğan
Abstract: As recommendation systems become increasingly prevalent in numerous fields, the need for clear and persuasive interactions with users is rising. Integrating explainability into these systems is emerging as an effective approach to enhance user trust and sociability. This research focuses on recommendation systems that utilize a range of explainability techniques to foster trust by providing understandable, personalized explanations for the recommendations made.
Read full post

A Framework for Explainable Multi-purpose Virtual Assistants: A Nutrition-Focused Case Study

by Berk Buzcu, Yvan Pannatier, Reyhan Aydoğan, Michael I. Schumacher, Jean-Paul Calbimonte, and Davide Calvaresi
Abstract: Existing agent-based chatbot frameworks need seamless mechanisms to include explainable dialogic engines within the contextual flow. To this end, this paper presents a set of novel modules within the EREBOTS agent-based framework for chatbot development, including dialog-based plug-and-play custom algorithms, agnostic back/front ends, and embedded interactive explainable engines that can manage human feedback at run time.
Read full post