EXPECTATION: Personalized Explainable Artificial Intelligence for Decentralized Agents with Heterogeneous Knowledge

by Davide Calvaresi, Giovanni Ciatto, Amro Najjar, Reyhan Aydoğan, Leon Van der Torre, Andrea Omicini, and Michael I. Schumacher

Abstract: Explainable AI (XAI) has emerged in recent years as a set of techniques and methodologies to interpret and explain machine learning (ML) predictors. To date, many initiatives have been proposed. Nevertheless, current research efforts mainly focus on methods tailored to specific ML tasks and algorithms, such as image classification and sentiment analysis.

Integration of Local and Global Features Explanation with Global Rules Extraction and Generation Tools

by Contreras Ordoñez Victor Hugo, Davide Calvaresi, and Michael I. Schumacher

Abstract: Widely used in a growing number of domains, Deep Learning predictors are achieving remarkable results. However, the lack of transparency (i.e., opacity) of their inner mechanisms has raised trust and employability concerns. Nevertheless, several approaches fostering models' interpretability and explainability have been developed in the last decade. This paper combines approaches for local feature explanation …
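
To make the combination concrete, here is a minimal, hypothetical sketch of the general idea (not the paper's actual pipeline or tooling): LIME-style local linear explanations are computed around individual instances, aggregated into a global feature ranking, and a shallow decision tree is then fitted as a global rule surrogate over the top-ranked features. All names (`black_box`, `local_explanation`) and modelling choices below are illustrative assumptions.

```python
# Hypothetical sketch: aggregate local feature explanations into a global
# ranking, then extract global rules with a shallow surrogate tree.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y, names = data.data, data.target, data.feature_names

# Opaque predictor standing in for an arbitrary black-box model.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

rng = np.random.default_rng(0)
scale = X.std(axis=0)

def local_explanation(x, n_samples=500):
    """LIME-style local explanation: perturb around x, fit a weighted linear surrogate."""
    Z = x + rng.normal(0.0, 0.5, size=(n_samples, x.size)) * scale
    p = black_box.predict_proba(Z)[:, 1]                       # black-box outputs near x
    w = np.exp(-np.linalg.norm((Z - x) / scale, axis=1) ** 2 / x.size)
    surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)
    return np.abs(surrogate.coef_) * scale                     # scale so magnitudes are comparable

# Aggregate local explanations over a sample of instances -> global feature ranking.
global_importance = np.mean([local_explanation(x) for x in X[:50]], axis=0)
top = np.argsort(global_importance)[::-1][:5]
print("Top features:", list(names[top]))

# Global rules: a shallow tree over the top features, fitted to the
# black box's own predictions (a global surrogate).
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[:, top], black_box.predict(X))
print(export_text(tree, feature_names=list(names[top])))
```

The same aggregation step could feed any rule generator; the tree is used here only because it yields readable global rules out of the box.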

A DEXiRE for Extracting Propositional Rules from Neural Networks via Binarization

by Contreras Ordoñez Victor Hugo, Niccolo Marini, Lora Fanda, Gaetano Manzo, Yazan Mualla, Jean-Paul Calbimonte, Michael I. Schumacher, and Davide Calvaresi

Abstract: Background: Despite the advancement in eXplainable Artificial Intelligence, the explanations provided by model-agnostic predictors still call for improvements (i.e., lack of accurate descriptions of predictors' behaviors). Contribution: We present a tool for Deep Explanations and Rule Extraction (DEXiRE) to approximate rules for Deep Learning models with any number of hidden layers.
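
As a rough illustration of the binarization idea named in the abstract, the sketch below binarizes the last hidden layer's activations of a trained network and induces propositional rules over the resulting neuron literals, then reports the rules' fidelity to the network. It assumes an sklearn MLP standing in for the deep model and a shallow decision tree acting as the rule inducer; this is not the actual DEXiRE implementation.

```python
# Hypothetical sketch of binarization-based rule extraction (not DEXiRE itself).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)

# An opaque predictor standing in for an arbitrary deep model.
net = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0).fit(X, y)

def last_hidden_activations(mlp, X):
    """Manual forward pass up to the last hidden layer (ReLU is sklearn's default)."""
    a = X
    for W, b in zip(mlp.coefs_[:-1], mlp.intercepts_[:-1]):
        a = np.maximum(0.0, a @ W + b)
    return a

H = last_hidden_activations(net, X)
H_bin = (H > 0).astype(int)            # binarize each neuron: active / inactive

# Induce propositional rules over the binarized neurons by fitting a
# shallow tree to the network's own predictions.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(H_bin, net.predict(X))
neuron_names = [f"neuron_{i}_active" for i in range(H_bin.shape[1])]
print(export_text(tree, feature_names=neuron_names))

# Fidelity of the extracted rules w.r.t. the network on the training data.
print("fidelity:", (tree.predict(H_bin) == net.predict(X)).mean())
```

Each root-to-leaf path of the printed tree reads as an if-then rule over neuron-activation literals; the actual tool's rule format, layer handling, and fidelity measures will differ.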

Ethical and legal considerations for nutrition virtual coaches

by Davide Calvaresi, Rachele Carli, Jean-Gabriel Piguet, Contreras Ordoñez Victor Hugo, Gloria Luzzani, Amro Najjar, Jean-Paul Calbimonte, and Michael I. Schumacher

Abstract: Choices and preferences of individuals are nowadays increasingly influenced by countless inputs and recommendations provided by artificial intelligence-based systems. Recommender systems (RS) have achieved remarkable accuracy in several domains, from infotainment to marketing and lifestyle. However, in sensitive use cases, such as nutrition, there is a need for more complex dynamics and responsibilities beyond conventional RS frameworks.