Expectation: Personalized Explainable Artificial Intelligence for Decentralized Agents with Heterogeneous Knowledge

by Davide Calvaresi, Giovanni Ciatto, Amro Najjar, Reyhan Aydoğan, Leon van der Torre, Andrea Omicini, and Michael I. Schumacher

Abstract: Explainable AI (XAI) has emerged in recent years as a set of techniques and methodologies to interpret and explain machine learning (ML) predictors. To date, many initiatives have been proposed. Nevertheless, current research efforts mainly focus on methods tailored to specific ML tasks and algorithms, such as image classification and sentiment analysis.

Towards Explainable Visionary Agents: License to Dare and Imagine

by Giovanni Ciatto, Amro Najjar, Jean-Paul Calbimonte, and Davide Calvaresi

Abstract: Since their appearance, computer programs have embodied discipline and structured approaches and methodologies. Yet, to this day, equipping machines with imaginative and creative capabilities remains one of the most challenging and fascinating goals we pursue. Intelligent software agents can behave intelligently in well-defined scenarios, relying on Machine Learning (ML), symbolic reasoning, and their developers' capability of tailoring smart behaviours to the particular application domain(s).

Shallow2Deep: Restraining Neural Networks Opacity through Neural Architecture Search

by Andrea Agiollo, Giovanni Ciatto, and Andrea Omicini

Abstract: Recently, the Deep Learning (DL) research community has focused on developing efficient and highly performing Neural Networks (NN). Meanwhile, the eXplainable AI (XAI) research community has focused on making Machine Learning (ML) and Deep Learning methods interpretable and transparent, seeking explainability. This work is a preliminary study on the applicability of Neural Architecture Search (NAS), a sub-field of DL aiming at the automatic design of NN structures, to XAI.
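To make the idea concrete, here is a minimal, hypothetical sketch of how an architecture search can trade predictive accuracy against model size, with parameter count used as a crude proxy for opacity. This is not the paper's Shallow2Deep procedure: the search space, the penalty weight alpha, and the toy dataset are all invented for illustration.

```python
# Hypothetical NAS-style search balancing accuracy against model
# complexity (parameter count as a crude proxy for opacity).
# NOT the paper's Shallow2Deep algorithm -- an illustration only.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Toy search space: number of hidden layers x layer width.
search_space = [tuple([width] * depth)
                for depth in (1, 2, 3) for width in (8, 16, 32)]

def score(arch, alpha=1e-5):
    """Test accuracy minus a penalty on the number of weights."""
    model = MLPClassifier(hidden_layer_sizes=arch, max_iter=500,
                          random_state=0).fit(X_tr, y_tr)
    n_params = sum(w.size for w in model.coefs_)
    return model.score(X_te, y_te) - alpha * n_params

best = max(search_space, key=score)
print("selected architecture:", best)
```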

Logic Programming library for Machine Learning: API design and prototype

by Giovanni Ciatto, Matteo Castigliò, and Roberta Calegari

Abstract: In this paper we address the problem of hybridising symbolic and sub-symbolic approaches in artificial intelligence, with the aim of creating flexible and data-driven systems, which are simultaneously comprehensible and capable of automated learning. In particular, we propose a logic API for supervised machine learning, enabling logic programmers to exploit neural networks – among others – in their programs.
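The library itself targets logic programming; as a rough, hypothetical Python analogue of the underlying idea, the sketch below exposes a trained neural network as a relation that hand-written symbolic rules can query. The names (predicted_species, recommend) are invented and do not belong to the paper's API.

```python
# Hypothetical analogue of the hybrid idea: a trained predictor exposed
# as a "predicate" that symbolic rules can call. Names are invented;
# the paper's actual API is a logic programming one.
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier

iris = load_iris()
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                    random_state=0).fit(iris.data, iris.target)

def predicted_species(features):
    """Relation backed by the neural network: maps features to a label."""
    return iris.target_names[net.predict([features])[0]]

def recommend(features):
    # A hand-written symbolic rule layered on top of the learned relation.
    if predicted_species(features) == "setosa":
        return "short-petal care routine"
    return "standard care routine"

print(recommend([5.1, 3.5, 1.4, 0.2]))
```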

GridEx: An Algorithm for Knowledge Extraction from Black-Box Regressors

by Federico Sabbatini, Giovanni Ciatto, and Andrea Omicini

Abstract: Knowledge extraction methods are applied to ML-based predictors to attain explainable representations of their functioning when the lack of interpretable results constitutes a problem. Several algorithms have been proposed for knowledge extraction, mostly focusing on the extraction of either lists or trees of rules. Yet, most of them only support supervised learning – and, in particular, classification – tasks.
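As a rough illustration of this family of techniques – much simpler than the actual GridEx algorithm – the hypothetical sketch below partitions a 2D input space into a uniform grid and labels each cell with the black-box regressor's mean prediction there, yielding one human-readable if-then rule per cell. The dataset, grid resolution, and rule format are all invented for illustration.

```python
# Minimal sketch of grid-based rule extraction from a black-box
# regressor, in the spirit of (but much simpler than) GridEx.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2          # hidden ground truth
black_box = RandomForestRegressor(random_state=0).fit(X, y)

def extract_rules(predictor, n_splits=3):
    """One if-then rule per grid cell of the 2D unit square."""
    edges = np.linspace(0, 1, n_splits + 1)
    rules = []
    for i in range(n_splits):
        for j in range(n_splits):
            # Sample the cell, record the black box's mean prediction.
            cell = rng.uniform([edges[i], edges[j]],
                               [edges[i + 1], edges[j + 1]],
                               size=(50, 2))
            rules.append((edges[i], edges[i + 1],
                          edges[j], edges[j + 1],
                          predictor.predict(cell).mean()))
    return rules

for x0, x1, y0, y1, out in extract_rules(black_box):
    print(f"if {x0:.2f}<=x0<{x1:.2f} and {y0:.2f}<=x1<{y1:.2f} "
          f"then y~{out:.2f}")
```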