Towards Explainable Visionary Agents: License to Dare and Imagine

by Giovanni Ciatto, Amro Najjar, Jean-Paul Calbimonte, and Davide Calvaresi

Abstract: Since their appearance, computer programs have embodied discipline, structured approaches, and methodologies. Yet, to this day, equipping machines with imaginative and creative capabilities remains one of the most challenging and fascinating goals we pursue. Intelligent software agents can behave intelligently in well-defined scenarios, relying on Machine Learning (ML), symbolic reasoning, and their developers' ability to tailor smart behaviors to the particular application domain(s).
Read full post

Shallow2Deep: Restraining Neural Networks Opacity through Neural Architecture Search

by Andrea Agiollo, Giovanni Ciatto, and Andrea Omicini

Abstract: Recently, the Deep Learning (DL) research community has focused on developing efficient, high-performing Neural Networks (NN). Meanwhile, the eXplainable AI (XAI) research community has focused on making Machine Learning (ML) and DL methods interpretable and transparent in pursuit of explainability. This work is a preliminary study on the applicability of Neural Architecture Search (NAS), a sub-field of DL aiming at the automatic design of NN structures, to XAI.
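To make the idea of architecture search more concrete, here is a minimal sketch of a random search over small multilayer-perceptron structures using scikit-learn. The dataset, search space, and scoring below are illustrative assumptions for demonstration, not the setup used in the Shallow2Deep study.

```python
# Minimal random-search sketch of Neural Architecture Search (NAS).
# Illustrative only: the search space, dataset, and scoring are assumptions
# made for this example, not the procedure used in Shallow2Deep.
import random

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)

# Candidate hidden-layer structures, from shallow to deeper ones.
SEARCH_SPACE = [
    (8,), (16,), (32,),
    (16, 8), (32, 16),
    (32, 16, 8),
]

def evaluate(hidden_layers):
    """Score one candidate architecture via cross-validation."""
    model = MLPClassifier(hidden_layer_sizes=hidden_layers,
                          max_iter=500, random_state=0)
    return cross_val_score(model, X, y, cv=3).mean()

random.seed(0)
best_arch, best_score = None, -1.0
for _ in range(4):  # sample a few candidates at random
    arch = random.choice(SEARCH_SPACE)
    score = evaluate(arch)
    if score > best_score:
        best_arch, best_score = arch, score

print(f"best architecture: {best_arch} (accuracy ~ {best_score:.3f})")
```

A real NAS procedure would explore a far richer search space with a smarter strategy (evolutionary search, reinforcement learning, or gradient-based relaxation); the loop above only shows the evaluate-and-select skeleton that such methods share.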
Read full post

GridEx: An Algorithm for Knowledge Extraction from Black-Box Regressors

by Federico Sabbatini, Giovanni Ciatto, and Andrea Omicini

Abstract: Knowledge extraction methods are applied to ML-based predictors to obtain explainable representations of their behaviour whenever their lack of interpretability is a problem. Several algorithms have been proposed for knowledge extraction, mostly focusing on the extraction of either lists or trees of rules. Yet, most of them only support supervised learning tasks and, in particular, classification.
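To illustrate what extracting rules from a black-box regressor can look like, here is a toy sketch that partitions the input space into a grid, queries the black box inside each cell, and emits one constant-output interval rule per cell. All names and parameters are hypothetical, and this is only the general grid-partitioning idea, not the GridEx algorithm itself.

```python
# Toy sketch of grid-based rule extraction from a black-box regressor.
# This shows the general idea only (partition the input space, approximate
# the black box with one constant per cell); it is NOT the actual GridEx
# algorithm described in the paper.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(500, 2))
y = 3.0 * X[:, 0] + np.sin(6.0 * X[:, 1])          # hidden ground truth

black_box = RandomForestRegressor(random_state=0).fit(X, y)

def extract_rules(predictor, bins=3, samples_per_cell=50):
    """Split [0, 1]^2 into a bins x bins grid and emit one rule per cell."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    rules = []
    for i in range(bins):
        for j in range(bins):
            # Sample points inside the cell and query the black box there.
            pts = rng.uniform((edges[i], edges[j]),
                              (edges[i + 1], edges[j + 1]),
                              size=(samples_per_cell, 2))
            value = predictor.predict(pts).mean()
            rules.append((edges[i], edges[i + 1],
                          edges[j], edges[j + 1], value))
    return rules

for x0_lo, x0_hi, x1_lo, x1_hi, value in extract_rules(black_box):
    print(f"IF {x0_lo:.2f} <= x0 < {x0_hi:.2f} "
          f"AND {x1_lo:.2f} <= x1 < {x1_hi:.2f} "
          f"THEN y = {value:.2f}")
```

Each printed rule pairs a hyper-rectangle of the input space with a constant prediction, which is the kind of human-readable surrogate a regression-oriented extractor aims to produce.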
Read full post

Graph Neural Networks as the Copula Mundi between Logic and Machine Learning: A Roadmap

by Andrea Agiollo, Giovanni Ciatto, and Andrea Omicini

Abstract: Combining machine learning (ML) and computational logic (CL) is hard, mostly because of the inherently different ways in which they represent knowledge. In fact, while ML relies on fixed-size numeric representations leveraging vectors, matrices, or tensors of real numbers, CL relies on logic terms and clauses, which are unbounded in size and structure. Graph neural networks (GNN) are a recent addition to the ML landscape, introduced to handle graph-structured data in a sub-symbolic way.
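As a small illustration of the bridge the roadmap envisions, the sketch below encodes the logic term f(g(a), b) as a graph and runs one round of GNN-style mean-aggregation message passing over it. The node-feature scheme and the aggregation rule are illustrative assumptions, not the encoding proposed in the paper.

```python
# Minimal sketch: encode the logic term f(g(a), b) as a graph and run one
# round of mean-aggregation message passing over it. The feature scheme and
# aggregation rule are assumptions made for this example.
import numpy as np

# Nodes: 0=f, 1=g, 2=a, 3=b. Directed edges go from functor to argument.
nodes = ["f", "g", "a", "b"]
edges = [(0, 1), (0, 3), (1, 2)]          # f->g, f->b, g->a

# Toy node features: [is_functor, arity, is_constant].
features = np.array([
    [1.0, 2.0, 0.0],   # f/2
    [1.0, 1.0, 0.0],   # g/1
    [0.0, 0.0, 1.0],   # a
    [0.0, 0.0, 1.0],   # b
])

def message_passing(feats, edge_list):
    """One GNN-style layer: each node averages its neighbours' features."""
    n = feats.shape[0]
    agg = np.zeros_like(feats)
    degree = np.zeros(n)
    for src, dst in edge_list:
        # Treat edges as undirected for aggregation purposes.
        agg[src] += feats[dst]; degree[src] += 1
        agg[dst] += feats[src]; degree[dst] += 1
    degree = np.maximum(degree, 1.0)
    return (feats + agg / degree[:, None]) / 2.0   # mix self and neighbours

embeddings = message_passing(features, edges)
for name, vec in zip(nodes, embeddings):
    print(name, np.round(vec, 2))
```

The point of the example is the shape of the pipeline: a symbolic structure of arbitrary size becomes a graph, and a graph becomes fixed-size node embeddings that downstream ML components can consume.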
Read full post

On the Design of PSyKE: A Platform for Symbolic Knowledge Extraction

by Federico Sabbatini, Giovanni Ciatto, Roberta Calegari, and Andrea Omicini

How to access: http://ceur-ws.org/Vol-2963/paper14.pdf

Abstract: A common practice in modern explainable AI is to explain black-box machine learning (ML) predictors – such as neural networks – post hoc, by extracting symbolic knowledge from them in the form of either rule lists or decision trees. Acting as a surrogate model, the extracted knowledge aims at revealing the inner workings of the black box, thus enabling its inspection, representation, and explanation.
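The general workflow the abstract describes can be sketched in a few lines with scikit-learn: train an opaque predictor, fit a shallow decision tree on the predictor's own outputs, and read the tree as rules. This is a hedged illustration of post-hoc surrogate extraction in general; it does not use PSyKE's actual API, and the dataset and hyperparameters are assumptions.

```python
# Hedged sketch of post-hoc symbolic knowledge extraction: approximate a
# black-box classifier with a shallow decision tree trained on its own
# predictions, then print the tree as readable rules. Illustrative only;
# this is not PSyKE's API.
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
feature_names = load_iris().feature_names

# 1. Train the opaque predictor (the "black box").
black_box = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                          random_state=0).fit(X, y)

# 2. Fit an interpretable surrogate on the black box's own predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# 3. Inspect the extracted knowledge as human-readable rules, and measure
#    how faithfully the surrogate mimics the black box (fidelity).
print(export_text(surrogate, feature_names=feature_names))
print("fidelity:", (surrogate.predict(X) == black_box.predict(X)).mean())
```

Fitting the surrogate on the black box's predictions rather than on the original labels is what makes the extracted rules a description of the predictor itself, which is the property a knowledge-extraction platform is after.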
Read full post