Shallow2Deep: Restraining Neural Networks Opacity through Neural Architecture Search

by Andrea Agiollo, Giovanni Ciatto, and Andrea Omicini

Abstract: Recently, the Deep Learning (DL) research community has focused on developing efficient and highly performing Neural Networks (NN). Meanwhile, the eXplainable AI (XAI) research community has focused on making Machine Learning (ML) and Deep Learning methods interpretable and transparent, seeking explainability. This work is a preliminary study on the applicability of Neural Architecture Search (NAS), a sub-field of DL aiming at the automatic design of NN structures, to XAI.
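The core idea of NAS can be illustrated with a toy search loop. The sketch below is a rough, hypothetical illustration rather than the paper's actual Shallow2Deep method: it randomly samples small MLP architectures and scores each by validation accuracy minus a per-layer penalty, so that shallower (arguably less opaque) networks win ties against deeper ones. The candidate space, the penalty weight, and the scoring rule are all assumptions.

```python
# Minimal sketch of architecture search with a complexity penalty.
# Hypothetical, not the paper's Shallow2Deep algorithm.
import random

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

def score(architecture, penalty=0.02):
    """Validation accuracy minus a cost per hidden layer, so that
    shallower networks are preferred over deeper ones of equal accuracy."""
    model = MLPClassifier(hidden_layer_sizes=architecture,
                          max_iter=300, random_state=0)
    model.fit(X_tr, y_tr)
    return model.score(X_val, y_val) - penalty * len(architecture)

# Random search over a small, assumed space of layer configurations.
search_space = [(16,), (32,), (16, 16), (32, 16), (32, 32, 16)]
candidates = random.sample(search_space, k=3)
best = max(candidates, key=score)
print("selected architecture:", best)
```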

Graph Neural Networks as the Copula Mundi between Logic and Machine Learning: A Roadmap

by Andrea Agiollo, Giovanni Ciatto, and Andrea Omicini

Abstract: Combining machine learning (ML) and computational logic (CL) is hard, mostly because of the inherently different ways in which they represent knowledge. In fact, while ML relies on fixed-size numeric representations based on vectors, matrices, or tensors of real numbers, CL relies on logic terms and clauses, which are unbounded in size and structure. Graph neural networks (GNN) are a recent addition to the ML world, introduced to deal with graph-structured data in a sub-symbolic way.
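To make the representation gap concrete, the sketch below encodes a Prolog-like logic term, which has recursive and unbounded structure, as a node list plus an edge list, i.e., the kind of graph-structured input a GNN could consume. This is a minimal illustrative assumption, not a scheme from the paper; the Term class and the encoding are hypothetical.

```python
# Minimal sketch: flatten a logic term into GNN-style graph inputs,
# one node per (sub)term and one edge per argument position.
from dataclasses import dataclass, field

@dataclass
class Term:
    functor: str
    args: list = field(default_factory=list)

def term_to_graph(term):
    """Return (nodes, edges): node labels are functors, edges point
    from each term to the nodes of its arguments."""
    nodes, edges = [], []
    def visit(t):
        idx = len(nodes)
        nodes.append(t.functor)
        for arg in t.args:
            edges.append((idx, visit(arg)))
        return idx
    visit(term)
    return nodes, edges

# Encode the Prolog-like term f(g(X), a).
t = Term("f", [Term("g", [Term("X")]), Term("a")])
print(term_to_graph(t))
# (['f', 'g', 'X', 'a'], [(1, 2), (0, 1), (0, 3)])
```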

Symbolic Knowledge Extraction and Injection with Sub-symbolic Predictors: A Systematic Literature Review

by Giovanni Ciatto, Federico Sabbatini, Andrea Agiollo, Matteo Magnini, and Andrea Omicini

Abstract: In this paper we focus on the issue of opacity of sub-symbolic machine-learning predictors by promoting two complementary activities, namely symbolic knowledge extraction (SKE) from, and symbolic knowledge injection (SKI) into, sub-symbolic predictors. We consider symbolic any language that is intelligible and interpretable for both humans and computers. Accordingly, we propose general meta-models for both SKE and SKI, along with two taxonomies for the classification of SKE/SKI methods.
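One common SKE strategy in the literature is pedagogical extraction: fitting an interpretable surrogate on the labels produced by an opaque predictor, then reading the surrogate as rules. The sketch below illustrates that general idea only, under the assumption of a scikit-learn pipeline; it is not any specific method surveyed in the paper.

```python
# Minimal sketch of pedagogical symbolic knowledge extraction (SKE):
# fit a shallow decision tree on the outputs of an opaque predictor,
# then print its branches as if-then rules.
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# The opaque, sub-symbolic predictor.
black_box = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                          random_state=0).fit(X, y)

# SKE step: relabel the data with the predictor's own outputs and fit
# an interpretable surrogate whose branches read as symbolic rules.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))
print(export_text(surrogate))
```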