GridEx: An Algorithm for Knowledge Extraction from Black-Box Regressors

by Federico Sabbatini, Giovanni Ciatto, and Andrea Omicini

Abstract: Knowledge extraction methods are applied to ML-based predictors to attain explainable representations of their behaviour whenever the lack of interpretable results constitutes a problem. Several knowledge-extraction algorithms have been proposed, mostly focusing on the extraction of either rule lists or rule trees. Yet, most of them only support supervised learning – and, in particular, classification – tasks.
Read full post

On the Design of PSyKE: A Platform for Symbolic Knowledge Extraction

by Federico Sabbatini, Giovanni Ciatto, Roberta Calegari, and Andrea Omicini

URL: http://ceur-ws.org/Vol-2963/paper14.pdf

Abstract: A common practice in modern explainable AI is to explain black-box machine learning (ML) predictors – such as neural networks – post hoc, by extracting symbolic knowledge from them in the form of either rule lists or decision trees. By acting as a surrogate model, the extracted knowledge aims to reveal the inner workings of the black box, thus enabling its inspection, representation, and explanation.
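The surrogate-model idea the abstract describes can be illustrated with a minimal sketch: train an opaque regressor, query it for predictions, and fit a shallow decision tree to those predictions so its branches can be read as rules. This is a generic pedagogical-extraction example, not the PSyKE or GridEx implementation; the model choices, hyperparameters, and feature names below are assumptions for demonstration only.

```python
# Minimal sketch of surrogate-based symbolic knowledge extraction
# (pedagogical extraction); NOT the PSyKE/GridEx implementation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
y = X[:, 0] ** 2 + np.sin(3 * X[:, 1])  # synthetic ground-truth function

# 1. Train an opaque "black-box" regressor on the data.
black_box = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# 2. Query the black box and fit an interpretable surrogate
#    on its predictions (not on the original labels).
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# 3. Read the shallow tree as human-readable if-then rules.
print(export_text(surrogate, feature_names=["x0", "x1"]))
```

Because the surrogate is trained on the black box's outputs rather than the true labels, the extracted rules approximate the predictor's behaviour, which is exactly what a post-hoc explanation targets.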
Read full post