Project Abstract

Explainable AI (XAI) has recently emerged as a set of techniques that attempt to explain machine learning (ML) models. The intended recipients (explainees) are humans or other intelligent virtual entities. Transparency, trust, and debugging are the underlying needs calling for XAI. However, in real-world settings, systems are distributed, data are heterogeneous, the "system" knowledge is bounded, and privacy concerns are subject to variable constraints. Current XAI approaches cannot cope with such requirements.

The Consortium

HES-SO — University of Applied Sciences and Arts Western Switzerland
UNIBO — Alma Mater Studiorum Università di Bologna
UNILU — University of Luxembourg
OZU — Özyeğin University

A proud team!

The team has multidisciplinary competences, sharing Multi-Agent Systems as a common thread.

HES-SO People (from Switzerland)
Prof. Michael I. Schumacher, Full Professor at HES-SO
Dr. Davide Calvaresi, Senior Researcher at HES-SO
Dr. Jean-Paul Calbimonte, Senior Researcher at HES-SO
Victor Hugo Contreras Ordonez, PhD Student at HES-SO

Expectation: Personalized Explainable Artificial Intelligence for Decentralized Agents with Heterogeneous Knowledge

by Davide Calvaresi, Giovanni Ciatto, Amro Najjar, Reyhan Aydoğan, Leon Van der Torre, Andrea Omicini, and Michael I. Schumacher

Abstract: Explainable AI (XAI) has emerged in recent years as a set of techniques and methodologies to interpret and explain machine learning (ML) predictors. To date, many initiatives have been proposed. Nevertheless, current research efforts mainly focus on methods tailored to specific ML tasks and algorithms, such as image classification and sentiment analysis.