Project Abstract

Explainable AI (XAI) has recently emerged as a set of techniques attempting to explain machine learning (ML) models. The intended recipients (explainees) are humans or other intelligent virtual entities. Transparency, trust, and debugging are the main drivers behind XAI. However, in real-world settings, systems are distributed, data are heterogeneous, the "system" knowledge is bounded, and privacy concerns are subject to variable constraints. Current XAI approaches cannot cope with such requirements.

The Consortium

- HES-SO: University of Applied Sciences and Arts Western Switzerland
- UNIBO: Alma Mater Studiorum Università di Bologna
- UNILU: University of Luxembourg
- OZU: Özyeğin University

A proud team!

The team has multidisciplinary competences, with Multi-Agent Systems as the common thread.

HES-SO People (from Switzerland)
- Prof. Michael I. Schumacher, Full Professor at HES-SO
- Dr. Davide Calvaresi, Senior Researcher at HES-SO
- Dr. Jean-Paul Calbimonte, Senior Researcher at HES-SO

Deliverables

Year 1
- [D1.4] Data Management Plan (DMP)
- [D2.1] Tech report on symbolic knowledge extraction and injection
- [D2.2] Scientific paper on symbolic knowledge extraction and injection
- [D2.3] Software libraries supporting extraction and injection