Metrics for Evaluating Explainable Recommender Systems

by Joris Hulstijn, Igor Tchappi, Amro Najjar, and Reyhan Aydoğan

Abstract: Recommender systems aim to support their users by reducing information overload so that they can make better decisions. Recommender systems must be transparent, so that users can form mental models of the system's goals, internal state, and capabilities that are in line with its actual design. Explanations and transparent system behaviour should inspire trust and, ultimately, lead to more persuasive recommendations.
Read full post

Computational Accountability

by Joris Hulstijn

Abstract: Automated decision-making systems take decisions that matter. Some human or legal person remains responsible. Looking back, that person is accountable for the decisions made by the system, and may even be liable in case of damages. This puts constraints on the way in which decision-making systems are designed and on how they are deployed in organizations. In this paper, we analyze computational accountability in three steps.
Read full post

A Survey of Decision Support Mechanisms for Negotiation

by Berk Buzcu, Melissa Tessa, Igor Tchappi, Amro Najjar, Joris Hulstijn, Davide Calvaresi, and Reyhan Aydoğan

Abstract: Awareness of healthy lifestyles is increasing, opening the door to personalized intelligent health coaching applications. A demand for more than mere suggestions and mechanistic interactions has drawn attention to nutrition virtual coaching systems (NVC) as a bridge between human–machine interaction and recommender, informative, persuasive, and argumentation systems. NVC can rely on data-driven opaque mechanisms.
Read full post

Towards interactive explanation-based nutrition virtual coaching systems

by Berk Buzcu, Melissa Tessa, Igor Tchappi, Amro Najjar, Joris Hulstijn, Davide Calvaresi, and Reyhan Aydoğan

Abstract: Awareness of healthy lifestyles is increasing, opening the door to personalized intelligent health coaching applications. A demand for more than mere suggestions and mechanistic interactions has drawn attention to nutrition virtual coaching systems (NVC) as a bridge between human–machine interaction and recommender, informative, persuasive, and argumentation systems. NVC can rely on data-driven opaque mechanisms.
Read full post

What should I do and why?

by Joris Hulstijn and Leon van der Torre

Abstract: There is a lot of interest in explainable AI [2, 11]. When a system takes decisions that affect people, those affected can demand an explanation of how the decision was derived, or a justification of why the decision is warranted. Note that explanation and justification are related, but not the same [1]. The need for explanation or justification is more pressing when the system makes legal decisions [3], or when the decision is based on social or ethical norms [5].
Read full post

Towards interactive and social explainable artificial intelligence for digital history

by Albrecht Richard, Amro Najjar, Igor Tchappi, and Joris Hulstijn

Abstract: Due to recent developments and improvements in the field of artificial intelligence (AI), its methods are increasingly adopted in various domains, including historical research. However, modern state-of-the-art machine learning (ML) models are black boxes that lack transparency and interpretability. Therefore, explainable AI (XAI) methods are used to make black-box models more transparent and to inspire user trust.
Read full post