Metrics for Evaluating Explainable Recommender Systems
by Joris Hulstijn, Igor Tchappi, Amro Najjar, and Reyhan Aydoğan
Recommender systems aim to support their users by reducing information overload so that users can make better decisions. Recommender systems must be transparent, so that users can form mental models of the system’s goals, internal state, and capabilities that are in line with its actual design. Explanations and transparent behaviour of the system should inspire trust and, ultimately, lead to more persuasive recommendations. Here, explanations convey reasons why a recommendation is given or how the system forms its recommendations. This paper focuses on the question of how such claims about the effectiveness of explanations can be evaluated. Accordingly, we investigate various models that are used to assess the effects of explanations and recommendations. We discuss objective and subjective measurement and argue that both are needed. We define a set of metrics for measuring the effectiveness of explanations and recommendations. The feasibility of using these metrics is discussed in the context of a specific explainable recommender system in the food and health domain.
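To illustrate how objective and subjective measures of explanation effectiveness might be combined in practice, here is a minimal sketch of an A/B-style evaluation. All metric names, conditions, and data below are hypothetical illustrations, not taken from the paper:

```python
# Hypothetical sketch: combining one objective and one subjective metric
# when comparing a recommender with and without explanations.
# All names and numbers are illustrative, not from the paper.

def acceptance_rate(accepted: list[bool]) -> float:
    """Objective measure: fraction of recommendations the user accepted."""
    return sum(accepted) / len(accepted)

def mean_likert(ratings: list[int]) -> float:
    """Subjective measure: mean of 1-5 Likert responses, e.g. to the
    statement 'I trust the recommendations of this system.'"""
    return sum(ratings) / len(ratings)

# Condition A: recommendations shown with explanations.
with_expl = {"accepted": [True, True, False, True], "trust": [4, 5, 3, 4]}
# Condition B: the same recommendations shown without explanations.
without_expl = {"accepted": [True, False, False, True], "trust": [3, 3, 2, 4]}

# Effectiveness of explanations reported as the difference between conditions.
print("delta acceptance:", acceptance_rate(with_expl["accepted"])
      - acceptance_rate(without_expl["accepted"]))
print("delta trust:", mean_likert(with_expl["trust"])
      - mean_likert(without_expl["trust"]))
```

In a real study the per-condition samples would come from a user experiment, and the differences would be tested for statistical significance rather than reported as raw deltas.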
@inproceedings{HulstijnTNA23,
author = {Joris Hulstijn and
Igor Tchappi and
Amro Najjar and
Reyhan Aydo{\u{g}}an},
editor = {Davide Calvaresi and others},
title = {Metrics for Evaluating Explainable Recommender Systems},
booktitle = {Explainable and Transparent {AI} and Multi-Agent Systems - 5th International
Workshop, {EXTRAAMAS} 2023, London, UK,
Revised Selected Papers},
series = {Lecture Notes in Computer Science},
volume = {14127},
pages = {212--230},
publisher = {Springer},
year = {2023},
url = {https://link.springer.com/chapter/10.1007/978-3-031-40878-6_12},
doi = {10.1007/978-3-031-40878-6_12}
}