The quest of parsimonious XAI: A human-agent architecture for explanation formulation

by Yazan Mualla, Igor Tchappi, Timotheus Kampik, Amro Najjar, Davide Calvaresi, Abdeljalil Abbas-Turki, Stéphane Galland, and Christophe Nicolle

Abstract: With the widespread use of Artificial Intelligence (AI), understanding the behavior of intelligent agents and robots is crucial to guarantee successful human-agent collaboration, since it is not straightforward for humans to understand an agent’s state of mind. Recent empirical studies have confirmed that explaining a system’s behavior to human users fosters the latter’s acceptance of the system.

Explanation-Based Negotiation Protocol for Nutrition Virtual Coaching

by Berk Buzcu, Vanitha Varadhajaran, Igor Tchappi, Amro Najjar, Davide Calvaresi, and Reyhan Aydoğan

Abstract: People’s awareness of the importance of healthy lifestyles is rising, which opens new possibilities for personalized intelligent health and coaching applications. In particular, there is a need for more than simple recommendations and mechanistic interactions. Recent studies have identified nutrition virtual coaching systems (NVC) as a technological solution, possibly bridging technologies such as recommender, informative, persuasive, and argumentation systems.

Ethical and legal considerations for nutrition virtual coaches

by Davide Calvaresi, Rachele Carli, Jean-Gabriel Piguet, Victor Hugo Contreras Ordoñez, Gloria Luzzani, Amro Najjar, Jean-Paul Calbimonte, and Michael I. Schumacher

Abstract: Individuals’ choices and preferences are nowadays increasingly influenced by countless inputs and recommendations provided by artificial intelligence-based systems. Recommender systems (RS) have achieved remarkable accuracy in several domains, from infotainment to marketing and lifestyle. However, in sensitive use cases such as nutrition, there is a need for more complex dynamics and responsibilities beyond conventional RS frameworks.

Risk and Exposure of XAI in Persuasion and Argumentation: The case of Manipulation

by Rachele Carli, Amro Najjar, and Davide Calvaresi

Abstract: In recent decades, Artificial Intelligence (AI) systems have been increasingly adopted in assistive (possibly collaborative) decision-making tools. In particular, AI-based persuasive technologies are designed to steer or influence users’ behaviour, habits, and choices so as to facilitate the achievement of their own predetermined goals. Nowadays, the inputs received by such assistive systems rely heavily on data-driven AI approaches. Thus, it is imperative that both the process leading to the recommendations and the recommendations themselves are transparent and understandable to the user.