Expectation: Personalized Explainable Artificial Intelligence for Decentralized Agents with Heterogeneous Knowledge

by Davide Calvaresi, Giovanni Ciatto, Amro Najjar, Reyhan Aydoğan, Leon Van der Torre, Andrea Omicini, and Michael I. Schumacher

Abstract
Explainable AI (XAI) has emerged in recent years as a set of techniques and methodologies to interpret and explain machine learning (ML) predictors. To date, many initiatives have been proposed. Nevertheless, current research efforts mainly focus on methods tailored to specific ML tasks and algorithms, such as image classification and sentiment analysis.
Read full post

Towards Explainable Visionary Agents: License to Dare and Imagine

by Giovanni Ciatto, Amro Najjar, Jean-Paul Calbimonte, and Davide Calvaresi

Abstract
Since their appearance, computer programs have embodied discipline and structured approaches and methodologies. Yet, to this day, equipping machines with imaginative and creative capabilities remains one of the most challenging and fascinating goals we pursue. Intelligent software agents can behave intelligently in well-defined scenarios, relying on machine learning (ML), symbolic reasoning, and their developers' capability of tailoring smart behaviors to the particular application domain(s).
Read full post

On Explainable Negotiations via Argumentation

by Victor Contreras, Reyhan Aydoğan, Amro Najjar, and Davide Calvaresi

Abstract
TBD

How to access
URL: http://publications.hevs.ch/index.php/publications/show/2883

How to cite (BibTeX)
@incollection{canc-bnaic-2021-explanable-negotiations,
  author    = {Contreras, Victor and Aydoğan, Reyhan and Najjar, Amro and Calvaresi, Davide},
  booktitle = {Proceedings of BNAIC 2021},
  keywords  = {explainable negotiation},
  publisher = {ACM},
  title     = {On Explainable Negotiations via Argumentation},
  url       = {http://publications.
Read full post

Human-Social Robots Interaction: The Blurred Line between Necessary Anthropomorphization and Manipulation

by Rachele Carli, Amro Najjar, and Davide Calvaresi

Abstract
TBD

How to access
URL: http://publications.hevs.ch/index.php/publications/show/2932
DOI: https://doi.org/10.1145/3527188.3563941

How to cite (BibTeX)
@inproceedings{CarliNC22,
  author    = {Rachele Carli and Amro Najjar and Davide Calvaresi},
  editor    = {Christoph Bartneck and Takayuki Kanda and Mohammad Obaid and Wafa Johal},
  title     = {Human-Social Robots Interaction: The Blurred Line between Necessary Anthropomorphization and Manipulation},
  booktitle = {International Conference on Human-Agent Interaction, {HAI} 2022, Christchurch, New Zealand, December 5-8, 2022},
  pages     = {321--323},
  publisher = {{ACM}},
  year      = {2022},
  url       = {https://doi.
Read full post

The quest of parsimonious XAI: A human-agent architecture for explanation formulation

by Yazan Mualla, Igor Tchappi, Timotheus Kampik, Amro Najjar, Davide Calvaresi, Abdeljalil Abbas-Turki, Stéphane Galland, and Christophe Nicolle

Abstract
With the widespread use of artificial intelligence (AI), understanding the behavior of intelligent agents and robots is crucial to guarantee successful human-agent collaboration, since it is not straightforward for humans to understand an agent's state of mind. Recent empirical studies have confirmed that explaining a system's behavior to human users fosters the latter's acceptance of the system.
Read full post

Explanation-Based Negotiation Protocol for Nutrition Virtual Coaching

by Berk Buzcu, Vanitha Varadhajaran, Igor Tchappi, Amro Najjar, Davide Calvaresi, and Reyhan Aydoğan

Abstract
People's awareness of the importance of healthy lifestyles is rising. This opens new possibilities for personalized intelligent health and coaching applications. In particular, there is a need for more than simple recommendations and mechanistic interactions. Recent studies have identified nutrition virtual coaching systems (NVC) as a technological solution, possibly bridging technologies such as recommender, informative, persuasive, and argumentation systems.
Read full post

Ethical and legal considerations for nutrition virtual coaches

by Davide Calvaresi, Rachele Carli, Jean-Gabriel Piguet, Victor Hugo Contreras Ordoñez, Gloria Luzzani, Amro Najjar, Jean-Paul Calbimonte, and Michael I. Schumacher

Abstract
Choices and preferences of individuals are nowadays increasingly influenced by countless inputs and recommendations provided by artificial intelligence-based systems. Recommender systems (RS) have achieved remarkable accuracy in several domains, from infotainment to marketing and lifestyle. However, in sensitive use cases, such as nutrition, there is a need for more complex dynamics and responsibilities beyond conventional RS frameworks.
Read full post

Risk and Exposure of XAI in Persuasion and Argumentation: The case of Manipulation

by Rachele Carli, Amro Najjar, and Davide Calvaresi

Abstract
In the last decades, artificial intelligence (AI) systems have been increasingly adopted in assistive (possibly collaborative) decision-making tools. In particular, AI-based persuasive technologies are designed to steer or influence users' behaviour, habits, and choices to facilitate the achievement of their own, predetermined, goals. Nowadays, the inputs received by such assistive systems rely heavily on data-driven AI approaches. Thus, it is imperative that both the process leading to the recommendations and the recommendations themselves be transparent and understandable to the user.
Read full post

Metrics for Evaluating Explainable Recommender Systems

by Joris Hulstijn, Igor Tchappi, Amro Najjar, and Reyhan Aydoğan

Abstract
Recommender systems aim to support their users by reducing information overload so that they can make better decisions. Recommender systems must be transparent, so that users can form mental models about the system's goals, internal state, and capabilities that are in line with its actual design. Explanations and transparent behaviour of the system should inspire trust and, ultimately, lead to more persuasive recommendations.
Read full post
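The abstract above speaks about transparency and evaluation metrics only in general terms. As a purely illustrative sketch, not taken from the paper, the snippet below shows one commonly used computational notion, explanation fidelity: how closely a simple, human-readable surrogate model reproduces a black-box recommender's scores. The recommender, the features, and all numbers here are hypothetical toy examples.

```python
# Hypothetical illustration (not from the paper): explanation fidelity as one
# possible metric, i.e. how well an interpretable surrogate mimics a recommender.
import numpy as np

rng = np.random.default_rng(0)

# Toy "black-box" recommender: scores items from their features in a
# non-linear way that users cannot inspect directly.
def black_box_score(features: np.ndarray) -> np.ndarray:
    return np.tanh(features @ np.array([0.8, -0.3, 0.5])) + 0.1 * features[:, 0] ** 2

# Interpretable surrogate: a linear model fitted (least squares) to the
# black-box scores, whose weights can be shown to the user as an explanation.
def fit_linear_surrogate(features: np.ndarray, scores: np.ndarray) -> np.ndarray:
    weights, *_ = np.linalg.lstsq(features, scores, rcond=None)
    return weights

# Fidelity: 1 minus the normalized mean squared error between surrogate and
# black-box scores (1.0 means the explanation perfectly mimics the recommender).
def fidelity(features: np.ndarray, weights: np.ndarray, scores: np.ndarray) -> float:
    surrogate = features @ weights
    return 1.0 - np.mean((surrogate - scores) ** 2) / np.var(scores)

features = rng.normal(size=(500, 3))   # 500 candidate items, 3 features each
scores = black_box_score(features)
weights = fit_linear_surrogate(features, scores)
print(f"Explanation fidelity: {fidelity(features, weights, scores):.3f}")
```

A full evaluation in the spirit of the paper would also need user-facing measures, for instance whether explanations actually improve users' mental models and trust, which a purely computational score like this cannot capture.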

A Survey of Decision Support Mechanisms for Negotiation

by Berk Buzcu, Melissa Tessa, Igor Tchappi, Amro Najjar, Joris Hulstijn, Davide Calvaresi, and Reyhan Aydoğan

Abstract
Awareness of healthy lifestyles is increasing, opening the door to personalized intelligent health coaching applications. The demand for more than mere suggestions and mechanistic interactions has drawn attention to nutrition virtual coaching systems (NVC) as a bridge between human–machine interaction and recommender, informative, persuasive, and argumentation systems. NVC can rely on opaque, data-driven mechanisms.
Read full post

Towards interactive explanation-based nutrition virtual coaching systems

by Berk Buzcu, Melissa Tessa, Igor Tchappi, Amro Najjar, Joris Hulstijn, Davide Calvaresi, and Reyhan Aydoğan

Abstract
Awareness of healthy lifestyles is increasing, opening the door to personalized intelligent health coaching applications. The demand for more than mere suggestions and mechanistic interactions has drawn attention to nutrition virtual coaching systems (NVC) as a bridge between human–machine interaction and recommender, informative, persuasive, and argumentation systems. NVC can rely on opaque, data-driven mechanisms.
Read full post

Towards interactive and social explainable artificial intelligence for digital history

by Richard Albrecht, Amro Najjar, Igor Tchappi, and Joris Hulstijn

Abstract
Due to recent developments and improvements in the field of artificial intelligence (AI), its methods are increasingly adopted in various domains, including historical research. However, modern state-of-the-art machine learning (ML) models are black boxes that lack transparency and interpretability. Therefore, explainable AI (XAI) methods are used to make black-box models more transparent and inspire user trust.
Read full post