by Davide Calvaresi, Giovanni Ciatto, Amro Najjar, Reyhan Aydoğan, Leon van der Torre, Andrea Omicini, and Michael I. Schumacher
Abstract Explainable AI (XAI) has emerged in recent years as a set of techniques and methodologies to interpret and explain machine learning (ML) predictors. To date, many initiatives have been proposed. Nevertheless, current research efforts mainly focus on methods tailored to specific ML tasks and algorithms, such as image classification and sentiment analysis.
by Giovanni Ciatto, Amro Najjar, Jean-Paul Calbimonte, and Davide Calvaresi
Abstract Since their appearance, computer programs have embodied discipline and structured approaches and methodologies. Yet, to this day, equipping machines with imaginative and creative capabilities remains one of the most challenging and fascinating goals we pursue. Intelligent software agents can behave intelligently in well-defined scenarios, relying on Machine Learning (ML), symbolic reasoning, and their developers' capability of tailoring smart behaviors to the particular application domain(s).
by Victor Contreras, Reyhan Aydoğan, Amro Najjar, and Davide Calvaresi
Abstract TBD
How to access URL: http://publications.hevs.ch/index.php/publications/show/2883
How to cite Bibtex
@incollection{canc-bnaic-2021-explanable-negotiations,
  author    = {Contreras, Victor and Aydoğan, Reyhan and Najjar, Amro and Calvaresi, Davide},
  title     = {On Explainable Negotiations via Argumentation},
  booktitle = {Proceedings of BNAIC 2021},
  publisher = {ACM},
  keywords  = {explainable negotiation},
  url       = {http://publications.hevs.ch/index.php/publications/show/2883}
}
by Mara Graziani, Lidia Dutkiewicz, Davide Calvaresi, José Pereira Amorim, Katerina Yordanova, Mor Vered, Rahul Nair, Pedro Henriques Abreu, Tobias Blanke, Valeria Pulignano, John O. Prior, Lode Lauwaert, Wessel Reijers, Adrien Depeursinge, Vincent Andrearczyk, and Henning Müller
Abstract Since its emergence in the 1960s, Artificial Intelligence (AI) has grown to conquer many technology products and their fields of application.
by Rachele Carli and Amro Najjar and Davide Calvaresi
Abstract TBD
How to access URL: http://publications.hevs.ch/index.php/publications/show/2932
DOI: https://doi.org/10.1145/3527188.3563941
How to cite Bibtex
@inproceedings{CarliNC22,
  author    = {Rachele Carli and Amro Najjar and Davide Calvaresi},
  editor    = {Christoph Bartneck and Takayuki Kanda and Mohammad Obaid and Wafa Johal},
  title     = {Human-Social Robots Interaction: The Blurred Line between Necessary Anthropomorphization and Manipulation},
  booktitle = {International Conference on Human-Agent Interaction, {HAI} 2022, Christchurch, New Zealand, December 5-8, 2022},
  pages     = {321--323},
  publisher = {{ACM}},
  year      = {2022},
  url       = {https://doi.org/10.1145/3527188.3563941}
}
by Yazan Mualla and Igor Tchappi and Timotheus Kampik and Amro Najjar and Davide Calvaresi and Abdeljalil Abbas-Turki and Stéphane Galland and Christophe Nicolle
Abstract With the widespread use of Artificial Intelligence (AI), understanding the behavior of intelligent agents and robots is crucial to guarantee successful human-agent collaboration since it is not straightforward for humans to understand an agent’s state of mind. Recent empirical studies have confirmed that explaining a system’s behavior to human users fosters the latter’s acceptance of the system.
by Berk Buzcu, Vanitha Varadhajaran, Igor Tchappi, Amro Najjar, Davide Calvaresi and Reyhan Aydoğan
Abstract People’s awareness about the importance of healthy lifestyles is rising. This opens new possibilities for personalized intelligent health and coaching applications. In particular, there is a need for more than simple recommendations and mechanistic interactions. Recent studies have identified nutrition virtual coaching systems (NVC) as a technological solution, possibly bridging technologies such as recommender, informative, persuasive, and argumentation systems.
by Victor Hugo Contreras Ordoñez, Davide Calvaresi, and Michael I. Schumacher
Abstract Widely used in a growing number of domains, Deep Learning predictors are achieving remarkable results. However, the lack of transparency (i.e., opacity) of their inner mechanisms has raised trust and employability concerns. Nevertheless, several approaches fostering models' interpretability and explainability have been developed in the last decade. This paper combines approaches for local feature explanation.
by Victor Hugo Contreras Ordoñez, Niccolò Marini, Lora Fanda, Gaetano Manzo, Yazan Mualla, Jean-Paul Calbimonte, Michael I. Schumacher, and Davide Calvaresi
Abstract Background: Despite the advancement in eXplainable Artificial Intelligence, the explanations provided by model-agnostic predictors still call for improvements (i.e., lack of accurate descriptions of predictors’ behaviors). Contribution: We present a tool for Deep Explanations and Rule Extraction (DEXiRE) to approximate rules for Deep Learning models with any number of hidden layers.
by Davide Calvaresi, Rachele Carli, Jean-Gabriel Piguet, Victor Hugo Contreras Ordoñez, Gloria Luzzani, Amro Najjar, Jean-Paul Calbimonte, and Michael I. Schumacher
Abstract Choices and preferences of individuals are nowadays increasingly influenced by countless inputs and recommendations provided by artificial intelligence-based systems. The accuracy of recommender systems (RS) has achieved remarkable results in several domains, from infotainment to marketing and lifestyle. However, in sensitive use-cases, such as nutrition, there is a need for more complex dynamics and responsibilities beyond conventional RS frameworks.
by Rachele Carli, Amro Najjar, and Davide Calvaresi
Abstract In the last decades, Artificial Intelligence (AI) systems have been increasingly adopted in assistive (possibly collaborative) decision-making tools. In particular, AI-based persuasive technologies are designed to steer/influence users' behaviour, habits, and choices to facilitate the achievement of their own - predetermined - goals. Nowadays, the inputs received by these assistive systems heavily leverage AI data-driven approaches. Thus, it is imperative that both the process leading to the recommendations and the recommendations themselves are transparent and understandable to the user.
by Victor Hugo Contreras Ordoñez, Andrea Bagante, Niccolò Marini, Michael I. Schumacher, Vincent Andrearczyk, and Davide Calvaresi
Abstract TBD
How to access TBD
How to cite Bibtex TBD
by Berk Buzcu, Melissa Tessa, Igor Tchappi, Amro Najjar, Joris Hulstijn, Davide Calvaresi and Reyhan Aydoğan
Abstract Awareness of healthy lifestyles is increasing, opening the door to personalized intelligent health coaching applications. A demand for more than mere suggestions and mechanistic interactions has driven attention to nutrition virtual coaching systems (NVC) as a bridge between human–machine interaction and recommender, informative, persuasive, and argumentation systems. NVC can rely on data-driven opaque mechanisms.
by Rachele Carli and Davide Calvaresi
Abstract There has been a growing interest in Explainable Artificial Intelligence (henceforth XAI) models among researchers and AI programmers in recent years. Indeed, the development of highly interactive technologies that can collaborate closely with users has made explainability a necessity. This is intended to reduce mistrust and the sense of unpredictability that AI can create, especially among non-experts. Moreover, the potential of XAI as a valuable resource has been recognized, considering that it can make intelligent systems more user-friendly and reduce the negative impact of black-box systems.
by Simona Tiribelli and Davide Calvaresi
Abstract Health Recommender Systems (HRS) are promising Artificial-Intelligence (AI)-based tools fostering healthy lifestyles and therapy adherence in healthcare and medicine. Among the most supported areas, it is worth mentioning active aging (AA). However, current HRS supporting AA raise ethical challenges that still need to be properly formalized and explored. This study proposes to rethink HRS for AA through an autonomy-based ethical analysis.
by Victor Hugo Contreras Ordoñez, Michael I. Schumacher, and Davide Calvaresi
Abstract Deep Learning (DL) models are increasingly dealing with heterogeneous data (i.e., a mix of structured and unstructured data), calling for adequate eXplainable Artificial Intelligence (XAI) methods. Nevertheless, only some of the existing techniques consider the uncertainty inherent to the data. To this end, this study proposes a pipeline to explain heterogeneous data-based DL models by combining embedding analysis, rule extraction methods, and probabilistic models.
by Berk Buzcu, Emre Kuru, Davide Calvaresi and Reyhan Aydoğan
Abstract As recommendation systems become increasingly prevalent in numerous fields, the need for clear and persuasive interactions with users is rising. Integrating explainability into these systems is emerging as an effective approach to enhance user trust and sociability. This research focuses on recommendation systems that utilize a range of explainability techniques to foster trust by providing understandable personalized explanations for the recommendations made.
by Berk Buzcu, Yvan Pannatier, Reyhan Aydoğan, Michael I. Schumacher, Jean-Paul Calbimonte, and Davide Calvaresi
Abstract Existing agent-based chatbot frameworks need seamless mechanisms to include explainable dialogic engines within the contextual flow. To this end, this paper presents a set of novel modules within the EREBOTS agent-based framework for chatbot development, including dialog-based plug-and-play custom algorithms, agnostic back/front ends, and embedded interactive explainable engines that can manage human feedback at run time.