by Contreras, Victor and Aydoğan, Reyhan and Najjar, Amro and Calvaresi, Davide
Abstract TBD
How to access URL: http://publications.hevs.ch/index.php/publications/show/2883

How to cite (BibTeX):

@incollection{canc-bnaic-2021-explanable-negotiations,
  author    = {Contreras, Victor and Aydoğan, Reyhan and Najjar, Amro and Calvaresi, Davide},
  title     = {On Explainable Negotiations via Argumentation},
  booktitle = {Proceedings of BNAIC 2021},
  publisher = {ACM},
  keywords  = {explainable negotiation},
  url       = {http://publications.hevs.ch/index.php/publications/show/2883}
}
by Yazan Mualla and Igor Tchappi and Timotheus Kampik and Amro Najjar and Davide Calvaresi and Abdeljalil Abbas-Turki and Stéphane Galland and Christophe Nicolle
Abstract With the widespread use of Artificial Intelligence (AI), understanding the behavior of intelligent agents and robots is crucial to guarantee successful human-agent collaboration since it is not straightforward for humans to understand an agent’s state of mind. Recent empirical studies have confirmed that explaining a system’s behavior to human users fosters the latter’s acceptance of the system.
by Berk Buzcu, Vanitha Varadhajaran, Igor Tchappi, Amro Najjar, Davide Calvaresi and Reyhan Aydoğan
Abstract People’s awareness about the importance of healthy lifestyles is rising. This opens new possibilities for personalized intelligent health and coaching applications. In particular, there is a need for more than simple recommendations and mechanistic interactions. Recent studies have identified nutrition virtual coaching systems (NVC) as a technological solution, possibly bridging technologies such as recommender, informative, persuasive, and argumentation systems.
by Contreras Ordoñez Victor Hugo, Davide Calvaresi, and Michael I. Schumacher
Abstract Widely used in a growing number of domains, Deep Learning predictors are achieving remarkable results. However, the lack of transparency (i.e., opacity) of their inner mechanisms has raised trust and employability concerns. Nevertheless, several approaches fostering models of interpretability and explainability have been developed in the last decade. This paper combines approaches for local feature explanation (i.
by Contreras Ordoñez Victor Hugo, Niccolo Marini, Lora Fanda, Gaetano Manzo, Yazan Mualla, Jean-Paul Calbimonte, Michael I. Schumacher, and Davide Calvaresi
Abstract Background: Despite the advancement in eXplainable Artificial Intelligence, the explanations provided by model-agnostic predictors still call for improvements (i.e., lack of accurate descriptions of predictors’ behaviors). Contribution: We present a tool for Deep Explanations and Rule Extraction (DEXiRE) to approximate rules for Deep Learning models with any number of hidden layers.