Risk and Exposure of XAI in Persuasion and Argumentation: The case of Manipulation

by Rachele Carli, Amro Najjar, and Davide Calvaresi


In the last decades, Artificial Intelligence (AI) systems have been increasingly adopted in assistive (possibly collaborative) decision-making tools. In particular, AI-based persuasive technologies are designed to steer/influence users' behaviour, habits, and choices to facilitate the achievement of their own - predetermined - goals. Nowadays, the inputs received by assistive systems rely heavily on AI data-driven approaches. Thus, it is imperative that both the process leading to the recommendations and the recommendations themselves are transparent and understandable to the user. The Explainable AI (XAI) community has progressively contributed to "opening the black box", ensuring the interaction's effectiveness, and pursuing the safety of the individuals involved. However, principles and methods ensuring the efficacy of explanations and the retention of information by the human have not yet been introduced. The risk is to underestimate the context dependency and subjectivity of how explanations are understood, interpreted, and deemed relevant. Moreover, even a plausible (and possibly expected) explanation can lead to an imprecise or incorrect outcome, or to a misunderstanding of it. This can produce unbalanced and unfair circumstances, such as granting a financial advantage to the system owner/provider to the detriment of the user.

This paper highlights that explanations alone - especially in the context of persuasive technologies - are not sufficient to protect users' psychological and physical integrity. On the contrary, explanations could be misused, becoming themselves a tool of manipulation. Therefore, we suggest characteristics that safeguard an explanation from being manipulative, as well as legal principles to be used as criteria for evaluating the operation of XAI systems, from both an ex-ante and an ex-post perspective.

How to cite


@inproceedings{DBLP:conf/atal/CarliNC22,
  author       = {Rachele Carli and
                  Amro Najjar and
                  Davide Calvaresi},
  editor       = {Davide Calvaresi and
                  Amro Najjar and
                  Michael Winikoff and
                  Kary Fr{\"{a}}mling},
  title        = {Risk and Exposure of {XAI} in Persuasion and Argumentation: The case
                  of Manipulation},
  booktitle    = {Explainable and Transparent {AI} and Multi-Agent Systems - 4th International
                  Workshop, {EXTRAAMAS} 2022, Virtual Event, May 9-10, 2022, Revised
                  Selected Papers},
  series       = {Lecture Notes in Computer Science},
  volume       = {13283},
  pages        = {204--220},
  publisher    = {Springer},
  year         = {2022},
  url          = {https://doi.org/10.1007/978-3-031-15565-9\_13},
  doi          = {10.1007/978-3-031-15565-9\_13},
  timestamp    = {Tue, 18 Oct 2022 22:16:54 +0200},
  biburl       = {https://dblp.org/rec/conf/atal/CarliNC22.bib},
  bibsource    = {dblp computer science bibliography, https://dblp.org}
}