Computational Accountability

by Joris Hulstijn

Abstract

Automated decision-making systems make decisions that matter. Some human or legal person remains responsible for them. Looking back, that person is accountable for the decisions made by the system and may even be liable in case of damages. This puts constraints on how decision-making systems are designed and how they are deployed in organizations. In this paper, we analyze computational accountability in three steps. First, being accountable is analyzed as a relationship between the actor deploying the system and a critical forum of subjects, users, experts and developers. Second, we discuss system design. In principle, evidence must be collected about the decision rule and the case data that were applied. However, many AI algorithms are not interpretable by humans. In that case, internal controls must ensure that the system uses valid algorithms and reliable training data sets that are appropriate for the application domain. Third, we discuss the governance model: the roles, responsibilities, procedures and infrastructure needed to ensure effective operation of these controls. The paper ends with a case study in the IT audit domain to illustrate practical feasibility.
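The abstract states that, in principle, evidence must be collected about the decision rule and the case data that were applied. The sketch below (in Python) illustrates, purely as an assumption and not as the paper's own design, what such an evidence record could look like; the names DecisionRecord, rule_id and the credit-scoring example are hypothetical.

# Illustrative sketch only (hypothetical, not taken from the paper):
# a minimal "decision record" an automated decision-making system could emit,
# so the accountable actor can later produce evidence for a critical forum.
from dataclasses import dataclass, field
from datetime import datetime, timezone
import hashlib
import json


@dataclass
class DecisionRecord:
    case_id: str      # identifier of the case the decision applies to
    case_data: dict   # input data the system actually used
    rule_id: str      # version of the decision rule / model that was applied
    outcome: str      # decision taken by the system
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def fingerprint(self) -> str:
        """Hash of the record, so later tampering with the evidence is detectable."""
        payload = json.dumps(self.__dict__, sort_keys=True).encode("utf-8")
        return hashlib.sha256(payload).hexdigest()


# Example: log one (hypothetical) decision so it can be reproduced and explained later.
record = DecisionRecord(
    case_id="2023-00042",
    case_data={"income": 32000, "requested_amount": 5000},
    rule_id="credit-policy-v3",
    outcome="rejected",
)
print(record.fingerprint())

Storing such records per decision is one way an actor could support the accountability relationship described in the paper; when the decision rule itself is not interpretable, the rule_id would instead point to the validated model version and training data set covered by internal controls.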

How to cite

Bibtex

@inproceedings{Hulstijn23,
  author       = {Joris Hulstijn},
  editor       = {Matthias Grabmair and
                  Francisco Andrade and
                  Paulo Novais},
  title        = {Computational Accountability},
  booktitle    = {Proceedings of the Nineteenth International Conference on Artificial
                  Intelligence and Law, {ICAIL} 2023, Braga, Portugal, June 19-23, 2023},
  pages        = {121--130},
  publisher    = {{ACM}},
  year         = {2023},
  url          = {https://dl.acm.org/doi/10.1145/3594536.3595122},
  doi          = {10.1145/3594536.3595122}
}