Presentation at the WORK 2025 conference in Turku, Finland.
Abstract
A lot of effort is put into managing the experienced unruliness of AI through diverse attempts to explain how it works. Explainable AI (XAI) and parallel attempts to increase the understandability and transparency of AI systems face multiple challenges, from the difficulty of determining what counts as an adequate explanation to what it means to understand an AI system. Based on insights from two parallel projects inquiring into preserving records of AI use (INTERPARES Trust AI, www.interparestrustai.org) and documenting data practices (CAPTURE, www.uu.se/en/research/capture) using paradata (i.e. information on the making, processing and use of data), this presentation shows how much of the experienced unruliness of AI can be traced back to the limits of how the scope of what needs to be understood and documented is defined. The work draws on analysis of stipulations and practices of documenting technologies, and on extensive qualitative and quantitative research on process documentation, user needs and preferences.
In the efforts to document AI and make it explainable and understandable, it is often reduced to individual algorithms and technologies requiring explanation and documentation, whereas the social practice of engaging with AI is omitted – essentially leaving the human either completely out of the loop, or included but not articulated in sufficient detail. Rather than merely trying to make AI understandable as a constellation of 'technological technologies', it needs to be accounted for as an arrangement of cultural techniques embedded in a socio-techno-informational and material agencement of practices. Recent research on paradata shows that its adequacy depends heavily on its capacity to embody or inscribe the momentarily needed aspects of practice rather than on the documentation of any individual technical details. Any documentation that covers only a technology, without accounting for its linkage to specific practices, is likely to be ineffective not because it is technically incomplete but because it misses the human in the loop.
File attachments
IstoHuvilaUnrulyAI-WORK2025-handout.pdf
(921.88 KB)
