This project aimed to address two key gaps in the field of AI explainability: the lack of attention to stakeholder backgrounds, and the need for a more holistic approach that considers both social and technical implications. Its goal was to explore ways of building a shared understanding of future AI systems.
To achieve this, the project used speculative design and the stack as its main design approaches. Speculative design provided immersion and context, while the stack supported the breakdown and analysis of system layers. These approaches were applied to a case study of residential shared mobility.
The project involved stakeholders including a real estate company, a transport provider, and technology and behavioral researchers. Through interviews and discussions with these stakeholders, the project aimed to surface tensions and challenges within the shared mobility system and to establish a shared understanding among participants.
Four speculative artifacts related to shared mobility were designed to provoke discussions and explore implications and challenges. Participants interacted with these artifacts individually, followed by group discussions to gather perspectives and reflections.
Overall, the project fostered an environment for shared understanding by considering stakeholder backgrounds, employing design methodologies, and addressing social and technical implications. It shed light on three aspects: the case study of residential shared mobility, including how an ideal system could be achieved and the roles different stakeholders would play in that future; reflection on the process of using speculative design and the stack in combination so that each supports the other; and, finally, insight into shared understanding as an approach to explainability.
This project was carried out in partnership with FreedomLab.