Future Design Practices

The shiny magic box (ESR 14)

Pulling out the relations

The narrative of the black-box algorithm has led to many efforts to make the processes of AI and machine learning more transparent. In recent years, however, the narrative around AI has shifted towards treating the underlying algorithms as almost magical entities: the previously mysterious black box has received a new coat of paint and is now presented as the shiny magic box that will solve all our challenges and struggles. We investigate these shifts of perspective by tracing ends-driven narratives, seeking out the cracks and fractures in commercial stories about algorithmic magic boxes and prototyping levers of co-creation for a collective shifting of imaginaries. Through ethnographic observations and workshops, we collect qualitative scenarios and expressions from industry practitioners, which we analyze for the projected entanglements of AI and morality.

While techno-solutionism has long presented emerging technology as the solution to our troubles, current narratives of AI push these promises to new levels. With promises of AI therapists, AI girlfriends, or AI lawyers, these algorithms no longer just take on mundane, repetitive labor, but also the messy, tricky, and challenging work of relational maintenance. This asks us to trust the box with a whole new layer of entangled inputs: our values, social connections, dreams, and aspirations, in the hope that it will work its magic and unlock what has previously been kept from us by our own inability to move beyond human flaws such as bias, egoism, jealousy, and greed.

However, this notion ignores two aspects. First, relationships and social entanglements have always shaped algorithms, in whichever box or form, on political and normative levels. They structure social relations at a deep level, and it will never be possible to treat them as pure inputs rather than as actual building blocks of the algorithmic boxes. Second, whatever we intentionally or unintentionally feed into these shiny boxes will be changed and affected by their structures in ways we cannot fully trace. Values and norms always stand in relational tensions that arise between people, and hence are continually reconfigured and constantly shifting. These fluid elements cannot be fed into our algorithmic boxes as static inputs. Likewise, they will keep shape-shifting within these boxes, changing their forms and impacts as they are squashed into the confined measurements of algorithmic molds, coming out as “unintended consequences” in shapes very different from those they previously held.

What could take the place of utility-oriented product and service narratives? What would a world look like in which we abandon those ends-oriented narratives in favour of a means-oriented practice, in which we do not turn to boxes and tools to take the process of labor from us, but rather turn towards the work of engaging with the complicated nature of shifting relations and social entanglements from a place of care and curiosity? What would a practice look like that considers this kind of work worthwhile beyond the commercial value it could provide, and regards it critically as one of the cornerstones of what it means to live in a society of kinship? Based on our observations, we present a framework and a vocabulary for identifying current tropes of AI functionality and their relation to the automation of morality, the impact these speculative automations have on our sociopolitical relations, and how these tropes contribute to a political purification of AI algorithms and, with it, to the discursive construction of AI as the final moral device: the infallible shiny magic box.

Related work: https://zenodo.org/record/8156178
