Annual Showcase

Every year, we proudly showcase studies, design experiments, workshop formats, and more from our 15 PhD fellows, the prototeams, and the DCODE Labs graduates.

Annual Showcase 2022

This is the first DCODE showcase, featuring the amazing work of our PhD cohort and master's students since the project's kickoff in 2021. Projects range in theme from algorithmic imaginaries and data commons to rethinking digital consent and queering AI.

    AI and the challenge of speculative Ethics (ESR14)

    Future Design Practices

    This research project focuses on AI imaginaries and their contemporary socio-technical aspects, revealing the moral commitments that underlie utopian and dystopian visions of AI development.

    Un-Working Health Data (ESR8)

    Sustainable Socio-Economic Models

    A deeper understanding of how health data comes into value is needed to persuasively defend any alternative data governance models. This research project traces the flows of value(s) during the health data creation process. 


    Urban Recipes (ESR4 + ESR12)

    Trusted Interactions

    Urban explorations are carried out by making and exchanging recipes. This design experiment combines spatial drifting with the recursive practices of recipe-making to facilitate more-than-human commoning.


    Plumbing the Machine Learning Pipeline (Prototeam 1)

    Future Design Practices

    Developing machine learning systems requires multidisciplinary teams working across the ML pipeline. In this workshop, participants are invited to act out a fictional ML design scenario and reflect on how values are embedded and lost in industry practices.


    Rethinking Digital Consent (DCODE Labs)

    Democratic Data Governance
    • Aniek Kempeneers - TU Delft MSc

    Digital platforms rely heavily on harvesting end-user data to provide personalized content. This master thesis offers a design vision of digital consent practices and disclosure interactions as a process rather than a moment.


    Creating monsters (DCODE Labs)

    Future Design Practices
    • Anne Arzberger - TU Delft MSc

    We all have unconscious tendencies to categorise and discriminate. These biases are perpetuated by structures of power baked into the data we feed to algorithms. This master thesis offers a collection of queer ambiguous toys, each illustrating a different kind of reflexive practice and designer-AI collaboration.


    Decolonizing AI Histories in Practice (ESR15)

    Future Design Practices

    We cannot decolonize our digital futures without confronting the ghosts from our pasts. This research project extends the temporality of current speculative models to actively make space for multiplicity in AI design practices.


    Speculating and experimenting with alternative ToS (ESR12)

    Democratic Data Governance

    Ongoing and experimental speculative readings of Terms of Service (ToS) serve to reveal what these documents represent. In doing so, this research project takes a reflective and transformative alternative route.


    Tracing algorithmic imaginaries in everyday life (ESR1)

    Inclusive Digital Futures

    By expanding design anthropology through the lens of social learning theories and developing novel methods for an ethnographic study of algorithms, this research project contributes to the more-than-human turn in HCI research and to decentralizing the study of algorithmic impact.

    Principled requirements of ML workflows (ESR2)

    Inclusive Digital Futures

    By translating ethical guidelines into actual system requirements, this research project focuses on overcoming forms of harmful algorithmic behavior and explores computational strategies for putting ethics into practice in current Machine Learning workflows.