Attempts are now being made to mitigate bias by introducing artificial intelligence (AI). Machine learning (a form of AI) is used increasingly in both the public and private sectors to make decisions, which has exposed new intersectional categories of protected identities as well as new categories of algorithmic discrimination. Yet despite its many promising and interesting abilities, concerns have recently been raised that AI further entrenches social norms, values and categories by baking in existing structures of power and prejudice, thus increasing injustice rather than combating it.
Recognising both the potential and the limitations of AI, this project explores the novel idea of reflexive designer-AI interactions: a new form of human-machine interaction aimed at more reflective design practitioners who are able to surface, dismantle and re-think personal and collective imaginings. Seeing a key role for reflection, often-criticised AI behaviours such as inconsistency, unpredictability and confrontation are explored as potentially meaningful triggers of critical and self-reflective thinking and decision-making in design. Following a speculative and introspective research-through-design approach, the project examines such reflexive interactions in the context of gender representation in children's toys, although the approach could prove useful for many other kinds of bias and contexts. Children, however, form a particularly vulnerable target group. Because individual identity reflects the power relations embedded in social practices of inclusion and exclusion, the formation of gender identity is especially subject to the perpetuation of a society's norms and values. Toys have evolved into 'cultural signifiers' that symbolise contemporary societal norms and values.
Taking into account the insights from the experiment, as well as prototype testing with children and evaluative expert interviews, this project concludes that reflexive interactions – as a proposed alternative to traditional human-AI interactions and a complement to current design practice – can productively surface, dismantle and re-familiarise personal bias and collective imaginings. This, in turn, is expected to minimise the problematic bias that is often unconsciously incorporated and materialised in design products. Furthermore, the experiment suggests that AI behaviours often described negatively, like confusion and inconsistency, also carry the power to trigger reflective practices that help surface and challenge bias.
Check out the video to learn more about her new practice in designer-AI collaboration, ‘Creating Monsters’.