OpenAI’s ChatGPT reached an estimated 100 million users within two months of its release, making it one of the fastest-growing apps in history. As generative AI’s popularity grows, so does the misrepresentation, and the potential for counter-representation, of statistically diverse, minority, or underrepresented communities and individuals. It is becoming crucial to explore how policymaking could respond to emerging issues and phenomena around representation in generative AI that are rooted in normative, extractive, and colonial practices.
As we hand over more and more of the data that inevitably feeds generative AI models, a few of us are asking:
- Who and what counts and is counted in the data used to build generative AI models and tools?
- Whose identities and realities are being (mis)interpreted, distorted or even erased through generative AI models and tools?
- How do we comprehend and recognize ourselves in and through generative AI models?
- What images and stories then are being mirrored back to us, shaping how we come to know ourselves and the world?
To explore this topic, a few DCODE PhD fellows, in collaboration with Open Future, are inviting curious technologists, designers, artists, radical policymakers, and activists to contribute to an ongoing archive we’re calling the Hall of Mirrors. By sharing instances and examples of the (mis)interpretation of marginalised identities, cultures, stories, places, and things by generative AI tools, these contributions will help us gain insight into the implications, ethics, and politics of (mis)interpretation for individuals and for society as a whole.
The concept of mirrors in relation to AI can be understood in different ways:
- Self-Reflection: Mirrors can be seen as a metaphor for self-reflection in the context of AI. Systems can be designed to reflect on their own actions and decision-making processes. This self-reflective capability can enable an AI to analyse its own behaviour and biases and to improve its algorithms.
- Data Mirroring: In the realm of data management, mirroring refers to the practice of creating exact copies of data sets. In the context of AI, data mirroring can be employed to enhance system reliability and protect against data loss or corruption.
- Simulating Human Perception: Mirrors are often used as tools to project an image of reality. In AI, mirrors can be employed as a metaphor for simulating human perception. By analysing and processing visual or sensory data, AI systems can mimic the human ability to perceive and understand the surrounding environment.
- Reflecting User Behaviour: In AI, mirrors can be seen as a metaphor for reflecting user behaviour. As users interact with AI systems, the systems can collect data on their preferences, choices, and actions. This data can then be used to personalize the user experience, provide recommendations, or adapt the system’s behaviour to better align with the user.
How you can help
Help us resist cultural engineering and the hegemonic force of the informatics of domination, and move toward more statistically diverse data and models for the underrepresented majority among us.
- Step 1 – Using your generative AI tool of choice (e.g. ChatGPT, Stable Diffusion, or Midjourney), write a prompt that will raise a few eyebrows. Make it personal to you, your identity, and the culture(s), communit(ies), and place(s) you belong to. Generate a piece of text or an image that raises questions and invites critique. This will help us gather evidence to gain insight into the questions we’ve listed above.
- Step 2 – Upload your text-based prompt and the AI-generated text or images via this link: https://discord.com/invite/yxKzCZWWF
In this project we partnered with Open Future.
Images were created using Stable Diffusion and Midjourney in an exploration of (mis)representations in generative AI: queer futurity through the lens of national identity (e.g., Syria, the Republic of the Congo, Mexico, Colombia, Australia).