Automated decision-making systems are increasingly being used by governments and public sector organisations to improve service provision, optimise public infrastructure, and enhance the overall working and management of public institutions. The proliferation of such systems, along with the digitisation of workflows, is shifting our socio-technological landscape and reshaping our interactions with public institutions and services. As organisations leverage Artificial Intelligence (AI) to achieve greater efficiencies, there is an urgent need to establish robust governance mechanisms that reduce risks and potential harm to the public.
Current discussions of AI governance are often limited to regulation or policy debates and fail to consider the underlying workflows, actors, and organisational practices through which decisions about AI are made. There is a lack of focus on how such decisions are made, who makes them, and on what basis. Furthermore, much remains invisible about the relationships among the multiple stakeholders and institutions involved in decision-making about various aspects of AI.
This project used qualitative methods to explore contemporary practices for governing AI and to identify challenges faced by public sector organisations and actors grappling with a shifting technical and regulatory landscape. It also examined relationships and interdependencies among actors and institutions, and their potential influence on decision-making about AI.