People’s Compute: Design and the Politics of AI Infrastructures

This post is adapted from a talk given at “The Politics of AI: Governance, Resistance, Alternatives” at Goldsmiths, University of London, on 18 September 2025.

The hidden layer

When we talk about making AI more fair, transparent, or accountable, we typically focus on the apps and interfaces that people actually use. We ask: How can we make this recommendation algorithm more explainable? How can users appeal a decision made by this system? These are important questions. But they miss something fundamental.

The premise of this talk is simple: AI infrastructure constrains what’s possible at the application level. The data centers, compute resources, platforms, and APIs on which AI applications are built all shape what designers can and cannot do. Before a single line of application code is written, decisions have already been made about who controls the underlying systems.

This plays out across every domain where AI is being deployed. Agricultural machinery depends on cloud services. Medical imaging runs on specific hardware platforms. Smart city projects require massive infrastructure investments. In each case, the infrastructure layer sets the terms.

Design’s complicity

Here’s an uncomfortable truth for designers: every smart device we create reinforces Big Tech’s control over infrastructure. Even products that position themselves as alternatives to smartphones, like the Humane AI Pin or the Rabbit R1, ultimately depend on the same concentrated cloud computing resources. The innovation happens at the surface while the underlying power structures remain unchanged.

A different question

What if communities owned the compute that shapes their futures?

This isn’t a new question. In the early 1990s, Amsterdam launched De Digitale Stad (The Digital City). At a time when internet access was restricted to technical elites, DDS provided free accounts, email, and web space to anyone with a modem. Public terminals made the net accessible to all residents. The project used the city as an organizing metaphor, with virtual squares, houses, and public spaces.

The history of DDS is instructive. It started as a radical access project, became enormously popular, and eventually faced tensions between community demands for democratic control and organizational leadership. It was privatized, sold to British Telecom, and absorbed by a commercial provider. But volunteers responded by creating “De echte Digitale Stad” (The Real Digital City) to continue the original mission.

The lesson here is that community-initiated digital infrastructure can thrive. But the history also reveals the tensions between user democracy and organizational control, as well as how privatization redirects public goods toward profit. Communities persist in reclaiming their digital commons, even when the institutions they built are captured.

A political framework

To think through AI infrastructure politics, I draw on what’s called the socialist republican ideal. The core idea is that freedom should be understood as collective self-determination, not just individual choice. This is different from the liberal framework that dominates most AI ethics discussions, which treats autonomy as something individuals possess and must protect from external interference.

If we take collective self-determination seriously, then the question isn’t just whether an individual user can contest an AI decision. The question is whether communities can shape the technological systems that structure their lives.

Alternative ways of doing AI

Research on the political economy of AI shows that current development is characterized by resource-intensive centralization. The massive compute requirements of large language models benefit companies with huge infrastructure investments. This creates dependencies and reinforces existing power imbalances.

But technical choices are political choices. Researchers have called for alternative approaches to AI development that prioritize reduced compute requirements, greater transparency, and broader accessibility. Work on Indigenous data sovereignty, circular systems, and sustainable technology challenges the dominant model. These aren’t just technical alternatives. They represent different visions of who AI is for and who gets to shape it.

Three moves

To design for democratic AI, we need to make three moves.

From applications to infrastructures

Design must reveal and engage the invisible. Data centers, compute resources, maintenance work: these things typically remain hidden in everyday life. But they shape what’s possible. Designers need to expand their focus beyond user interfaces to understand how technical abstractions manifest in the products they create.

This means adopting methodological approaches that capture extended temporal and spatial dimensions. It means studying infrastructure over time, tracking how technologies move across contexts, and using speculative design to explore capabilities, limitations, and dependencies.

Ethnographic methods, focusing on what researchers call “infrastructure time,” offer underexplored pathways for connecting material, everyday experiences with the invisible forces of AI infrastructure.

From individuals to collectives

Most design has historically focused on individual consumers or users. Even approaches that address collective needs typically treat collectives as just collections of individuals. However, designing for groups as a whole requires a different approach.

This means shifting from designing for users to designing with publics. It means repositioning designers as embedded accomplices who build capacity within communities. The goal isn’t to deliver a finished product but to create infrastructures for ongoing appropriation, what some researchers call “design after design.”

We live in an era where many member-based associations that could serve as vehicles for collective action have weakened or disappeared. Part of the design task may be helping to construct new forms of collective organization around technological concerns.

From idealism to realism

Most design that seeks to transform social arrangements starts from abstract principles and then applies them to concrete situations. A realist approach inverts this. It starts from actual power relations, not abstract ethics. It asks: Who does what to whom, for whose benefit?

This means beginning from specific situations in their historical context. It means focusing on the interests of people in those contexts and how those interests are collectively articulated. And it means accounting for how design work interacts with existing power relations, rather than assuming that good intentions or participatory processes automatically produce beneficial outcomes.

Why now

We’re at an interesting political moment. The post-political neoliberal technocracy that dominated recent decades is being challenged by populist anti-politics. Between these two unsatisfying options lies a democratic possibility.

Contemporary global politics has shifted toward what’s called sovereigntism, and this trend has notably affected AI policy discussions. A focus on infrastructures, collectives, and real politics can help articulate positions that differ from both technocratic management and reactionary populism. It might help us make actual headway toward social progress rather than oscillating between these poles.

Get involved

This is ongoing work. The full paper is available as a preprint at osf.io/uaewn_v2. I welcome feedback and collaboration at c.p.alfrink@tudelft.nl.


Kars Alfrink is a researcher at Delft University of Technology working on contestable AI. More at contestable.ai.