Postdoc update – December 2025

Six months since my last update. The pace hasn’t slowed. Here’s what I’ve been up to and what’s on the horizon for the next six months or so. But first, a very welcome December break. Happy holidays, dear reader.

Happenings

People’s Compute at Goldsmiths: On September 18, I presented my research agenda, “People’s Compute: Design and the Politics of AI Infrastructures,” at the Politics of AI symposium at Goldsmiths, University of London. Many thanks to Dan McQuillan, Fieke Jansen, Jo Lindsay Walton, and Pat Brody for the invitation and for putting together such a thought-provoking program. Read the transcript.

Digital Autonomy Unconference: On October 3, I attended the Digital Autonomy Unconference in Amsterdam, which was organized in collaboration with Code for NL and focused on enhancing digital autonomy within Dutch public institutions. The Digital Autonomy Competence Center, for which I serve as a research associate, was also launched at this event. Read the news item.

Master’s Graduation Projects: Two more of my students have graduated. Ameya Sawant completed a project about designer autonomy and GenAI (August 29, with Fernando Secomandi as chair). David Mieras completed a project about the responsible use of AI in policy preparation (October 28, with Lianne Simonse as chair).

Personal Grant: I mentioned going for a personal grant the last time around. Unfortunately, I did not advance to the final round. However, I did receive some useful feedback and will try again next year. Onwards and upwards.

Designing Responsible AI: Sara Colombo, Francesca Mauri, and I ran the second iteration of our master’s elective course, which builds on responsible research and innovation, value-sensitive design, and design fiction. See the course description here. A more detailed write-up of how the course works is forthcoming.

International Contestable AI Workshop: On November 18, I had the pleasure of hosting a delegation from Denmark and the UK for a full-day workshop about Contestable AI at TU Delft. Read the report.

Enterprise UX: On November 21, I delivered an invited talk titled “Reclaiming Autonomy: Designing AI-Enhanced Work Tools That Empower Users” at the Enterprise UX conference in Amersfoort. Thanks to Peter Boersma for the invitation. The references to Office Space and Luddism were surprisingly well-received. Read the transcript.

Your difficult design doctor, holding forth at Enterprise UX.

NIAS Workshop: I participated in a workshop at NIAS on November 26-27, exploring permacomputing, server collectives, and networks of consent. Incredibly inspiring, it has given me many new ideas for approaching my own ongoing research. Many thanks to John Boy for the invitation. View the event page.

Stop the Cuts (continued): The fight against cuts to higher education continues. On December 9 we once again went on strike, and I joined the demonstration in Amsterdam with over 7,000 participants. Our far-right government may have fallen, but the cuts remain on the table. Now is the time to maintain pressure on the parties forming a government. If you work in academia and want to act, join a union (AOb or FNV) and sign up for the WOinActie newsletter.

Advisory Today, Co-Decisive Tomorrow? A paper based on a year-long participant observation of a smart city project in Amsterdam, co-authored with Mike de Kreek, Tessa Steenkamp, and Martijn de Waal (part of the Human Values for Smarter Cities project), has been accepted for the 2026 Participatory Design Conference. Very pleased about that one. A preprint will be up once we submit the final camera-ready version, I think.

ThingsCon TH/NGS: At this year’s ThingsCon conference on December 12, Fieke Jansen, Sunjoo Lee, Lena Trotereau, and I ran a workshop titled “From Mud to Models” exploring regenerative futures for community AI. Thanks to Iskander Smit for bringing us together. A report is forthcoming.

Participants building clocks powered by mud batteries at our ThingsCon workshop.

Open Letter on AI Policy: I was part of the supporting team for an open letter calling on Dutch politicians to develop a national AI policy that promotes social progress. A special thanks goes out to Cristina Zaga for taking the initiative and leading the charge on this one, but also to the core team members Roel Dobbe, Iris van Rooij, Lilian de Jong, Wouter Nieuwenhuizen, Marcela Suarez, Wiep Hamstra, and Olivia Guest, and to the rest of the supporting team: Felienne Hermans, Mark D., Eelco Herder, Emile van Bergen, Siri Beerends, Nolen Gertz, Paul Peters, Gerry McGovern, and Jelle van der Ster. Sign and share the letter here.

On deck

Looking ahead to the new year, I have several writing projects to complete: one chapter on contestability for an edited volume on the philosophy of engineering, and another chapter for an edited volume on community AI.

I will be wrapping up my duties as associate chair for the CHI 2026 design subcommittee, and I will take on the same role for the DIS 2026 artifacts & systems subcommittee.

I will do the analysis and write-up of a field evaluation of the Vision Model Macroscope prototype (also part of the aforementioned Human Values for Smarter Cities project). I am also providing support on several other papers that will hopefully find their way into venues such as FAccT, DIS, and elsewhere.

Mockup of the Vision Model Macroscope prototype.

Finally, I am part of several small grant applications exploring topics that include the potential of computational argumentation techniques to enable more interactive implementations of contestable AI, as well as contestability in digital systems used for evidence management in international criminal justice.
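Since "computational argumentation" may sound abstract, here is a minimal sketch of the underlying idea: a Dung-style abstract argumentation framework, in which arguments attack one another and the "grounded extension" is the set of arguments that survives scrutiny. The sketch below is my own illustration in Python; the argument names and attack relation are hypothetical, not taken from any of these proposals.

    # Minimal Dung-style abstract argumentation framework (illustrative sketch).
    # Arguments are opaque labels; attacks is a set of (attacker, target) pairs.

    def grounded_extension(arguments, attacks):
        """Return the grounded extension: the least set of arguments in which
        every attacker of an accepted argument is itself counter-attacked."""
        attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
        accepted = set()
        changed = True
        while changed:
            changed = False
            for a in arguments - accepted:
                # Accept a if each of its attackers is attacked by an
                # argument we have already accepted (i.e., a is defended).
                if all(any((d, b) in attacks for d in accepted) for b in attackers[a]):
                    accepted.add(a)
                    changed = True
        return accepted

    # Hypothetical contestation scenario: a user objects to a system's
    # justification, and evidence is introduced that counters the objection.
    arguments = {"decision_justified", "user_objection", "rebuttal_evidence"}
    attacks = {
        ("user_objection", "decision_justified"),
        ("rebuttal_evidence", "user_objection"),
    }
    print(sorted(grounded_extension(arguments, attacks)))
    # ['decision_justified', 'rebuttal_evidence']

What makes this interesting for contestable AI is the interactivity: adding a user's new objection as an argument and recomputing the extension yields an inspectable account of why a decision stands or falls.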

That’s most of it, though not all of it; this has gotten way too long already. Thanks for reading this far, if you have, and best wishes for 2026.

People’s Compute: Design and the Politics of AI Infrastructures

This post is adapted from a talk given at “The Politics of AI: Governance, Resistance, Alternatives” at Goldsmiths, University of London, on 18 September 2025.

The hidden layer

When we talk about making AI more fair, transparent, or accountable, we typically focus on the apps and interfaces that people actually use. We ask: How can we make this recommendation algorithm more explainable? How can users appeal a decision made by this system? These are important questions. But they miss something fundamental.

The premise of this talk is simple: AI infrastructure constrains what’s possible at the application level. The data centers, the compute resources, the platforms, and APIs that AI applications are built on top of all shape what designers can and cannot do. Before a single line of application code is written, decisions have already been made about who controls the underlying systems.

This plays out across every domain where AI is being deployed. Agricultural machinery depends on cloud services. Medical imaging runs on specific hardware platforms. Smart city projects require massive infrastructure investments. In each case, the infrastructure layer sets the terms.

Design’s complicity

Here’s an uncomfortable truth for designers: every smart device we create reinforces Big Tech’s control over infrastructure. Even products that position themselves as alternatives to smartphones, like the Humane AI Pin or the Rabbit R1, ultimately depend on the same concentrated cloud computing resources. The innovation happens at the surface while the underlying power structures remain unchanged.

A different question

What if communities owned the compute that shapes their futures?

This isn’t a new question. In the early 1990s, Amsterdam launched De Digitale Stad (The Digital City). At a time when internet access was restricted to technical elites, DDS provided free accounts, email, and web space to anyone with a modem. Public terminals made the net accessible to all residents. The project used the city as an organizing metaphor, with virtual squares, houses, and public spaces.

The history of DDS is instructive. It started as a radical access project, became enormously popular, and eventually faced tensions between community demands for democratic control and organizational leadership. It was privatized, sold to British Telecom, and absorbed by a commercial provider. But volunteers responded by creating “De echte Digitale Stad” (The Real Digital City) to continue the original mission.

The lesson here is that community-initiated digital infrastructure can thrive. But the history also reveals the tensions between user democracy and organizational control, as well as how privatization redirects public goods toward profit. Communities persist in reclaiming their digital commons, even when the institutions they built are captured.

A political framework

To think through AI infrastructure politics, I draw on what’s called the socialist republican ideal. The core idea is that freedom should be understood as collective self-determination, not just individual choice. This is different from the liberal framework that dominates most AI ethics discussions, which treats autonomy as something individuals possess and must protect from external interference.

If we take collective self-determination seriously, then the question isn’t just whether an individual user can contest an AI decision. The question is whether communities can shape the technological systems that structure their lives.

Alternative ways of doing AI

Research on the political economy of AI shows that current development is characterized by resource-intensive centralization. The massive compute requirements of large language models benefit companies with huge infrastructure investments. This creates dependencies and reinforces existing power imbalances.

But technical choices are political choices. Researchers have called for alternative approaches to AI development that prioritize reduced compute requirements, greater transparency, and broader accessibility. There is work being done on Indigenous data sovereignty, circular systems, and sustainable technology that challenges the dominant model. These aren’t just technical alternatives. They represent different visions of who AI is for and who gets to shape it.

Three moves

To design for democratic AI, I propose we need to make three moves.

From applications to infrastructures

Design must reveal and engage the invisible. Data centers, compute resources, maintenance work: these things typically remain hidden in everyday life. But they shape what’s possible. Designers need to expand their focus beyond user interfaces to understand how technical abstractions manifest in the products they create.

This means adopting methodological approaches that capture extended temporal and spatial dimensions. It means studying infrastructure over time, tracking how technologies move across contexts, and using speculative design to explore capabilities, limitations, and dependencies.

Ethnographic methods, focusing on what researchers call “infrastructure time,” offer underexplored pathways for connecting material, everyday experiences with the invisible forces of AI infrastructure.

From individuals to collectives

Most design has historically focused on individual consumers or users. Even approaches that address collective needs typically treat collectives as just collections of individuals. However, designing for groups as a whole requires a different approach.

This means shifting from designing for users to designing with publics. It means repositioning designers as embedded accomplices who build capacity within communities. The goal isn’t to deliver a finished product but to create infrastructures for ongoing appropriation, what some researchers call “design after design.”

We live in an era where many member-based associations that could serve as vehicles for collective action have weakened or disappeared. Part of the design task may be helping to construct new forms of collective organization around technological concerns.

From idealism to realism

Most design that seeks to transform social arrangements starts from abstract principles and then applies them to concrete situations. A realist approach inverts this. It starts from actual power relations, not abstract ethics. It asks: Who does what to whom, for whose benefit?

This means beginning from specific situations in their historical context. It means focusing on the interests of people in those contexts and how those interests are collectively articulated. And it means accounting for how design work interacts with existing power relations, rather than assuming that good intentions or participatory processes automatically produce beneficial outcomes.

Why now

We’re at an interesting political moment. The post-political neoliberal technocracy that dominated recent decades is being challenged by populist anti-politics. Between these two unsatisfying options lies a democratic possibility.

Contemporary global politics has shifted toward what’s called sovereigntism, and this trend has notably affected AI policy discussions. A focus on infrastructures, collectives, and real politics can help articulate positions that differ from both technocratic management and reactionary populism. It might help us make actual headway toward social progress rather than oscillating between these poles.

Get involved

This is ongoing work. The full paper is available as a preprint at osf.io/uaewn_v2. I welcome feedback and collaboration at c.p.alfrink@tudelft.nl.


Kars Alfrink is a researcher at Delft University of Technology working on contestable AI. More at contestable.ai.