Postdoc update – December 2025

Six months since my last update. The pace hasn’t slowed. Here’s what I’ve been up to and what’s on the horizon for the next six months or so. But first, a very welcome December break. Happy holidays, dear reader.

Happenings

People’s Compute at Goldsmiths: On September 18, I presented my research agenda, “People’s Compute: Design and the Politics of AI Infrastructures,” at the Politics of AI symposium at Goldsmiths, University of London. Many thanks to Dan McQuillan, Fieke Jansen, Jo Lindsay Walton, and Pat Brody for the invitation and for putting together such a thought-provoking program. Read the transcript.

Digital Autonomy Unconference: On October 3, I attended the Digital Autonomy Unconference in Amsterdam, which was organized in collaboration with Code for NL and focused on enhancing digital autonomy within Dutch public institutions. The Digital Autonomy Competence Center was also launched at this event, for which I serve as a research associate. Read the news item.

Master’s Graduation Projects: Two more of my students have graduated. Ameya Sawant completed a project about designer autonomy and GenAI (August 29, with Fernando Secomandi as chair). David Mieras completed a project about the responsible use of AI in policy preparation (October 28, with Lianne Simonse as chair).

Personal Grant: I mentioned going for a personal grant the last time around. Unfortunately, I did not advance to the final round. However, I did receive some useful feedback and will try again next year. Onwards and upwards.

Designing Responsible AI: Sara Colombo, Francesca Mauri, and I ran the second iteration of our master’s elective course, which builds on responsible research and innovation, value-sensitive design, and design fiction. See the course description here. A more detailed write-up of how the course works is forthcoming.

International Contestable AI Workshop: On November 18, I had the pleasure of hosting a delegation from Denmark and the UK for a full-day workshop about Contestable AI at TU Delft. Read the report.

Enterprise UX: On November 21, I delivered an invited talk titled “Reclaiming Autonomy: Designing AI-Enhanced Work Tools That Empower Users” at the Enterprise UX conference in Amersfoort. Thanks to Peter Boersma for the invitation. The references to Office Space and Luddism were surprisingly well-received. Read the transcript.

Your difficult design doctor, holding forth at Enterprise UX.

NIAS Workshop: I participated in a workshop at NIAS on November 26-27, exploring permacomputing, server collectives, and networks of consent. Incredibly inspiring, it has given me many new ideas for approaching my own ongoing research. Many thanks to John Boy for the invitation. View the event page.

Stop the Cuts (continued): The fight against cuts to higher education continues. On December 9 we once again went on strike and I joined the demonstration in Amsterdam with over 7,000 participants. Our far-right government may have fallen, but the cuts remain on the table. Now is the time to maintain pressure on the parties forming a government. If you work in academia and want to act, join a union (AOb or FNV) and sign up for the WOinActie newsletter.

Advisory Today, Co-Decisive Tomorrow? A paper based on a year-long participant observation of a smart city project in Amsterdam, co-authored with Mike de Kreek, Tessa Steenkamp, and Martijn de Waal (part of the Human Values for Smarter Cities project), has been accepted for the 2026 Participatory Design Conference. Very pleased about that one. A preprint will be up once we submit the final camera-ready version, I think.

ThingsCon TH/NGS: At this year’s ThingsCon conference on December 12, Fieke Jansen, Sunjoo Lee, Lena Trotereau, and I ran a workshop titled “From Mud to Models” exploring regenerative futures for community AI. Thanks to Iskander Smit for bringing us together. A report is forthcoming.

Participants building clocks powered by mud batteries at our ThingsCon workshop.

Open Letter on AI Policy: I was part of the supporting team for an open letter calling on Dutch politicians to develop a national AI policy that promotes social progress. A special thanks goes out to Cristina Zaga for taking the initiative and leading the charge on this one, but also to the core team members Roel Dobbe, Iris van Rooij, Lilian de Jong, Wouter Nieuwenhuizen, Marcela Suarez, Wiep Hamstra, and Olivia Guest, and the supporting team of which I was a part: Felienne Hermans, Mark D., Eelco Herder, Emile van Bergen, Siri Beerends, Nolen Gertz, Paul Peters, Gerry McGovern, Kars Alfrink, and Jelle van der Ster. Sign and share the letter here.

On deck

Looking ahead to the new year, I have several writing projects to complete: one chapter on contestability for an edited volume on the philosophy of engineering, and another chapter for an edited volume on community AI.

I will be wrapping up my duties as associate chair for the CHI 2026 design subcommittee. And I will also serve as associate chair for the DIS 2026 artifacts & systems subcommittee.

I will do the analysis and write-up of a field evaluation of the Vision Model Macroscope prototype (also part of the aforementioned Human Values for Smarter Cities project). I am also providing support on several other papers that will hopefully find their way into venues such as FAccT, DIS, and elsewhere.

Mockup of the Vision Model Macroscope prototype.

Finally, I am part of several small grant applications exploring topics that include the potential of computational argumentation techniques to enable more interactive implementations of contestable AI, as well as contestability in digital systems used for evidence management in international criminal justice.

That’s most of it, although not all of it, but this has gotten way too long already. Thanks for reading this far, if you have, and best wishes for 2026.

People’s Compute: Design and the Politics of AI Infrastructures

This post is adapted from a talk given at “The Politics of AI: Governance, Resistance, Alternatives” at Goldsmiths, University of London, on 18 September 2025.

The hidden layer

When we talk about making AI more fair, transparent, or accountable, we typically focus on the apps and interfaces that people actually use. We ask: How can we make this recommendation algorithm more explainable? How can users appeal a decision made by this system? These are important questions. But they miss something fundamental.

The premise of this talk is simple: AI infrastructure constrains what’s possible at the application level. The data centers, the compute resources, the platforms, and APIs that AI applications are built on top of all shape what designers can and cannot do. Before a single line of application code is written, decisions have already been made about who controls the underlying systems.

This plays out across every domain where AI is being deployed. Agricultural machinery depends on cloud services. Medical imaging runs on specific hardware platforms. Smart city projects require massive infrastructure investments. In each case, the infrastructure layer sets the terms.

Design’s complicity

Here’s an uncomfortable truth for designers: every smart device we create reinforces Big Tech’s control over infrastructure. Even products that position themselves as alternatives to smartphones, like the Humane AI Pin or the Rabbit R1, ultimately depend on the same concentrated cloud computing resources. The innovation happens at the surface while the underlying power structures remain unchanged.

A different question

What if communities owned the compute that shapes their futures?

This isn’t a new question. In the early 1990s, Amsterdam launched De Digitale Stad (The Digital City). At a time when internet access was restricted to technical elites, DDS provided free accounts, email, and web space to anyone with a modem. Public terminals made the net accessible to all residents. The project used the city as an organizing metaphor, with virtual squares, houses, and public spaces.

The history of DDS is instructive. It started as a radical access project, became enormously popular, and eventually faced tensions between community demands for democratic control and organizational leadership. It was privatized, sold to British Telecom, and absorbed by a commercial provider. But volunteers responded by creating “De echte Digitale Stad” (The Real Digital City) to continue the original mission.

The lesson here is that community-initiated digital infrastructure can thrive. But it also reveals the tensions between user democracy and organizational control, as well as how privatization redirects public goods toward profit. Communities persist in reclaiming their digital commons, even when the institutions they built are captured.

A political framework

To think through AI infrastructure politics, I draw on what’s called the socialist republican ideal. The core idea is that freedom should be understood as collective self-determination, not just individual choice. This is different from the liberal framework that dominates most AI ethics discussions, which treats autonomy as something individuals possess and must protect from external interference.

If we take collective self-determination seriously, then the question isn’t just whether an individual user can contest an AI decision. The question is whether communities can shape the technological systems that structure their lives.

Alternative ways of doing AI

Research on the political economy of AI shows that current development is characterized by resource-intensive centralization. The massive compute requirements of large language models benefit companies with huge infrastructure investments. This creates dependencies and reinforces existing power imbalances.

But technical choices are political choices. Researchers have called for alternative approaches to AI development that prioritize reduced compute requirements, greater transparency, and broader accessibility. There’s work being done on Indigenous data sovereignty, circular systems, and sustainable technology that challenge the dominant model. These aren’t just technical alternatives. They represent different visions of who AI is for and who gets to shape it.

Three moves

To design for democratic AI, I propose we need to make three moves.

From applications to infrastructures

Design must reveal and engage the invisible. Data centers, compute resources, maintenance work: these things typically remain hidden in everyday life. But they shape what’s possible. Designers need to expand their focus beyond user interfaces to understand how technical abstractions manifest in the products they create.

This means adopting methodological approaches that capture extended temporal and spatial dimensions. It means studying infrastructure over time, tracking how technologies move across contexts, and using speculative design to explore capabilities, limitations, and dependencies.

Ethnographic methods, focusing on what researchers call “infrastructure time,” offer underexplored pathways for connecting material, everyday experiences with the invisible forces of AI infrastructure.

From individuals to collectives

Most design has historically focused on individual consumers or users. Even approaches that address collective needs typically treat collectives as just collections of individuals. However, designing for groups as a whole requires a different approach.

This means shifting from designing for users to designing with publics. It means repositioning designers as embedded accomplices who build capacity within communities. The goal isn’t to deliver a finished product but to create infrastructures for ongoing appropriation, what some researchers call “design after design.”

We live in an era where many member-based associations that could serve as vehicles for collective action have weakened or disappeared. Part of the design task may be helping to construct new forms of collective organization around technological concerns.

From idealism to realism

Most design that seeks to transform social arrangements starts from abstract principles and then applies them to concrete situations. A realist approach inverts this. It starts from actual power relations, not abstract ethics. It asks: Who does what to whom, for whose benefit?

This means beginning from specific situations in their historical context. It means focusing on the interests of people in those contexts and how those interests are collectively articulated. And it means accounting for how design work interacts with existing power relations, rather than assuming that good intentions or participatory processes automatically produce beneficial outcomes.

Why now

We’re at an interesting political moment. The post-political neoliberal technocracy that dominated recent decades is being challenged by populist anti-politics. Between these two unsatisfying options lies a democratic possibility.

Contemporary global politics has shifted toward what’s called sovereigntism, and this trend has notably affected AI policy discussions. A focus on infrastructures, collectives, and real politics can help articulate positions that differ from both technocratic management and reactionary populism. It might help us make actual headway toward social progress rather than oscillating between these poles.

Get involved

This is ongoing work. The full paper is available as a preprint at osf.io/uaewn_v2. I welcome feedback and collaboration at c.p.alfrink@tudelft.nl.


Kars Alfrink is a researcher at Delft University of Technology working on contestable AI. More at contestable.ai.

Reclaiming Autonomy: Designing AI-Enhanced Work Tools That Empower Users

Based on an invited talk delivered at Enterprise UX on November 21, 2025, in Amersfoort, the Netherlands.

In a previous life, I was a practicing designer. These days I’m a postdoc at TU Delft, researching something called Contestable AI. Today I want to explore how we can design AI work tools that preserve worker autonomy—focusing specifically on large language models and knowledge work.

The Meeting We’ve All Been In

Who’s been in this meeting? Your CEO saw a demo, and now your PM is asking you to build some kind of AI feature into your product.

This is very much like Office Space: decisions about tools and automation are being made top-down, without consulting the people who actually do the work.

What I want to explore are the questions you should be asking before you go off and build that thing. Because we shouldn’t just be asking “can we build it?” but also “should we build it?” And if so, how do we build it in a way that empowers workers rather than diminishes them?

Part 1: Reality Check

What We’re Actually Building

Large language models can be thought of as databases containing programs for transforming text (Chollet, 2022). When we prompt, we’re querying that database.

The simpler precursors of LLMs would let you take the word “king” and ask it to make it female, outputting “queen.” Now, language models work similarly but can do much more complex transformations—give it a poem, ask it to write in the style of Shakespeare, and it outputs a transformed poem.
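To make the “king” to “queen” example concrete, here is a minimal sketch of the word-vector arithmetic it alludes to. The vectors and the most_similar helper below are made up purely for illustration; real embedding models such as word2vec learn vectors with hundreds of dimensions from large text corpora.

```python
# A toy illustration of the word-vector precursor idea (all values are made up).
import numpy as np

embeddings = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "man":   np.array([0.5, 0.2, 0.1]),
    "woman": np.array([0.5, 0.2, 0.9]),
    "queen": np.array([0.9, 0.8, 0.9]),
}

def most_similar(vector, vocabulary):
    """Return the word whose vector points in the most similar direction (cosine similarity)."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(vocabulary, key=lambda word: cosine(vector, vocabulary[word]))

# "Make it female": remove the 'man' direction, add the 'woman' direction.
transformed = embeddings["king"] - embeddings["man"] + embeddings["woman"]
print(most_similar(transformed, embeddings))  # -> "queen" with these toy numbers
```

The point of the sketch is only that a “transformation of text” can be ordinary arithmetic on numbers, done at vastly larger scale inside an LLM.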

The key point: they are sophisticated text transformation machines. They are not magic. Understanding this helps us design better.

Three Assumptions to Challenge

Before adding AI, we should challenge three things:

  1. Functionality: Does it actually work?
  2. Power: Who really benefits?
  3. Practice: What skills or processes are transformed?

1. Functionality: Does It Work?

One problem with AI projects is that functionality is often assumed instead of demonstrated (Raji et al., 2022). And historically, service sector automation has not led to expected productivity gains (Benanav, 2020).

What this means: don’t just trust the demo. Demand evidence in your actual context. Ask them to show it working in production, not a prototype.

2. Power: Who Benefits?

Current AI developments seem to favor employers over workers. Because of this, some have started taking inspiration from the Luddites (Merchant, 2023).

It’s a common misconception that Luddites hated technology. They hated losing control over their craft. They smashed frames operated by unskilled workers that undercut skilled craftspeople (Sabie et al., 2023).

What we should be asking: who gains power, and who loses it? This isn’t about being anti-technology. It’s about being pro-empowerment.

3. Practice: What Changes?

AI-enabled work tools can have second-order effects on work practices. Automation breaks skill transmission from experts to novices (Beane, 2024). For example, surgical robots that can be remotely operated by expert surgeons mean junior surgeons don’t learn by doing.

Some work that is challenging, complex, and requires human connection should be preserved so that learning can happen.

On the other hand, before we automate a task, we should ask whether a process should exist at all. Otherwise, we may be simply reifying bureaucracy. As Michael Hammer put it: “don’t automate, obliterate” (1990).

Every automation project is an opportunity to liberate skilled professionals from bureaucracy.

Part 2: Control → Autonomy

All three questions are really about control. Control over whether tools serve you. Control over developing expertise. This is fundamentally about autonomy.

What Autonomy Is

A common definition of autonomy is the effective capacity for self-governance (Prunkl, 2022). It consists of two dimensions:

  • Authenticity: holding beliefs that are free from manipulation
  • Agency: having meaningful options to act on those beliefs

Both are necessary for autonomy.

Office Space examples:

  • Authenticity: Joanna’s manager tells her the minimum is 15 pieces of flair, then criticizes her for wearing “only” the minimum. Her understanding of the rules gets manipulated.
  • Agency: Lumbergh tells Peter, “Yeah, if you could come in on Saturday, that would be great.” Technically a request, but the power structure eliminates any real choice.

How AI Threatens Autonomy

AI can threaten autonomy in a variety of ways. Here are a few examples.

Manipulation — Like TikTok’s recommendation algorithm. It exploits cognitive vulnerabilities, creating personalized content loops that maximize engagement time. This makes it difficult for users to make autonomous decisions about their attention and time use.

Restricted choice — LinkedIn’s automated hiring tools can automatically exclude qualified candidates based on biased pattern matching. Candidates are denied opportunities without human review and lack the ability to contest the decision.

Diminished competence — Routinely outsourcing writing, problem-solving, or analysis to ChatGPT without critical engagement can lead to atrophying the very skills that make professionals valuable. Similar to how reliance on GPS erodes navigational abilities.

These are real risks, not hypothetical. But we can design AI systems to protect against these threats—and we can do more. We can design AI systems to actively promote autonomy.

A Toolkit for Designing AI for Autonomy

Here’s a provisional toolkit with two parts: one focusing on design process, the other on product features (Alfrink, 2025).

Process:

  • Reflexive design
  • Impact assessment
  • Stakeholder negotiation

Product:

  • Override mechanisms
  • Transparency
  • Non-manipulative interfaces
  • Collective autonomy support

I’ll focus on three elements that I think are most novel: reflexive design, stakeholder negotiation, and collective autonomy support.

Part 3: Application

Example: LegalMike

LegalMike is a Dutch legal AI platform that helps lawyers draft contracts, summarize case law, and so forth. It’s a perfect example to apply my framework—it uses an LLM and focuses on knowledge work.

1. Reflexive Design

The question here: what happens to “legal judgment” when AI drafts clauses? Does competence shift from “knowing how to argue” to “knowing how to prompt”?

We should map this before we start shipping.

This is new because standard UX doesn’t often ask how AI tools redefine the work itself.

2. Stakeholder Negotiation

Run workshops with juniors, partners, and clients:

  • Juniors might fear deskilling
  • Partners want quality control
  • Clients may want transparency

By running workshops like this, we make tensions visible and negotiate boundaries between stakeholders.

This is new because we have stakeholders negotiate what autonomy should look like, rather than just accept what exists.

3. Collective Autonomy Support

LegalMike could isolate, or connect. Isolating means everyone with their own AI. But we could deliberately design it to surface connections:

  • Show which partner’s work the AI drew from
  • Create prompts that encourage juniors to consult seniors
  • Show how firm expertise flows, not just individual outputs

This counters the “individual productivity” framing that dominates AI products today.

Tool → Medium

These interventions would shift LegalMike from a pure efficiency tool to a medium for collaborative legal work that preserves professional judgment, surfaces power dynamics, and strengthens collective expertise—not just individual output.

Think of LLMs not as a robot arm that automates away knowledge work tasks—like in a Korean noodle shop. Instead, they can be the robot arm that mediates collaboration between humans to produce entirely new ways of working—like in the CRTA visual identity project for the University of Zagreb.

Conclusion

AI isn’t neutral. It’s embedded in power structures. As designers, we’re not just building features—we’re brokers of autonomy.

Every design choice we make either empowers or disempowers workers. We should choose deliberately.

And seriously, watch Office Space if you haven’t seen it. It’s the best “documentary” about workplace autonomy ever made. Mike Judge understood this as early as 1999.

Designing Learning Experiences in a Post-ChatGPT World

Transcript of a talk delivered at LXDCON’25 on June 12, 2025.

My name is Kars. I am a postdoc at TU Delft. I research contestable AI—how to use design to ensure AI systems remain subject to societal control. I teach the responsible design of AI systems. In a previous life, I was a practicing designer of digital products and services. I will talk about designing learning experiences in a post-ChatGPT world.

Let’s start at this date.

This is when OpenAI released an early demo of ChatGPT. The chatbot quickly went viral on social media. Users shared examples of what it could do. Stories and samples included everything from travel planning to writing fables to coding computer programs. Within five days, the chatbot had attracted over one million users.

Fast forward to today, 2 years, 6 months, and 14 days later, we’ve seen a massive impact across domains, including on education.

For example, the article on the left talks about how AI cheating has become pervasive in higher education. It is fundamentally undermining the educational process itself. Students are using ChatGPT for nearly every assignment while educators struggle with ineffective detection methods and question whether traditional academic work has lost all meaning.

The one on the right talks about how students are accusing professors of being hypocritical. Teachers are using AI tools for things like course materials and grading while telling students they cannot use them.

What we’re looking at is a situation where academic integrity was already in question; on top of that, both students and faculty are quickly adopting AI, and institutions aren’t really ready for it.

These transformations in higher education give me pause. What should we change about how we design learning experiences given this new reality?

So, just to clarify, when I mention “AI” in this talk, I’m specifically referring to generative AI, or GenAI, and even more specifically, to chatbots that are powered by large language models, like ChatGPT.

Throughout this talk I will use this example of a learning experience that makes use of GenAI. Sharad Goel, Professor at Harvard Kennedy School, developed an AI Slackbot named “StatGPT” that aims to enhance student learning through interactive engagement.

It was tested in a statistics course with positive feedback from students. They described it as supportive and easily accessible, available anytime for student use. There are plans to implement StatGPT in various other courses. They say it assists in active problem-solving and consider it an example of how AI can facilitate learning, rather than replace it.

The debate around GenAI and learning has become polarized. I see the challenge as trying to find a balance. On one side, there’s complete skepticism about AI, and on the other, there’s this blind acceptance of it. What I propose is that we need an approach I call Conscious Adaptation: moving forward with full awareness of what’s being transformed.

To build the case for this approach, I will be looking at two common positions in the debates around AI and education. I’ll be focusing on four pieces of writing.

Two of them are by Ethan Mollick, from his blog. He’s a professor at the University of Pennsylvania specializing in innovation and entrepreneurship, known for his work on the potential of AI to transform different fields.

The other two pieces are by Ian Bogost, published at The Atlantic. He’s a media studies scholar, author, and game designer who teaches at Washington University. He’s known for his sobering, realist critiques of the impact of technology on society.

These, to me, exemplify two strands of the debate around AI in education.

Ethan Mollick’s position, in essence, is that AI in education is an inevitable transformation that educators must embrace and redesign around, not fight.

You could say Mollick is an optimist. But he is also really clear-eyed about how much disruption is going on. He even refers to it as the “Homework Apocalypse.” He talks about some serious issues: there are failures in detection, students are not learning as well (with exam performance dropping by about 17%), and there are a lot of misunderstandings about AI on both sides—students and faculty.

But his perspective is more about adapting to a tough situation. He’s always focused on solutions, constantly asking, “What can we do about this?” He believes that with thoughtful human efforts, we can really influence the outcomes positively.

On the other hand, Ian Bogost’s view is that AI has created an unsolvable crisis that’s fundamentally breaking traditional education and leaving teachers demoralized.

Bogost, I would describe as a realist. He accepts the inevitability of AI, noting that the “arms race will continue” and that technology will often outpace official policies. He also highlights the negative impact on faculty morale, the dependency of students, and the chaos in institutions.

He’s not suggesting that we should ban AI or go back to a time before it existed. He sees AI as something that might be the final blow to a profession that’s already struggling with deeper issues. At the same time, he emphasizes the need for human agency by calling out the lack of reflection and action from institutions.

So, they both observe the same reality, but they look at it differently. Mollick sees it as an engineering challenge—one that’s complicated but can be tackled with smart design. On the other hand, Bogost views it as a social issue that uncovers deeper problems that can’t just be fixed with technology.

Mollick thinks it’s possible to rebuild after a sort of collapse, while Bogost questions if the institutions that are supposed to do that rebuilding are really fit for the job.

Returning to Harvard’s StatGPT: Mollick would likely celebrate it as an example of co-intelligence, while Bogost would likely ask what the bot’s rollout comes at the expense of, or what deeper problems its deployment unveils.

Getting past the conflict between these two views isn’t just about figuring out the best technical methods or the right order of solutions. The real challenge lies in our ability as institutions to make real changes, and we need to be careful that focusing on solutions doesn’t distract us from the important discussions we need to have.

I see three strategies that work together to create an approach that addresses the conflict between these two perspectives in a way that I believe will be more effective.

First, institutional realism is about designing interventions assuming institutions will resist change, capture innovations, or abandon initiatives. Given this, we could focus on individual teacher practices, learner-level tools, and changes that don’t require systemic transformation. We could treat every implementation as a diagnostic probe revealing actual (vs. stated) institutional capacity.

Second, loss-conscious innovation means explicitly identifying, before implementing AI-enhanced practices, what human learning processes, relationships, or skills are being replaced. We could develop metrics that track preservation alongside progress. We could build “conservation” components into new approaches to protect irreplaceable educational values.

Third, and finally, we should recognize that Mollick-style solution-building and Bogost-style critical analysis serve different but essential roles. Practitioners need actionable guidance, while the broader field needs diagnostic consciousness. We should avoid a false synthesis and instead maintain both approaches as distinct strands of intellectual work that inform each other.

In short, striking a balance may not be the main focus; it’s more about taking practical actions while considering the overall context. Progress is important, but it’s also worth reflecting on what gets left behind. Conscious adaptation.

So, applying these strategies to Harvard’s chatbot, we could ask: (1) How can we create a feedback loop between an intervention like this and the things it uncovers about institutional limits, so that those can be addressed in the appropriate place? (2) How can we measure what value this bot adds for students and for teachers? What is it replacing, what is it adding, what is it making room for? (3) What critique of learning at Harvard is implied by this intervention?

What does all of this mean, finally, for LXD? This is an LXD conference, so I don’t need to spend a lot of time explaining what it is. But let’s just use this basic definition as a starting point. It’s about experiences, it’s about centering the learner, it’s about achieving learning outcomes, etc.

Comparing my conscious adaptation approach to what typifies LXD, I can see a number of alignments.

Both LXD and Conscious Adaptation prioritize authentic human engagement over efficiency. LXD through human-centered design, conscious adaptation through protecting meaningful intellectual effort from AI displacement.

LXD’s focus on holistic learning journeys aligns with both Mollick’s “effort is the point” and Bogost’s concern that AI shortcuts undermine the educational value embedded in struggle and synthesis.

LXD’s experimental, prototype-driven approach mirrors my “diagnostic pragmatism”—both treat interventions as learning opportunities that reveal what actually works rather than pursuing idealized solutions.

So, going back one final time to Harvard’s bot, an LXD practice aligned in this way would lead us to ask: (1) Is this leveraging GenAI to protect and promote genuine intellectual effort? (2) Are teachers and learners meaningfully engaged in the ongoing development of this technology? (3) Is this prototype properly embedded, so that its potential to create learning for the organization can be realized?

So, where does this leave us as learning experience designers? I see three practical imperatives for Conscious Adaptation.

First, we need to protect meaningful human effort while leveraging AI’s strengths. Remember that “the effort is the point” in learning. Rather than asking “can AI do this?”, we should ask “should it?” Harvard’s bot works because it scaffolds thinking rather than replacing it. We should use AI for feedback and iteration while preserving human work for synthesis and struggle.

Second, we must design for real institutions, not ideal ones. Institutions resist change, capture innovations, and abandon initiatives. We need to design assuming limited budgets, overworked staff, and competing priorities. Every implementation becomes a diagnostic probe that reveals what resistance actually tells us about institutional capacity.

Third, we have to recognize the limits of design. AI exposes deeper structural problems like grade obsession, teacher burnout, and test-driven curricula. You can’t design your way out of systemic issues, and sometimes the best move is recognizing when the problem isn’t experiential at all.

This is Conscious Adaptation—moving forward with eyes wide open.

Thanks.

On the design and regulation of technology

The following is a section from a manuscript in press on the similarities and differences in approaches to explainable and contestable AI in design and law (Schmude et al., 2025). It ended up on the cutting room floor, but it is the kind of thing I find handy to refer back to, so I chose to share it here.

The responsible design of AI, including practices that seek to make AI systems more explainable and contestable, must somehow relate to legislation and regulations. Writing about responsible research and innovation (RRI) more broadly, Stilgoe et al. (2013) assert that RRI, which we would say includes design, must be embedded in regulation. But does it really make sense to think of the relationship between design and regulation in this way? Understood abstractly, there are in fact at least four ways in which we can think about the relationship between the design and regulation of technology (Figure 1).

Figure 1: We see four possible ways that the relationship between (D) the design and (R) regulation of technology can be conceptualized: (1) design and regulation are independent spheres, (2) design and regulation partially overlap, (3) design is embedded inside of regulation, or (4) regulation is embedded inside design. In all cases, we assume an interactional relation between the two spheres.

To establish the relationship between design and regulation, we first need to establish how we should think about regulation, and related concepts such as governance and policymaking more generally. One straightforward definition would be that regulation entails formal rules and enforcement mechanisms that constrain behavior. These are backed by authority—typically state authority, but increasingly also non-state actors. Regulation and governance are interactive and mutually constitutive. Regulation is one mechanism within governance systems. Governance frameworks establish contexts for regulation. Policymaking connects politics to governance by translating political contestation into actionable frameworks. Politics, then, influences all these domains: policymaking, governance, and regulation. And they, in turn, operate within and reciprocally shape society writ large. See Table 1 for working definitions of ‘regulation’ and associated concepts.

  • Regulation: Formal rules and enforcement mechanisms that constrain behavior, typically state-backed but increasingly emerging from non-state actors (industry self-regulation, transnational regulatory bodies).
  • Governance: Broader arrangements for coordinating social action across public, private, and civil society spheres through both formal and informal mechanisms.
  • Policymaking: Process of formulating courses of action to address public problems.
  • Politics: Contestation of power, interests, and values that shapes governance arrangements.
  • Society: Broader context of social relations, norms, and institutions.
Table 1: Working definitions of ‘regulation’ and associated concepts.

What about design? Scholars of regulation have adopted the notion of ‘regulation by design’ (RBD) to refer to the inscribing of rules into the world through the creation and implementation of technological artifacts. Prifti (2024) identifies two prevailing approaches to RBD: The essentialist view treats RBD as policy enactments, or “rules for design.” By contrast, the functionalist view treats design as a mere instrument, or “ruling by design.” We agree with Prifti when he states that both approaches are limited. Essentialism neglects the complexity of regulatory environments, while functionalism neglects the autonomy and complexity of design as a practice.

Prifti proposes a pragmatist reconstruction that views regulation as a rule-making activity (“regulativity”) performed through social practices including design (the “rule of design”). Design is conceptualized as a contextual, situated social practice that performs changes in the environment, rather than just a tool or set of rules. Unlike the law, markets, or social norms, which rely on incentives and sanctions, design can simply disable the possibility of non-compliance, making it a uniquely powerful form of regulation. The pragmatist approach distinguishes between regulation and governance, with governance being a meta-regulative activity that steers how other practices (like design) regulate. This reconceptualization helps address legitimacy concerns by allowing for greater accountability for design practices that might bypass democratic processes.

Returning to the opening question then, out of the four basic ways in which the relationship between design and regulation can be drawn (Figure 1), if we were to adopt Prifti’s pragmatist view, Type 3 would most accurately capture the relationship, with design being one of a variety of more specific ways in which regulation (understood as regulativity) actually makes changes in the world. These other forms of regulatory practice are not depicted in the figure. This seems to align with Stilgoe et al.’s aforementioned position that responsible design must be embedded within regulation. Although there is a slight nuance to our position: Design is conceived of as a form of regulation always, regardless of active work on the part of designers to ‘embed’ their work inside regulatory practices. Stilgoe et al.’s admonition can be better understood as a normative claim: Responsible designers would do well to understand and align their design work with extant laws and regulations. Furthermore, following Prifti, design is beholden to governance and must be reflexively aware of how governance steers its practices (cf. Figure 2).

Figure 2: Conceptual model of the relationship between design, ‘classical’ regulation (i.e., law-making), and governance. Both design and law-making are forms of regulation (i.e., ‘regulativity’). Governance steers how design and law-making regulate, and design and law-making are both accountable to (and reflexively aware of) governance.

Bibliography

  • Prifti, K. (2024). The theory of ‘Regulation By Design’: Towards a pragmatist reconstruction. Technology and Regulation, 2024, 152–166. https://doi.org/10/g9dr24
  • Stilgoe, J., Owen, R., & Macnaghten, P. (2013). Developing a framework for responsible innovation. Research Policy, 42(9), 1568–1580. https://doi.org/10/f5gv8h

Postdoc update – July 2025

I am over one year into my postdoc at TU Delft. Where did the time go? By way of an annual report, here’s a rundown of my most notable outputs and activities since the previous update from June 2024. And also, some notes on what I am up to now.

Happenings

Participatory AI and ML Engineering: On 13 February 2024 at a Human Values for Smarter Cities meeting and on 11 June 2024 at a Cities Coalition for Digital Rights meeting, I presented a talk on participatory AI and ML engineering (blogged here). This has since evolved into a study I am currently running with the working title “Vision Model Macroscope.” We are designing, building, and evaluating an interface that allows municipal workers to understand and debate value-laden technical decisions made by machine learning engineers in the construction of camera vehicles. For the design, I am collaborating with CLEVER°FRANKE. The study is part of the Human Values for Smarter Cities project headed up by the Civic Interaction Design group at AUAS.

Envisioning Contestability Loops: My article “Envisioning Contestability Loops: Evaluating the Agonistic Arena as a Generative Metaphor for Public AI” (with Ianus Keller, Mireia Yurrita Semperena, Denis Bulygin, Gerd Kortuem, and Neelke Doorn) was published in She Ji on 17 June 2024. (I had already published the infographic “Contestability Loops for Public AI,” which the article revolves around, on 17 April 2024.) Later in the year, on 5 September 2024, I ran the workshop that the study builds on as a ThingsCon Salon. And on 27 September 2024, I presented the article at Lawtomation Days in Madrid, Spain, as part of the panel “Methods in law and technology research: inter- and cross-disciplinary challenges and opportunities,” chaired by Kostina Prifti (slides). (Also, John Thackara said nice things about the article online.)

Contestability Loops for Public AI infographic
Envisioning Contestability Loops workshop at ThingsCon Salon in progress.

Democratizing AI Through Continuous Adaptability: I presented on “Democratizing AI Through Continuous Adaptability: The Role of DevOps” at the TILTing Perspectives 2024 panel “The mutual shaping of democratic practices & AI,” which was chaired and moderated by Merel Noorman on 14 July 2024. I later reprised this talk at NWO ICT.OPEN on 16 April 2025 as part of the track “Human-Computer Interaction and Societal Impact in the Netherlands,” chaired by Armağan Karahanoğlu and Max Birk (PDF of slides).

From Stem to Stern: I was part of the organizing team of the CSCW 2024 workshop “From Stem to Stern: Contestability Along AI Value Chains,” which took place as a hybrid one-day session on 9 November 2024. I blogged a summary and some takeaways of the workshop here. Shoutout to Agathe Balayn and Yulu Pi for leading this endeavor.

Contestable AI Talks: I was invited to speak on my PhD research at various meetings and events organized by studios, agencies, consultancies, schools, and public sector organizations. On 3 September 2024, at the data design agency CLEVER°FRANKE (slides). On 10 January 2025, at the University of Utrecht Computational Sociology group. On 19 February 2025, at digital ethics consultancy The Green Land (slides). On 6 March 2025, at Communication and Multimedia Design Amsterdam (slides). And on 17 March 2025, at the Advisory Board on Open Government and Information Management.

Designing Responsible AI: Over the course of 2024, Sara Colombo, Francesca Mauri, and I developed and taught for the first time a new Integrated Product Design master’s elective, “Designing Responsible AI” (course description). Later, on 28 March 2025, I was invited by my colleagues Alessandro Bozzon and Carlo van der Valk to give a single-morning interactive lecture on part of the same content at the course AI Products and Services (slides).

Books that represent the range of theory covered in the course “Designing Responsible AI.”

Stop the Cuts: On 2 July 2024, a far-right government was sworn in in the Netherlands (it has since fallen). They intended to cut funding to education by €2 billion. A coalition of researchers, teachers, students, and others organized to protest and strike in response. I was present at several of these actions: The alternative opening of the academic year in Utrecht on 2 September 2024. Local walkouts on 14 November 2024 (I participated in Utrecht). Mass demonstration in The Hague on 25 November 2024. Local actions on 11 December 2024 (I participated in Delft). And finally, for now at least, on 24 April 2025, at the Delft edition of the nationwide relay strike. If you read this, work in academia, and want to act, join a union (I am a member of the AOb), and sign up for the WOinActie newsletter.

End of the march during the 24 April 2025 strike in Delft.

Panels: Over the past months, I was a panelist at several events. On 22 October 2024, at the Design & AI Symposium as part of the panel “Evolving Perspectives on AI and Design,” together with Iohanna Nicenboim and Jesse Benjamin, moderated by Mathias Funk (blog post). On 13 December 2024 at TH/NGS as part of the panel “Rethink Design: Book Launch and Panel Discussion on Designing With AI” chaired by Roy Bendor (video). On 12 March 2025, at the panel “Inclusive AI: Approaches to Digital Inclusion,” chaired by Nazli Cila and Taylor Stone.

Slide I used during my panel contribution at the Design & AI symposium.

Design for Human Autonomy: I was part of several activities organized by the Delft Design for Values institute related to their annual theme of autonomy (led by Michael Klenk). I was a panelist on 15 October 2024 during the kick-off event (blog post). I wrote the section on designing AI for autonomy for the white paper edited by Udo Pesch (preprint). And during the closing symposium, master’s graduation student Ameya Sawant, whom I am coaching (with Fernando Secomandi acting as chair), was honored as a finalist in the thesis competition.

Master’s Graduation Students: Four master’s students whom I coached during their thesis projects graduated. Between them, their projects explored technology’s role in society through AI-mediated civic engagement, generative AI implementation in public services, experimental approaches to AI trustworthiness, and urban environmental sensing: Nina te Groen (with Achilleas Psyilidis as chair), Romée Postma (with Roy Bendor), Eline Oei (with Giulia Calabretta), and Jim Blom (with Tomasz Jaskiewicz).

Architecting for Contestability: On 22 November 2024, I ran a single-day workshop about contestability for government-employed ICT architects participating in the Digital Design & Architecture course offered by the University of Twente, on invitation from Marijn Janssen (slides).

Qualitative Design Research: On 17 December 2024, I delivered a lecture on qualitative design research for the course Empirical Design Research, on invitation from my colleague Himanshu Verma (slides). Later, on 22 April 2025, I delivered a follow-up in the form of a lecture on reflexive thematic analysis for the course Product Futures Studio, coordinated by Holly McQuillan (slides).

Democratic Generative Things: On 6 June 2025, I joined the ThingsCon unconference to discuss my contribution to the RIOT report, “Embodied AI and collective power: Designing democratic generative things” (preprint). The report was edited by Andrea Krajewski and Iskander Smit.

Me, holding forth during the ThingsCon RIOT unconference.

Learning Experience Design: I delivered the closing invited talk at LXDCON on 12 June 2025, reflecting on the impact of GenAI on the fields of education and design for learning (slides). Many thanks to Niels Floor for the invitation.

People’s Compute: I published a preprint of my position paper “People’s Compute: Design and the Politics of AI Infrastructures” over at OSF on 14 April 2025. I emailed it to peers and received over a dozen encouraging responses. It was also somehow picked up by Evgeny Morozov’s The Syllabus with some nice commentary attached.

On deck

So what am I up to at the moment? Keeping nice and busy.

  • I am co-authoring several articles, papers, and book chapters on topics including workplace automation, AI transparency, contestability in engineering, AI design and regulation, computational argumentation, explainable and participatory AI, and AI infrastructure politics. I do hope at least some of these will see the light of day in the coming months.
  • I am preparing a personal grant application that builds on the vision laid out in People’s Compute.
  • I will be delivering an invited talk at Enterprise UX on 21 November 2025.
  • I am acting as a scientific advisor to a center that is currently being established, which focuses on increasing digital autonomy within Dutch government institutions.
  • I will be co-teaching Designing Responsible AI again in Q1 of the next academic year.
  • I’ll serve as an associate chair on the CHI 2026 design subcommittee.
  • And I have signed up to begin our university’s teaching qualification certification.

Whew. That’s it. Thanks for reading (skimming?) if you’ve made it all the way to the end. I will try to circle back and do another update, maybe a little sooner than this one, say in six months’ time.

On how to think about large language models

How should we think about large language models (LLMs)? People commonly think and talk about them in terms of human intelligence. To the extent this metaphor does not accurately reflect the properties of the technology, this may lead to misguided diagnoses and prescriptions. It seems to me an LLM is not like a human or a human brain in so many ways. One crucial distinction for me is that LLMs lack individuality and subjectivity.

What are organisms that similarly lack these qualities? Coral polyps and Portuguese man o’ war come to mind, or slime mold colonies. Or maybe a single bacterium, like an E. coli. Each is essentially identical to its clones, responds automatically to chemical gradients (bringing to mind how LLMs respond to prompts), and doesn’t accumulate unique experiences in any meaningful way.

Considering all these examples, the meme about LLMs being like a shoggoth (an amorphous blob-like monster originating from the speculative fiction of Howard Phillips Lovecraft) is surprisingly accurate. The trouble with these metaphors, though, is that it’s about as hard to reason about such organisms as it is to reason about LLMs, so using them as metaphors for thinking about LLMs won’t get us far. A shoggoth is even less helpful because the reference will only be familiar to those who know their H. P. Lovecraft.

So perhaps we should abandon metaphorical thinking and think historically instead. LLMs are a new language technology. As with previous technologies, such as the printing press, when they are introduced, our relationship to language changes. How does this change occur?

I think the change is dialectical. First, we have a relationship to language that we recognize as our own. Then, a new technology destabilizes this relationship, alienating us from the language practice. We no longer see our own hand in it. And we experience a lack of control over language practice. Finally, we reappropriate this language use in our practices. In this process of reappropriation, language practice as a whole is transformed. And the cycle begins again.

For an example of this dialectical transformation of language practice under the influence of new technology, we can take Eisenstein’s classic account of the history of the printing press (1980). Following its introduction, many things changed about how we relate to language. Our engagement with language shifted from a primarily oral one to a visual and deliberative one. Libraries became more abundantly stocked, leading to the practice of categorization and classification of works. Preservation and analysis of stable texts became a possibility. The solitary reading experience gained prominence, producing a more private and personal relationship between readers and texts. Concern about information overload first reared its head.

All of these things were once new and alien to humans. Now we consider them part of the natural order of things. They weren’t predetermined by the technology; they emerged through an active tug-of-war between groups in society over what the technology would be used for, mediated by the affordances of the technology itself.

In concrete material terms, what does an LLM consist of? An LLM is just numerical values stored in computer memory. It is a neural network architecture consisting of billions of parameters in weights and biases, organized in matrices. The storage is distributed across multiple devices. System software loads these parameters and enables the calculation of inferences. This all runs in physical data centers housing computing, power, cooling, and networking infrastructure. Whenever people start talking about LLMs having agency or being able to reason, I remind myself of these basic facts.
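For illustration, here is a deliberately tiny sketch of what “parameters organized in matrices” and “calculating inferences” amount to at the most basic level. The shapes and values are made up; an actual LLM has billions of such numbers arranged in a far more elaborate architecture, but the underlying material is the same.

```python
# A toy "layer": parameters are just numbers in arrays, inference is arithmetic on them.
import numpy as np

rng = np.random.default_rng(0)

weights = rng.standard_normal((4, 4))  # a weight matrix, stored as plain numbers in memory
biases = rng.standard_normal(4)        # a bias vector

def layer(x):
    """One step of 'inference': multiply by the weights, add the biases, zero out negatives."""
    return np.maximum(weights @ x + biases, 0.0)

token_vector = rng.standard_normal(4)  # stand-in for an embedded input token
print(layer(token_vector))             # the output is just more numbers
```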

A printing press, although a cleverly designed, engineered, and manufactured device, is similarly banal when you break it down to its essential components. Still, the ultimate changes to how we relate to language have been profound. From these first few years of living with LLMs, I think it is not unreasonable to think they will cause similar upheavals. What is important for me is to recognize how we become alienated from language, and to see ourselves as having agency in reappropriating LLM-mediated language practice as our own.

On mapping AI value chains

At CSCW 2024, back in November of last year, we* ran a workshop titled “From Stem to Stern: Contestability Along AI Value Chains.” With it, we wanted to address a gap in contestable AI research. Current work focuses mainly on contesting specific AI decisions or outputs (for example, appealing a decision made by an automated content moderation system). But we should also look at contestability across the entire AI value chain—from raw material extraction to deployment and impact (think, for example, of data center activists opposing the construction of new hyperscale facilities). We aimed to explore how different stakeholders can contest AI systems at various points in this chain, considering issues like labor conditions, environmental impact, and data collection practices often overlooked in contestability discussions.

The workshop mixed presentations with hands-on activities. In the morning, researchers shared their work through short talks, both in person and online. The afternoon focused on mapping out where and how people can contest AI systems, from data collection to deployment, followed by detailed discussions of the practical challenges involved. We had both in-person and online participants, requiring careful coordination between facilitators. We wrapped up by synthesizing key insights and outlining future research directions.

I was responsible for being a remote facilitator most of the day. But Mireia and I also prepared and ran the first group activity, in which we mapped a typical AI value chain. I figured I might as well share the canvas we used for that here. It’s not rocket science, but it held up pretty well, so maybe some other people will get some use out of it. The canvas was designed to offer a fair bit of scaffolding for thinking through what decision points there are along the chain that are potentially value-laden.

AI value chain mapping canvas (licensed CC-BY 4.0 Mireia Yurrita & Kars Alfrink, 2024). Download PDF.

Here’s how the activity worked: We spent about 50 minutes on a structured mapping exercise in which participants identified potential contestation points along an AI value chain, using ChatGPT as an example case. The activity used a Miro board with a preliminary map showing different stages of AI development (infrastructure setup, data management, AI development, etc.). Participants first brainstormed individually for 10 minutes, adding value-laden decisions and noting stakeholders, harms, benefits, and values at stake. They then collaborated to reorganize and discuss the map for 15 minutes. The activity concluded with participants using dot voting (3 votes each) to identify the most impactful contestation sites, which were then clustered and named to feed into the next group activity.

The activity design drew from two main influences: typical value chain mapping methodologies (e.g., Mapping Actors along Value Chains, 2017), which usually emphasize tracking actors, flows, and contextual factors, and Wardley mapping (Wardley, 2022), which is characterized by the idea of a structured progression along an x-axis with an additional dimension on the y-axis.

The canvas design aimed to make AI system development more tangible by breaking it into clear phases (from infrastructure through governance) while considering visibility and materiality through the y-axis. We ultimately chose to use a familiar system (ChatGPT). This, combined with the activity’s structured approach, helped participants identify concrete opportunities for intervention and contestation along the AI value chain, which we could build on during the rest of the workshop.

I got a lot out of this workshop. Some of the key takeaways that emerged out of the activities and discussions include:

  • There’s a disconnect between legal and technical communities, from basic terminology differences to varying conceptions of key concepts like explainability, highlighting the need for translation work between disciplines.
  • We need to move beyond individual grievance models to consider collective contestation and upstream interventions in the AI supply chain.
  • We also need to shift from reactive contestation to proactive design approaches that build in contestability from the start.
  • By virtue of being hybrid, we were lucky enough to have participants from across the globe. This helped drive home to me the importance of including Global South perspectives and considering contestability beyond Western legal frameworks. We desperately need a more inclusive and globally-minded approach to AI governance.

Many thanks to all the workshop co-organizers for having me as part of the team and to Agathe and Yulu, in particular, for leading the effort.


* The full workshop team consisted of Agathe Balayn, Yulu Pi, David Gray Widder, Mireia Yurrita, Sohini Upadhyay, Naveena Karusala, Henrietta Lyons, Cagatay Turkay, Christelle Tessono, Blair Attard-Frost, Ujwal Gadiraju, and myself.

On autonomy, design, and AI

In my thesis, I use autonomy to build the normative case for contestability. It so happens that this year’s theme at the Delft Design for Values Institute is also autonomy. On October 15, 2024, I participated in a panel discussion on autonomy to kick things off. For it, I brought along some notes on autonomy that go beyond the conceptualization I used in my thesis. I thought it might be helpful and interesting to share some of them here in adapted form.

The notes I brought included, first of all, a summary of the ecumenical conceptualization of autonomy concerning automated decision-making systems offered by Alan Rubel, Clinton Castro, and Adam Pham (2021). They conceive of autonomy as effective self-governance. To be autonomous, we need authentic beliefs about our circumstances and the agency to act on our plans. Regarding algorithmic systems, they offer this notion of a reasonable endorsement test—the degree to which a system can be said to respect autonomy depends on its reliability, the stakes of its outputs, the degree to which subjects can be held responsible for inputs, and the distribution of burdens across groups.

Second, I collected some notes from several pieces by James Muldoon (2020, 2021a, 2021b), which get into notions of freedom and autonomy developed in socialist republican thought by the likes of Luxemburg, Kautsky, and Castoriadis. This story of autonomy is sociopolitical rather than moral. The approach is quite appealing for someone like myself who is interested in non-ideal theory in a realist mode. The account of autonomy Muldoon offers is one where individual autonomy hinges on greater group autonomy and stronger bonds of association between those producing and consuming technologies. Freedom is conceived of as collective self-determination.

And then, third and finally, there is the connected idea of relational autonomy, which to a degree is part of the account offered by Rubel et al., but which, in the conceptions here, is more radical in how it seeks to create distance from liberal individualism (e.g., Christman, 2004; Mhlambi & Tiribelli, 2023; Westlund, 2009). On this view, the individual capacity for autonomous choice is shaped by social structures, so freedom becomes realized through networks of care, responsibility, and interdependence.

That’s what I am interested in: accounts of autonomy that are not premised on liberal individualism and that give us some alternative handle on the problem of the social control of technology in general and of AI in particular.

From my point of view, the implications of all this for design and AI include the following.

First, to make a fairly obvious but often overlooked point, the degree to which a given system impacts people’s autonomy depends on various factors. It makes little sense to make blanket statements about AI destroying our autonomy and so on.

Second, in value-sensitive design terms, you can think about autonomy as a value to be balanced against others, at least if you take the position that all values are, in principle, equally important. Or you can consider autonomy more like a precondition for people to live with technology in concordance with their values, in which case autonomy takes precedence over other values. The sociopolitical and relational accounts above point in this direction.

Third, suppose you buy into the radical democratic idea of technology and autonomy. In that case, it makes little sense to admonish individual designers about respecting others’ autonomy. They may be asked to privilege, in their designs, technologies that afford individual and group autonomy. But more often than not, designers also need organization and emancipation. So it’s about building power: the power of workers inside the organizations that develop technologies, and the power of communities that “consume” those same technologies.

With AI, the fact is that, in the cases I look at, the communities that AI is brought to bear on have little say in the matter. The buyers and deployers of AI could and should be made more accountable to the people subjected to it.

Towards a realist AI design practice?

This is a version of the opening statement I contributed to the panel “Evolving Perspectives on AI and Design” at the Design & AI symposium that was part of Dutch Design Week 2024. I had the pleasure of joining Iohanna Nicenboim and Jesse Benjamin on stage to explore what could be called the post-GenAI possibility space for design. Thanks also to Mathias Funk for moderating.

The slide I displayed:

My statement:

  1. There’s a lot of magical thinking in the AI field today. It assumes intelligence is latent in the structure of the internet. Metaphors like AGI and superintelligence are magical in nature. AI practice is also very secretive. It relies on demonstrations. This leads to a lack of rigor and political accountability (cf. Gilbert & Lambert in VentureBeat, 2023).
  2. Design in its idealist mode is easily fooled by such magic. For example, in a recent report, the Dutch Court of Audit states that 35% of government AI systems are not known to meet expectations (cf. Raji et al., 2022).
  3. What is needed is design in a realist mode. Realism focuses on who does what to whom in whose interest (cf. Geuss, 2008, 23 in von Busch & Palmås, 2023). Applied to AI, the question becomes: who gets to do AI to whom? This isn’t to say we should consider AI technologies completely inert. They mediate our being in the world (Verbeek, 2021). But we should also not consider them an independent force that’s just dragging us along.
  4. The challenge is to steer a path between wholesale cynical rejection on the one hand and naive, optimistic, unconditional embrace on the other.
  5. In my own work, that looks like using design to make things that allow me to go into situations where people are building and using AI systems, and using those things as instruments to ask questions related to human autonomy, social control, and collective freedom in the face of AI.
  6. The example shown is an animated short depicting a design fiction scenario involving intelligent camera cars used for policy execution in urban public space. I used this video to talk to civil servants about the challenges facing governments who want to ensure citizens remain in control of the AI systems they deploy (cf. Alfrink et al., 2023).
  7. Why is this realist? Because the work looks at how some groups of people use particular forms of actually existing AI to do things to other people. The work also foregrounds the competing interests that are at stake. And it frames AI as neither fully autonomous nor fully passive, but as a thing that mediates people’s perceptions and actions.
  8. There are more examples besides this. But I will stop here. I just want to reiterate that I think we need a realist approach to the design of AI.