Based on an invited talk delivered at Enterprise UX on November 21, 2025, in Amersfoort, the Netherlands.

In a previous life, I was a practicing designer. These days I’m a postdoc at TU Delft, researching something called Contestable AI. Today I want to explore how we can design AI work tools that preserve worker autonomy—focusing specifically on large language models and knowledge work.
The Meeting We’ve All Been In

Who’s been in this meeting? Your CEO saw a demo, and now your PM is asking you to build some kind of AI feature into your product.
This is very much like Office Space: decisions about tools and automation are being made top-down, without consulting the people who actually do the work.
What I want to explore are the questions you should be asking before you go off and build that thing. Because we shouldn’t just be asking “can we build it?” but also “should we build it?” And if so, how do we build it in a way that empowers workers rather than diminishes them?
Part 1: Reality Check
What We’re Actually Building

Large language models can be thought of as databases containing programs for transforming text (Chollet, 2022). When we prompt, we’re querying that database.
The simpler precursors of LLMs, word embedding models, would let you take the word “king,” ask for its female counterpart, and get “queen.” Large language models work on the same principle but handle far more complex transformations: give one a poem, ask it to rewrite it in the style of Shakespeare, and it outputs a transformed poem.
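If you want to see the precursor idea in code, here’s a minimal sketch using off-the-shelf word embeddings. It assumes the gensim library and its downloadable GloVe vectors; it illustrates the “king” to “queen” transformation, not an LLM itself.

```python
# A minimal sketch of the "king" -> "queen" idea using word embeddings.
# Assumes the gensim library and its public GloVe vectors are available;
# this shows the precursor technology, not an LLM.
import gensim.downloader as api

# Load small pre-trained word vectors (downloads on first run).
vectors = api.load("glove-wiki-gigaword-50")

# "king" minus "man" plus "woman" lands near "queen":
# the transformation "make it female" is a direction in vector space.
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
# [('queen', ...)] -- an LLM prompt like "rewrite this poem as Shakespeare"
# is, loosely, the same kind of transformation applied to whole texts.
```
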
The key point: they are sophisticated text transformation machines. They are not magic. Understanding this helps us design better.
Three Assumptions to Challenge
Before adding AI, we should challenge three things:
- Functionality: Does it actually work?
- Power: Who really benefits?
- Practice: What skills or processes are transformed?
1. Functionality: Does It Work?

One problem with AI projects is that functionality is often assumed instead of demonstrated (Raji et al., 2022). And historically, service sector automation has not delivered the expected productivity gains (Benanav, 2020).
What this means: don’t just trust the demo. Demand evidence in your actual context. Ask them to show it working in production, not just in a prototype.
2. Power: Who Benefits?

Current AI developments seem to favor employers over workers. Because of this, some have started taking inspiration from the Luddites (Merchant, 2023).
It’s a common misconception that Luddites hated technology. They hated losing control over their craft. They smashed frames operated by unskilled workers that undercut skilled craftspeople (Sabie et al., 2023).
What we should be asking: who gains power, and who loses it? This isn’t about being anti-technology. It’s about being pro-empowerment.
3. Practice: What Changes?

AI-enabled work tools can have second-order effects on work practices. Automation breaks skill transmission from experts to novices (Beane, 2024). For example, surgical robots that can be remotely operated by expert surgeons mean junior surgeons don’t learn by doing.
Some work that is challenging and complex, and that requires human connection, should be preserved so that learning can happen.
On the other hand, before we automate a task, we should ask whether a process should exist at all. Otherwise, we may be simply reifying bureaucracy. As Michael Hammer put it: “don’t automate, obliterate” (1990).
Every automation project is an opportunity to liberate skilled professionals from bureaucracy.
Part 2: Control → Autonomy
All three questions are really about control. Control over whether tools serve you. Control over developing expertise. This is fundamentally about autonomy.
What Autonomy Is

A common definition of autonomy is the effective capacity for self-governance (Prunkl, 2022). It consists of two dimensions:
- Authenticity: holding beliefs that are free from manipulation
- Agency: having meaningful options to act on those beliefs
Both are necessary for autonomy.
Office Space examples:
- Authenticity: Joanna’s manager tells her the minimum is 15 pieces of flair, then criticizes her for wearing “only” the minimum. Her understanding of the rules gets manipulated.
- Agency: Lumbergh tells Peter, “Yeah, if you could come in on Saturday, that would be great.” Technically a request, but the power structure eliminates any real choice.
How AI Threatens Autonomy
AI can threaten autonomy in a variety of ways. Here are a few examples.

Manipulation — Like TikTok’s recommendation algorithm. It exploits cognitive vulnerabilities, creating personalized content loops that maximize engagement time. This makes it difficult for users to make autonomous decisions about their attention and time use.
Restricted choice — LinkedIn’s automated hiring tools can automatically exclude qualified candidates based on biased pattern matching. Candidates are denied opportunities without human review and lack the ability to contest the decision.
Diminished competence — Routinely outsourcing writing, problem-solving, or analysis to ChatGPT without critical engagement can atrophy the very skills that make professionals valuable, much as reliance on GPS erodes navigational abilities.
These are real risks, not hypothetical. But we can design AI systems to protect against these threats—and we can do more. We can design AI systems to actively promote autonomy.
A Toolkit for Designing AI for Autonomy

Here’s a provisional toolkit with two parts: one focusing on design process, the other on product features (Alfrink, 2025).
Process:
- Reflexive design
- Impact assessment
- Stakeholder negotiation
Product:
- Override mechanisms
- Transparency
- Non-manipulative interfaces
- Collective autonomy support
I’ll focus on three elements that I think are most novel: reflexive design, stakeholder negotiation, and collective autonomy support.
Part 3: Application
Example: LegalMike

LegalMike is a Dutch legal AI platform that helps lawyers draft contracts, summarize case law, and so forth. It’s a perfect example to apply my framework—it uses an LLM and focuses on knowledge work.
1. Reflexive Design

The question here: what happens to “legal judgment” when AI drafts clauses? Does competence shift from “knowing how to argue” to “knowing how to prompt”?
We should map this before we start shipping.
This is new because standard UX doesn’t often ask how AI tools redefine the work itself.
2. Stakeholder Negotiation

Run workshops with juniors, partners, and clients:
- Juniors might fear deskilling
- Partners want quality control
- Clients may want transparency
By running workshops like this, we make tensions visible and negotiate boundaries between stakeholders.
This is new because we have stakeholders negotiate what autonomy should look like, rather than just accept what exists.
3. Collective Autonomy Support

LegalMike could isolate lawyers, or connect them. Isolating means everyone working alone with their own AI. But we could deliberately design it to surface connections:
- Show which partner’s work the AI drew from
- Create prompts that encourage juniors to consult seniors
- Show how firm expertise flows, not just individual outputs
This counters the “individual productivity” framing that dominates AI products today.
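To make “surfacing connections” a bit more concrete, here’s a minimal sketch of what an AI-drafted clause could carry alongside its text. Every name in it (DraftClause, Provenance, the suggested reviewer) is a hypothetical illustration of the idea, not LegalMike’s actual data model.

```python
# A hypothetical sketch of an AI draft that surfaces provenance and
# collaboration cues instead of just text. All names are invented for
# illustration; this is not LegalMike's actual API.
from dataclasses import dataclass, field


@dataclass
class Provenance:
    """Which colleague's prior work the draft drew on, and how heavily."""
    author: str          # e.g. a partner whose precedent clauses were reused
    document: str        # the source document in the firm's knowledge base
    similarity: float    # rough measure of how much the draft relies on it


@dataclass
class DraftClause:
    text: str
    sources: list[Provenance] = field(default_factory=list)
    # A nudge toward collective autonomy: name a senior colleague to
    # review the clause rather than presenting it as a finished answer.
    suggested_reviewer: str | None = None

    def render_footer(self) -> str:
        """Show where the expertise came from and who to consult next."""
        lines = [f"Drew on {p.author}'s {p.document}" for p in self.sources]
        if self.suggested_reviewer:
            lines.append(f"Consider reviewing this with {self.suggested_reviewer}")
        return "\n".join(lines)


# Usage: the UI renders this footer under every AI-drafted clause, so firm
# expertise stays visible and juniors are pointed toward seniors.
clause = DraftClause(
    text="The Supplier shall indemnify the Client against ...",
    sources=[Provenance("Partner A", "2023 indemnity precedent", 0.8)],
    suggested_reviewer="Partner A",
)
print(clause.render_footer())
```
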
Tool → Medium

These interventions would shift LegalMike from a pure efficiency tool to a medium for collaborative legal work that preserves professional judgment, surfaces power dynamics, and strengthens collective expertise—not just individual output.
Think of an LLM not as a robot arm that automates away knowledge work tasks, like the one in a Korean noodle shop. Instead, it can be a robot arm that mediates collaboration between humans to produce entirely new ways of working, like in the CRTA visual identity project for the University of Zagreb.
Conclusion

AI isn’t neutral. It’s embedded in power structures. As designers, we’re not just building features—we’re brokers of autonomy.
Every design choice we make either empowers or disempowers workers. We should choose deliberately.

And seriously, watch Office Space if you haven’t seen it. It’s the best “documentary” about workplace autonomy ever made. Mike Judge understood this as early as 1999.