Reclaiming Autonomy: Designing AI-Enhanced Work Tools That Empower Users

Based on an invited talk delivered at Enterprise UX on November 21, 2025, in Amersfoort, the Netherlands.

In a previous life, I was a practicing designer. These days I’m a postdoc at TU Delft, researching something called Contestable AI. Today I want to explore how we can design AI work tools that preserve worker autonomy—focusing specifically on large language models and knowledge work.

The Meeting We’ve All Been In

Who’s been in this meeting? Your CEO saw a demo, and now your PM is asking you to build some kind of AI feature into your product.

This is very much like Office Space: decisions about tools and automation are being made top-down, without consulting the people who actually do the work.

What I want to explore are the questions you should be asking before you go off and build that thing. Because we shouldn’t just be asking “can we build it?” but also “should we build it?” And if so, how do we build it in a way that empowers workers rather than diminishes them?

Part 1: Reality Check

What We’re Actually Building

Large language models can be thought of as databases containing programs for transforming text (Chollet, 2022). When we prompt, we’re querying that database.

Simpler precursors of LLMs, such as word embeddings, let you take the word “king,” apply a “make it female” transformation, and get “queen.” Today’s language models work on the same principle but can perform much more complex transformations: give one a poem, ask it to rewrite the poem in the style of Shakespeare, and it outputs a transformed poem.
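
To make that precursor idea concrete, here is a minimal sketch of the classic word-vector arithmetic. It uses gensim’s downloader with one of its published pretrained models; any word2vec-style embedding would do, and the exact neighbour and score will vary by model:

    # Minimal sketch of the word-vector "transformation" idea, using
    # gensim's downloader. "glove-wiki-gigaword-100" is one of the
    # pretrained models gensim ships; results vary by model.
    import gensim.downloader as api

    vectors = api.load("glove-wiki-gigaword-100")

    # "king" minus "man" plus "woman": the classic analogy query.
    result = vectors.most_similar(positive=["king", "woman"],
                                  negative=["man"], topn=1)
    print(result)  # typically [('queen', <similarity score>)]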

The key point: they are sophisticated text transformation machines. They are not magic. Understanding this helps us design better.
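
Here is the same idea at LLM scale: prompting as selecting and running a text-transformation program from that database. This is a sketch using the OpenAI Python client; the model name is an assumption, so substitute whatever you have access to:

    # Sketch: a prompt as a query that selects and runs a text
    # transformation "program". Uses the OpenAI Python client;
    # the model name is an assumption, not a recommendation.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def transform(text: str, instruction: str) -> str:
        """Apply a named text transformation to the input text."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": instruction},
                {"role": "user", "content": text},
            ],
        )
        return response.choices[0].message.content

    poem = "Roses are red, violets are blue."
    print(transform(poem, "Rewrite this poem in the style of Shakespeare."))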

Three Assumptions to Challenge

Before adding AI, we should challenge three things:

  1. Functionality: Does it actually work?
  2. Power: Who really benefits?
  3. Practice: What skills or processes are transformed?

1. Functionality: Does It Work?

One problem with AI projects is that functionality is often assumed instead of demonstrated (Raji et al., 2022). And historically, service sector automation has not led to expected productivity gains (Benanav, 2020).

What this means: don’t just trust the demo. Demand evidence in your actual context. Ask them to show it working in production, not in a prototype.

2. Power: Who Benefits?

Current AI developments seem to favor employers over workers. Because of this, some have started taking inspiration from the Luddites (Merchant, 2023).

It’s a common misconception that Luddites hated technology. They hated losing control over their craft. They smashed frames operated by unskilled workers that undercut skilled craftspeople (Sabie et al., 2023).

What we should be asking: who gains power, and who loses it? This isn’t about being anti-technology. It’s about being pro-empowerment.

3. Practice: What Changes?

AI-enabled work tools can have second-order effects on work practices. Automation breaks skill transmission from experts to novices (Beane, 2024). For example, surgical robots that can be remotely operated by expert surgeons mean junior surgeons don’t learn by doing.

Some challenging, complex work that requires human connection should be preserved so that learning can happen.

On the other hand, before we automate a task, we should ask whether a process should exist at all. Otherwise, we may be simply reifying bureaucracy. As Michael Hammer put it: “don’t automate, obliterate” (1990).

Every automation project is an opportunity to liberate skilled professionals from bureaucracy.

Part 2: Control → Autonomy

All three questions are really about control. Control over whether tools serve you. Control over developing expertise. This is fundamentally about autonomy.

What Autonomy Is

A common definition of autonomy is the effective capacity for self-governance (Prunkl, 2022). It consists of two dimensions:

  • Authenticity: holding beliefs that are free from manipulation
  • Agency: having meaningful options to act on those beliefs

Both are necessary for autonomy.

Office Space examples:

  • Authenticity: Joanna’s manager tells her the minimum is 15 pieces of flair, then criticizes her for wearing “only” the minimum. Her understanding of the rules gets manipulated.
  • Agency: Lumbergh tells Peter, “Yeah, if you could come in on Saturday, that would be great.” Technically a request, but the power structure eliminates any real choice.

How AI Threatens Autonomy

AI can threaten autonomy in a variety of ways. Here are a few examples.

Manipulation — Like TikTok’s recommendation algorithm. It exploits cognitive vulnerabilities, creating personalized content loops that maximize engagement time. This makes it difficult for users to make autonomous decisions about their attention and time use.

Restricted choice — LinkedIn’s automated hiring tools can automatically exclude qualified candidates based on biased pattern matching. Candidates are denied opportunities without human review and lack the ability to contest the decision.

Diminished competence — Routinely outsourcing writing, problem-solving, or analysis to ChatGPT without critical engagement can atrophy the very skills that make professionals valuable, much as reliance on GPS erodes navigational abilities.

These are real risks, not hypothetical. But we can design AI systems to protect against these threats—and we can do more. We can design AI systems to actively promote autonomy.

A Toolkit for Designing AI for Autonomy

Here’s a provisional toolkit with two parts: one focusing on design process, the other on product features (Alfrink, 2025).

Process:

  • Reflexive design
  • Impact assessment
  • Stakeholder negotiation

Product:

  • Override mechanisms
  • Transparency
  • Non-manipulative interfaces
  • Collective autonomy support

I’ll focus on three elements that I think are most novel: reflexive design, stakeholder negotiation, and collective autonomy support.
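
Before moving on, a quick sketch of the product side. This is a hypothetical illustration of an override mechanism with transparency built in; all names and fields are mine, not from any shipping product. The AI only ever emits suggestions, each carries its provenance, and the worker’s decision, including rejection, is recorded so it can be contested later:

    # Hypothetical sketch of an override mechanism (illustrative names,
    # not from any real product): the AI never writes to the document
    # directly. It emits suggestions with provenance, and every human
    # decision, including rejection, is logged so it can be contested.
    from dataclasses import dataclass
    from enum import Enum

    class Decision(Enum):
        ACCEPTED = "accepted"
        EDITED = "edited"
        REJECTED = "rejected"

    @dataclass
    class Suggestion:
        text: str            # the model's proposed content
        sources: list[str]   # transparency: material the model drew on
        rationale: str       # plain-language explanation shown to the user

    @dataclass
    class Review:
        suggestion: Suggestion
        decision: Decision
        final_text: str      # what the worker actually used
        note: str = ""       # optional reason, useful when contesting later

    def apply_suggestion(suggestion: Suggestion, decision: Decision,
                         final_text: str, note: str = "") -> Review:
        """The only path from suggestion to document is a human decision."""
        return Review(suggestion, decision, final_text, note)

    # Example: a worker overrides the AI and the override is preserved.
    draft = Suggestion("Standard indemnity clause...", ["2023 template"],
                       "Matches prior contracts.")
    review = apply_suggestion(draft, Decision.REJECTED, "",
                              note="Conflicts with client policy.")

The design choice that matters here: there is no code path where the model’s output lands in the document without a recorded human decision.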

Part 3: Application

Example: LegalMike

LegalMike is a Dutch legal AI platform that helps lawyers draft contracts, summarize case law, and so forth. It’s a perfect case for applying my framework: it uses an LLM and focuses on knowledge work.

1. Reflexive Design

The question here: what happens to “legal judgment” when AI drafts clauses? Does competence shift from “knowing how to argue” to “knowing how to prompt”?

We should map this before we start shipping.

This is new because standard UX doesn’t often ask how AI tools redefine the work itself.

2. Stakeholder Negotiation

Run workshops with juniors, partners, and clients:

  • Juniors might fear deskilling
  • Partners want quality control
  • Clients may want transparency

By running workshops like this, we make tensions visible and negotiate boundaries between stakeholders.

This is new because we have stakeholders negotiate what autonomy should look like, rather than just accept what exists.

3. Collective Autonomy Support

LegalMike could isolate, or connect. Isolating means everyone with their own AI. But we could deliberately design it to surface connections:

  • Show which partner’s work the AI drew from
  • Create prompts that encourage juniors to consult seniors
  • Show how firm expertise flows, not just individual outputs

This counters the “individual productivity” framing that dominates AI products today.
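
As a hypothetical sketch of what collective autonomy support could look like in an interface like LegalMike’s (all names and fields here are illustrative assumptions, not features of the actual product), a draft could carry provenance and a consultation nudge rather than just text:

    # Hypothetical sketch: an AI draft that surfaces where firm
    # expertise came from and nudges toward human consultation.
    # All names and fields are illustrative assumptions (Python 3.10+).
    from dataclasses import dataclass

    @dataclass
    class ProvenanceEntry:
        author: str    # e.g. the partner whose precedent the model drew on
        document: str  # the source document in the firm's knowledge base

    @dataclass
    class Draft:
        text: str
        provenance: list[ProvenanceEntry]  # which partner's work the AI drew from
        suggested_consult: str | None      # nudge: who to ask before relying on this

    def render(draft: Draft) -> str:
        """Render the draft with its provenance visible, not hidden."""
        lines = [draft.text, "", "Drawn from:"]
        lines += [f"  {p.author}: {p.document}" for p in draft.provenance]
        if draft.suggested_consult:
            lines.append(f"Consider consulting: {draft.suggested_consult}")
        return "\n".join(lines)

The point of the sketch: provenance and consultation become first-class parts of the output, not an afterthought.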

Tool → Medium

These interventions would shift LegalMike from a pure efficiency tool to a medium for collaborative legal work that preserves professional judgment, surfaces power dynamics, and strengthens collective expertise—not just individual output.

Think of LLMs not as a robot arm that automates away knowledge work tasks, like the one in a Korean noodle shop. Instead, think of them as a robot arm that mediates collaboration between humans to produce entirely new ways of working, like in the CRTA visual identity project for the University of Zagreb.

Conclusion

AI isn’t neutral. It’s embedded in power structures. As designers, we’re not just building features—we’re brokers of autonomy.

Every design choice we make either empowers or disempowers workers. We should choose deliberately.

And seriously, watch Office Space if you haven’t seen it. It’s the best “documentary” about workplace autonomy ever made. Mike Judge understood this as early as 1999.

Waiting for the smart city

Nowadays when we talk about the smart city we don’t necessarily talk about smartness or cities.

I feel like when the term is used it often obscures more than it reveals.

Here are a few reasons why.

To begin with, the term suggests something that is yet to arrive. Some kind of tech-enabled utopia. But actually, current-day cities are already smart to a greater or lesser degree, depending on where and how you look.

This is important because too often we postpone action as we wait for the smart city to arrive. We don’t have to wait. We can act to improve things right now.

Furthermore, ‘smart city’ suggests something monolithic that can be designed as a whole. But a smart city, like any city, is a huge mess of interconnected things. It resists top-down design.

History is littered with failed attempts at authoritarian high-modernist city design. Just stop it.

Smartness should not be an end but a means.

I read ‘smart’ as a shorthand for ‘technologically augmented’. A smart city is a city eaten by software. All cities are being eaten (or have been eaten) by software to a greater or lesser extent. Uber and Airbnb are obvious examples. Smaller, more subtle ones abound.

The question is, smart to what end? Efficiency? Legibility? Controllability? Anti-fragility? Playability? Liveability? Sustainability? The answer depends on your outlook.

These are ways in which the smart city label obscures. It obscures agency. It obscures networks. It obscures intent.

I’m not saying don’t ever use it. But in many cases you can get by without it. You can talk about specific parts that make up the whole of a city, specific technologies and specific aims.


Postscript 1

We can do the same exercise with the ‘city’ part of the meme.

The same process that is making cities smart (software eating the world) is also making everything else smart. Smart towns. Smart countrysides. The ends are different. The networks are different. The processes play out in different ways.

It’s okay to think about cities but don’t think they have a monopoly on ‘disruption’.

Postscript 2

Some of this was inspired by clever things I heard Sebastian Quack say at Playful Design for Smart Cities and Usman Haque say at ThingsCon Amsterdam.

Artificial intelligence, creativity and metis

Boris pointed me to CreativeAI, an interesting article about creativity and artificial intelligence. It offers a really nice overview of the development of the idea of augmenting human capabilities through technology. One of the claims the authors make is that artificial intelligence is making creativity more accessible, because tools with AI in them support humans in a range of creative tasks in a way that shortcuts the traditional requirement of long practice to acquire the necessary technical skills.

For example, ShadowDraw (PDF) is a program that helps people with freehand drawing by guessing what they are trying to create and showing a dynamically updated ‘shadow image’ on the canvas which people can use as a guide.

It is an interesting idea, and in some ways these kinds of software indeed lower the threshold for people to engage in creative tasks. They are good examples of artificial intelligence as partner instead of master or servant.

While reading CreativeAI I wasn’t entirely comfortable though and I think it may have been caused by two things.

One is that I care about creativity and I think that a good understanding of it and a daily practice at it—in the broad sense of the word—improves lives. I am also in some ways old-fashioned about it and I think the joy of creativity stems from the infinitely high skill ceiling involved and the never-ending practice it affords. Let’s call it the Jiro perspective, after the sushi chef made famous by a wonderful documentary.

So the claim that creative tools with AI in them can shortcut all of this life-long joyful toil produces a degree of panic in me. Though that is probably a Pastoral worldview that I would do better to abandon. In a world eaten by software, it’s better to be a Promethean.

The second reason might hold more water but really is more of an open question than something I have researched in any meaningful way. I think there is more to creativity than just the technical skill required and as such the CreativeAI story runs the risk of being reductionist. While reading the article I was also slowly but surely making my way through one of the final chapters of James C. Scott’s Seeing Like a State, which is about the concept of metis.

It is probably the most interesting chapter of the whole book. Scott introduces metis as a form of knowledge different from that produced by science. Here are some quick excerpts from the book that provide a sense of what it is about. But I really can’t do the richness of his description justice here. I am trying to keep this short.

The kind of knowledge required in such endeavors is not deductive knowledge from first principles but rather what Greeks of the classical period called metis, a concept to which we shall return. […] metis is better understood as the kind of knowledge that can be acquired only by long practice at similar but rarely identical tasks, which requires constant adaptation to changing circumstances. […] It is to this kind of knowledge that [socialist writer] Luxemburg appealed when she characterized the building of socialism as “new territory” demanding “improvisation” and “creativity.”

Scott’s argument is about how authoritarian high-modernist schemes privilege scientific knowledge over metis. His exploration of what metis means is super interesting to anyone dedicated to honing a craft, or to cultivating organisations conducive to the development and application of craft in the face of uncertainty. There is a close link between metis and the concept of agility.

So circling back to artificially intelligent tools for creativity, I would be interested in exploring not only how we can diminish the need for acquiring the required technical skills, but also how we can accelerate the acquisition of the practical knowledge required to apply such skills in the ever-changing real world. I suggest we expand our understanding of what it means to be creative, but without losing the link to actual practice.

For the ancient Greeks metis became synonymous with a kind of wisdom and cunning best exemplified by such figures as Odysseus and notably also Prometheus. The latter in particular exemplifies the use of creativity towards transformative ends. This is the real promise of AI for creativity in my eyes. Not to simply make it easier to reproduce things that used to be hard to create but to create new kinds of tools which have the capacity to surprise their users and to produce results that were impossible to create before.

Collaboratively designing Things through sketching

So far, Ianus, Alexander and I have announced three of the four people who’ll be speaking at the first Dutch This happened. They are Fabian of Ronimo Games, Philine of Supernana and Dirk of IR labs. The final addition to this wonderful line-up is Werner Jainek of Cultured Code, the developers of Things, a task management application for Mac OS X as well as the iPhone and iPod Touch.

When I first got in touch with the guys at Cultured Code, I asked who of the four principals was responsible for interaction design. I was surprised to hear that a large part of the interaction design is a collaborative effort. This flies in the face of conventional wisdom in design circles: You’re not supposed to design by committee. Yet no-one can deny Things’ interaction design is solid, focused and cohesive.

Things touch still life by Cultured Code

Werner and his associates collaborate through vigorous sketching. Sometimes they produce many mock-ups to iron out apparently simple bits of the application. A prime example is this recurring tasks dialog. Just look at all the alternatives they explored. Their attention to detail is admirable. Also, take a look at the photos they posted when they announced Things touch. I’m sure that, if you’re a designer, you can’t help but love carefully examining the details of such work in progress.

Werner tells me he’s been busy scanning lots of sketches to share at This happened – Utrecht #1. I can’t wait to hear his stories about how the design of both the desktop and mobile app have happened.

Werner completes our line-up, which you can see in full at thishappened.nl. There, you’ll also be able to register for the event starting this Monday (20 October). I hope to see you on 3 November; it promises to be a lovely evening filled with the stories behind interaction design.