Reclaiming Autonomy: Designing AI-Enhanced Work Tools That Empower Users

Based on an invited talk delivered at Enterprise UX on November 21, 2025, in Amersfoort, the Netherlands.

In a previous life, I was a practicing designer. These days I’m a postdoc at TU Delft, researching something called Contestable AI. Today I want to explore how we can design AI work tools that preserve worker autonomy—focusing specifically on large language models and knowledge work.

The Meeting We’ve All Been In

Who’s been in this meeting? Your CEO saw a demo, and now your PM is asking you to build some kind of AI feature into your product.

This is very much like Office Space: decisions about tools and automation are being made top-down, without consulting the people who actually do the work.

What I want to explore are the questions you should be asking before you go off and build that thing. Because we shouldn’t just be asking “can we build it?” but also “should we build it?” And if so, how do we build it in a way that empowers workers rather than diminishes them?

Part 1: Reality Check

What We’re Actually Building

Large language models can be thought of as databases containing programs for transforming text (Chollet, 2022). When we prompt, we’re querying that database.

The simpler precursors of LLMs would let you take the word “king” and ask for its female counterpart, outputting “queen.” Today’s language models work similarly but can do much more complex transformations: give one a poem, ask it to rewrite it in the style of Shakespeare, and it outputs a transformed poem.
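
To make the precursor example concrete, here is a minimal sketch of that word-vector arithmetic in Python using the gensim library. The specific pretrained model (“glove-wiki-gigaword-50”) is my choice for illustration, not something from the talk, and the exact neighbors returned depend on it.

```python
# Minimal sketch: the classic word-vector arithmetic behind the "king" -> "queen" example.
# Assumes gensim is installed and can download a small pretrained GloVe model.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # small pretrained word embeddings

# "king" - "man" + "woman" should land near "queen" in embedding space.
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
```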

The key point: they are sophisticated text transformation machines. They are not magic. Understanding this helps us design better.

Three Assumptions to Challenge

Before adding AI, we should challenge three things:

  1. Functionality: Does it actually work?
  2. Power: Who really benefits?
  3. Practice: What skills or processes are transformed?

1. Functionality: Does It Work?

One problem with AI projects is that functionality is often assumed instead of demonstrated (Raji et al., 2022). And historically, service sector automation has not led to expected productivity gains (Benanav, 2020).

What this means: don’t just trust the demo. Demand evidence in your actual context. Ask them to show it working in production, not just a prototype.

2. Power: Who Benefits?

Current AI developments seem to favor employers over workers. Because of this, some have started taking inspiration from the Luddites (Merchant, 2023).

It’s a common misconception that the Luddites hated technology. They hated losing control over their craft. They smashed the frames that, operated by unskilled workers, undercut skilled craftspeople (Sabie et al., 2023).

What we should be asking: who gains power, and who loses it? This isn’t about being anti-technology. It’s about being pro-empowerment.

3. Practice: What Changes?

AI-enabled work tools can have second-order effects on work practices. Automation breaks skill transmission from experts to novices (Beane, 2024). For example, surgical robots that can be remotely operated by expert surgeons mean junior surgeons don’t learn by doing.

Some work that is challenging, complex, and requires human connection should be preserved so that learning can happen.

On the other hand, before we automate a task, we should ask whether a process should exist at all. Otherwise, we may be simply reifying bureaucracy. As Michael Hammer put it: “don’t automate, obliterate” (1990).

Every automation project is an opportunity to liberate skilled professionals from bureaucracy.

Part 2: Control → Autonomy

All three questions are really about control. Control over whether tools serve you. Control over developing expertise. This is fundamentally about autonomy.

What Autonomy Is

A common definition of autonomy is the effective capacity for self-governance (Prunkl, 2022). It consists of two dimensions:

  • Authenticity: holding beliefs that are free from manipulation
  • Agency: having meaningful options to act on those beliefs

Both are necessary for autonomy.

Office Space examples:

  • Authenticity: Joanna’s manager tells her the minimum is 15 pieces of flair, then criticizes her for wearing “only” the minimum. Her understanding of the rules gets manipulated.
  • Agency: Lumbergh tells Peter, “Yeah, if you could come in on Saturday, that would be great.” Technically a request, but the power structure eliminates any real choice.

How AI Threatens Autonomy

AI can threaten autonomy in a variety of ways. Here are a few examples.

Manipulation — Like TikTok’s recommendation algorithm. It exploits cognitive vulnerabilities, creating personalized content loops that maximize engagement time. This makes it difficult for users to make autonomous decisions about their attention and time use.

Restricted choice — LinkedIn’s automated hiring tools can automatically exclude qualified candidates based on biased pattern matching. Candidates are denied opportunities without human review and lack the ability to contest the decision.

Diminished competence — Routinely outsourcing writing, problem-solving, or analysis to ChatGPT without critical engagement can lead to atrophying the very skills that make professionals valuable. Similar to how reliance on GPS erodes navigational abilities.

These are real risks, not hypothetical. But we can design AI systems to protect against these threats—and we can do more. We can design AI systems to actively promote autonomy.

A Toolkit for Designing AI for Autonomy

Here’s a provisional toolkit with two parts: one focusing on design process, the other on product features (Alfrink, 2025).

Process:

  • Reflexive design
  • Impact assessment
  • Stakeholder negotiation

Product:

  • Override mechanisms
  • Transparency
  • Non-manipulative interfaces
  • Collective autonomy support

I’ll focus on three elements that I think are most novel: reflexive design, stakeholder negotiation, and collective autonomy support.

Part 3: Application

Example: LegalMike

LegalMike is a Dutch legal AI platform that helps lawyers draft contracts, summarize case law, and so forth. It’s a perfect example to apply my framework—it uses an LLM and focuses on knowledge work.

1. Reflexive Design

The question here: what happens to “legal judgment” when AI drafts clauses? Does competence shift from “knowing how to argue” to “knowing how to prompt”?

We should map this before we start shipping.

This is new because standard UX doesn’t often ask how AI tools redefine the work itself.

2. Stakeholder Negotiation

Run workshops with juniors, partners, and clients:

  • Juniors might fear deskilling
  • Partners want quality control
  • Clients may want transparency

By running workshops like this, we make tensions visible and negotiate boundaries between stakeholders.

This is new because we have stakeholders negotiate what autonomy should look like, rather than just accept what exists.

3. Collective Autonomy Support

LegalMike could isolate, or connect. Isolating means everyone with their own AI. But we could deliberately design it to surface connections:

  • Show which partner’s work the AI drew from
  • Create prompts that encourage juniors to consult seniors
  • Show how firm expertise flows, not just individual outputs

This counters the “individual productivity” framing that dominates AI products today.

Tool → Medium

These interventions would shift LegalMike from a pure efficiency tool to a medium for collaborative legal work that preserves professional judgment, surfaces power dynamics, and strengthens collective expertise—not just individual output.

Think of LLMs not as a robot arm that automates away knowledge work tasks—like in a Korean noodle shop. Instead, think of them as a robot arm that mediates collaboration between humans to produce entirely new ways of working—like in the CRTA visual identity project for the University of Zagreb.

Conclusion

AI isn’t neutral. It’s embedded in power structures. As designers, we’re not just building features—we’re brokers of autonomy.

Every design choice we make either empowers or disempowers workers. We should choose deliberately.

And seriously, watch Office Space if you haven’t seen it. It’s the best “documentary” about workplace autonomy ever made. Mike Judge understood this as early as 1999.

Postdoc update – July 2025

I am over one year into my postdoc at TU Delft. Where did the time go? By way of an annual report, here’s a rundown of my most notable outputs and activities since the previous update from June 2024. And also, some notes on what I am up to now.

Happenings

Participatory AI and ML Engineering: On 13 February 2024 at a Human Values for Smarter Cities meeting and on 11 June 2024 at a Cities Coalition for Digital Rights meeting, I presented a talk on participatory AI and ML engineering (blogged here). This has since evolved into a study I am currently running with the working title “Vision Model Macroscope.” We are designing, building, and evaluating an interface that allows municipal workers to understand and debate value-laden technical decisions made by machine learning engineers in the construction of camera vehicles. For the design, I am collaborating with CLEVER°FRANKE. The study is part of the Human Values for Smarter Cities project headed up by the Civic Interaction Design group at AUAS.

Envisioning Contestability Loops: My article “Envisioning Contestability Loops: Evaluating the Agonistic Arena as a Generative Metaphor for Public AI” (with Ianus Keller, Mireia Yurrita Semperena, Denis Bulygin, Gerd Kortuem, and Neelke Doorn) was published in She Ji on 17 June 2024. (I had already published the infographic “Contestability Loops for Public AI,” which the article revolves around, on 17 April 2024.) Later in the year, on 5 September 2024, I ran the workshop that the study builds on as a ThingsCon Salon. And on 27 September 2024, I presented the article at Lawtomation Days in Madrid, Spain, as part of the panel “Methods in law and technology research: inter- and cross-disciplinary challenges and opportunities,” chaired by Kostina Prifti (slides). (Also, John Thackara said nice things about the article online.)

Contestability Loops for Public AI infographic
Envisioning Contestability Loops workshop at ThingsCon Salon in progress.

Democratizing AI Through Continuous Adaptability: I presented on “Democratizing AI Through Continuous Adaptability: The Role of DevOps” at the TILTing Perspectives 2024 panel “The mutual shaping of democratic practices & AI,” which was chaired and moderated by Merel Noorman on 14 July 2024. I later reprised this talk at NWO ICT.OPEN on 16 April 2025 as part of the track “Human-Computer Interaction and Societal Impact in the Netherlands,” chaired by Armağan Karahanoğlu and Max Birk (PDF of slides).

From Stem to Stern: I was part of the organizing team of the CSCW 2024 workshop “From Stem to Stern: Contestability Along AI Value Chains,” which took place as a hybrid one-day session on 9 November 2024. I blogged a summary and some takeaways of the workshop here. Shoutout to Agathe Balayn and Yulu Pi for leading this endeavor.

Contestable AI Talks: I was invited to speak on my PhD research at various meetings and events organized by studios, agencies, consultancies, schools, and public sector organizations. On 3 September 2024, at the data design agency CLEVER°FRANKE (slides). On 10 January 2025, at the University of Utrecht Computational Sociology group. On 19 February 2025, at digital ethics consultancy The Green Land (slides). On 6 March 2024, at Communication and Multimedia Design Amsterdam (slides). And on 17 March 2025, at the Advisory Board on Open Government and Information Management.

Designing Responsible AI: Over the course of 2024, Sara Colombo, Francesca Mauri, and I developed and taught for the first time a new Integrated Product Design master’s elective, “Designing Responsible AI” (course description). Later, on 28 March 2025, I was invited by my colleagues Alessandro Bozzon and Carlo van der Valk to give a single-morning interactive lecture on part of the same content at the course AI Products and Services (slides).

Books that represent the range of theory covered in the course “Designing Responsible AI.”

Stop the Cuts: On 2 July 2024, a far-right government was sworn in in the Netherlands (it has since fallen). They intended to cut funding to education by €2 billion. A coalition of researchers, teachers, students, and others organized to protest and strike in response. I was present at several of these actions: The alternative opening of the academic year in Utrecht on 2 September 2024. Local walkouts on 14 November 2024 (I participated in Utrecht). Mass demonstration in The Hague on 25 November 2024. Local actions on 11 December 2024 (I participated in Delft). And finally, for now at least, on 24 April 2025, at the Delft edition of the nationwide relay strike. If you read this, work in academia, and want to act, join a union (I am a member of the AOb), and sign up for the WOinActie newsletter.

End of the march during the 24 April 2025 strike in Delft.

Panels: Over the past months, I was a panelist at several events. On 22 October 2024, at the Design & AI Symposium as part of the panel “Evolving Perspectives on AI and Design,” together with Iohanna Nicenboim and Jesse Benjamin, moderated by Mathias Funk (blog post). On 13 December 2024 at TH/NGS as part of the panel “Rethink Design: Book Launch and Panel Discussion on Designing With AI” chaired by Roy Bendor (video). On 12 March 2025, at the panel “Inclusive AI: Approaches to Digital Inclusion,” chaired by Nazli Cila and Taylor Stone.

Slide I used during my panel contribution at the Design & AI symposium.

Design for Human Autonomy: I was part of several activities organized by the Delft Design for Values institute related to their annual theme of autonomy (led by Michael Klenk). I was a panelist on 15 October 2024 during the kick-off event (blog post). I wrote the section on designing AI for autonomy for the white paper edited by Udo Pesch (preprint). And during the closing symposium, master’s graduation student Ameya Sawant, whom I am coaching (with Fernando Secomandi acting as chair), was honored as a finalist in the thesis competition.

Master Graduation Students: Four master’s students whom I coached during their thesis projects graduated. Between them, their projects explored technology’s role in society through AI-mediated civic engagement, generative AI implementation in public services, experimental approaches to AI trustworthiness, and urban environmental sensing—Nina te Groen (with Achilleas Psyilidis as chair), Romée Postma (with Roy Bendor), Eline Oei (with Giulia Calabretta), and Jim Blom (with Tomasz Jaskiewicz).

Architecting for Contestability: On 22 November 2025, I ran a single-day workshop about contestability for government-employed ICT architects participating in the Digital Design & Architecture course offered by the University of Twente, on invitation from Marijn Janssen (slides).

Qualitative Design Research: On 17 December 2024, I delivered a lecture on qualitative design research for the course Empirical Design Research, on invitation from my colleague Himanshu Verma (slides). Later, on 22 April 2025, I delivered a follow-up in the form of a lecture on reflexive thematic analysis for the course Product Futures Studio, coordinated by Holly McQuillan (slides).

Democratic Generative Things: On 6 June 2025 I joined the ThingsCon unconference to discuss my contribution to the RIOT report, “Embodied AI and collective power: Designing democratic generative things” (preprint). The report was edited by Andrea Krajewski and Iskander Smit.

Me, holding forth during the ThingsCon RIOT unconference.

Learning Experience Design: I delivered the closing invited talk at LXDCON on 12 June 2025, reflecting on the impact of GenAI on the fields of education and design for learning (slides). Many thanks to Niels Floor for the invitation.

People’s Compute: I published a preprint of my position paper “People’s Compute: Design and the Politics of AI Infrastructures” over at OSF on 14 April 2025. I emailed it to peers and received over a dozen encouraging responses. It was also somehow picked up by Evgeny Morozov’s The Syllabus with some nice commentary attached.

On deck

So what am I up to at the moment? Keeping nice and busy.

  • I am co-authoring several articles, papers, and book chapters on topics including workplace automation, AI transparency, contestability in engineering, AI design and regulation, computational argumentation, explainable and participatory AI, and AI infrastructure politics. I do hope at least some of these will see the light of day in the coming months.
  • I am preparing a personal grant application that builds on the vision laid out in People’s Compute.
  • I will be delivering an invited talk at Enterprise UX on 21 November 2025.
  • I am acting as a scientific advisor to a center that is currently being established, which focuses on increasing digital autonomy within Dutch government institutions.
  • I will be co-teaching Designing Responsible AI again in Q1 of the next academic year.
  • I’ll serve as an associate chair on the CHI 2026 design subcommittee.
  • And I have signed up to begin our university’s teaching qualification certification.

Whew. That’s it. Thanks for reading (skimming?) if you’ve made it all the way to the end. I will try to circle back and do another update, maybe a little sooner than this one, say in six months’ time.

On how to think about large language models

How should we think about large language models (LLMs)? People commonly think and talk about them in terms of human intelligence. To the extent this metaphor does not accurately reflect the properties of the technology, this may lead to misguided diagnoses and prescriptions. It seems to me an LLM is not like a human or a human brain in so many ways. One crucial distinction for me is that LLMs lack individuality and subjectivity.

What are organisms that similarly lack these qualities? Coral polyps and Portuguese man o’ war come to mind, or slime mold colonies. Or maybe a single bacterium, like an E. coli. Each is essentially identical to its clones, responds automatically to chemical gradients (bringing to mind how LLMs respond to prompts), and doesn’t accumulate unique experiences in any meaningful way.

Considering all these examples, the meme about LLMs being like a shoggoth (an amorphous, blob-like monster originating in the speculative fiction of Howard Phillips Lovecraft) is surprisingly accurate. The trouble with these metaphors, though, is that such organisms are about as hard to reason about as LLMs themselves, so using them as a metaphor for thinking about LLMs won’t get us far. A shoggoth is even less helpful, because the reference will only be familiar to those who know their H.P. Lovecraft.

So perhaps we should abandon metaphorical thinking and think historically instead. LLMs are a new language technology. As with previous technologies, such as the printing press, when they are introduced, our relationship to language changes. How does this change occur?

I think the change is dialectical. First, we have a relationship to language that we recognize as our own. Then, a new technology destabilizes this relationship, alienating us from the language practice. We no longer see our own hand in it. And we experience a lack of control over language practice. Finally, we reappropriate this language use in our practices. In this process of reappropriation, language practice as a whole is transformed. And the cycle begins again.

For an example of this dialectical transformation of language practice under the influence of new technology, we can take Eisenstein’s classic account of the history of the printing press (1980). Following its introduction, many things changed about how we relate to language. Our engagement with language shifted from a primarily oral one to a visual and deliberative one. Libraries became more abundantly stocked, leading to the practice of categorization and classification of works. Preservation and analysis of stable texts became a possibility. The solitary reading experience gained prominence, producing a more private and personal relationship between readers and texts. And concern about information overload first reared its head.

All of these things were once new and alien to humans. Now we consider them part of the natural order of things. They weren’t predetermined by the technology; they emerged through an active tug of war between groups in society over what the technology would be used for, mediated by the affordances of the technology itself.

In concrete material terms, what does an LLM consist of? An LLM is just numerical values stored in computer memory: a neural network architecture consisting of billions of parameters, weights and biases, organized in matrices. The storage is distributed across multiple devices. System software loads these parameters and enables the calculation of inferences. All of this runs in physical data centers housing computing, power, cooling, and networking infrastructure. Whenever people start talking about LLMs having agency or being able to reason, I remind myself of these basic facts.
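
To see this banality up close, here is a toy sketch in PyTorch; the library and the layer sizes are my choice, purely for illustration. A single transformer layer turns out to be nothing but named matrices and vectors of numbers, and an LLM stacks many such layers.

```python
# Toy illustration: a transformer layer is just tensors of numbers.
# The sizes below are arbitrary; real LLMs use much larger ones and stack many layers.
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)

for name, param in layer.named_parameters():
    print(name, tuple(param.shape))  # weight matrices and bias vectors

total = sum(p.numel() for p in layer.parameters())
print(f"{total:,} parameters in this one layer")
```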

A printing press, although a cleverly designed, engineered, and manufactured device, is similarly banal when you break it down to its essential components. Still, the ultimate changes to how we relate to language have been profound. From these first few years of living with LLMs, I think it is not unreasonable to think they will cause similar upheavals. What is important for me is to recognize how we become alienated from language, and to see ourselves as having agency in reappropriating LLM-mediated language practice as our own.

On autonomy, design, and AI

In my thesis, I use autonomy to build the normative case for contestability. It so happens that this year’s theme at the Delft Design for Values Institute is also autonomy. On October 15, 2024, I participated in a panel discussion on autonomy to kick things off. I collected some notes on autonomy that go beyond the conceptualization I used in my thesis. I thought it might be helpful and interesting to collect some of them here in adapted form.

The notes I brought included, first of all, a summary of the ecumenical conceptualization of autonomy concerning automated decision-making systems offered by Alan Rubel, Clinton Castro, and Adam Pham (2021). They conceive of autonomy as effective self-governance. To be autonomous, we need authentic beliefs about our circumstances and the agency to act on our plans. Regarding algorithmic systems, they offer this notion of a reasonable endorsement test—the degree to which a system can be said to respect autonomy depends on its reliability, the stakes of its outputs, the degree to which subjects can be held responsible for inputs, and the distribution of burdens across groups.

Second, I collected some notes from several pieces by James Muldoon, which get into notions of freedom and autonomy that were developed in socialist republican thought by the likes of Luxemburg, Kautsky, and Castoriadis (2020, 2021a, 2021b). This story of autonomy is sociopolitical rather than moral. This approach is quite appealing for someone interested in non-ideal theory in a realist mode like myself. The account of autonomy Muldoon offers is one where individual autonomy hinges on greater group autonomy and stronger bonds of association between those producing and consuming technologies. Freedom is conceived of as collective self-determination.

And then third and finally, there is the connected idea of relational autonomy, which to a degree is part of the account offered by Rubel et al., but the conceptions here are more radical in how they seek to create distance from liberal individualism (e.g., Christman, 2004; Mhlambi & Tiribelli, 2023; Westlund, 2009). In this view, individual capacity for autonomous choice is shaped by social structures, so freedom becomes realized through networks of care, responsibility, and interdependence.

That’s what I am interested in: accounts of autonomy that are not premised on liberal individualism and that give us some alternative handle on the problem of the social control of technology in general and of AI in particular.

From my point of view, the implications of all this for design and AI include the following.

First, to make a fairly obvious but often overlooked point, the degree to which a given system impacts people’s autonomy depends on various factors. It makes little sense to make blanket statements about AI destroying our autonomy and so on.

Second, in value-sensitive design terms, you can think about autonomy as a value to be balanced against others—in the case where you take the position that all values can be considered equally important, at least in principle. Or you can consider autonomy more like a precondition for people to live with technology in concordance with their values, making autonomy take precedence over other values. The sociopolitical and relational accounts above point in this direction.

Third, suppose you buy into the radical democratic idea of technology and autonomy. In that case, it follows that it makes little sense to admonish individual designers about respecting others’ autonomy. They may be asked to privilege technologies in their designs that afford individual and group autonomy. But designers also need organization and emancipation more often than not. So it’s about building power. The power of workers inside the organizations that develop technologies and the power of communities that “consume” those same technologies. 

With AI, the fact is that, in reality, in the cases I look at, the communities that AI is brought to bear on have little say in the matter. The buyers and deployers of AI could and should be made more accountable to the people subjected to AI.

Towards a realist AI design practice?

This is a version of the opening statement I contributed to the panel “Evolving Perspectives on AI and Design” at the Design & AI symposium that was part of Dutch Design Week 2024. I had the pleasure of joining Iohanna Nicenboim and Jesse Benjamin on stage to explore what could be called the post-GenAI possibility space for design. Thanks also to Mathias Funk for moderating.

The slide I displayed:

My statement:

  1. There’s a lot of magical thinking in the AI field today. It assumes intelligence is latent in the structure of the internet. Metaphors like AGI and superintelligence are magical in nature. AI practice is also very secretive. It relies on demonstrations. This leads to a lack of rigor and political accountability (cf. Gilbert & Lambert in VentureBeat, 2023).
  2. Design in its idealist mode is easily fooled by such magic. For example, in a recent report, the Dutch Court of Audit states that 35% of government AI systems are not known to meet expectations (cf. Raji et al., 2022).
  3. What is needed is design in a realist mode. Realism focuses on who does what to whom in whose interest (cf. Geuss, 2008, 23 in von Busch & Palmås, 2023). Applied to AI, the question becomes: who gets to do AI to whom? This isn’t to say we should consider AI technologies completely inert. They mediate our being in the world (Verbeek, 2021). But we should also not consider them an independent force that simply drags us along.
  4. The challenge is to steer a path between wholesale cynical rejection on the one hand and naive, optimistic, unconditional embrace on the other.
  5. In my own work, what that looks like is to use design to make things that allow me to go into situations where people are building and using AI systems. And to use those things as instruments to ask questions related to human autonomy, social control, and collective freedom in the face of AI.
  6. The example shown is an animated short depicting a design fiction scenario involving intelligent camera cars used for policy execution in urban public space. I used this video to talk to civil servants about the challenges facing governments who want to ensure citizens remain in control of the AI systems they deploy (cf. Alfrink et al., 2023).
  7. Why is this realist? Because the work looks at how some groups of people use particular forms of actually existing AI to do things to other people. The work also foregrounds the competing interests that are at stake. And it frames AI as neither fully autonomous nor fully passive, but as a thing that mediates people’s perceptions and actions.
  8. There are more examples besides this. But I will stop here. I just want to reiterate that I think we need a realist approach to the design of AI.

‘Machine Learning for Designers’ workshop

On Wednesday, Péter Kun, Holly Robbins, and I taught a one-day workshop on machine learning at Delft University of Technology. We had about thirty master’s students from the industrial design engineering faculty. The aim was to get them acquainted with the technology through hands-on tinkering, with the Wekinator as the central teaching tool.

Photo credits: Holly Robbins

Background

The reasoning behind this workshop is twofold.

On the one hand, I expect designers will find themselves working on projects involving machine learning more and more often. The technology has certain properties that differ from traditional software. Most importantly, machine learning is probabilistic instead of deterministic. It is important that designers understand this, because otherwise they are likely to make bad decisions about its application.

The second reason is that I have a strong sense machine learning can play a role in the augmentation of the design process itself. So-called intelligent design tools could make designers more efficient and effective. They could also enable the creation of designs that would otherwise be impossible or very hard to achieve.

The workshop explored both ideas.

Photo credits: Holly Robbins

Format

The structure was roughly as follows:

In the morning we started out providing a very broad introduction to the technology. We talked about the very basic premise of (supervised) learning. Namely, providing examples of inputs and desired outputs and training a model based on those examples. To make these concepts tangible we then introduced the Wekinator and walked the students through getting it up and running using basic examples from the website. The final step was to invite them to explore alternative inputs and outputs (such as game controllers and Arduino boards).
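
For readers who want to see that premise in code rather than in the Wekinator’s interface, here is a minimal sketch using scikit-learn instead (my substitution; the Wekinator was the tool we actually used, and the numbers below are made up): provide example inputs and desired outputs, fit a model, then ask it for outputs on inputs it has not seen.

```python
# The supervised learning premise, sketched with scikit-learn:
# example inputs and desired outputs in, a trained model out.
from sklearn.neural_network import MLPRegressor

# Made-up training examples: two sensor readings in, one control value out.
X = [[0.0, 0.1], [0.2, 0.4], [0.5, 0.5], [0.9, 0.8]]
y = [0.0, 0.3, 0.5, 1.0]

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(X, y)

print(model.predict([[0.7, 0.6]]))  # an output for an input it has never seen
```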

In the afternoon we provided a design brief, asking the students to prototype a data-enabled object with the set of tools they had acquired in the morning. We assisted with technical hurdles where necessary (of which there were more than a few) and closed out the day with demos and a group discussion reflecting on their experiences with the technology.

Photo credits: Holly Robbins

Results

As I tweeted on the way home that evening, the results were… interesting.

Not all groups managed to put something together in the admittedly short amount of time they were provided with. They were most often stymied by getting an Arduino to talk to the Wekinator. Max was often picked as a go-between because the Wekinator receives OSC messages over UDP, whereas the quickest way to get an Arduino to talk to a computer is over serial. But Max in my experience is a fickle beast and would more than once crap out on us.
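
For what it’s worth, a small Python bridge using pyserial and python-osc could have stood in for Max here. This is only a sketch: the serial device path and baud rate depend on the board, and the port 6448 and /wek/inputs address follow what I recall as the Wekinator defaults, so treat all of them as assumptions to adjust.

```python
# Sketch of a serial-to-OSC bridge: read comma-separated values from an Arduino
# over serial and forward them to the Wekinator as OSC messages over UDP.
import serial                                   # pyserial
from pythonosc.udp_client import SimpleUDPClient

arduino = serial.Serial("/dev/ttyACM0", 9600)   # adjust device path and baud rate
wekinator = SimpleUDPClient("127.0.0.1", 6448)  # Wekinator's usual input port

while True:
    line = arduino.readline().decode("ascii", errors="ignore").strip()
    if not line:
        continue
    try:
        values = [float(v) for v in line.split(",")]  # e.g. "0.42,0.77"
    except ValueError:
        continue                                      # skip malformed lines
    wekinator.send_message("/wek/inputs", values)
```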

The groups that did build something mainly assembled prototypes from the examples on hand. Which is fine, but since we were mainly working with the examples from the Wekinator website they tended towards the interactive instrument side of things. We were hoping for explorations of IoT product concepts. For that more hand-rolling was required and this was only achievable for the students on the higher end of the technical expertise spectrum (and the more tenacious ones).

The discussion yielded some interesting insights into mental models of the technology and how they are affected by hands-on experience. A comment I heard more than once was: Why is this considered learning at all? The Wekinator was not perceived to be learning anything. When challenged on this by reiterating the underlying principles it became clear the black box nature of the Wekinator hampers appreciation of some of the very real achievements of the technology. It seems (for our students at least) machine learning is stuck in a grey area between too-high expectations and too-low recognition of its capabilities.

Next steps

These results, and others, point towards some obvious improvements which can be made to the workshop format, and to teaching design students about machine learning more broadly.

  1. We can improve the toolset so that some of the heavy lifting involved with getting the various parts to talk to each other is made easier and more reliable.
  2. We can build examples that are geared towards the practice of designing IoT products and are ready for adaptation and hacking.
  3. And finally, and probably most challengingly, we can make the workings of machine learning more transparent so that it becomes easier to develop a feel for its capabilities and shortcomings.

We do intend to improve and teach the workshop again. If you’re interested in hosting one (either in an educational or professional context) let me know. And stay tuned for updates on this and other efforts to get designers to work in a hands-on manner with machine learning.

Special thanks to the brilliant Ianus Keller for connecting me to Péter and for allowing us to pilot this crazy idea at IDE Academy.

References

Sources used during preparation and running of the workshop:

  • The Wekinator – the UI is infuriatingly poor but when it comes to getting started with machine learning this tool is unmatched.
  • Arduino – I have become particularly fond of the MKR1000 board. Add a lithium-polymer battery and you have everything you need to prototype IoT products.
  • OSC for Arduino – CNMAT’s implementation of the open sound control (OSC) encoding. Key puzzle piece for getting the above two tools talking to each other.
  • Machine Learning for Designers – my preferred introduction to the technology from a designerly perspective.
  • A Visual Introduction to Machine Learning – a very accessible visual explanation of the basic underpinnings of computers applying statistical learning.
  • Remote Control Theremin – an example project I prepared for the workshop demoing how to have the Wekinator talk to an Arduino MKR1000 with OSC over UDP.

Artificial intelligence, creativity and metis

Boris pointed me to CreativeAI, an interesting article about creativity and artificial intelligence. It offers a really nice overview of the development of the idea of augmenting human capabilities through technology. One of the claims the authors make is that artificial intelligence is making creativity more accessible, because tools with AI in them support humans in a range of creative tasks in a way that shortcuts the traditional requirement of long practice to acquire the necessary technical skills.

For example, ShadowDraw (PDF) is a program that helps people with freehand drawing by guessing what they are trying to create and showing a dynamically updated ‘shadow image’ on the canvas which people can use as a guide.

It is an interesting idea, and in some ways these kinds of software do indeed lower the threshold for people to engage in creative tasks. They are good examples of artificial intelligence as partner instead of master or servant.

While reading CreativeAI I wasn’t entirely comfortable, though, and I think my unease was caused by two things.

One is that I care about creativity, and I think that a good understanding of it and a daily practice at it—in the broad sense of the word—improve lives. I am also in some ways old-fashioned about it: I think the joy of creativity stems from the infinitely high skill ceiling involved and the never-ending practice it affords. Let’s call it the Jiro perspective, after the sushi chef made famous by a wonderful documentary.

So the claim that creative tools with AI in them can shortcut all of this life-long joyful toil produces a degree of panic in me, although that is probably a pastoral worldview I would do better to abandon. In a world eaten by software, it’s better to be a Promethean.

The second reason might hold more water but really is more of an open question than something I have researched in any meaningful way. I think there is more to creativity than just the technical skill required and as such the CreativeAI story runs the risk of being reductionist. While reading the article I was also slowly but surely making my way through one of the final chapters of James C. Scott’s Seeing Like a State, which is about the concept of metis.

It is probably the most interesting chapter of the whole book. Scott introduces metis as a form of knowledge different from that produced by science. Here are some quick excerpts from the book that provide a sense of what it is about. But I really can’t do the richness of his description justice here. I am trying to keep this short.

The kind of knowledge required in such endeavors is not deductive knowledge from first principles but rather what Greeks of the classical period called metis, a concept to which we shall return. […] metis is better understood as the kind of knowledge that can be acquired only by long practice at similar but rarely identical tasks, which requires constant adaptation to changing circumstances. […] It is to this kind of knowledge that [socialist writer] Luxemburg appealed when she characterized the building of socialism as “new territory” demanding “improvisation” and “creativity.”

Scott’s argument is about how authoritarian high-modernist schemes privilege scientific knowledge over metis. His exploration of what metis means is super interesting to anyone dedicated to honing a craft, or to cultivating organisations conducive to the development and application of craft in the face of uncertainty. There is a close link between metis and the concept of agility.

So, circling back to artificially intelligent tools for creativity: I would be interested in exploring not only how we can diminish the need to acquire the technical skills required, but also how we can accelerate the acquisition of the practical knowledge needed to apply such skills in the ever-changing real world. I suggest we expand our understanding of what it means to be creative, but without losing the link to actual practice.

For the ancient Greeks metis became synonymous with a kind of wisdom and cunning best exemplified by such figures as Odysseus and notably also Prometheus. The latter in particular exemplifies the use of creativity towards transformative ends. This is the real promise of AI for creativity in my eyes. Not to simply make it easier to reproduce things that used to be hard to create but to create new kinds of tools which have the capacity to surprise their users and to produce results that were impossible to create before.