Reclaiming Autonomy: Designing AI-Enhanced Work Tools That Empower Users

Based on an invited talk delivered at Enterprise UX on November 21, 2025, in Amersfoort, the Netherlands.

In a previous life, I was a practicing designer. These days I’m a postdoc at TU Delft, researching something called Contestable AI. Today I want to explore how we can design AI work tools that preserve worker autonomy—focusing specifically on large language models and knowledge work.

The Meeting We’ve All Been In

Who’s been in this meeting? Your CEO saw a demo, and now your PM is asking you to build some kind of AI feature into your product.

This is very much like Office Space: decisions about tools and automation are being made top-down, without consulting the people who actually do the work.

What I want to explore are the questions you should be asking before you go off and build that thing. Because we shouldn’t just be asking “can we build it?” but also “should we build it?” And if so, how do we build it in a way that empowers workers rather than diminishes them?

Part 1: Reality Check

What We’re Actually Building

Large language models can be thought of as databases containing programs for transforming text (Chollet, 2022). When we prompt, we’re querying that database.

The simpler precursors of LLMs let you take the word “king” and apply a “make this female” transformation, outputting “queen.” Language models work similarly but can perform much more complex transformations—give one a poem, ask for it in the style of Shakespeare, and it outputs the transformed poem.
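
To make that concrete, here is a minimal sketch of the classic word-vector analogy. The library and model are my choice for illustration (gensim with downloadable GloVe vectors); the talk doesn’t name a specific one.

```python
# The classic word-vector analogy: king - man + woman ≈ queen.
# Requires gensim (pip install gensim); downloads the vectors on first run.
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-50")  # pre-trained 50-dimensional GloVe vectors

# Vector arithmetic performs the "make this female" transformation.
result = model.most_similar(positive=["king", "woman"], negative=["man"], topn=1)
print(result)  # [('queen', ...)] — the exact similarity score depends on the model
```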

The key point: they are sophisticated text transformation machines. They are not magic. Understanding this helps us design better.

Three Assumptions to Challenge

Before adding AI, we should challenge three things:

  1. Functionality: Does it actually work?
  2. Power: Who really benefits?
  3. Practice: What skills or processes are transformed?

1. Functionality: Does It Work?

One problem with AI projects is that functionality is often assumed instead of demonstrated (Raji et al., 2022). And historically, service sector automation has not led to expected productivity gains (Benanav, 2020).

What this means: don’t just trust the demo. Demand evidence in your actual context. Ask them to show it working in production, not just in a prototype.

2. Power: Who Benefits?

Current AI developments seem to favor employers over workers. Because of this, some have started taking inspiration from the Luddites (Merchant, 2023).

It’s a common misconception that Luddites hated technology. They hated losing control over their craft. They smashed frames operated by unskilled workers that undercut skilled craftspeople (Sabie et al., 2023).

What we should be asking: who gains power, and who loses it? This isn’t about being anti-technology. It’s about being pro-empowerment.

3. Practice: What Changes?

AI-enabled work tools can have second-order effects on work practices. Automation breaks skill transmission from experts to novices (Beane, 2024). For example, surgical robots that can be remotely operated by expert surgeons mean junior surgeons don’t learn by doing.

Some work that is challenging and complex, and that requires human connection, should be preserved so that learning can happen.

On the other hand, before we automate a task, we should ask whether a process should exist at all. Otherwise, we may be simply reifying bureaucracy. As Michael Hammer put it: “don’t automate, obliterate” (1990).

Every automation project is an opportunity to liberate skilled professionals from bureaucracy.

Part 2: Control → Autonomy

All three questions are really about control. Control over whether tools serve you. Control over developing expertise. This is fundamentally about autonomy.

What Autonomy Is

A common definition of autonomy is the effective capacity for self-governance (Prunkl, 2022). It consists of two dimensions:

  • Authenticity: holding beliefs that are free from manipulation
  • Agency: having meaningful options to act on those beliefs

Both are necessary for autonomy.

Office Space examples:

  • Authenticity: Joanna’s manager tells her the minimum is 15 pieces of flair, then criticizes her for wearing “only” the minimum. Her understanding of the rules gets manipulated.
  • Agency: Lumbergh tells Peter, “Yeah, if you could come in on Saturday, that would be great.” Technically a request, but the power structure eliminates any real choice.

How AI Threatens Autonomy

AI can threaten autonomy in a variety of ways. Here are a few examples.

Manipulation — Like TikTok’s recommendation algorithm. It exploits cognitive vulnerabilities, creating personalized content loops that maximize engagement time. This makes it difficult for users to make autonomous decisions about their attention and time use.

Restricted choice — LinkedIn’s automated hiring tools can exclude qualified candidates based on biased pattern matching. Candidates are denied opportunities without human review and lack the ability to contest the decision.

Diminished competence — Routinely outsourcing writing, problem-solving, or analysis to ChatGPT without critical engagement can atrophy the very skills that make professionals valuable, much as reliance on GPS erodes our ability to navigate.

These are real risks, not hypothetical. But we can design AI systems to protect against these threats—and we can do more. We can design AI systems to actively promote autonomy.

A Toolkit for Designing AI for Autonomy

Here’s a provisional toolkit with two parts: one focusing on design process, the other on product features (Alfrink, 2025).

Process:

  • Reflexive design
  • Impact assessment
  • Stakeholder negotiation

Product:

  • Override mechanisms (sketched in code after this list)
  • Transparency
  • Non-manipulative interfaces
  • Collective autonomy support
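
To give a flavor of the product side, here is a minimal sketch of an override mechanism. All names and fields are hypothetical, not an existing API: the idea is simply that every AI suggestion passes through a human decision point, and overrides are logged with a rationale instead of being silently discarded.

```python
# Sketch of an override mechanism: the human has the last word, and every
# decision (accept, edit, reject) is recorded with the worker's rationale.
from dataclasses import dataclass
from enum import Enum, auto

class Decision(Enum):
    ACCEPTED = auto()
    EDITED = auto()
    REJECTED = auto()

@dataclass
class ReviewRecord:
    suggestion: str     # what the model proposed
    decision: Decision
    final_text: str     # what the worker actually used
    rationale: str      # the worker's reason, kept for later review

audit_log: list[ReviewRecord] = []

def review(suggestion: str, decision: Decision, final_text: str, rationale: str) -> str:
    """Record the worker's decision so overrides stay visible and reviewable."""
    audit_log.append(ReviewRecord(suggestion, decision, final_text, rationale))
    return final_text

# Example: a worker overrides the model's draft and says why.
text = review(
    suggestion="Termination requires 30 days' notice.",
    decision=Decision.EDITED,
    final_text="Termination requires 60 days' written notice.",
    rationale="House style is 60 days, and notice must be in writing.",
)
```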

I’ll focus on three elements that I think are most novel: reflexive design, stakeholder negotiation, and collective autonomy support.

Part 3: Application

Example: LegalMike

LegalMike is a Dutch legal AI platform that helps lawyers draft contracts, summarize case law, and so forth. It’s a perfect example to apply my framework—it uses an LLM and focuses on knowledge work.

1. Reflexive Design

The question here: what happens to “legal judgment” when AI drafts clauses? Does competence shift from “knowing how to argue” to “knowing how to prompt”?

We should map this before we start shipping.

This is new because standard UX doesn’t often ask how AI tools redefine the work itself.

2. Stakeholder Negotiation

Run workshops with juniors, partners, and clients:

  • Juniors might fear deskilling
  • Partners want quality control
  • Clients may want transparency

By running workshops like this, we make tensions visible and negotiate boundaries between stakeholders.

This is new because we have stakeholders negotiate what autonomy should look like, rather than just accept what exists.

3. Collective Autonomy Support

LegalMike could isolate, or it could connect. Isolating means everyone works with their own AI. But we could deliberately design it to surface connections (see the sketch after this list):

  • Show which partner’s work the AI drew from
  • Create prompts that encourage juniors to consult seniors
  • Show how firm expertise flows, not just individual outputs
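
As a sketch of what the first two bullets could look like in code: each AI draft carries provenance, so the interface can prompt juniors to consult the colleagues whose work shaped it. All names and fields here are hypothetical; I know nothing about LegalMike’s actual architecture.

```python
# Sketch of provenance-aware AI output: a draft records which colleagues'
# prior work it drew on, so the UI can suggest whom to consult.
from dataclasses import dataclass, field

@dataclass
class Source:
    author: str        # e.g., the partner whose earlier contract was retrieved
    document: str
    relevance: float   # retrieval score from the firm's document index

@dataclass
class DraftClause:
    text: str
    sources: list[Source] = field(default_factory=list)

    def colleagues_to_consult(self, threshold: float = 0.7) -> list[str]:
        """Authors whose work strongly shaped this draft."""
        return [s.author for s in self.sources if s.relevance >= threshold]

draft = DraftClause(
    text="The parties agree to binding arbitration in Amsterdam.",
    sources=[Source("Partner A", "2023 SaaS master agreement", 0.82)],
)
print(draft.colleagues_to_consult())  # ['Partner A'] — nudge the junior to check in
```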

Designing for connection counters the “individual productivity” framing that dominates AI products today.

Tool → Medium

These interventions would shift LegalMike from a pure efficiency tool to a medium for collaborative legal work that preserves professional judgment, surfaces power dynamics, and strengthens collective expertise—not just individual output.

Think of an LLM not as a robot arm that automates away knowledge work tasks—like in a Korean noodle shop. Instead, it can be a robot arm that mediates collaboration between humans to produce entirely new ways of working—like in the CRTA visual identity project for the University of Zagreb.

Conclusion

AI isn’t neutral. It’s embedded in power structures. As designers, we’re not just building features—we’re brokers of autonomy.

Every design choice we make either empowers or disempowers workers. We should choose deliberately.

And seriously, watch Office Space if you haven’t seen it. It’s the best “documentary” about workplace autonomy ever made. Mike Judge understood this as early as 1999.

Postdoc update – July 2025

I am over one year into my postdoc at TU Delft. Where did the time go? By way of an annual report, here’s a rundown of my most notable outputs and activities since the previous update from June 2024, plus some notes on what I am up to now.

Happenings

Participatory AI and ML Engineering: On 13 February 2024 at a Human Values for Smarter Cities meeting and on 11 June 2024 at a Cities Coalition for Digital Rights meeting, I presented a talk on participatory AI and ML engineering (blogged here). This has since evolved into a study I am currently running with the working title “Vision Model Macroscope.” We are designing, building, and evaluating an interface that allows municipal workers to understand and debate value-laden technical decisions made by machine learning engineers in the construction of camera vehicles. For the design, I am collaborating with CLEVER°FRANKE. The study is part of the Human Values for Smarter Cities project headed up by the Civic Interaction Design group at AUAS.

Envisioning Contestability Loops: My article “Envisioning Contestability Loops: Evaluating the Agonistic Arena as a Generative Metaphor for Public AI” (with Ianus Keller, Mireia Yurrita Semperena, Denis Bulygin, Gerd Kortuem, and Neelke Doorn) was published in She Ji on 17 June 2024. (I had already published the infographic “Contestability Loops for Public AI,” which the article revolves around, on 17 April 2024.) Later in the year, on 5 September 2024, I ran the workshop that the study builds on as a ThingsCon Salon. And on 27 September 2024, I presented the article at Lawtomation Days in Madrid, Spain, as part of the panel “Methods in law and technology research: inter- and cross-disciplinary challenges and opportunities,” chaired by Kostina Prifti (slides). (Also, John Thackara said nice things about the article online.)

Contestability Loops for Public AI infographic
Envisioning Contestability Loops workshop at ThingsCon Salon in progress.

Democratizing AI Through Continuous Adaptability: I presented on “Democratizing AI Through Continuous Adaptability: The Role of DevOps” at the TILTing Perspectives 2024 panel “The mutual shaping of democratic practices & AI,” which was chaired and moderated by Merel Noorman on 14 July 2024. I later reprised this talk at NWO ICT.OPEN on 16 April 2025 as part of the track “Human-Computer Interaction and Societal Impact in the Netherlands,” chaired by Armağan Karahanoğlu and Max Birk (PDF of slides).

From Stem to Stern: I was part of the organizing team of the CSCW 2024 workshop “From Stem to Stern: Contestability Along AI Value Chains,” which took place as a hybrid one-day session on 9 November 2024. I blogged a summary and some takeaways of the workshop here. Shoutout to Agathe Balayn and Yulu Pi for leading this endeavor.

Contestable AI Talks: I was invited to speak on my PhD research at various meetings and events organized by studios, agencies, consultancies, schools, and public sector organizations. On 3 September 2024, at the data design agency CLEVER°FRANKE (slides). On 10 January 2025, at the University of Utrecht Computational Sociology group. On 19 February 2025, at digital ethics consultancy The Green Land (slides). On 6 March 2025, at Communication and Multimedia Design Amsterdam (slides). And on 17 March 2025, at the Advisory Board on Open Government and Information Management.

Designing Responsible AI: Over the course of 2024, Sara Colombo, Francesca Mauri, and I developed and taught for the first time a new Integrated Product Design master’s elective, “Designing Responsible AI” (course description). Later, on 28 March 2025, I was invited by my colleagues Alessandro Bozzon and Carlo van der Valk to give a single-morning interactive lecture on part of the same content in the course AI Products and Services (slides).

Books that represent the range of theory covered in the course “Designing Responsible AI.”

Stop the Cuts: On 2 July 2024, a far-right government was sworn in in the Netherlands (it has since fallen). They intended to cut funding to education by €2 billion. A coalition of researchers, teachers, students, and others organized to protest and strike in response. I was present at several of these actions: The alternative opening of the academic year in Utrecht on 2 September 2024. Local walkouts on 14 November 2024 (I participated in Utrecht). Mass demonstration in The Hague on 25 November 2024. Local actions on 11 December 2024 (I participated in Delft). And finally, for now at least, on 24 April 2025, at the Delft edition of the nationwide relay strike. If you read this, work in academia, and want to act, join a union (I am a member of the AOb), and sign up for the WOinActie newsletter.

End of the march during the 24 April 2025 strike in Delft.

Panels: Over the past months, I was a panelist at several events. On 22 October 2024, at the Design & AI Symposium as part of the panel “Evolving Perspectives on AI and Design,” together with Iohanna Nicenboim and Jesse Benjamin, moderated by Mathias Funk (blog post). On 13 December 2024 at TH/NGS as part of the panel “Rethink Design: Book Launch and Panel Discussion on Designing With AI” chaired by Roy Bendor (video). On 12 March 2025, at the panel “Inclusive AI: Approaches to Digital Inclusion,” chaired by Nazli Cila and Taylor Stone.

Slide I used during my panel contribution at the Design & AI symposium.

Design for Human Autonomy: I was part of several activities organized by the Delft Design for Values institute related to their annual theme of autonomy (led by Michael Klenk). I was a panelist on 15 October 2024 during the kick-off event (blog post). I wrote the section on designing AI for autonomy for the white paper edited by Udo Pesch (preprint). And during the closing symposium, master’s graduation student Ameya Sawant, whom I am coaching (with Fernando Secomandi acting as chair), was honored as a finalist in the thesis competition.

Master Graduation Students: Four master’s students whom I coached during their thesis projects graduated. Between them, their projects explored technology’s role in society through AI-mediated civic engagement, generative AI implementation in public services, experimental approaches to AI trustworthiness, and urban environmental sensing—Nina te Groen (with Achilleas Psyilidis as chair), Romée Postma (with Roy Bendor), Eline Oei (with Giulia Calabretta), and Jim Blom (with Tomasz Jaskiewicz).

Architecting for Contestability: On 22 November 2024, I ran a single-day workshop about contestability for government-employed ICT architects participating in the Digital Design & Architecture course offered by the University of Twente, on invitation from Marijn Janssen (slides).

Qualitative Design Research: On 17 December 2024, I delivered a lecture on qualitative design research for the course Empirical Design Research, on invitation from my colleague Himanshu Verma (slides). Later, on 22 April 2025, I delivered a follow-up in the form of a lecture on reflexive thematic analysis for the course Product Futures Studio, coordinated by Holly McQuillan (slides).

Democratic Generative Things: On 6 June 2025 I joined the ThingsCon unconference to discuss my contribution to the RIOT report, “Embodied AI and collective power: Designing democratic generative things” (preprint). The report was edited by Andrea Krajewski and Iskander Smit.

Me, holding forth during the ThingsCon RIOT unconference.

Learning Experience Design: I delivered the closing invited talk at LXDCON on 12 June 2025, reflecting on the impact of GenAI on the fields of education and design for learning (slides). Many thanks to Niels Floor for the invitation.

People’s Compute: I published a preprint of my position paper “People’s Compute: Design and the Politics of AI Infrastructures” over at OSF on 14 April 2025. I emailed it to peers and received over a dozen encouraging responses. It was also somehow picked up by Evgeny Morozov’s The Syllabus with some nice commentary attached.

On deck

So what am I up to at the moment? Keeping nice and busy.

  • I am co-authoring several articles, papers, and book chapters on topics including workplace automation, AI transparency, contestability in engineering, AI design and regulation, computational argumentation, explainable and participatory AI, and AI infrastructure politics. I do hope at least some of these will see the light of day in the coming months.
  • I am preparing a personal grant application that builds on the vision laid out in People’s Compute.
  • I will be delivering an invited talk at Enterprise UX on 21 November 2025.
  • I am acting as a scientific advisor to a center that is currently being established, which focuses on increasing digital autonomy within Dutch government institutions.
  • I will be co-teaching Designing Responsible AI again in Q1 of the next academic year.
  • I’ll serve as an associate chair on the CHI 2026 design subcommittee.
  • And I have signed up to begin our university’s teaching qualification certification.

Whew. That’s it. Thanks for reading (skimming?) if you’ve made it all the way to the end. I will try to circle back and do another update, maybe a little sooner than this one, say in six months’ time.

On mapping AI value chains

At CSCW 2024, back in November of last year, we* ran a workshop titled “From Stem to Stern: Contestability Along AI Value Chains.” With it, we wanted to address a gap in contestable AI research. Current work focuses mainly on contesting specific AI decisions or outputs (for example, appealing a decision made by an automated content moderation system). But we should also look at contestability across the entire AI value chain—from raw material extraction to deployment and impact (think, for example, of data center activists opposing the construction of new hyperscale data centers). We aimed to explore how different stakeholders can contest AI systems at various points in this chain, considering issues like labor conditions, environmental impact, and data collection practices often overlooked in contestability discussions.

The workshop mixed presentations with hands-on activities. In the morning, researchers shared their work through short talks, both in person and online. The afternoon focused on mapping out where and how people can contest AI systems, from data collection to deployment, followed by detailed discussions of the practical challenges involved. We had both in-person and online participants, requiring careful coordination between facilitators. We wrapped up by synthesizing key insights and outlining future research directions.

I was responsible for being a remote facilitator most of the day. But Mireia and I also prepared and ran the first group activity, in which we mapped a typical AI value chain. I figured I might as well share the canvas we used for that here. It’s not rocket science, but it held up pretty well, so maybe some other people will get some use out of it. The canvas was designed to offer a fair bit of scaffolding for thinking through what decision points there are along the chain that are potentially value-laden.

AI value chain mapping canvas (licensed CC-BY 4.0 Mireia Yurrita & Kars Alfrink, 2024). Download PDF.

Here’s how the activity worked: We spent about 50 minutes on a structured mapping exercise in which participants identified potential contestation points along an AI value chain, using ChatGPT as an example case. The activity used a Miro board with a preliminary map showing different stages of AI development (infrastructure setup, data management, AI development, etc.). Participants first brainstormed individually for 10 minutes, adding value-laden decisions and noting stakeholders, harms, benefits, and values at stake. They then collaborated to reorganize and discuss the map for 15 minutes. The activity concluded with participants using dot voting (3 votes each) to identify the most impactful contestation sites, which were then clustered and named to feed into the next group activity.

The activity design drew from two main influences: typical value chain mapping methodologies (e.g., Mapping Actors along Value Chains, 2017), which usually emphasize tracking actors, flows, and contextual factors, and Wardley mapping (Wardley, 2022), which is characterized by the idea of a structured progression along an x-axis with an additional dimension on the y-axis.

The canvas design aimed to make AI system development more tangible by breaking it into clear phases (from infrastructure through governance) while considering visibility and materiality through the y-axis. We ultimately chose to use a familiar system (ChatGPT). This, combined with the activity’s structured approach, helped participants identify concrete opportunities for intervention and contestation along the AI value chain, which we could build on during the rest of the workshop.

I got a lot out of this workshop. Some of the key takeaways that emerged out of the activities and discussions include:

  • There’s a disconnect between legal and technical communities, from basic terminology differences to varying conceptions of key concepts like explainability, highlighting the need for translation work between disciplines.
  • We need to move beyond individual grievance models to consider collective contestation and upstream interventions in the AI supply chain.
  • We also need to shift from reactive contestation to proactive design approaches that build in contestability from the start.
  • By virtue of being hybrid, we were lucky enough to have participants from across the globe. This helped drive home to me the importance of including Global South perspectives and considering contestability beyond Western legal frameworks. We desperately need a more inclusive and globally-minded approach to AI governance.

Many thanks to all the workshop co-organizers for having me as part of the team and to Agathe and Yulu, in particular, for leading the effort.


* The full workshop team consisted of Agathe Balayn, Yulu Pi, David Gray Widder, Mireia Yurrita, Sohini Upadhyay, Naveena Karusala, Henrietta Lyons, Cagatay Turkay, Christelle Tessono, Blair Attard-Frost, Ujwal Gadiraju, and myself.

De opkomst van de meritocratie

Thijs Kleinpaste has written a fine review of Michael Young’s De opkomst van de meritocratie (the Dutch translation of The Rise of the Meritocracy) in de Nederlandse Boekengids. Below are a few passages I found particularly strong.

Young’s great merit is that he makes clear how innocent principles such as ‘reward according to merit’ can derail completely when they are deployed within an otherwise unchanged social and economic system. Concretely: giving some a chosen position in a social hierarchy and ordering others to know their place.

The class interest of the meritocracy is more abstract. The most important thing is first of all to remain a class or caste, so as to keep reaping its advantages. In every modern state power is exercised (or, put more meritocratically, governing must be done), and if there has to be a caste that fulfills this task, let it be that of the most highly credentialed. The meritocracy reproduces itself by passing this idea on to every new cohort that joins its chosen ranks: that it is the right group, called upon with good reason, to order the world. Not the working class, not unguided democracy, not the swarming of little interest groups, but they themselves. All the material advantages of the meritocracy flow from maintaining that chosen status.

Too often the assumption seems to be that representation and the serving of interests lie unproblematically in line with one another. Breaking through that complacency apparently requires something more forceful, such as the idea that wherever there are managers and administrators, it must be possible to strike: that wherever power is exercised and directions are handed down, those who must follow them can vote with their feet. That conflict is embraced rather than treated as a danger to social harmony, the ‘economy,’ or even democracy. Conflict is no doubt dangerous to the hegemony of the manager and his class of petty dream-kings, and thereby to the sovereignty of the meritocratic order, but that danger is both salutary and necessary. One of the lessons of Young’s book, after all, is that you must choose: make a revolution yourself, or wait for one to break out.

Read it yourself: https://www.nederlandseboekengids.com/20221116-thijs-kleinpaste/

Status update

This is not exactly a now page, but I thought I would write up what I am doing at the moment since last reporting on my status in my end-of-year report.

The majority of my workdays are spent doing freelance design consulting. My primary gig has been through Eend at the Dutch Victim Support Foundation, where until very recently I was part of a team building online services. I helped out with product strategy, setting up a lean UX design process, and getting an integrated agile design and development team up and running. The first services are now shipping so it is time for me to move on, after 10 months of very gratifying work. I really enjoy working in the public sector and I hope to be doing more of it in future.

So yes, this means I am available and you can hire me to do strategy and design for software products and services. Just send me an email.

Shortly before the Dutch national elections of this year, Iskander and I gathered a group of fellow tech workers under the banner of “Tech Solidarity NL” to discuss the concerning lurch to the right in national politics and what our field can do about it. This has developed into a small but active community that gathers monthly to educate ourselves and develop plans for collective action. I am getting a huge boost out of this. Figuring out how to be a leftist in this day and age is not easy. The only way to do it is to practice, and for that, reflection with peers is invaluable. Building and facilitating a group like this is hugely educational too. I have learned a lot about how a community is bootstrapped and nurtured.

If you are in the Netherlands, your politics are left of center, and you work in technology, consider yourself invited to join.

And finally, the last major thing on my plate is a continuing effort to secure a PhD position for myself. I am getting great support from people at Delft University of Technology, in particular Gerd Kortuem. I am focusing on internet of things products that have features driven by machine learning. My ultimate aim is to develop prototyping tools for design and development teams that will help them create more innovative and more ethical solutions. The first step for this will be to conduct field research inside companies that are creating such products right now. So I am reaching out to people to see if I can secure a reasonable number of potential collaborators, which will go a long way in proving the feasibility of my whole plan.

If you know of any companies that develop consumer-facing products that have a connected hardware component and make use of machine learning to drive features, do let me know.

That’s about it. Freelance UX consulting, leftist tech-worker organising and design-for-machine-learning research. Quite happy with that mix, really.

Curiosity is our product

A few weeks ago I facilitated a discussion on ‘advocacy in a post-truth era’ at the European Digital Rights Initiative’s annual general assembly. And last night I was part of a discussion on fake news at a behaviour design meetup in Amsterdam. This was a good occasion to pull together some of my notes and figure out what I think is true about the ‘fake news’ phenomenon.

There is plenty of good writing out there exploring the history and current state of post-truth political culture.

Kellyanne Conway’s “alternative facts” and Michael Gove’s “I think people have had enough of experts” are just two examples of the right’s appropriation of what I would call epistemological relativism. Post-modernism was fun while it worked to advance our leftist agenda. But now that the tables are turned we’re not enjoying it quite as much anymore, are we?

Part of the fact-free politics playbook goes back at least as far as big tobacco’s efforts to discredit the anti-smoking lobby. “Doubt is our product” still applies to modern-day reactionary movements such as climate change deniers and anti-vaxxers.

The double whammy of news industry commercialisation and internet platform consolidation has created fertile ground for coordinated efforts by various groups to turn the sowing of doubt all the way up to eleven.

There is Russia’s “firehose of falsehood,” which sends a high volume of messages across a wide range of channels with total disregard for truth or even consistency in a rapid, continuous and repetitive fashion. They seem to be having fun destabilising western democracies — including the Netherlands — without any apparent end goal in mind.

And then there is the outrage marketing leveraged by trolls both minor and major. Pissing off mainstream media builds an audience on the fringes and in the underground. Journalists are held hostage by figures such as Milo because they depend on stories that trigger strong emotions for distribution, eyeballs, clicks and ultimately revenue.

So, given all of this, what is to be done? First some bad news. Facts, the weapon of choice for liberals, don’t appear to work. This is empirically evident from recent events, but it also appears to be borne out by psychology.

Facts are often more complicated than the untruths they are supposed to counter. It is also easier to remember a simple lie than a complicated truth. Complicating matters further, facts tend to be boring. Finally, and most interestingly, there is something called the ‘backfire effect’: we become more entrenched in our views when confronted with contradicting facts, because they are threatening to our group identities.

More bad news. Given the speed at which falsehoods spread through our networks, fact-checking is useless. Fact-checking is after-the-fact-checking. Worse, when media fact-check falsehoods on their front pages they are simply providing even more airtime to them. From a strategic perspective, when you debunk, you allow yourself to be captured by your opponent’s frame, and you’re also on the defensive. In Boydian terms you are caught in their OODA loop, when you should be working to take back the initiative, and you should be offering an alternative narrative.

I am not hopeful mainstream media will save us from these dynamics given the realities of the business models they operate inside of. Journalists inside of these organisations are typically overworked, just holding on for dear life and churning out stories at a rapid clip. In short, there is no time to orient and manoeuvre. For bad-faith actors, they are sitting ducks.

What about literacy? If only people knew about churnalism, the attention economy, and filter bubbles ‘they’ would become immune to the lies peddled by reactionaries and return to the liberal fold. Personally I find these claims highly unconvincing not to mention condescending.

My current working theory is that we, all of us, buy into the stories that activate one or more of our group identities, regardless of whether they are fact-based or outright lies. This is called ‘motivated reasoning’. Since this is a fact of psychology, we are all susceptible to it, including liberals who are supposedly defenders of fact-based reasoning.

Seriously though, what about literacy? I’m sorry, no. There is evidence that scientific literacy actually increases polarisation. Motivated reasoning trumps whatever factual knowledge you may have. The same research shows, however, that curiosity in turn trumps motivated reasoning. The way I understand the distinction between literacy and curiosity is that the former is about knowledge while the latter is about attitude. Motivated reasoning isn’t counteracted by knowing stuff, but by wanting to know stuff.

This is a mixed bag. Offering facts is comparatively easy. Sparking curiosity requires storytelling which in turn requires imagination. If we’re presented with a fact we are not invited to ask questions. However, if we are presented with questions and those questions are wrapped up in stories that create emotional stakes, some of the views we hold might be destabilised.

In other words, if doubt is the product peddled by our opponents, then we should start trafficking in curiosity.

Artificial intelligence as partner

Some notes on artificial intelligence, technology as partner and related user interface design challenges. Mostly notes to self, not sure I am adding much to the debate. Just summarising what I think is important to think about more. Warning: Dense with links.

Matt Jones writes about how artificial intelligence does not have to be a slave, but can also be a partner.

I’m personally much more interested in machine intelligence as human augmentation rather than the oft-hyped AI assistant as a separate embodiment.

I would add a third possibility, which is AI as master. This is a common fear we humans have, and one I think is only growing as things like AlphaGo and new Boston Dynamics robots keep happening.

I have had a tweet pinned to my timeline for a while now, which is a quote from Play Matters.

“technology is not a servant or a master but a source of expression, a way of being”

So this idea actually does not just apply to AI but to tech in general. Of course, as tech gets smarter and more independent from humans, the idea of a ‘third way’ only grows in importance.

More tweeting. A while back, shortly after AlphaGo’s victory, James tweeted:

On the one hand, we must insist, as Kasparov did, on Advanced Go, and then Advanced Everything Else https://en.wikipedia.org/wiki/Advanced_Chess

Advanced Chess is a clear example of humans and AI partnering. And it is also an example of technology as a source of expression and a way of being.

Also, in a WIRED article on AlphaGo, someone who had played the AI repeatedly says his game has improved tremendously.

So that is the promise: Artificially intelligent systems which work together with humans for mutual benefit.

Now of course these AIs don’t just arrive into the world fully formed. They are created by humans with particular goals in mind. So there is a design component there. We can design them to be partners but we can also design them to be masters or slaves.

As an aside: Maybe AIs that make use of deep learning are particularly well suited to this partner model? I do not know enough about it to say for sure. But I was struck by this piece on why Google ditched Boston Dynamics. There apparently is a significant difference between holistic and reductionist approaches, deep learning being holistic. I imagine reductionist AI might be more dependent on humans. But this is just wild speculation. I don’t know if there is anything there.

This insistence of James on “advanced everything else” is a world view. A politics. To allow ourselves to be increasingly entangled with these systems, to not be afraid of them. Because if we are afraid, we either want to subjugate them or they will subjugate us. It is also about not obscuring the systems we are part of. This is a sentiment also expressed by James in the same series of tweets I quoted from earlier:

These emergences are also the best model we have ever built for describing the true state of the world as it always already exists.

And there is overlap here with ideas expressed by Kevin in ‘Design as Participation’:

[W]e are no longer just using computers. We are using computers to use the world. The obscured and complex code and engineering now engages with people, resources, civics, communities and ecosystems. Should designers continue to privilege users above all others in the system? What would it mean to design for participants instead? For all the participants?

AI partners might help us to better see the systems the world is made up of and engage with them more deeply. This hope is expressed by Matt Webb, too:

with the re-emergence of artificial intelligence (only this time with a buddy-style user interface that actually works), this question of “doing something for me” vs “allowing me to do even more” is going to get even more pronounced. Both are effective, but the first sucks… or at least, it sucks according to my own personal politics, because I regard individual alienation from society and complex systems as one of the huge threats in the 21st century.

I am reminded of the mixed-initiative systems being researched in the area of procedural content generation for games. I wrote about these a while back on the Hubbub blog. Such systems are partners of designers. They give something like super powers. Now imagine such powers applied to other problems. Quite exciting.

Actually, in the aforementioned article I distinguish between tools for making things and tools for inspecting possibility spaces. In the first case designers manipulate more abstract representations of the intended outcome and the system generates the actual output. In the second case the system visualises the range of possible outcomes given a particular configuration of the abstract representation. These two are best paired.

From a design perspective, a lot remains to be figured out. If I look at those mixed-initiative tools I am struck by how poorly they communicate what the AI is doing and what its capabilities are. There is a huge user interface design challenge there.

For stuff focused on getting information, a conversational UI seems to be the current local optimum for working with an AI. But for tools for creativity, to use the two-way split proposed by Victor, different UIs will be required.

What shape will they take? What visual language do we need to express the particular properties of artificial intelligence? What approaches can we take in addition to personifying AI as bots or characters? I don’t know and I can hardly think of any good examples that point towards promising approaches. Lots to be done.

"Anonymous Scientology 1 by David Shankbone" by David Shankbone - David Shankbone. Licensed under CC BY-SA 3.0 via Wikimedia Commons - https://commons.wikimedia.org/wiki/File:Anonymous_Scientology_1_by_David_Shankbone.JPG#mediaviewer/File:Anonymous_Scientology_1_by_David_Shankbone.JPG

Political play is a mode of thinking critically about politics, and of developing an agonistic approach to those politics. This agonism is framed through carnivalesque chaos and humour, through the appropriation of the world for playing. By playing, by carefully negotiating the purpose of playing between pleasure and the political, we engage in a transformative act.

Quote taken from “Participatory Republics: Play and the Political” by Miguel Sicart on the Play Matters book blog.

I love where Miguel is going with his thinking on the relationship between play, politics, appropriation and resistance.

I am interested in this because at Hubbub we have been exploring similar themes through the making of games and things-you-can-play-with.

The big challenges with this remain in the area of instrumentalisation – if you set out to design a thing that encourages this kind of play you often end up with something that is far from playful.

But the opportunities are huge because so much of today’s struggles of individuals against the state relate to legibility and control in some way, and play is the perfect antidote.

For example shortly after reading Miguel’s piece I came across this McKenzie Wark piece on extrastatecraft via Honor Harger. Extrastatecraft shifts the focus from architecture and politics to infrastructure.

Infrastructure is how power deploys itself, and it does so much faster than law or democracy.

You should read the whole thing. What’s fascinating is that Wark briefly discusses strategies and tactics for resisting such statecraft.

So the world might be run not by statecraft but at least in part by extrastatecraft. Easterling: “Avoiding binary dispositions, this field of activity calls for experiments with ongoing forms of leverage, reciprocity, and vigilance to counter the violence immanent in the space of extrastatecraft.” (149) She has some interesting observations on the tactics for this. Some exploit the informational character of third nature, such as gossip, rumor and hoax. She also discusses the possibilities of the gift or of exaggerated compliance (related perhaps to Zizek’s over-identification), and of mimicry and comedy.

“Gossip, rumor and hoax” sound a lot like the carnivalesque reflective-in-action political play Miguel is talking about.

To finish off, here’s a video of the great James C. Scott on the art of not being governed. He talks at length about how peoples have historically fled from statecraft into geographical zones unreachable by power’s infrastructure. And how they deploy their own, state-resistant infrastructure (such as particular kinds of crops) to remain illegible and uncapturable.

Reboot 10 slides and video

I am breaking radio silence for a bit to let you know the slides and video for my Reboot 10 presentation are now available online, in case you’re interested. I presented this talk before at The Web and Beyond, but this time I had a lot more time, and I presented in English. I therefore think this might still be of interest to some people.1 As always, I am very interested in receiving constructive criticism. Just drop me a line in the comments.

Update: It occurred to me that it might be a good idea to briefly summarize what this is about. This is a presentation in two parts. In the first, I theorize about the emergence of games that have as their goal the conveying of an argument. These games would use the real-time city as their platform. It is these games that I call urban procedural rhetorics. In the second part I give a few examples of what such games might look like, using a series of sketches.

The slides, posted to SlideShare, as usual:

The video, hosted on the Reboot website:

  1. I did post a transcript in English before, in case you prefer reading to listening. []

Urban procedural rhetorics — transcript of my TWAB 2008 talk

This is a transcript of my presentation at The Web and Beyond 2008: Mobility in Amsterdam on 22 May. Since the majority of paying attendees were local I presented in Dutch. However, English appears to be the lingua franca of the internet, so here I offer a translation. I have uploaded the slides to SlideShare and hope to be able to share a video recording of the whole thing soon.

Update: I have uploaded a video of the presentation to Vimeo. Many thanks to Almar van der Krogt for recording this.

In 1966 a number of members of Provo took to the streets of Amsterdam carrying blank banners. Provo was a nonviolent anarchist movement. They primarily occupied themselves with provoking the authorities in a “ludic” manner. Nothing was written on their banners because the mayor of Amsterdam had banned the slogans “freedom of speech”, “democracy” and “right to demonstrate”. Regardless, the members were arrested by police, showing that the authorities did not respect their right to demonstrate.1

Good afternoon everyone, my name is Kars Alfrink, I’m a freelance interaction designer. Today I’d like to talk about play in public space. I believe that with the arrival of ubiquitous computing in the city new forms of play will be made possible. The technologies we shape will be used for play whether we want to or not. As William Gibson writes in Burning Chrome:

“…the street finds its own uses for things”

For example: Skateboarding as we now know it — with its emphasis on aerial acrobatics — started in empty pools like this one. That was done without permission, of course…

Only later did half-pipes, ramps, verts (a term, by the way, derived from ‘vertical’) and skateparks arrive — areas where skateboarding is tolerated. Skateboarding would not be what it is today without those first few empty pools.2

  1. The website of Gramschap contains a chronology of the Provo movement in Dutch. []
  2. For a vivid account of the emergence of the vertical style of skateboarding see the documentary film Dogtown and Z-Boys. []