People’s Compute: Design and the Politics of AI Infrastructures

This post is adapted from a talk given at “The Politics of AI: Governance, Resistance, Alternatives” at Goldsmiths, University of London, on 18 September 2025.

The hidden layer

When we talk about making AI more fair, transparent, or accountable, we typically focus on the apps and interfaces that people actually use. We ask: How can we make this recommendation algorithm more explainable? How can users appeal a decision made by this system? These are important questions. But they miss something fundamental.

The premise of this talk is simple: AI infrastructure constrains what’s possible at the application level. The data centers, the compute resources, the platforms, and APIs that AI applications are built on top of all shape what designers can and cannot do. Before a single line of application code is written, decisions have already been made about who controls the underlying systems.

This plays out across every domain where AI is being deployed. Agricultural machinery depends on cloud services. Medical imaging runs on specific hardware platforms. Smart city projects require massive infrastructure investments. In each case, the infrastructure layer sets the terms.

Design’s complicity

Here’s an uncomfortable truth for designers: every smart device we create reinforces Big Tech’s control over infrastructure. Even products that position themselves as alternatives to smartphones, like the Humane AI Pin or the Rabbit R1, ultimately depend on the same concentrated cloud computing resources. The innovation happens at the surface while the underlying power structures remain unchanged.

A different question

What if communities owned the compute that shapes their futures?

This isn’t a new question. In the early 1990s, Amsterdam launched De Digitale Stad (The Digital City). At a time when internet access was restricted to technical elites, DDS provided free accounts, email, and web space to anyone with a modem. Public terminals made the net accessible to all residents. The project used the city as an organizing metaphor, with virtual squares, houses, and public spaces.

The history of DDS is instructive. It started as a radical access project, became enormously popular, and eventually faced tensions between community demands for democratic control and organizational leadership. It was privatized, sold to British Telecom, and absorbed by a commercial provider. But volunteers responded by creating “De echte Digitale Stad” (The Real Digital City) to continue the original mission.

The lesson here is that community-initiated digital infrastructure can thrive. But it also reveals the tensions between user democracy and organizational control, as well as how privatization redirects public goods toward profit. Communities persist in reclaiming their digital commons, even when the institutions they built are captured.

A political framework

To think through AI infrastructure politics, I draw on what’s called the socialist republican ideal. The core idea is that freedom should be understood as collective self-determination, not just individual choice. This is different from the liberal framework that dominates most AI ethics discussions, which treats autonomy as something individuals possess and must protect from external interference.

If we take collective self-determination seriously, then the question isn’t just whether an individual user can contest an AI decision. The question is whether communities can shape the technological systems that structure their lives.

Alternative ways of doing AI

Research on the political economy of AI shows that current development is characterized by resource-intensive centralization. The massive compute requirements of large language models benefit companies with huge infrastructure investments. This creates dependencies and reinforces existing power imbalances.

But technical choices are political choices. Researchers have called for alternative approaches to AI development that prioritize reduced compute requirements, greater transparency, and broader accessibility. There’s work being done on Indigenous data sovereignty, circular systems, and sustainable technology that challenge the dominant model. These aren’t just technical alternatives. They represent different visions of who AI is for and who gets to shape it.

Three moves

To design for democratic AI, I propose we need to make three moves.

From applications to infrastructures

Design must reveal and engage the invisible. Data centers, compute resources, maintenance work: these things typically remain hidden in everyday life. But they shape what’s possible. Designers need to expand their focus beyond user interfaces to understand how technical abstractions manifest in the products they create.

This means adopting methodological approaches that capture extended temporal and spatial dimensions. It means studying infrastructure over time, tracking how technologies move across contexts, and using speculative design to explore capabilities, limitations, and dependencies.

Ethnographic methods, focusing on what researchers call “infrastructure time,” offer underexplored pathways for connecting material, everyday experiences with the invisible forces of AI infrastructure.

From individuals to collectives

Most design has historically focused on individual consumers or users. Even approaches that address collective needs typically treat collectives as just collections of individuals. However, designing for groups as a whole requires a different approach.

This means shifting from designing for users to designing with publics. It means repositioning designers as embedded accomplices who build capacity within communities. The goal isn’t to deliver a finished product but to create infrastructures for ongoing appropriation, what some researchers call “design after design.”

We live in an era where many member-based associations that could serve as vehicles for collective action have weakened or disappeared. Part of the design task may be helping to construct new forms of collective organization around technological concerns.

From idealism to realism

Most design that seeks to transform social arrangements starts from abstract principles and then applies them to concrete situations. A realist approach inverts this. It starts from actual power relations, not abstract ethics. It asks: Who does what to whom, for whose benefit?

This means beginning from specific situations in their historical context. It means focusing on the interests of people in those contexts and how those interests are collectively articulated. And it means accounting for how design work interacts with existing power relations, rather than assuming that good intentions or participatory processes automatically produce beneficial outcomes.

Why now

We’re at an interesting political moment. The post-political neoliberal technocracy that dominated recent decades is being challenged by populist anti-politics. Between these two unsatisfying options lies a democratic possibility.

Contemporary global politics has shifted toward what’s called sovereigntism, and this trend has notably affected AI policy discussions. A focus on infrastructures, collectives, and real politics can help articulate positions that differ from both technocratic management and reactionary populism. It might help us make actual headway toward social progress rather than oscillating between these poles.

Get involved

This is ongoing work. The full paper is available as a preprint at osf.io/uaewn_v2. I welcome feedback and collaboration at c.p.alfrink@tudelft.nl.


Kars Alfrink is a researcher at Delft University of Technology working on contestable AI. More at contestable.ai.

Reclaiming Autonomy: Designing AI-Enhanced Work Tools That Empower Users

Based on an invited talk delivered at Enterprise UX on November 21, 2025, in Amersfoort, the Netherlands.

In a previous life, I was a practicing designer. These days I’m a postdoc at TU Delft, researching something called Contestable AI. Today I want to explore how we can design AI work tools that preserve worker autonomy—focusing specifically on large language models and knowledge work.

The Meeting We’ve All Been In

Who’s been in this meeting? Your CEO saw a demo, and now your PM is asking you to build some kind of AI feature into your product.

This is very much like Office Space: decisions about tools and automation are being made top-down, without consulting the people who actually do the work.

What I want to explore are the questions you should be asking before you go off and build that thing. Because we shouldn’t just be asking “can we build it?” but also “should we build it?” And if so, how do we build it in a way that empowers workers rather than diminishes them?

Part 1: Reality Check

What We’re Actually Building

Large language models can be thought of as databases containing programs for transforming text (Chollet, 2022). When we prompt, we’re querying that database.

Simpler precursors of LLMs, word embedding models, would let you take the word “king,” ask to make it female, and get out “queen.” Language models work similarly but can do much more complex transformations—give one a poem, ask for it in the style of Shakespeare, and it outputs a transformed poem.

The key point: they are sophisticated text transformation machines. They are not magic. Understanding this helps us design better.
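The “king”-to-“queen” transformation can be sketched with toy word vectors. This is a minimal illustration of the idea, not output from a real embedding model; the numbers are made up so the analogy works out:

```python
import math

# Toy 3-dimensional "embeddings" -- illustrative numbers, not from a real model.
VECTORS = {
    "king":  [1.0, 0.2, 1.0],
    "queen": [0.1, 1.0, 1.0],
    "man":   [1.0, 0.1, 0.1],
    "woman": [0.1, 1.0, 0.1],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def nearest(v, exclude=()):
    """Word whose vector is most similar to v, ignoring excluded words."""
    return max((w for w in VECTORS if w not in exclude),
               key=lambda w: cosine(v, VECTORS[w]))

# "king" - "man" + "woman": the classic analogy lands nearest to "queen".
target = [k - m + w for k, m, w in
          zip(VECTORS["king"], VECTORS["man"], VECTORS["woman"])]
print(nearest(target, exclude={"king", "man", "woman"}))  # queen
```

In real models the vectors have hundreds of dimensions and are learned from text, but the arithmetic is the same: transformations of meaning become transformations of vectors.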

Three Assumptions to Challenge

Before adding AI, we should challenge three things:

  1. Functionality: Does it actually work?
  2. Power: Who really benefits?
  3. Practice: What skills or processes are transformed?

1. Functionality: Does It Work?

One problem with AI projects is that functionality is often assumed instead of demonstrated (Raji et al., 2022). And historically, service sector automation has not led to expected productivity gains (Benanav, 2020).

What this means: don’t just trust the demo. Demand evidence in your actual context. Ask them to show it working in production, not a prototype.

2. Power: Who Benefits?

Current AI developments seem to favor employers over workers. Because of this, some have started taking inspiration from the Luddites (Merchant, 2023).

It’s a common misconception that Luddites hated technology. They hated losing control over their craft. They smashed frames operated by unskilled workers that undercut skilled craftspeople (Sabie et al., 2023).

What we should be asking: who gains power, and who loses it? This isn’t about being anti-technology. It’s about being pro-empowerment.

3. Practice: What Changes?

AI-enabled work tools can have second-order effects on work practices. Automation breaks skill transmission from experts to novices (Beane, 2024). For example, surgical robots that can be remotely operated by expert surgeons mean junior surgeons don’t learn by doing.

Some work that is challenging, complex, and requires human connection should be preserved so that learning can happen.

On the other hand, before we automate a task, we should ask whether a process should exist at all. Otherwise, we may be simply reifying bureaucracy. As Michael Hammer put it: “don’t automate, obliterate” (1990).

Every automation project is an opportunity to liberate skilled professionals from bureaucracy.

Part 2: Control → Autonomy

All three questions are really about control. Control over whether tools serve you. Control over developing expertise. This is fundamentally about autonomy.

What Autonomy Is

A common definition of autonomy is the effective capacity for self-governance (Prunkl, 2022). It consists of two dimensions:

  • Authenticity: holding beliefs that are free from manipulation
  • Agency: having meaningful options to act on those beliefs

Both are necessary for autonomy.

Office Space examples:

  • Authenticity: Joanna’s manager tells her the minimum is 15 pieces of flair, then criticizes her for wearing “only” the minimum. Her understanding of the rules gets manipulated.
  • Agency: Lumbergh tells Peter, “Yeah, if you could come in on Saturday, that would be great.” Technically a request, but the power structure eliminates any real choice.

How AI Threatens Autonomy

AI can threaten autonomy in a variety of ways. Here are a few examples.

Manipulation — Like TikTok’s recommendation algorithm. It exploits cognitive vulnerabilities, creating personalized content loops that maximize engagement time. This makes it difficult for users to make autonomous decisions about their attention and time use.

Restricted choice — LinkedIn’s automated hiring tools can automatically exclude qualified candidates based on biased pattern matching. Candidates are denied opportunities without human review and lack the ability to contest the decision.

Diminished competence — Routinely outsourcing writing, problem-solving, or analysis to ChatGPT without critical engagement can lead to atrophying the very skills that make professionals valuable. Similar to how reliance on GPS erodes navigational abilities.

These are real risks, not hypothetical. But we can design AI systems to protect against these threats—and we can do more. We can design AI systems to actively promote autonomy.

A Toolkit for Designing AI for Autonomy

Here’s a provisional toolkit with two parts: one focusing on design process, the other on product features (Alfrink, 2025).

Process:

  • Reflexive design
  • Impact assessment
  • Stakeholder negotiation

Product:

  • Override mechanisms
  • Transparency
  • Non-manipulative interfaces
  • Collective autonomy support

I’ll focus on three elements that I think are most novel: reflexive design, stakeholder negotiation, and collective autonomy support.

Part 3: Application

Example: LegalMike

LegalMike is a Dutch legal AI platform that helps lawyers draft contracts, summarize case law, and so forth. It’s a perfect example to apply my framework—it uses an LLM and focuses on knowledge work.

1. Reflexive Design

The question here: what happens to “legal judgment” when AI drafts clauses? Does competence shift from “knowing how to argue” to “knowing how to prompt”?

We should map this before we start shipping.

This is new because standard UX doesn’t often ask how AI tools redefine the work itself.

2. Stakeholder Negotiation

Run workshops with juniors, partners, and clients:

  • Juniors might fear deskilling
  • Partners want quality control
  • Clients may want transparency

By running workshops like this, we make tensions visible and negotiate boundaries between stakeholders.

This is new because we have stakeholders negotiate what autonomy should look like, rather than just accept what exists.

3. Collective Autonomy Support

LegalMike could isolate, or connect. Isolating means everyone working alone with their own AI. But we could deliberately design it to surface connections:

  • Show which partner’s work the AI drew from
  • Create prompts that encourage juniors to consult seniors
  • Show how firm expertise flows, not just individual outputs

This counters the “individual productivity” framing that dominates AI products today.

Tool → Medium

These interventions would shift LegalMike from a pure efficiency tool to a medium for collaborative legal work that preserves professional judgment, surfaces power dynamics, and strengthens collective expertise—not just individual output.

Think of LLMs not as a robot arm that automates away knowledge work tasks—like in a Korean noodle shop. Instead, they can be the robot arm that mediates collaboration between humans to produce entirely new ways of working—like in the CRTA visual identity project for the University of Zagreb.

Conclusion

AI isn’t neutral. It’s embedded in power structures. As designers, we’re not just building features—we’re brokers of autonomy.

Every design choice we make either empowers or disempowers workers. We should choose deliberately.

And seriously, watch Office Space if you haven’t seen it. It’s the best “documentary” about workplace autonomy ever made. Mike Judge understood this as early as 1999.

Participatory AI and ML engineering

In the first half of this year, I’ve presented several versions of a brief talk on participatory AI. I figured I would post an amalgam of these to the blog for future reference. (Previously, on the blog, I posted a brief lit review on the same topic; this talk builds on that.)

So, to start, the main point of this talk is that many participatory approaches to AI don’t engage deeply with the specifics of the technology. One such specific is the translation work engineers do to make a problem “learnable” by a machine (Kang, 2023). From this perspective, the main question to ask becomes, how does translation happen in our specific projects? Should citizens be involved in this translation work? If so, how to achieve this?

Before we dig into the state of participatory AI, let’s begin by clarifying why we might want to enable participation in the first place. A common motivation is a lack of democratic control over AI systems. (This is particularly concerning when AI systems are used for government policy execution. These are the systems I mostly look at in my own research.) And so the response is to bring the people into the development process, and to let them co-decide matters.

In these cases, participation can be understood as an enabler of democratic agency, i.e., a way for subjects to legitimate the use of AI systems (cf. Peter, 2020 in Rubel et al., 2021). Peter distinguishes two pathways: a normative one and a democratic one. Participation can be seen as an example of the democratic pathway to legitimation. A crucial detail Peter mentions here, which is often overlooked in participatory AI literature, is that normative constraints must limit the democratic pathway to avoid arbitrariness.

So, what is the state of participatory AI research and practice? I will look at each in turn next.

As mentioned, I previously posted on the state of participatory AI research, so I won’t repeat that in full here. (For the record, I reviewed Birhane et al. (2022), Bratteteig & Verne (2018), Delgado et al. (2023), Ehsan & Riedl (2020), Feffer et al. (2023), Gerdes (2022), Groves et al. (2023), Robertson et al. (2023), Sloane et al. (2020), and Zytko et al. (2022).) Elements that jump out include:

  • Superficial and unrepresentative involvement.
  • Piecemeal approaches that have minimal impact on decision-making.
  • Participants with a consultative role rather than that of active decision-makers.
  • A lack of bridge-builders between stakeholder perspectives.
  • Participation washing and exploitative community involvement.
  • Struggles with the dynamic nature of technology over time.
  • Discrepancies between the time scales for users to evaluate design ideas versus the pace at which systems are developed.
  • A demand for participation that enhances community knowledge and actually empowers communities.

Taking a step back, if I were to evaluate the state of the scientific literature on participatory AI, it strikes me that many of these issues are not new to AI. They have been present in participatory design more broadly for some time already. Many of these issues are also not necessarily specific to AI. The ones I would call out include the issues related to AI system dynamism, time scales of participation versus development, and knowledge gaps between various actors in participatory processes (and, relatedly, the lack of bridge-builders).

So, what about practice? Let’s look at two reports that I feel are a good representation of the broader field: Framework for Meaningful Stakeholder Involvement by ECNL & SocietyInside, and Democratizing AI: Principles for Meaningful Public Participation by Data & Society.

Framework for Meaningful Stakeholder Involvement is aimed at businesses, organizations, and institutions that use AI. It focuses on human rights, ethical assessment, and compliance. It aims to be a tool for planning, delivering, and evaluating stakeholder engagement effectively, emphasizing three core elements: Shared Purpose, Trustworthy Process, and Visible Impact.

Democratizing AI frames public participation in AI development as a way to add legitimacy and accountability and to help prevent harmful impacts. It outlines risks associated with AI, including biased outcomes, opaque decision-making processes, and designers lacking real-world impact awareness. Causes for ineffective participation include unidirectional communication, socioeconomic barriers, superficial engagement, and ineffective third-party involvement. The report uses environmental law as a reference point and offers eight guidelines for meaningful public participation in AI.

Taking stock of these reports, we can say that the building blocks for the overall process are available to those seriously looking. The challenges facing participatory AI are, on the one hand, economic and political. On the other hand, they are related to the specifics of the technology at hand. For the remainder of this piece, let’s dig into the latter a bit more.

Let’s focus on translation work done by engineers during model development.

For this, I build on work by Kang (2023), which focuses on the qualitative analysis of how phenomena are translated into ML-compatible forms, paying specific attention to the ontological translations that occur in making a problem learnable. Translation in ML means transforming complex qualitative phenomena into quantifiable and computable forms. Multifaceted problems are converted into a “usable quantitative reference” or “ground truth.” This translation is not a mere representation of reality but a reformulation of a problem into mathematical terms, making it understandable and processable by ML algorithms.

This transformation involves a significant amount of “ontological dissonance,” as it mediates and often simplifies the complexity of real-world phenomena into a taxonomy or set of classes for ML prediction. The process of translating is based on assumptions and standards that may alter the nature of the ML task and introduce new social and technical problems.

So what? I propose we can use the notion of translation as a frame for ML engineering. Understanding ML model engineering as translation is a potentially useful way to analyze what happens at each step of the process: What gets selected for translation, how the translation is performed, and what the resulting translation consists of.

So, if we seek to make participatory AI engage more with the technical particularities of ML, we could begin by identifying translations that have happened or might happen in our projects. We could then ask to what extent these acts of translation are value-laden. For those that are, we could think about how to communicate these translations to a lay audience. A particular challenge I expect we will be faced with is what the meaningful level of abstraction for citizen participation during AI development is. We should also ask what the appropriate ‘vehicle’ for citizen participation will be. And we should seek to move beyond small-scale, one-off, often unrepresentative forms of direct participation.

Bibliography

  • Birhane, A., Isaac, W., Prabhakaran, V., Diaz, M., Elish, M. C., Gabriel, I., & Mohamed, S. (2022). Power to the People? Opportunities and Challenges for Participatory AI. Equity and Access in Algorithms, Mechanisms, and Optimization, 1–8. https://doi.org/10/grnj99
  • Bratteteig, T., & Verne, G. (2018). Does AI make PD obsolete?: Exploring challenges from artificial intelligence to participatory design. Proceedings of the 15th Participatory Design Conference: Short Papers, Situated Actions, Workshops and Tutorial – Volume 2, 1–5. https://doi.org/10/ghsn84
  • Delgado, F., Yang, S., Madaio, M., & Yang, Q. (2023). The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice. Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, 1–23. https://doi.org/10/gs8kvm
  • Ehsan, U., & Riedl, M. O. (2020). Human-Centered Explainable AI: Towards a Reflective Sociotechnical Approach. In C. Stephanidis, M. Kurosu, H. Degen, & L. Reinerman-Jones (Eds.), HCI International 2020—Late Breaking Papers: Multimodality and Intelligence (pp. 449–466). Springer International Publishing. https://doi.org/10/gskmgf
  • Feffer, M., Skirpan, M., Lipton, Z., & Heidari, H. (2023). From Preference Elicitation to Participatory ML: A Critical Survey & Guidelines for Future Research. Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 38–48. https://doi.org/10/gs8kvx
  • Gerdes, A. (2022). A participatory data-centric approach to AI Ethics by Design. Applied Artificial Intelligence, 36(1), 2009222. https://doi.org/10/gs8kt4
  • Groves, L., Peppin, A., Strait, A., & Brennan, J. (2023). Going public: The role of public participation approaches in commercial AI labs. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 1162–1173. https://doi.org/10/gs8kvs
  • Kang, E. B. (2023). Ground truth tracings (GTT): On the epistemic limits of machine learning. Big Data & Society, 10(1), 1–12. https://doi.org/10/gtfgvx
  • Peter, F. (2020). The Grounds of Political Legitimacy. Journal of the American Philosophical Association, 6(3), 372–390. https://doi.org/10/grqfhn
  • Robertson, S., Nguyen, T., Hu, C., Albiston, C., Nikzad, A., & Salehi, N. (2023). Expressiveness, Cost, and Collectivism: How the Design of Preference Languages Shapes Participation in Algorithmic Decision-Making. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1–16. https://doi.org/10/gr6q2t
  • Rubel, A., Castro, C., & Pham, A. K. (2021). Algorithms and autonomy: The ethics of automated decision systems. Cambridge University Press.
  • Sloane, M., Moss, E., Awomolo, O., & Forlano, L. (2020). Participation is not a Design Fix for Machine Learning. arXiv:2007.02423 [Cs]. http://arxiv.org/abs/2007.02423
  • Zytko, D., J. Wisniewski, P., Guha, S., P. S. Baumer, E., & Lee, M. K. (2022). Participatory Design of AI Systems: Opportunities and Challenges Across Diverse Users, Relationships, and Application Domains. Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems, 1–4. https://doi.org/10/gs8kv6

Democratizing AI Through Continuous Adaptability: The Role of DevOps

Below are the abstract and slides for my contribution to the TILTing Perspectives 2024 panel “The mutual shaping of democratic practices & AI,” moderated by Merel Noorman.

Slides

Abstract

Contestability

This presentation delves into democratizing artificial intelligence (AI) systems through contestability. Contestability refers to the ability of AI systems to remain open and responsive to disputes throughout their lifecycle. It approaches AI systems as arenas where groups compete for power over designs and outcomes.

Autonomy, democratic agency, legitimation

We identify contestability as a critical system quality for respecting people’s autonomy. This includes their democratic agency: their ability to legitimate policies. This includes policies enacted by AI systems.

For a decision to be legitimate, it must be democratically willed or rely on “normative authority.” The democratic pathway should be constrained by normative bounds to avoid arbitrariness. The appeal to authority should meet the “access constraint,” which ensures citizens can form beliefs about policies with a sufficient degree of agency (Peter, 2020 in Rubel et al., 2021).

Contestability is the quality that ensures mechanisms are in place for subjects to exercise their democratic agency. In the case of an appeal to normative authority, contestability mechanisms are how subjects and their representatives gain access to the information that will enable them to evaluate its justifiability. In this way, contestability satisfies the access constraint. In the case of democratic will, contestability-by-design practices are how system development is democratized. The autonomy account of legitimation adds the normative constraints that should bind this democratic pathway.

Himmelreich (2022) similarly argues that only a “thick” conception of democracy will address some of the current shortcomings of AI development. This is a pathway that not only allows for participation but also includes deliberation over justifications.

The agonistic arena

Elsewhere, we have proposed the Agonistic Arena as a metaphor for thinking about the democratization of AI systems (Alfrink et al., 2024). Contestable AI embodies the generative metaphor of the Arena. This metaphor characterizes public AI as a space where interlocutors embrace conflict as productive. Seen through the lens of the Arena, public AI problems stem from a need for opportunities for adversarial interaction between stakeholders.

This metaphorical framing suggests prescriptions for making the norms and procedures that shape the following more contentious and open to dispute:

  1. AI system design decisions on a global level, and
  2. human-AI system output decisions on a local level (i.e., individual decision outcomes), establishing new dialogical feedback loops between stakeholders that ensure continuous monitoring.

The Arena metaphor encourages a design ethos of revisability and reversibility so that AI systems embody the agonistic ideal of contingency.

Post-deployment malleability, feedback-ladenness

Unlike physical systems, AI technologies exhibit a unique malleability post-deployment.

For example, LLM chatbots optimize their performance based on a variety of feedback sources, including interactions with users, as well as feedback collected through crowd-sourced data work.

Because of this open-endedness, democratic control and oversight in the operations phase of the system’s lifecycle become a particular concern.

This is a concern because while AI systems are dynamic and feedback-laden (Gilbert et al., 2023), many of the existing oversight and control measures are static, one-off exercises that struggle to track systems as they evolve over time.

DevOps

The field of DevOps is pivotal in this context. DevOps focuses on system instrumentation for enhanced monitoring and control for continuous improvement. Typically, metrics for DevOps and its machine-learning-specific offshoot, MLOps, emphasize technical performance and business objectives.

However, there is scope to expand these to include matters of public concern. The matters-of-concern perspective shifts the focus to issues such as fairness or discrimination, viewing them as challenges that cannot be resolved through universal methods with absolute certainty. Rather, it highlights how standards are locally negotiated within specific institutional contexts, emphasizing that such standards are never guaranteed (Lampland & Star, 2009; Geiger et al., 2023).

MLOps Metrics

In the context of machine learning systems, technical metrics focus on model accuracy. For example, a financial services company might use the area under the receiver operating characteristic curve (AUC-ROC) to continuously monitor and maintain the performance of its fraud detection model in production.
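AUC-ROC has an intuitive reading: it is the probability that a randomly chosen positive case (say, an actual fraud) receives a higher model score than a randomly chosen negative one. A minimal sketch, with made-up labels and scores (fine for illustration, though a production system would use an optimized library implementation):

```python
def auc_roc(y_true, y_score):
    """AUC-ROC via its probabilistic definition: the chance that a random
    positive example is ranked above a random negative one (ties count half)."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Made-up ground-truth labels (1 = fraud) and model scores for illustration.
labels = [0, 0, 1, 1]
scores = [0.10, 0.40, 0.35, 0.80]
print(auc_roc(labels, scores))  # 0.75
```

A score of 0.5 means the model ranks no better than chance; 1.0 means perfect separation of positives from negatives.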

Business metrics focus on cost-benefit analyses. For example, a bank might use a cost-benefit matrix to balance the potential revenue from approving a loan against the risk of default, ensuring that the overall profitability of their loan portfolio is optimized.
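A cost-benefit matrix of this kind reduces to an expected-value rule per decision. The revenue and loss figures below are hypothetical, chosen only to make the arithmetic visible:

```python
# Hypothetical cost-benefit figures -- not from any real bank.
REVENUE_REPAID = 1_000  # expected profit if the loan is repaid
LOSS_DEFAULT = 9_000    # expected loss if the borrower defaults

def expected_value(p_default):
    """Expected profit of approving a loan, given the model's default probability."""
    return (1 - p_default) * REVENUE_REPAID - p_default * LOSS_DEFAULT

def approve(p_default):
    """Approve only when the expected profit is positive."""
    return expected_value(p_default) > 0

# Break-even default probability: revenue / (revenue + loss) = 0.10.
print(approve(0.05), approve(0.20))  # True False
```

The break-even threshold makes the politics of the metric concrete: changing the assumed cost of a default directly changes who gets a loan.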

Drift

These metrics can be monitored over time to detect “drift” between a model and the world. Training sets are static. Reality is dynamic. It changes over time. Drift occurs when the nature of new input data diverges from the data a model was trained on. A change in performance metrics may be used to alert system operators, who can then investigate and decide on a course of action, e.g., retraining a model on updated data. This, in effect, creates a feedback loop between the system in use and its ongoing development.
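The alerting part of this feedback loop can be sketched as a monitor that compares a rolling window of a performance metric against its baseline at deployment time. The threshold and window size here are arbitrary illustrative choices:

```python
from collections import deque

class DriftMonitor:
    """Flags drift when a rolling average of a performance metric
    falls too far below its baseline value."""

    def __init__(self, baseline, threshold=0.05, window=4):
        self.baseline = baseline    # metric level at deployment time
        self.threshold = threshold  # tolerated drop before alerting
        self.recent = deque(maxlen=window)

    def record(self, metric):
        """Log a new measurement; return True if drift is detected."""
        self.recent.append(metric)
        return self.drifted()

    def drifted(self):
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough data yet
        avg = sum(self.recent) / len(self.recent)
        return (self.baseline - avg) > self.threshold

# Hypothetical weekly AUC-ROC readings for a deployed model.
monitor = DriftMonitor(baseline=0.90)
for weekly_auc in [0.89, 0.90, 0.88, 0.89, 0.80, 0.79, 0.78, 0.77]:
    if monitor.record(weekly_auc):
        print("drift detected -- consider retraining")
```

Expanding this for contestability would mean tracking additional metrics (say, error rates per neighborhood) and routing the alert to publics and oversight bodies, not only to system operators.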

An expansion of these practices in the interest of contestability would require:

  1. setting different metrics,
  2. exposing these metrics to additional audiences, and
  3. establishing feedback loops with the processes that govern models and the systems they are embedded in.

Example 1: Camera Cars

Let’s say a city government uses a camera-equipped vehicle and a computer vision model to detect potholes in public roads. In addition to accuracy and a favorable cost-benefit ratio, citizens, and road users in particular, may care about the time between a detected pothole and its fixing. Or, they may care about the distribution of potholes across the city. Furthermore, when road maintenance appears to be degrading, this should be taken up with department leadership, the responsible alderperson, and council members.
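A public-facing metric for this example could be as simple as the median time between detection and repair, broken down by district, so that skewed maintenance across the city becomes visible. The districts, dates, and repair records below are entirely made up for illustration.

```python
from datetime import date
from statistics import median

# (district, date detected, date fixed) -- all records are invented.
reports = [
    ("north", date(2024, 3, 1), date(2024, 3, 8)),
    ("north", date(2024, 3, 2), date(2024, 3, 5)),
    ("south", date(2024, 3, 1), date(2024, 3, 25)),
    ("south", date(2024, 3, 4), date(2024, 3, 30)),
]

def median_days_to_fix(reports):
    """Median detection-to-repair time in days, per district."""
    by_district = {}
    for district, detected, fixed in reports:
        by_district.setdefault(district, []).append((fixed - detected).days)
    return {d: median(days) for d, days in by_district.items()}

metrics = median_days_to_fix(reports)
# A persistent gap between districts is the kind of distributional signal
# that could be escalated to department leadership and council members.
```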

Example 2: EV Charging

Or, let’s say the same city government uses an algorithmic system to optimize public electric vehicle (EV) charging stations for green energy use by adapting charging speeds to expected sun and wind. EV drivers may want to know how much energy has been shifted to greener time windows, and how this changes over time. Without such visibility into a system’s actual goal achievement, citizens’ ability to legitimate its use suffers. As I have already mentioned, democratic agency, when enacted via the appeal to authority, depends on access to “normative facts” that underpin policies. And finally, professed system functionality must be demonstrated as well (Raji et al., 2022).
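The goal-achievement metric hinted at here could be computed as the share of delivered energy that fell within green time windows. The session data below is invented; a real implementation would draw on metered charging sessions and the grid operator's green-window forecasts.

```python
# (kWh delivered, whether delivery fell in a green time window) -- invented data.
sessions = [
    (10.0, True),
    (6.0, False),
    (8.0, True),
    (4.0, False),
]

def green_energy_share(sessions):
    """Fraction of total delivered energy charged during green windows."""
    total = sum(kwh for kwh, _ in sessions)
    green = sum(kwh for kwh, in_green in sessions if in_green)
    return green / total

share = green_energy_share(sessions)  # 18.0 / 28.0
```

Publishing this single number, and its trend, would give EV drivers a handle on whether the system actually achieves its professed goal.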

DevOps as sociotechnical leverage point for democratizing AI

These brief examples show that the DevOps approach is a potential sociotechnical leverage point. It offers pathways for democratizing AI system design, development, and operations.

DevOps can be adapted to further contestability. It creates new channels between human and machine actors. One of DevOps’s essential activities is monitoring (Smith, 2020), which presupposes fallibility, a necessary precondition for contestability. Finally, it requires and provides infrastructure for technical flexibility so that recovery from error is low-cost and continuous improvement becomes practically feasible.

The mutual shaping of democratic practices & AI

Zooming out further, let’s reflect on this panel’s overall theme, picking out three elements: legitimation, representation of marginalized groups, and dealing with conflict and contestation after implementation and during use.

Contestability is a lever for demanding justifications from operators, which is a necessary input for legitimation by subjects (Henin & Le Métayer, 2022). Contestability frames different actors’ stances as adversarial positions on a political field rather than “equally valid” perspectives (Scott, 2023). And finally, relations, monitoring, and revisability are all ways to give voice to and enable responsiveness to contestations (Genus & Stirling, 2018).

And again, all of these things can be furthered in the post-deployment phase by adapting the DevOps lens.

Bibliography

  • Alfrink, K., Keller, I., Kortuem, G., & Doorn, N. (2022). Contestable AI by Design: Towards a Framework. Minds and Machines, 33(4), 613–639. https://doi.org/10/gqnjcs
  • Alfrink, K., Keller, I., Yurrita Semperena, M., Bulygin, D., Kortuem, G., & Doorn, N. (2024). Envisioning Contestability Loops: Evaluating the Agonistic Arena as a Generative Metaphor for Public AI. She Ji: The Journal of Design, Economics, and Innovation, 10(1), 53–93. https://doi.org/10/gtzwft
  • Geiger, R. S., Tandon, U., Gakhokidze, A., Song, L., & Irani, L. (2023). Making Algorithms Public: Reimagining Auditing From Matters of Fact to Matters of Concern. International Journal of Communication, 18(0), Article 0.
  • Genus, A., & Stirling, A. (2018). Collingridge and the dilemma of control: Towards responsible and accountable innovation. Research Policy, 47(1), 61–69. https://doi.org/10/gcs7sn
  • Gilbert, T. K., Lambert, N., Dean, S., Zick, T., Snoswell, A., & Mehta, S. (2023). Reward Reports for Reinforcement Learning. Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 84–130. https://doi.org/10/gs9cnh
  • Henin, C., & Le Métayer, D. (2022). Beyond explainability: Justifiability and contestability of algorithmic decision systems. AI & SOCIETY, 37(4), 1397–1410. https://doi.org/10/gmg8pf
  • Himmelreich, J. (2022). Against “Democratizing AI.” AI & SOCIETY. https://doi.org/10/gr95d5
  • Lampland, M., & Star, S. L. (Eds.). (2009). Standards and Their Stories: How Quantifying, Classifying, and Formalizing Practices Shape Everyday Life (1st edition). Cornell University Press.
  • Peter, F. (2020). The Grounds of Political Legitimacy. Journal of the American Philosophical Association, 6(3), 372–390. https://doi.org/10/grqfhn
  • Raji, I. D., Kumar, I. E., Horowitz, A., & Selbst, A. (2022). The Fallacy of AI Functionality. 2022 ACM Conference on Fairness, Accountability, and Transparency, 959–972. https://doi.org/10/gqfvf5
  • Rubel, A., Castro, C., & Pham, A. K. (2021). Algorithms and autonomy: The ethics of automated decision systems. Cambridge University Press.
  • Scott, D. (2023). Diversifying the Deliberative Turn: Toward an Agonistic RRI. Science, Technology, & Human Values, 48(2), 295–318. https://doi.org/10/gpk2pr
  • Smith, J. D. (2020). Operations anti-patterns, DevOps solutions. Manning Publications.
  • Treveil, M. (2020). Introducing MLOps: How to scale machine learning in the enterprise (First edition). O’Reilly.

AI pedagogy through a design lens

At a TU Delft spring symposium on AI education, Hosana and I ran a short workshop titled “AI pedagogy through a design lens.” In it, we identified some of the challenges facing AI teaching, particularly outside of computer science, and explored how design pedagogy, particularly the practices of studios and making, may help to address them. The AI & Society master elective I’ve been developing and teaching over the past five years served as a case study. The session was punctuated by brief brainstorming using an adapted version of the SQUID gamestorming technique. Below are the slides we used.

“Geen transparantie zonder tegenspraak” (no transparency without contestation): remarks at the premiere of the transparent charging station documentary

I delivered the short remarks below during the online premiere of the documentary about the transparent charging station on Thursday, 18 March 2021.

I recently spoke with an international “thought leader” in the field of “tech ethics”. He told me he is very grateful that the transparent charging station exists, because it is such a good example of how design can contribute to fair technology.

That is, of course, great to hear. And it fits a broader trend in the industry focused on making algorithms transparent and explainable. By now, legislation even mandates explainability in some cases.

In the documentary, several people (myself included) explain why it is important for urban algorithms to be transparent. Thijs nicely names two reasons: on the one hand, the collective interest in enabling democratic oversight of the development of urban algorithms; on the other, the individual interest in being able to seek redress when a system makes a decision you disagree with, for whatever reason.

And indeed, in both cases (collective oversight and individual redress), transparency is a precondition. I think this project has solved a great many of the design and engineering problems involved. At the same time, a new question looms on the horizon: once we understand how a smart system works, and we disagree with it, what then? How do you then actually gain influence over the workings of the system?

I think we will have to shift our focus from transparency to what I call tegenspraak, or in English, “contestability”.

Designing for contestability means thinking about the means people need in order to exercise their right to human intervention. Yes, this means providing information about the how and why of individual decisions. Transparency, in other words. But it also means setting up new channels and processes through which people can submit requests to have a decision reviewed. We will have to think about how to assess such requests, and about how to ensure that the smart system in question “learns” from the signals we pick up from society in this way.

You could say that designing for transparency is one-way traffic: information flows from the developing party to the end user. Designing for contestability is about creating a dialogue between developers and citizens.

I say citizens, because it is not only classic end users who are affected by smart systems. All sorts of other groups are also affected, often indirectly.

That, too, is a new design challenge. How do you design not only for the end user (in the case of the transparent charging station, the EV driver) but also for so-called indirect stakeholders: residents of streets where charging stations are placed, who do not drive an EV, or do not even own a car, but who nevertheless have a stake in how sidewalks and streets are laid out?

This widening of scope means that, when designing for contestability, we can and even must go a step further than enabling redress for individual decisions.

After all, designing for contestability around individual decisions of an already deployed system is necessarily post hoc and reactive, and limited to a single group of stakeholders.

As Thijs also more or less points out in the documentary, smart urban infrastructure affects the lives of all of us, and you could say that the design and engineering choices made during its development are intrinsically political choices as well.

That is why I think we cannot avoid organizing the process underlying these systems themselves in such a way that there is room for contestation. In my ideal world, the development of the next generation of smart charging stations is therefore participatory, pluralistic, and inclusive, just as our democracy itself aspires to be.

Exactly how we should shape such “contestable” algorithms, and how designing for contestability should work, is an open question. But a number of years ago no one knew what a transparent charging station should look like either, and we managed that too.

Update (2021-03-31 16:43): A recording of the entire event is now also available. The remarks above start at around 25:14.

“Contestable Infrastructures” at Beyond Smart Cities Today

I’ll be at Beyond Smart Cities Today the next couple of days (18-19 September). Below is the abstract I submitted, plus a bibliography of some of the stuff that went into my thinking for this and related matters that I won’t have the time to get into.

In the actually existing smart city, algorithmic systems are increasingly used for the purposes of automated decision-making, including as part of public infrastructure. Algorithmic systems raise a range of ethical concerns, many of which stem from their opacity. As a result, prescriptions for improving the accountability, trustworthiness and legitimacy of algorithmic systems are often based on a transparency ideal. The thinking goes that if the functioning and ownership of an algorithmic system are made perceivable, people can understand it and are in turn able to supervise it. However, there are limits to this approach. Algorithmic systems are complex and ever-changing socio-technical assemblages. Rendering them visible is not a straightforward design and engineering task. Furthermore, such transparency does not necessarily lead to understanding or, crucially, the ability to act on this understanding. We believe legitimate smart public infrastructure needs to include the possibility for subjects to articulate objections to procedures and outcomes. The resulting “contestable infrastructure” would create spaces that open up the possibility for expressing conflicting views on the smart city. Our project is to explore the design implications of this line of reasoning for the physical assets that citizens encounter in the city. After all, these are the perceivable elements of the larger infrastructural systems that recede from view.

  • Alkhatib, A., & Bernstein, M. (2019). Street-Level Algorithms. 1–13. https://doi.org/10.1145/3290605.3300760
  • Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media and Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645
  • Centivany, A., & Glushko, B. (2016). “Popcorn tastes good”: Participatory policymaking and Reddit’s “AMAgeddon.” Conference on Human Factors in Computing Systems – Proceedings, 1126–1137. https://doi.org/10.1145/2858036.2858516
  • Crawford, K. (2016). Can an Algorithm be Agonistic? Ten Scenes from Life in Calculated Publics. Science Technology and Human Values, 41(1), 77–92. https://doi.org/10.1177/0162243915589635
  • DiSalvo, C. (2010). Design, Democracy and Agonistic Pluralism. Proceedings of the Design Research Society Conference, 366–371.
  • Hildebrandt, M. (2017). Privacy As Protection of the Incomputable Self: Agonistic Machine Learning. SSRN Electronic Journal, 1–33. https://doi.org/10.2139/ssrn.3081776
  • Jackson, S. J., Gillespie, T., & Payette, S. (2014). The Policy Knot: Re-integrating Policy, Practice and Design. CSCW Studies of Social Computing, 588–602. https://doi.org/10.1145/2531602.2531674
  • Jewell, M. (2018). Contesting the decision: living in (and living with) the smart city. International Review of Law, Computers and Technology. https://doi.org/10.1080/13600869.2018.1457000
  • Lindblom, L. (2019). Consent, Contestability, and Unions. Business Ethics Quarterly. https://doi.org/10.1017/beq.2018.25
  • Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 205395171667967. https://doi.org/10.1177/2053951716679679
  • Van de Poel, I. (2016). An ethical framework for evaluating experimental technology. Science and Engineering Ethics, 22(3), 667–686. https://doi.org/10.1007/s11948-015-9724-3

“Contestable Infrastructures: Designing for Dissent in Smart Public Objects” at We Make the City 2019

Thijs Turèl of AMS Institute and myself presented a version of the talk below at the Cities for Digital Rights conference on June 19 in Amsterdam during the We Make the City festival. The talk is an attempt to articulate some of the ideas we both have been developing for some time around contestability in smart public infrastructure. As always with this sort of thing, this is intended as a conversation piece so I welcome any thoughts you may have.


The basic message of the talk is that when we start to do automated decision-making in public infrastructure using algorithmic systems, we need to design for the inevitable disagreements that may arise. Furthermore, we suggest there is an opportunity to focus on designing for such disagreements in the physical objects that people encounter in urban space as they make use of infrastructure.

We set the scene by showing a number of examples of smart public infrastructure: a cyclist crossing that adapts to weather conditions, giving cyclists a green light more frequently when it rains; a pedestrian crossing in Tilburg where the elderly can use their mobile phones to get more time to cross; and finally, the case we are involved with ourselves: smart EV charging in the city of Amsterdam, about which more later.

Image credits: Vattenfall, Fietsfan010, De Nieuwe Draai

We identify three trends in smart public infrastructure: (1) where previously algorithms were used to inform policy, now they are employed to perform automated decision-making on an individual case basis. This raises the stakes; (2) distributed ownership of these systems as the result of public-private partnerships and other complex collaboration schemes leads to unclear responsibility; and finally (3) the increasing use of machine learning leads to opaque decision-making.

These trends, and algorithmic systems more generally, raise a number of ethical concerns. They include but are not limited to: the use of inductive correlations (for example in the case of machine learning) leads to unjustified results; lack of access to and comprehension of a system’s inner workings produces opacity, which in turn leads to a lack of trust in the systems themselves and the organisations that use them; bias is introduced by a number of factors, including development team prejudices, technical flaws, bad data and unforeseen interactions with other systems; and finally the use of profiling, nudging and personalisation leads to diminished human agency. (We highly recommend the article by Mittelstadt et al. for a comprehensive overview of ethical concerns raised by algorithms.)

So for us, the question that emerges from all this is: How do we organise the supervision of smart public infrastructure in a democratic and lawful way?

There are a number of existing approaches to this question. These include legal and regulatory (e.g. the right to explanation in the GDPR); auditing (e.g. KPMG’s “AI in Control” method, BKZ’s transparantielab); procurement (e.g. open source clauses); insourcing (e.g. GOV.UK) and design and engineering (e.g. our own work on the transparent charging station).

We feel there are two important limitations with these existing approaches. The first is a focus on professionals and the second is a focus on prediction. We’ll discuss each in turn.

Image credits: Cities Today

First of all, many solutions target a professional class: accountants, civil servants, supervisory boards, as well as technologists, designers and so on. But we feel there is a role for the citizen as well, because the supervision of these systems is simply too important to be left to a privileged few. This role would include identifying wrongdoing and suggesting alternatives.

There is a tension here, which is that from the perspective of the public sector one should only ask citizens for their opinion when you have the intention and the resources to actually act on their suggestions. It can also be a challenge to identify legitimate concerns in the flood of feedback that can sometimes occur. From our point of view though, such concerns should not be used as an excuse to not engage the public. If citizen participation is considered necessary, the focus should be on freeing up resources and setting up structures that make it feasible and effective.

The second limitation is prediction. This is best illustrated with the Collingridge dilemma: in the early phases of a new technology, when the technology and its social embedding are still malleable, there is uncertainty about its social effects. In later phases, social effects may be clear, but by then the technology has often become so well entrenched in society that it is hard to overcome its negative social effects. (This summary is taken from an excellent van de Poel article on the ethics of experimental technology.)

Many solutions disregard the Collingridge dilemma and try to predict and prevent adverse effects of new systems at design-time. One example of this approach would be value-sensitive design. Our focus instead is on use-time. Considering that smart public infrastructure tends to be developed on an ongoing basis, the question becomes how to make citizens a partner in this process. And even more specifically, we are interested in how this can be made part of the design of the “touchpoints” people actually encounter in the streets, as well as their backstage processes.

Why do we focus on these physical objects? Because this is where people actually meet the infrastructural systems, of which large parts recede from view. These are the places where they become aware of their presence. They are the proverbial tip of the iceberg.

Image credits: Sagar Dani

The use of automated decision-making in infrastructure reduces people’s agency. For this reason, resources for agency need to be designed back into these systems. Frequently the answer to this question is premised on a transparency ideal. This may be a prerequisite for agency, but it is not sufficient. Transparency may help you become aware of what is going on, but it will not necessarily help you to act on that knowledge. This is why we propose a shift from transparency to contestability. (We can highly recommend Ananny and Crawford’s article for more on why transparency is insufficient.)

To clarify what we mean by contestability, consider the following three examples: When you see the lights on your router blink in the middle of the night when no-one in your household is using the internet you can act on this knowledge by yanking out the device’s power cord. You may never use the emergency brake in a train but its presence does give you a sense of control. And finally, the cash register receipt provides you with a view into both the procedure and the outcome of the supermarket checkout procedure and it offers a resource with which you can dispute them if something appears to be wrong.

Image credits: Aangiftedoen, source unknown for remainder

None of these examples is a perfect illustration of contestability but they hint at something more than transparency, or perhaps even something wholly separate from it. We’ve been investigating what their equivalents would be in the context of smart public infrastructure.

To illustrate this point further let us come back to the smart EV charging project we mentioned earlier. In Amsterdam, public EV charging stations are becoming “smart” which in this case means they automatically adapt the speed of charging to a number of factors. These include grid capacity, and the availability of solar energy. Additional factors can be added in future, one of which under consideration is to give priority to shared cars over privately owned cars. We are involved with an ongoing effort to consider how such charging stations can be redesigned so that people understand what’s going on behind the scenes and can act on this understanding. The motivation for this is that if not designed carefully, the opacity of smart EV charging infrastructure may be detrimental to social acceptance of the technology. (A first outcome of these efforts is the Transparent Charging Station designed by The Incredible Machine. A follow-up project is ongoing.)
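To give a feel for the kind of policy such a station might enact, here is a toy sketch of dividing limited grid capacity over connected cars, with shared cars weighted more heavily, the factor under consideration mentioned above. The capacity figure and the 2:1 weighting are invented assumptions, not the actual Amsterdam policy.

```python
def allocate_charging(capacity_kw, cars, shared_weight=2.0):
    """Divide available capacity over cars; shared cars get a higher weight.
    cars: list of (car_id, is_shared) tuples."""
    weights = {cid: (shared_weight if shared else 1.0) for cid, shared in cars}
    total = sum(weights.values())
    return {cid: capacity_kw * w / total for cid, w in weights.items()}

# Three cars on a 22 kW connection; "a" is a shared car.
allocation = allocate_charging(22.0, [("a", True), ("b", False), ("c", False)])
```

Even this tiny sketch makes visible that someone has to choose the weighting, which is precisely the kind of decision people may wish to contest.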

Image credits: The Incredible Machine, Kars Alfrink

We have identified a number of different ways in which people may object to smart EV charging. They are listed in the table below. These types of objections can lead us to feature requirements for making the system contestable.

Because the list is preliminary, we asked the audience if they could imagine additional objections, if those examples represented new categories, and if they would require additional features for people to be able to act on them. One particularly interesting suggestion that emerged was to give local communities control over the policies enacted by the charge points in their vicinity. That’s something to further consider the implications of.

And that’s where we left it. So to summarise:

  1. Algorithmic systems are becoming part of public infrastructure.
  2. Smart public infrastructure raises new ethical concerns.
  3. Many solutions to ethical concerns are premised on a transparency ideal, but do not address the issue of diminished agency.
  4. There are different categories of objections people may have to an algorithmic system’s workings.
  5. Making a system contestable means creating resources for people to object, opening up a space for the exploration of meaningful alternatives to its current implementation.

ThingsCon 2018 workshop ‘Seeing Like a Bridge’

Workshop in progress with a view of Rotterdam’s Willemsbrug across the Maas.

Early December of last year Alec Shuldiner and myself ran a workshop at ThingsCon 2018 in Rotterdam.

Here’s the description as it was listed on the conference website:

In this workshop we will take a deep dive into some of the challenges of designing smart public infrastructure.

Smart city ideas are moving from hype into reality. The everyday things that our contemporary world runs on, such as roads, railways and canals are not immune to this development. Basic, “hard” infrastructure is being augmented with internet-connected sensing, processing and actuating capabilities. We are involved as practitioners and researchers in one such project: the MX3D smart bridge, a pedestrian bridge 3D printed from stainless steel and equipped with a network of sensors.

The question facing everyone involved with these developments, from citizens to professionals to policy makers is how to reap the potential benefits of these technologies, without degrading the urban fabric. For this to happen, information technology needs to become more like the city: open-ended, flexible and adaptable. And we need methods and tools for the diverse range of stakeholders to come together and collaborate on the design of truly intelligent public infrastructure.

We will explore these questions in this workshop by first walking you through the architecture of the MX3D smart bridge—offering a uniquely concrete and pragmatic view into a cutting edge smart city project. Subsequently we will together explore the question: What should a smart pedestrian bridge that is aware of itself and its surroundings be able to tell us? We will conclude by sharing some of the highlights from our conversation, and make note of particularly thorny questions that require further work.

The workshop’s structure was quite simple. After a round of introductions, Alec introduced the MX3D bridge to the participants. For a sense of what that introduction talk was like, I recommend viewing this recording of a presentation he delivered at a recent Pakhuis de Zwijger event.

We then ran three rounds of group discussion in the style of a world café. Each discussion was guided by one question. Participants were asked to write, draw and doodle on the large sheets of paper covering each table. At the end of each round, people moved to another table while one person remained to share the preceding round’s discussion with the new group.

The discussion questions were inspired by value-sensitive design. I was interested to see if people could come up with alternative uses for a sensor-equipped 3D-printed footbridge if they first considered what in their opinion made a city worth living in.

The questions we used were:

  1. What specific things do you like about your town? (Places, things to do, etc. Be specific.)
  2. What values underlie those things? (A value is what a person or group of people consider important in life.)
  3. How would you redesign the bridge to support those values?

At the end of the three discussion rounds we went around to each table and shared the highlights of what was produced. We then had a bit of a back and forth about the outcomes and the workshop approach, after which we wrapped up.

We did get to some interesting values by starting from personal experience. Participants came from a variety of countries and that was reflected in the range of examples and related values. The design ideas for the bridge remained somewhat abstract. It turned out to be quite a challenge to make the jump from values to different types of smart bridges. Despite this, we did get nice ideas such as having the bridge report on water quality of the canal it crosses, derived from the value of care for the environment.

The response from participants afterwards was positive. People found it thought-provoking, which was definitely the point. People were also eager to learn even more about the bridge project. It remains a thing that captures people’s imagination. For that reason alone, it continues to be a very productive case to use for the grounding of these sorts of discussions.

‘Unboxing’ at Behavior Design Amsterdam #16

Below is a write-up of the talk I gave at the Behavior Design Amsterdam #16 meetup on Thursday, February 15, 2018.

‘Pandora’ by John William Waterhouse (1896)

I’d like to talk about the future of our design practice and what I think we should focus our attention on. It is all related to this idea of complexity and opening up black boxes. We’re going to take the scenic route, though. So bear with me.

Software Design

Two years ago I spent about half a year in Singapore.

While there I worked as product strategist and designer at a startup called ARTO, an art recommendation service. It shows you a random sample of artworks, you tell it which ones you like, and it then starts recommending pieces it thinks you’ll like. In case you were wondering: yes, swiping left and right was involved.

We had this interesting problem of ingesting art from many different sources (mostly online galleries) with metadata of wildly varying levels of quality. So, using metadata to figure out which art to show was a bit of a non-starter. It should come as no surprise then, that we started looking into machine learning—image processing in particular.

And so I found myself working with my engineering colleagues on an art recommendation stream which was driven at least in part by machine learning. And I quickly realised we had a problem. In terms of how we worked together on this part of the product, it felt like we had taken a bunch of steps back in time. Back to a way of collaborating that was less integrated and less responsive.

That’s because we have all these nice tools and techniques for designing traditional software products. But software is deterministic. Machine learning is fundamentally different in nature: it is probabilistic.

It was hard for me to take the lead in the design of this part of the product for two reasons. First of all, it was challenging to get a first-hand feel of the machine learning feature before it was implemented.

And second of all, it was hard for me to communicate or visualise the intended behaviour of the machine learning feature to the rest of the team.

So when I came back to the Netherlands I decided to dig into this problem of design for machine learning. Turns out I opened up quite the can of worms for myself. But that’s okay.

There are two reasons I care about this:

The first is that I think we need more design-led innovation in the machine learning space. At the moment it is engineering-dominated, which doesn’t necessarily lead to useful outcomes. But if you want to take the lead in the design of machine learning applications, you need a firm handle on the nature of the technology.

The second reason why I think we need to educate ourselves as designers on the nature of machine learning is that we need to take responsibility for the impact the technology has on the lives of people. There is a lot of talk about ethics in the design industry at the moment. Which I consider a positive sign. But I also see a reluctance to really grapple with what ethics is and what the relationship between technology and society is. We seem to want easy answers, which is understandable because we are all very busy people. But having spent some time digging into this stuff myself I am here to tell you: There are no easy answers. That isn’t a bug, it’s a feature. And we should embrace it.

Machine Learning

At the end of 2016 I attended ThingsCon here in Amsterdam and I was introduced by Ianus Keller to TU Delft PhD researcher Péter Kun. It turns out we were both interested in machine learning. So with encouragement from Ianus we decided to put together a workshop that would enable industrial design master students to tangle with it in a hands-on manner.

About a year later, this has grown into a thing we call Prototyping the Useless Butler. During the workshop, you use machine learning algorithms to train a model that takes inputs from a network-connected Arduino’s sensors and drives that same Arduino’s actuators. In effect, you can create interactive behaviour without writing a single line of code. And you get a first-hand feel for how common applications of machine learning work: things like regression, classification and dynamic time warping.

The thing that makes this workshop tick is an open source software application called Wekinator, created by Rebecca Fiebrink. It was originally aimed at performing artists, so that they could build interactive instruments without writing code. But it takes inputs from anything and sends outputs to anything, so we appropriated it for our own ends.

You can find everything related to Useless Butler on this GitHub repo.
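For a rough idea of what such a trained mapping looks like underneath (this is not Wekinator’s actual code; the sensor values, labels and the 1-nearest-neighbour choice are mine, purely for illustration), consider a classifier that maps sensor readings straight to actuator states:

```python
import math

# Toy training data: (light level, distance) sensor readings paired
# with actuator states. All values are invented for illustration.
TRAINING = [
    ((0.9, 0.1), "led_on"),   # bright and close
    ((0.8, 0.2), "led_on"),
    ((0.1, 0.9), "led_off"),  # dark and far
    ((0.2, 0.8), "led_off"),
]

def classify(sample, training=TRAINING):
    """Return the actuator state of the nearest training example (1-NN)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(training, key=lambda pair: dist(sample, pair[0]))[1]

# A new sensor reading is mapped to an actuator command without any
# hand-written if/then rules: the mapping comes from the examples.
print(classify((0.85, 0.15)))  # → led_on
```

The point of the workshop is exactly this shift: instead of specifying behaviour in code, you specify it by demonstrating examples, and the quality of the behaviour depends on the quality of those examples.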

The thinking behind this workshop is that for us designers to be able to think creatively about applications of machine learning, we need a granular understanding of the nature of the technology. The thing with designers is, we can’t really learn about such things from books. A lot of design knowledge is tacit, it emerges from our physical engagement with the world. This is why things like sketching and prototyping are such essential parts of our way of working. And so with useless butler we aim to create an environment in which you as a designer can gain tacit knowledge about the workings of machine learning.

Simply put, for a lot of us, machine learning is a black box. With Useless Butler, we open the black box a bit and let you peer inside. This should improve the odds of design-led innovation happening in the machine learning space. And it should also help with ethics. But it’s definitely not enough. Knowledge about the technology isn’t the only issue here. There are more black boxes to open.

Values

Which brings me back to that other black box: ethics. Like I already mentioned, there is a lot of talk in the tech industry about how we should “be more ethical”. But things are often reduced to the notion that designers should do no harm. As if ethics is a problem to be fixed instead of a thing to be practiced.

So I started to talk about this with people I know in academia, and more than once this thing called Value Sensitive Design was mentioned. It should be no surprise to anyone that scholars have been chewing on this stuff for quite a while. One of the earliest references I came across, an essay by Batya Friedman in Interactions, dates from 1996! This is a lesson to all of us, I think: pay more attention to what the academics are talking about.

So, at the end of last year I dove into this topic. Our host Iskander Smit, Rob Maijers and I coordinate a grassroots community for tech workers called Tech Solidarity NL. We want to build technology that serves the needs of the many, not the few. Value Sensitive Design seemed like a good thing to dig into, and so we did.

I’m not going to dive into the details here. There’s a report on the Tech Solidarity NL website if you’re interested. But I will highlight a few things that value sensitive design asks us to consider that I think help us unpack what it means to practice ethical design.

First of all, values. Here’s how the term is commonly defined in the literature:

“A value refers to what a person or group of people consider important in life.”

I like it because it’s common sense, right? But it also makes clear that there can never be one monolithic definition of what ‘good’ is in all cases. As we designers like to say: “it depends” and when it comes to values things are no different.

“Person or group” implies there can be various stakeholders. Value sensitive design distinguishes between direct and indirect stakeholders: the former have direct contact with the technology; the latter don’t, but are affected by it nonetheless. Value sensitive design means taking both into account. So this blows up the conventional notion of a single user to design for.

Various stakeholder groups can have competing values and so to design for them means to arrive at some sort of trade-off between values. This is a crucial point. There is no such thing as a perfect or objectively best solution to ethical conundrums. Not in the design of technology and not anywhere else.

Value sensitive design encourages you to map stakeholders and their values. These will be different for every design project. Another approach is to use lists like the one pictured here as an analytical tool to think about how a design impacts various values.
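As a sketch of what such a mapping might look like in practice (the product, stakeholder groups and values below are all invented, not drawn from any real project or from the value sensitive design literature), even a simple table makes shared and contested values easy to spot:

```python
# Hypothetical stakeholder-value map for an imaginary smart-lock product.
# Groups and values are invented for illustration only.
stakeholder_values = {
    "residents":       {"convenience", "privacy", "safety"},        # direct
    "landlords":       {"cost efficiency", "safety", "oversight"},  # direct
    "visiting carers": {"access", "accountability"},                # indirect
    "neighbours":      {"privacy", "safety"},                       # indirect
}

def shared_values(groups):
    """Values held by every listed stakeholder group."""
    sets = [stakeholder_values[g] for g in groups]
    return set.intersection(*sets)

def contested_values(groups):
    """Values held by some, but not all, of the listed groups."""
    sets = [stakeholder_values[g] for g in groups]
    return set.union(*sets) - set.intersection(*sets)

# "safety" is shared across these groups, while "privacy" and
# "oversight" sit with different groups: a likely trade-off to design for.
print(shared_values(["residents", "landlords", "neighbours"]))
print(contested_values(["residents", "landlords"]))
```

The map itself settles nothing, of course; its value is in surfacing which trade-offs a design will have to make explicit.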

Furthermore, during your design process you might think not only about the short-term impact of a technology, but also about how it will affect things in the long run.

And similarly, you might think about the effects of a technology not only when a few people are using it, but also when it becomes wildly successful and everybody uses it.

There are tools out there that can help you think through these things. But so far much of the work in this area is happening on the academic side. I think there is an opportunity for us to create tools and case studies that will help us educate ourselves on this stuff.

There’s a lot more to say on this but I’m going to stop here. The point is, as with the nature of the technologies we work with, it helps to dig deeper into the nature of the relationship between technology and society. Yes, it complicates things. But that is exactly the point.

Privileging simple and scalable solutions over those adapted to local needs is socially, economically and ecologically unsustainable. So I hope you will join me in embracing complexity.