In my thesis, I use autonomy to build the normative case for contestability. It so happens that this year’s theme at the Delft Design for Values Institute is also autonomy. On October 15, 2024, I participated in a panel discussion on autonomy to kick things off. For the occasion, I collected some notes on autonomy that go beyond the conceptualization I used in my thesis. I thought it might be helpful and interesting to share some of them here in adapted form.
The notes I brought included, first of all, a summary of the ecumenical conceptualization of autonomy concerning automated decision-making systems offered by Alan Rubel, Clinton Castro, and Adam Pham (2021). They conceive of autonomy as effective self-governance: to be autonomous, we need authentic beliefs about our circumstances and the agency to act on our plans. Regarding algorithmic systems, they offer the notion of a reasonable endorsement test: the degree to which a system can be said to respect autonomy depends on its reliability, the stakes of its outputs, the degree to which subjects can be held responsible for inputs, and the distribution of burdens across groups.
Second, I collected some notes from several pieces by James Muldoon (2020, 2021a, 2021b), which get into notions of freedom and autonomy developed in socialist republican thought by the likes of Luxemburg, Kautsky, and Castoriadis. This story of autonomy is sociopolitical rather than moral, an approach that is quite appealing to someone like myself with an interest in non-ideal theory in a realist mode. The account of autonomy Muldoon offers is one where individual autonomy hinges on greater group autonomy and stronger bonds of association between those producing and consuming technologies. Freedom is conceived of as collective self-determination.
And then third and finally, there’s the connected idea of relational autonomy, which to a degree is part of the account offered by Rubel et al., but which in the conceptions I have in mind here is more radical in how it seeks to create distance from liberal individualism (e.g., Christman, 2004; Mhlambi & Tiribelli, 2023; Westlund, 2009). On this view, the individual capacity for autonomous choice is shaped by social structures, and freedom becomes realized through networks of care, responsibility, and interdependence.
That’s what I am interested in: accounts of autonomy that are not premised on liberal individualism and that give us some alternative handle on the problem of the social control of technology in general and of AI in particular.
From my point of view, the implications of all this for design and AI include the following.
First, to make a fairly obvious but often overlooked point, the degree to which a given system impacts people’s autonomy depends on various factors. It makes little sense to make blanket statements about AI destroying our autonomy and so on.
Second, in value-sensitive design terms, you can think about autonomy as a value to be balanced against others, at least if you take the position that all values can in principle be considered equally important. Or you can consider autonomy a precondition for people to live with technology in concordance with their values, in which case autonomy takes precedence over other values. The sociopolitical and relational accounts above point in this direction.
Third, if you buy into the radical democratic idea of technology and autonomy, it follows that it makes little sense to admonish individual designers about respecting others’ autonomy. They may be asked to privilege technologies in their designs that afford individual and group autonomy. But more often than not, designers themselves also need organization and emancipation. So it’s about building power: the power of workers inside the organizations that develop technologies, and the power of the communities that “consume” those same technologies.
With AI, the reality in the cases I look at is that the communities AI is brought to bear on have little say in the matter. The buyers and deployers of AI could and should be made more accountable to the people subjected to it.
I’d like to talk about the future of our design practice and what I think we should focus our attention on. It is all related to this idea of complexity and opening up black boxes. We’re going to take the scenic route, though. So bear with me.
Software Design
Two years ago I spent about half a year in Singapore.
While there I worked as product strategist and designer at a startup called ARTO, an art recommendation service. It shows you a random sample of artworks, you tell it which ones you like, and it then starts recommending pieces it thinks you’ll like. In case you were wondering: yes, swiping left and right was involved.
We had this interesting problem of ingesting art from many different sources (mostly online galleries) with metadata of wildly varying levels of quality. So, using metadata to figure out which art to show was a bit of a non-starter. It should come as no surprise, then, that we started looking into machine learning—image processing in particular.
And so I found myself working with my engineering colleagues on an art recommendation stream which was driven at least in part by machine learning. And I quickly realised we had a problem. In terms of how we worked together on this part of the product, it felt like we had taken a bunch of steps back in time. Back to a way of collaborating that was less integrated and less responsive.
That’s because we have all these nice tools and techniques for designing traditional software products. But traditional software is deterministic. Machine learning is fundamentally different in nature: it is probabilistic.
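To make that difference concrete, here is a minimal sketch in Python. This is not how ARTO worked; the features, labels, and rule are stand-ins. The point is only the contrast: the hand-written rule always returns the same answer, while the model returns a learned degree of confidence.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Deterministic: the same input always yields the same output,
# because the programmer specified the mapping exhaustively.
def show_artwork(tags: set) -> bool:
    return "abstract" in tags  # hypothetical business rule

# Probabilistic: the mapping is learned from examples, and the model
# reports a degree of confidence rather than a guaranteed answer.
rng = np.random.default_rng(seed=0)
X_train = rng.normal(size=(200, 4))        # stand-in image features
y_train = (X_train[:, 0] > 0).astype(int)  # stand-in "liked" labels

model = LogisticRegression().fit(X_train, y_train)
x_new = rng.normal(size=(1, 4))
p_like = model.predict_proba(x_new)[0, 1]  # e.g. 0.73: "probably a like"
print(f"P(like) = {p_like:.2f}")
```

Two people can reason about the first function just by reading it. The second one only reveals its behaviour when you run it against data, which is exactly what makes it hard to prototype and to talk about as a team.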
It was hard for me to take the lead in the design of this part of the product for two reasons. First of all, it was challenging to get a first-hand feel of the machine learning feature before it was implemented.
And second of all, it was hard for me to communicate or visualise the intended behaviour of the machine learning feature to the rest of the team.
So when I came back to the Netherlands I decided to dig into this problem of design for machine learning. Turns out I opened up quite the can of worms for myself. But that’s okay.
There are two reasons I care about this:
The first is that I think we need more design-led innovation in the machine learning space. At the moment it is engineering-dominated, which doesn’t necessarily lead to useful outcomes. But if you want to take the lead in the design of machine learning applications, you need a firm handle on the nature of the technology.
The second reason I think we need to educate ourselves as designers on the nature of machine learning is that we need to take responsibility for the impact the technology has on people’s lives. There is a lot of talk about ethics in the design industry at the moment, which I consider a positive sign. But I also see a reluctance to really grapple with what ethics is and what the relationship between technology and society is. We seem to want easy answers, which is understandable because we are all very busy people. But having spent some time digging into this stuff myself, I am here to tell you: there are no easy answers. That isn’t a bug, it’s a feature. And we should embrace it.
Machine Learning
At the end of 2016 I attended ThingsCon here in Amsterdam, where Ianus Keller introduced me to TU Delft PhD researcher Péter Kun. It turned out we were both interested in machine learning. So with encouragement from Ianus we decided to put together a workshop that would enable industrial design master students to tangle with it in a hands-on manner.
About a year later, this has grown into a thing we call Prototyping the Useless Butler. During the workshop, you use machine learning algorithms to train a model that takes inputs from a network-connected Arduino’s sensors and drives that same Arduino’s actuators. In effect, you can create interactive behaviour without writing a single line of code. And you get a first-hand feel for how common applications of machine learning work: things like regression, classification and dynamic time warping.
The thing that makes this workshop tick is an open-source application called Wekinator, created by Rebecca Fiebrink. It was originally aimed at performing artists so that they could build interactive instruments without writing code. But it takes inputs from anything and sends outputs to anything, so we appropriated it towards our own ends.
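To give a sense of the plumbing: Wekinator listens for OSC messages at /wek/inputs on port 6448 and sends its outputs as /wek/outputs to port 12000. A bridge between an Arduino and Wekinator might look like the sketch below, assuming the python-osc and pyserial libraries, a hypothetical serial port, and a made-up line format of comma-separated sensor readings.

```python
import threading

import serial  # pyserial
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

arduino = serial.Serial("/dev/ttyUSB0", 9600)   # hypothetical port
wekinator = SimpleUDPClient("127.0.0.1", 6448)  # Wekinator's input port

def on_outputs(address, *values):
    # Drive the actuators: forward Wekinator's first output back to
    # the Arduino as a PWM value. The exact protocol is up to your
    # Arduino sketch; this one is made up.
    arduino.write(f"{int(values[0] * 255)}\n".encode())

dispatcher = Dispatcher()
dispatcher.map("/wek/outputs", on_outputs)
server = BlockingOSCUDPServer(("127.0.0.1", 12000), dispatcher)
threading.Thread(target=server.serve_forever, daemon=True).start()

while True:
    # Assumed serial format: one line per reading, e.g. "512,301,77"
    line = arduino.readline().decode(errors="ignore").strip()
    if line:
        wekinator.send_message("/wek/inputs", [float(v) for v in line.split(",")])
```

In the workshop itself, participants don’t write this kind of glue code; ready-made input and output programs handle the bridging, which is precisely what lets them focus on training behaviour instead of programming it.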
The thinking behind this workshop is that for us designers to be able to think creatively about applications of machine learning, we need a granular understanding of the nature of the technology. The thing with designers is, we can’t really learn about such things from books. A lot of design knowledge is tacit; it emerges from our physical engagement with the world. This is why things like sketching and prototyping are such essential parts of our way of working. And so with Useless Butler we aim to create an environment in which you as a designer can gain tacit knowledge about the workings of machine learning.
Simply put, for a lot of us, machine learning is a black box. With Useless Butler, we open the black box a bit and let you peer inside. This should improve the odds of design-led innovation happening in the machine learning space. And it should also help with ethics. But it’s definitely not enough. Knowledge about the technology isn’t the only issue here. There are more black boxes to open.
Values
Which brings me back to that other black box: ethics. As I already mentioned, there is a lot of talk in the tech industry about how we should “be more ethical”. But things are often reduced to the notion that designers should do no harm. As if ethics is a problem to be fixed instead of a thing to be practiced.
So I started to talk about this with people I know in academia, and more than once this thing called Value Sensitive Design was mentioned. It should be no surprise to anyone that scholars have been chewing on this stuff for quite a while. One of the earliest references I came across, an essay by Batya Friedman in Interactions, is from 1996! This is a lesson to all of us, I think: pay more attention to what the academics are talking about.
So, at the end of last year I dove into this topic. Our host Iskander Smit, Rob Maijers and I coordinate a grassroots community for tech workers called Tech Solidarity NL. We want to build technology that serves the needs of the many, not the few. Value Sensitive Design seemed like a good thing to dig into, and so we did.
I’m not going to dive into the details here. There’s a report on the Tech Solidarity NL website if you’re interested. But I will highlight a few things that value sensitive design asks us to consider that I think help us unpack what it means to practice ethical design.
First of all, values. Here’s how a value is commonly defined in the literature:
“A value refers to what a person or group of people consider important in life.”
I like it because it’s common sense, right? But it also makes clear that there can never be one monolithic definition of what ‘good’ is in all cases. As we designers like to say, “it depends”, and when it comes to values, things are no different.
“Person or group” implies there can be various stakeholders. Value sensitive design distinguishes between direct and indirect stakeholders. The former have direct contact with the technology, the latter don’t but are affected by it nonetheless. Value sensitive design means taking both into account. So this blows up the conventional notion of a single user to design for.
Various stakeholder groups can have competing values and so to design for them means to arrive at some sort of trade-off between values. This is a crucial point. There is no such thing as a perfect or objectively best solution to ethical conundrums. Not in the design of technology and not anywhere else.
Value sensitive design encourages you to map stakeholders and their values. These will be different for every design project. Another approach is to use lists like the one pictured here as an analytical tool to think about how a design impacts various values.
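If it helps to see the shape of such a mapping, here is a toy sketch in Python. The stakeholders and values are hypothetical, and nothing about value sensitive design prescribes this representation; it only shows how the direct/indirect distinction and value trade-offs might be made explicit.

```python
# A toy stakeholder-value map for an art recommendation service.
# Direct stakeholders interact with the system; indirect ones don't,
# but are affected by it nonetheless.
stakeholders = {
    "art buyers":     {"type": "direct",   "values": ["autonomy", "privacy", "trust"]},
    "gallery owners": {"type": "direct",   "values": ["ownership", "trust"]},
    "artists":        {"type": "indirect", "values": ["identity", "fair attribution"]},
}

# Invert the map to list each value alongside who holds it,
# which makes potential trade-offs discussable in a design session.
by_value = {}
for name, info in stakeholders.items():
    for value in info["values"]:
        by_value.setdefault(value, []).append(name)

for value, holders in sorted(by_value.items()):
    print(f"{value}: {', '.join(holders)}")
```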
Furthermore, during your design process you might not only think about the short-term impact of a technology, but also think about how it will affect things in the long run.
And similarly, you might think about the effects of a technology not only when a few people are using it, but also when it becomes wildly successful and everybody uses it.
There are tools out there that can help you think through these things. But so far much of the work in this area is happening on the academic side. I think there is an opportunity for us to create tools and case studies that will help us educate ourselves on this stuff.
There’s a lot more to say on this but I’m going to stop here. The point is, as with the nature of the technologies we work with, it helps to dig deeper into the nature of the relationship between technology and society. Yes, it complicates things. But that is exactly the point.
Privileging simple and scalable solutions over those adapted to local needs is socially, economically and ecologically unsustainable. So I hope you will join me in embracing complexity.
At the end of last year I was invited to speak at the PLAYTrack conference in Aarhus about the workplace change management games made by Hubbub. It turned out to be a great opportunity to reconnect with the play research community.
I was very much impressed by the program assembled by the organisers. People came from a wide range of disciplines and, crucially, there was ample time to discuss and reflect on the materials presented. As I tweeted afterwards, this is a thing that most conference organisers get wrong.
Back in Utrecht after a wonderful time in Århus attending #PLAYTrack. The lectures were uniformly fascinating but the one thing this conference really got right was the ample time to reflect and discuss. Really elevates the experience to something more than the usual info dump.
I was particularly inspired by the work of Benjamin Mardell and Mara Krechevsky at Harvard’s Project Zero – Making Learning Visible looks like a great resource for anyone who teaches. Then there was Reed Stevens from Northwestern University whose project FUSE is one of the most solid examples of playful learning for STEAM I’ve seen thus far. I was also fascinated by Ciara Laverty’s work at PEDAL on observing parent-child play. Miguel Sicart delivered another great provocation on the dark side of playful design. And finally I was delighted to hear about and experience for myself some of Amos Blanton’s work at the LEGO Foundation. I should also call out Ben Fincham’s many provocative contributions from the audience.
The abstract for my talk is below, which covers most of what I talked about. I tried to give people a good sense of:
what the games consisted of,
what we were aiming to achieve,
how both the fiction and the player activities supported these goals,
how we made learning outcomes visible to our players and clients,
and finally how we went about designing and developing these games.
Both projects have solid write-ups over at the Hubbub website, so I’ll just point to those here: Code 4 and Ripple Effect.
In the final section of the talk I spent a bit of time reflecting on how I would approach projects like this today. After all, it has been seven years since we made Code 4, and four years since Ripple Effect. That’s ages ago, and my perspective has definitely changed since we made these.
Participatory design
First of all, I would get even more serious about co-designing with players at every step. I would recruit representatives of players and invest them with real influence. In the projects we did, the primary vehicle for player influence was through playtesting. But this is necessarily limited. I also won’t pretend this is at all easy to do in a commercial context.
But these games are ultimately about improving worker productivity. So how do we make it so that workers share in the real-world profits yielded by a successful culture change?
I know of the existence of participatory design, but in my experience it is not a common approach in the industry. Why?
Value sensitive design
On a related note, I would get more serious about which values are supported by the system, in whose interest they are, and where they come from. Early field research and workshops with the audience do surface some values, but values from customer representatives tend to dominate. Again, the commercial context we work in is a potential challenge.
I know of value sensitive design, but as with participatory design, it has yet to catch on in a big way in the industry. So again: why is that?
Disintermediation
One thing I continue to be interested in is reducing the complexity of a game system’s physical affordances (which include its code) and pushing even more of the substance of the game into the social allowances that make up its non-material aspects. This allows for spontaneous renegotiation of the game by the players. This is disintermediation as a strategy. David Kanaga’s take on games as toys remains hugely inspirational in this regard, as does Bernard De Koven’s book The Well Played Game.
Gamefulness versus playfulness
Code 4 had more focus on satisfying the need for autonomy. Ripple Effect had more focus on competence, or in any case, less emphasis on autonomy; there was less room for ‘play’ around the core digital game. It seems to me that mastering a subjective simulation of a subject is not necessarily what a workplace game for culture change should be aiming for. So: less gameful design, more playful design.
Adaptation
Finally, the agency model does not enable us to stick around for the long haul. But workplace games might be better suited to a setup where things aren’t thought of as a one-off project but as an ongoing process.
In How Buildings Learn, Stewart Brand talks about how architects should revisit buildings they’ve designed after they are built, to learn how people are actually using them. He also argues that good buildings are ones their inhabitants can adapt to their needs. What does that look like in the context of a game for workplace culture change?
Playful Design for Workplace Change Management
Code 4 (2011, commissioned by the Tax Administration of the Netherlands) and Ripple Effect (2013, commissioned by Royal Dutch Shell) are both games for workplace change management designed and developed by Hubbub, a boutique playful design agency which operated from Utrecht, The Netherlands and Berlin, Germany between 2009 and 2015. These games are examples of how a goal-oriented serious game can be used to encourage playful appropriation of workplace infrastructure and social norms, resulting in an open-ended and creative exploration of new and innovative ways of working.
Serious game projects are usually commissioned to solve problems. Solving the problem of cultural change in a straightforward manner means viewing games as a way to persuade workers of a desired future state. Such games typically take videogame form, simulating the desired new way of working as determined by management. To play the game well, players need to master its system, and by extension, it is assumed, learning happens.
These games can be enjoyable experiences and an improvement on previous forms of workplace learning, but in our view they decrease the possibility space of potential workplace cultural change. They diminish worker agency, and they waste the creative and innovative potential of involving workers in the invention of an improved workplace culture.
We instead choose to view workplace games as an opportunity to increase the space of possibility. We resist the temptation to bake the desired new way of working into the game’s physical and digital affordances. Instead, we leave how to play well up to the players. Since these games are team-based and collaborative, players need to negotiate their way of working around the game among themselves. In addition, because the games are distributed in time—running over a number of weeks—and are playable at player discretion during the workday, players are given license to appropriate workplace infrastructure and subvert social norms towards in-game ends.
We tried to make learning tangible in various ways. Because the games are, at their core, web applications to which players log on with individual accounts, we were able to collect data on player behaviour. To guarantee privacy, employers did not have direct access to game databases and only received anonymised reports. We took responsibility for player learning by facilitating coaching sessions in which players could safely reflect on their game experiences. Rounding out these efforts, we conducted surveys to gain insight into the player experience from a more qualitative and subjective perspective.
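As an illustration of what “anonymised reports” can mean in practice, here is a small sketch; the event log, salt, and report shape are all made up, not Hubbub’s actual pipeline. The idea is that employer-facing output contains only aggregates, never account identities.

```python
import hashlib
from collections import Counter

SALT = b"rotate-this-per-reporting-period"  # hypothetical secret salt

def pseudonymise(player_id: str) -> str:
    # One-way, salted hash: lets us count unique players without
    # exposing account identities in the report.
    return hashlib.sha256(SALT + player_id.encode()).hexdigest()[:12]

# Stand-in for rows pulled from the game database.
events = [
    {"player": "alice@example.com", "action": "mission_completed"},
    {"player": "bob@example.com",   "action": "mission_completed"},
    {"player": "alice@example.com", "action": "comment_posted"},
]

# The employer-facing report contains only aggregates, never raw IDs.
actions = Counter(event["action"] for event in events)
players = {pseudonymise(event["player"]) for event in events}
print(f"active players: {len(players)}, actions: {dict(actions)}")
```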
These games offer a model for a reasonably democratic and ethical way of doing game-based workplace change management. However, we would like to see efforts that further democratise their design and development—involving workers at every step. We also worry about how games can be used to create the illusion of worker influence while at the same time software is deployed throughout the workplace to limit their agency.
Our examples may be inspiring but because of these developments we feel we can’t continue this type of work without seriously reconsidering our current processes, technology stacks and business practices—and ultimately whether we should be making games at all.