I was recently in touch with an international "thought leader" in the field of "tech ethics". He told me he is very grateful that the transparent charging station exists, because it is such a good example of how design can contribute to fair technology.
That is of course wonderful to hear. And it fits a broader trend in the industry towards making algorithms transparent and explainable. By now, legislation even mandates explainability in some cases.
In the documentary you hear several people (myself included) explain why it is important for urban algorithms to be transparent. Thijs puts it nicely, naming two reasons: on the one hand, the collective interest of enabling democratic oversight of the development of urban algorithms; on the other, the individual interest of being able to seek redress when a system makes a decision you disagree with (for whatever reason).
And indeed, in both cases (collective oversight and individual redress) transparency is a precondition. I think that with this project we have solved a lot of the design and engineering problems involved. At the same time, a new question looms on the horizon: if we understand how a smart system works, and we disagree with it, what then? How do you then actually gain influence over the workings of the system?
I think we will have to shift our focus from transparency to what I call contestability (in Dutch: tegenspraak).
Designing for contestability means thinking about the means people need to exercise their right to human intervention. Yes, this means providing information about the how and why of individual decisions. Transparency, in other words. But it also means setting up new channels and processes through which people can request that a decision be reviewed. We will have to think about how to assess such requests, and how to make sure the smart system in question "learns" from the signals we pick up from society in this way.
You could say that designing for transparency is one-way traffic: information flows from the developing party to the end user. Designing for contestability is about creating a dialogue between developers and citizens.
I say citizens, because it is not only classic end users who are affected by smart systems. All sorts of other groups are also, often indirectly, impacted.
That is a new design challenge in its own right. How do you design not only for the end user (in the case of the transparent charging station, the EV driver) but also for so-called indirect stakeholders: residents of streets where charging stations are installed, who don't drive an EV, or don't even own a car, but have a stake all the same in how sidewalks and streets are laid out?
This widening of our field of view means that when designing for contestability, we can and indeed must go a step further than enabling redress for individual decisions.
Because designing for contestability around the individual decisions of an already deployed system is necessarily post hoc and reactive, and limited to a single group of stakeholders.
As Thijs also more or less points out in the documentary, smart urban infrastructure affects all of our lives, and you could argue that the design and engineering choices made during its development are intrinsically political choices as well.
That is why I think we cannot avoid organising the very process behind these systems in a way that leaves room for contestation. In my ideal world, the development of the next generation of smart charging stations is therefore participatory, pluriform and inclusive, just as our democracy itself strives to be.
Exactly how we should shape such "contestable" algorithms, how designing for contestability should work in practice, is an open question. But a number of years ago no one knew what a transparent charging station should look like either, and we managed to pull that off too.
Update (2021-03-31, 16:43): A recording of the entire event is now available. The talk above starts at around 25:14.
I’ll be at Beyond Smart Cities Today the next couple of days (18–19 September). Below is the abstract I submitted, plus a bibliography of some of the stuff that went into my thinking for this and related matters that I won’t have the time to get into.
In the actually existing smart city, algorithmic systems are increasingly used for the purposes of automated decision-making, including as part of public infrastructure. Algorithmic systems raise a range of ethical concerns, many of which stem from their opacity. As a result, prescriptions for improving the accountability, trustworthiness and legitimacy of algorithmic systems are often based on a transparency ideal. The thinking goes that if the functioning and ownership of an algorithmic system are made perceivable, people can understand and in turn supervise it. However, there are limits to this approach. Algorithmic systems are complex and ever-changing socio-technical assemblages. Rendering them visible is not a straightforward design and engineering task. Furthermore, such transparency does not necessarily lead to understanding or, crucially, the ability to act on this understanding. We believe legitimate smart public infrastructure needs to include the possibility for subjects to articulate objections to procedures and outcomes. The resulting "contestable infrastructure" would create spaces that open up the possibility for expressing conflicting views on the smart city. Our project is to explore the design implications of this line of reasoning for the physical assets that citizens encounter in the city. Because after all, these are the perceivable elements of the larger infrastructural systems that recede from view.
Alkhatib, A., & Bernstein, M. (2019). Street-Level Algorithms: A Theory at the Gaps Between Policy and Decisions. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–13. https://doi.org/10.1145/3290605.3300760
Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645
Centivany, A., & Glushko, B. (2016). “Popcorn tastes good”: Participatory policymaking and Reddit’s “AMAgeddon.” Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 1126–1137. https://doi.org/10.1145/2858036.2858516
Crawford, K. (2016). Can an Algorithm be Agonistic? Ten Scenes from Life in Calculated Publics. Science, Technology, & Human Values, 41(1), 77–92. https://doi.org/10.1177/0162243915589635
DiSalvo, C. (2010). Design, Democracy and Agonistic Pluralism. Proceedings of the Design Research Society Conference, 366–371.
Hildebrandt, M. (2017). Privacy As Protection of the Incomputable Self: Agonistic Machine Learning. SSRN Electronic Journal, 1–33. https://doi.org/10.2139/ssrn.3081776
Jackson, S. J., Gillespie, T., & Payette, S. (2014). The Policy Knot: Re-integrating Policy, Practice and Design in CSCW Studies of Social Computing. Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing, 588–602. https://doi.org/10.1145/2531602.2531674
Jewell, M. (2018). Contesting the decision: living in (and living with) the smart city. International Review of Law, Computers & Technology. https://doi.org/10.1080/13600869.2018.1457000
Lindblom, L. (2019). Consent, Contestability, and Unions. Business Ethics Quarterly. https://doi.org/10.1017/beq.2018.25
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2). https://doi.org/10.1177/2053951716679679
Van de Poel, I. (2016). An ethical framework for evaluating experimental technology. Science and Engineering Ethics, 22(3), 667–686. https://doi.org/10.1007/s11948-015-9724-3
Thijs Turèl of AMS Institute and I presented a version of the talk below at the Cities for Digital Rights conference on June 19 in Amsterdam, during the We Make the City festival. The talk is an attempt to articulate some of the ideas we have both been developing for some time around contestability in smart public infrastructure. As always with this sort of thing, it is intended as a conversation piece, so I welcome any thoughts you may have.
The basic message of the talk is that when we start to use algorithmic systems for automated decision-making in public infrastructure, we need to design for the disagreements that will inevitably arise. Furthermore, we suggest there is an opportunity to focus on designing for such disagreements in the physical objects that people encounter in urban space as they make use of infrastructure.
We set the scene by showing a number of examples of smart public infrastructure: a cyclist crossing that adapts to weather conditions, so that when it rains cyclists get a green light more frequently; a pedestrian crossing in Tilburg where the elderly can use their mobile phone to get more time to cross; and finally, the case we are involved with ourselves, smart EV charging in the city of Amsterdam, about which more later.
We identify three trends in smart public infrastructure: (1) where previously algorithms were used to inform policy, now they are employed to perform automated decision-making on an individual case basis. This raises the stakes; (2) distributed ownership of these systems as the result of public-private partnerships and other complex collaboration schemes leads to unclear responsibility; and finally (3) the increasing use of machine learning leads to opaque decision-making.
These trends, and algorithmic systems more generally, raise a number of ethical concerns. They include but are not limited to: the use of inductive correlations (for example in the case of machine learning) leads to unjustified results; lack of access to and comprehension of a system’s inner workings produces opacity, which in turn leads to a lack of trust in the systems themselves and the organisations that use them; bias is introduced by a number of factors, including development team prejudices, technical flaws, bad data and unforeseen interactions with other systems; and finally the use of profiling, nudging and personalisation leads to diminished human agency. (We highly recommend the article by Mittelstadt et al. for a comprehensive overview of ethical concerns raised by algorithms.)
So for us, the question that emerges from all this is: How do we organise the supervision of smart public infrastructure in a democratic and lawful way?
There are a number of existing approaches to this question. These include legal and regulatory approaches (e.g. the right to explanation in the GDPR); auditing (e.g. KPMG's "AI in Control" method, BZK's transparantielab); procurement (e.g. open source clauses); insourcing (e.g. GOV.UK); and design and engineering (e.g. our own work on the transparent charging station).
We feel there are two important limitations to these existing approaches: a focus on professionals and a focus on prediction. We'll discuss each in turn.
First of all, many solutions target a professional class: accountants, civil servants, supervisory boards, technologists, designers and so on. But we feel there is a role for the citizen as well, because the supervision of these systems is simply too important to be left to a privileged few. This role would include identifying wrongdoing and suggesting alternatives.
There is a tension here: from the perspective of the public sector, one should only ask citizens for their opinion when one has the intention and the resources to actually act on their suggestions. It can also be a challenge to identify legitimate concerns in the flood of feedback that can sometimes occur. From our point of view, though, such concerns should not be used as an excuse not to engage the public. If citizen participation is considered necessary, the focus should be on freeing up resources and setting up structures that make it feasible and effective.
The second limitation is prediction. This is best illustrated with the Collingridge dilemma: in the early phases of a new technology, when the technology and its social embedding are still malleable, there is uncertainty about its social effects. In later phases, the social effects may be clear, but by then the technology has often become so entrenched in society that it is hard to overcome those negative effects. (This summary is taken from an excellent Van de Poel article on the ethics of experimental technology.)
Many solutions disregard the Collingridge dilemma and try to predict and prevent adverse effects of new systems at design-time. One example of this approach would be value-sensitive design. Our focus is instead on use-time. Considering that smart public infrastructure tends to be developed on an ongoing basis, the question becomes how to make citizens a partner in this process. More specifically, we are interested in how this can be made part of the design of the "touchpoints" people actually encounter in the streets, as well as their backstage processes.
Why do we focus on these physical objects? Because this is where people actually meet the infrastructural systems, of which large parts recede from view. These are the places where they become aware of their presence. They are the proverbial tip of the iceberg.
The use of automated decision-making in infrastructure reduces people's agency. For this reason, resources for agency need to be designed back into these systems. Frequently the answer to this design problem is premised on a transparency ideal. Transparency may be a prerequisite for agency, but it is not sufficient: it may help you become aware of what is going on, but it will not necessarily help you act on that knowledge. This is why we propose a shift from transparency to contestability. (We can highly recommend Ananny and Crawford's article for more on why transparency is insufficient.)
To clarify what we mean by contestability, consider the following three examples. When you see the lights on your router blink in the middle of the night, when no one in your household is using the internet, you can act on this knowledge by yanking out the device's power cord. You may never use the emergency brake in a train, but its presence does give you a sense of control. And finally, the cash register receipt provides you with a view into both the procedure and the outcome of the supermarket checkout, and it offers a resource with which you can dispute them if something appears to be wrong.
Image credits: Aangiftedoen, source unknown for remainder
None of these examples is a perfect illustration of contestability but they hint at something more than transparency, or perhaps even something wholly separate from it. We’ve been investigating what their equivalents would be in the context of smart public infrastructure.
To illustrate this point further, let us come back to the smart EV charging project we mentioned earlier. In Amsterdam, public EV charging stations are becoming "smart", which in this case means they automatically adapt the speed of charging to a number of factors. These include grid capacity and the availability of solar energy. Additional factors may be added in the future; one under consideration is giving priority to shared cars over privately owned ones. We are involved in an ongoing effort to consider how such charging stations can be redesigned so that people understand what's going on behind the scenes and can act on this understanding. The motivation is that, if not designed carefully, the opacity of smart EV charging infrastructure may be detrimental to the social acceptance of the technology. (A first outcome of these efforts is the Transparent Charging Station designed by The Incredible Machine. A follow-up project is ongoing.)
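To make concrete what such a charging policy amounts to in software, here is a deliberately simplified toy sketch. To be clear, this is entirely hypothetical and not the actual Amsterdam implementation; the weights, the capacity cap and the inputs are all made up for illustration. The point is that a choice like prioritising shared cars ends up as parameters in code, and it is exactly such parameters that contestability would open up for debate:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Hypothetical toy policy for dividing available charging power among
// connected cars. The shared-car bonus and the 11 kW cap are invented
// for illustration; in practice these would be policy decisions.
struct Car {
    const char* id;
    bool shared;  // shared cars get double weight in this toy policy
};

std::vector<float> allocate(const std::vector<Car>& cars,
                            float gridCapacityKw, float solarKw) {
    const float available = gridCapacityKw + solarKw;
    float totalWeight = 0.0f;
    for (const auto& c : cars) totalWeight += c.shared ? 2.0f : 1.0f;
    std::vector<float> out;
    for (const auto& c : cars) {
        const float w = c.shared ? 2.0f : 1.0f;
        // Each car gets its weighted share, capped at a typical 11 kW.
        out.push_back(std::min(11.0f, available * w / totalWeight));
    }
    return out;
}

int main() {
    const std::vector<Car> cars = {{"shared-1", true}, {"private-1", false}};
    const auto kw = allocate(cars, 12.0f, 3.0f);
    for (size_t i = 0; i < cars.size(); ++i)
        std::printf("%s: %.1f kW\n", cars[i].id, kw[i]);
    // Prints: shared-1: 10.0 kW, private-1: 5.0 kW
}
```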
We have identified a number of different ways in which people may object to smart EV charging. They are listed in the table below. These types of objections can lead us to feature requirements for making the system contestable.
Because the list is preliminary, we asked the audience if they could imagine additional objections, whether those represented new categories, and whether they would require additional features for people to be able to act on them. One particularly interesting suggestion was to give local communities control over the policies enacted by the charge points in their vicinity. That is an idea whose implications deserve further thought.
And that’s where we left it. So to summarise:
Algorithmic systems are becoming part of public infrastructure.
Smart public infrastructure raises new ethical concerns.
Many solutions to ethical concerns are premised on a transparency ideal, but do not address the issue of diminished agency.
There are different categories of objections people may have to an algorithmic system’s workings.
Making a system contestable means creating resources for people to object, opening up a space for the exploration of meaningful alternatives to its current implementation.
In this workshop we will take a deep dive into some of the challenges of designing smart public infrastructure.
Smart city ideas are moving from hype into reality. The everyday things that our contemporary world runs on, such as roads, railways and canals, are not immune to this development. Basic, "hard" infrastructure is being augmented with internet-connected sensing, processing and actuating capabilities. We are involved as practitioners and researchers in one such project: the MX3D smart bridge, a pedestrian bridge 3D printed from stainless steel and equipped with a network of sensors.
The question facing everyone involved with these developments, from citizens to professionals to policy makers is how to reap the potential benefits of these technologies, without degrading the urban fabric. For this to happen, information technology needs to become more like the city: open-ended, flexible and adaptable. And we need methods and tools for the diverse range of stakeholders to come together and collaborate on the design of truly intelligent public infrastructure.
We will explore these questions in this workshop by first walking you through the architecture of the MX3D smart bridge—offering a uniquely concrete and pragmatic view into a cutting edge smart city project. Subsequently we will together explore the question: What should a smart pedestrian bridge that is aware of itself and its surroundings be able to tell us? We will conclude by sharing some of the highlights from our conversation, and make note of particularly thorny questions that require further work.
The workshop’s structure was quite simple. After a round of introductions, Alec introduced the MX3D bridge to the participants. For a sense of what that introduction talk was like, I recommend viewing this recording of a presentation he delivered at a recent Pakhuis de Zwijger event.
We then ran three rounds of group discussion in the style of a world café. Each discussion was guided by one question. Participants were asked to write, draw and doodle on the large sheets of paper covering each table. At the end of each round, people moved to another table while one person remained behind to share the preceding round's discussion with the new group.
The discussion questions were inspired by value-sensitive design. I was interested to see if people could come up with alternative uses for a sensor-equipped 3D-printed footbridge if they first considered what in their opinion made a city worth living in.
The questions we used were:
What specific things do you like about your town? (Places, things to do, etc. Be specific.)
What values underlie those things? (A value is what a person or group of people consider important in life.)
How would you redesign the bridge to support those values?
At the end of the three discussion rounds we went around to each table and shared the highlights of what was produced. We then had a bit of a back and forth about the outcomes and the workshop approach, after which we wrapped up.
We did get to some interesting values by starting from personal experience. Participants came from a variety of countries, and that was reflected in the range of examples and related values. The design ideas for the bridge remained somewhat abstract, though. It turned out to be quite a challenge to make the jump from values to different types of smart bridges. Despite this, we did get nice ideas, such as having the bridge report on the water quality of the canal it crosses, derived from the value of care for the environment.
The response from participants afterwards was positive. People found it thought-provoking, which was definitely the point. People were also eager to learn even more about the bridge project. It remains a thing that captures people’s imagination. For that reason alone, it continues to be a very productive case to use for the grounding of these sorts of discussions.
I’d like to talk about the future of our design practice and what I think we should focus our attention on. It is all related to this idea of complexity and opening up black boxes. We’re going to take the scenic route, though. So bear with me.
Software Design
Two years ago I spent about half a year in Singapore.
While there I worked as product strategist and designer at a startup called ARTO, an art recommendation service. It shows you a random sample of artworks, you tell it which ones you like, and it will then start recommending pieces it thinks you like. In case you were wondering: yes, swiping left and right was involved.
We had this interesting problem of ingesting art from many different sources (mostly online galleries) with metadata of wildly varying levels of quality. So, using metadata to figure out which art to show was a bit of a non-starter. It should come as no surprise then, that we started looking into machine learning—image processing in particular.
And so I found myself working with my engineering colleagues on an art recommendation stream which was driven at least in part by machine learning. And I quickly realised we had a problem. In terms of how we worked together on this part of the product, it felt like we had taken a bunch of steps back in time. Back to a way of collaborating that was less integrated and less responsive.
That's because we have all these nice tools and techniques for designing traditional software products. But traditional software is deterministic. Machine learning is fundamentally different in nature: it is probabilistic.
It was hard for me to take the lead in the design of this part of the product for two reasons. First of all, it was challenging to get a first-hand feel of the machine learning feature before it was implemented.
And second of all, it was hard for me to communicate or visualise the intended behaviour of the machine learning feature to the rest of the team.
So when I came back to the Netherlands I decided to dig into this problem of design for machine learning. Turns out I opened up quite the can of worms for myself. But that’s okay.
There are two reasons I care about this:
The first is that I think we need more design-led innovation in the machine learning space. At the moment it is engineering-dominated, which doesn’t necessarily lead to useful outcomes. But if you want to take the lead in the design of machine learning applications, you need a firm handle on the nature of the technology.
The second reason why I think we need to educate ourselves as designers on the nature of machine learning is that we need to take responsibility for the impact the technology has on the lives of people. There is a lot of talk about ethics in the design industry at the moment. Which I consider a positive sign. But I also see a reluctance to really grapple with what ethics is and what the relationship between technology and society is. We seem to want easy answers, which is understandable because we are all very busy people. But having spent some time digging into this stuff myself I am here to tell you: There are no easy answers. That isn’t a bug, it’s a feature. And we should embrace it.
Machine Learning
At the end of 2016 I attended ThingsCon here in Amsterdam and I was introduced by Ianus Keller to TU Delft PhD researcher Péter Kun. It turns out we were both interested in machine learning. So with encouragement from Ianus we decided to put together a workshop that would enable industrial design master students to tangle with it in a hands-on manner.
About a year on, this has grown into a thing we call Prototyping the Useless Butler. During the workshop, you use machine learning algorithms to train a model that takes inputs from a network-connected Arduino's sensors and drives that same Arduino's actuators. In effect, you can create interactive behaviour without writing a single line of code. And you get a first-hand feel for how common applications of machine learning work: things like regression, classification and dynamic time warping.
The thing that makes this workshop tick is an open source software application called Wekinator, created by Rebecca Fiebrink. It was originally aimed at performing artists, so that they could build interactive instruments without writing code. But it takes inputs from anything and sends outputs to anything, so we appropriated it towards our own ends.
The thinking behind this workshop is that for us designers to be able to think creatively about applications of machine learning, we need a granular understanding of the nature of the technology. The thing with designers is, we can’t really learn about such things from books. A lot of design knowledge is tacit, it emerges from our physical engagement with the world. This is why things like sketching and prototyping are such essential parts of our way of working. And so with useless butler we aim to create an environment in which you as a designer can gain tacit knowledge about the workings of machine learning.
Simply put, for a lot of us, machine learning is a black box. With Useless Butler, we open the black box a bit and let you peer inside. This should improve the odds of design-led innovation happening in the machine learning space. And it should also help with ethics. But it’s definitely not enough. Knowledge about the technology isn’t the only issue here. There are more black boxes to open.
Values
Which brings me back to that other black box: ethics. As I already mentioned, there is a lot of talk in the tech industry about how we should "be more ethical". But things are often reduced to the notion that designers should do no harm. As if ethics is a problem to be fixed instead of a thing to be practiced.
So I started to talk about this to people I know in academia and more than once this thing called Value Sensitive Design was mentioned. It should be no surprise to anyone that scholars have been chewing on this stuff for quite a while. One of the earliest references I came across, an essay by Batya Friedman in Interactions is from 1996! This is a lesson to all of us I think. Pay more attention to what the academics are talking about.
So, at the end of last year I dove into this topic. Our host Iskander Smit, Rob Maijers and myself coordinate a grassroots community for tech workers called Tech Solidarity NL. We want to build technology that serves the needs of the many, not the few. Value Sensitive Design seemed like a good thing to dig into and so we did.
I’m not going to dive into the details here. There’s a report on the Tech Solidarity NL website if you’re interested. But I will highlight a few things that value sensitive design asks us to consider that I think help us unpack what it means to practice ethical design.
First of all, values. Here’s how it is commonly defined in the literature:
“A value refers to what a person or group of people consider important in life.”
I like it because it’s common sense, right? But it also makes clear that there can never be one monolithic definition of what ‘good’ is in all cases. As we designers like to say: “it depends” and when it comes to values things are no different.
“Person or group” implies there can be various stakeholders. Value sensitive design distinguishes between direct and indirect stakeholders. The former have direct contact with the technology, the latter don’t but are affected by it nonetheless. Value sensitive design means taking both into account. So this blows up the conventional notion of a single user to design for.
Various stakeholder groups can have competing values and so to design for them means to arrive at some sort of trade-off between values. This is a crucial point. There is no such thing as a perfect or objectively best solution to ethical conundrums. Not in the design of technology and not anywhere else.
Value sensitive design encourages you to map stakeholders and their values. These will be different for every design project. Another approach is to use lists like the one pictured here as an analytical tool to think about how a design impacts various values.
Furthermore, during your design process you might not only think about the short-term impact of a technology, but also think about how it will affect things in the long run.
And similarly, you might think about the effects of a technology not only when a few people are using it, but also when it becomes wildly successful and everybody uses it.
There are tools out there that can help you think through these things. But so far much of the work in this area is happening on the academic side. I think there is an opportunity for us to create tools and case studies that will help us educate ourselves on this stuff.
There’s a lot more to say on this but I’m going to stop here. The point is, as with the nature of the technologies we work with, it helps to dig deeper into the nature of the relationship between technology and society. Yes, it complicates things. But that is exactly the point.
Privileging simple and scalable solutions over those adapted to local needs is socially, economically and ecologically unsustainable. So I hope you will join me in embracing complexity.
At the end of last year I was invited to speak at the PLAYTrack conference in Aarhus about the workplace change management games made by Hubbub. It turned out to be a great opportunity to reconnect with the play research community.
I was very much impressed by the program assembled by the organisers. People came from a wide range of disciplines and crucially, there was ample time to discuss and reflect on the materials presented. As I tweeted afterwards, this is a thing that most conference organisers get wrong.
Back in Utrecht after a wonderful time in Århus attending #PLAYTrack. The lectures were uniformly fascinating but the one thing this conference really got right was the ample time to reflect and discuss. Really elevates the experience to something more than the usual info dump.
I was particularly inspired by the work of Benjamin Mardell and Mara Krechevsky at Harvard’s Project Zero – Making Learning Visible looks like a great resource for anyone who teaches. Then there was Reed Stevens from Northwestern University whose project FUSE is one of the most solid examples of playful learning for STEAM I’ve seen thus far. I was also fascinated by Ciara Laverty’s work at PEDAL on observing parent-child play. Miguel Sicart delivered another great provocation on the dark side of playful design. And finally I was delighted to hear about and experience for myself some of Amos Blanton’s work at the LEGO Foundation. I should also call out Ben Fincham’s many provocative contributions from the audience.
The abstract for my talk is below, which covers most of what I talked about. I tried to give people a good sense of:
what the games consisted of,
what we were aiming to achieve,
how both the fiction and the player activities supported these goals,
how we made learning outcomes visible to our players and clients,
and finally how we went about designing and developing these games.
Both projects have solid write-ups over at the Hubbub website, so I’ll just point to those here: Code 4 and Ripple Effect.
In the final section of the talk I spent a bit of time reflecting on how I would approach projects like this today. After all, it has been seven years since we made Code 4, and four years since Ripple Effect. That's ages ago, and my perspective has definitely changed since we made these.
Participatory design
First of all, I would get even more serious about co-designing with players at every step. I would recruit representatives of players and invest them with real influence. In the projects we did, the primary vehicle for player influence was through playtesting. But this is necessarily limited. I also won’t pretend this is at all easy to do in a commercial context.
But these games are ultimately about improving worker productivity. So how do we make it so that workers share in the real-world profits yielded by a successful culture change?
I know of the existence of participatory design but from my experience it is not a common approach in the industry. Why?
Value sensitive design
On a related note, I would get more serious about which values the system supports, in whose interest they are, and where they come from. Early field research and workshops with the audience do surface some values, but the values of customer representatives tend to dominate. Again, the commercial context we work in is a potential challenge.
I know of value sensitive design, but as with participatory design, it has yet to catch on in a big way in the industry. So again, why is that?
Disintermediation
One thing I continue to be interested in is to reduce the complexity of a game system’s physical affordances (which includes its code), and to push even more of the substance of the game into those social allowances that make up the non-material aspects of the game. This allows for spontaneous renegotiation of the game by the players. This is disintermediation as a strategy. David Kanaga’s take on games as toys remains hugely inspirational in this regard, as does Bernard De Koven’s book The Well Played Game.
Gamefulness versus playfulness
Code 4 had more focus on satisfying the need for autonomy. Ripple Effect had more focus on competence, or in any case, it had less emphasis on autonomy. There was less room for ‘play’ around the core digital game. It seems to me that mastering a subjective simulation of a subject is not necessarily what a workplace game for culture change should be aiming for. So, less gameful design, more playful design.
Adaptation
Finally, the agency model does not enable us to stick around for the long haul. But workplace games might be better suited to a setup where things aren’t thought of as a one-off project but more of an ongoing process.
In How Buildings Learn, Stewart Brand talks about how architects should revisit buildings they've designed after they are built, to learn how people are actually using them. He also argues that good buildings are buildings their inhabitants can adapt to their needs. What does that look like in the context of a game for workplace culture change?
Playful Design for Workplace Change Management
Code 4 (2011, commissioned by the Tax Administration of the Netherlands) and Ripple Effect (2013, commissioned by Royal Dutch Shell) are both games for workplace change management designed and developed by Hubbub, a boutique playful design agency which operated from Utrecht, The Netherlands and Berlin, Germany between 2009 and 2015. These games are examples of how a goal-oriented serious game can be used to encourage playful appropriation of workplace infrastructure and social norms, resulting in an open-ended and creative exploration of new and innovative ways of working.
Serious game projects are usually commissioned to solve problems. Solving the problem of cultural change in a straightforward manner means viewing games as a way to persuade workers of a desired future state. They typically take videogame form, simulating the desired new way of working as determined by management. To play the game well, players need to master its system and by extension—it is assumed—learning happens.
These games can be enjoyable experiences and an improvement on previous forms of workplace learning, but in our view they decrease the possibility space of potential workplace cultural change. They diminish worker agency, and they waste the creative and innovative potential of involving workers in the invention of an improved workplace culture.
We instead choose to view workplace games as an opportunity to increase the space of possibility. We resist the temptation to bake the desired new way of working into the game’s physical and digital affordances. Instead, we leave how to play well up to the players. Since these games are team-based and collaborative, players need to negotiate their way of working around the game among themselves. In addition, because the games are distributed in time—running over a number of weeks—and are playable at player discretion during the workday, players are given license to appropriate workplace infrastructure and subvert social norms towards in-game ends.
We tried to make learning tangible in various ways. Because the games at the core are web applications to which players log on with individual accounts we were able to collect data on player behaviour. To guarantee privacy, employers did not have direct access to game databases and only received anonymised reports. We took responsibility for player learning by facilitating coaching sessions in which they could safely reflect on their game experiences. Rounding out these efforts, we conducted surveys to gain insight into the player experience from a more qualitative and subjective perspective.
These games offer a model for a reasonably democratic and ethical way of doing game-based workplace change management. However, we would like to see efforts that further democratise their design and development—involving workers at every step. We also worry about how games can be used to create the illusion of worker influence while at the same time software is deployed throughout the workplace to limit their agency.
Our examples may be inspiring but because of these developments we feel we can’t continue this type of work without seriously reconsidering our current processes, technology stacks and business practices—and ultimately whether we should be making games at all.
We developed three exercises, one for each type of Wekinator output: regression, classification and dynamic time warping.
In contrast to the first version, we had two hours to run through the whole thing instead of a day… So we had to cut some corners, and we doubled down on walking participants through a number of exercises so that they would come out of it with some readily applicable skills.
We dubbed the workshop ‘prototyping the useless butler’, with thanks to Philip van Allen for the suggestion to frame the exercises around building something non-productive so that the focus was shifted to play and exploration.
All of the code, the circuit diagram and slides are over on GitHub. But I’ll summarise things here.
We spent a very short amount of time introducing machine learning. We used Google’s Teachable Machine as an example and contrasted regular programming with using machine learning algorithms to train models. The point was to provide folks with just enough conceptual scaffolding so that the rest of the workshop would make sense.
We then introduced our ‘toolchain’, which consists of Wekinator, the Arduino MKR1000 module and the OSC protocol. The aim of this toolchain is to allow designers who work in the IoT space to get a feel for the material properties of machine learning through hands-on tinkering. We tried to create a toolchain with as few moving parts as possible, because each additional component introduces another point of failure that might require debugging. The toolchain enables designers to use machine learning to rapidly prototype interactive behaviour with minimal or no programming. It can also be used to prototype products that expose interactive machine learning features to end users. (For a speculative example of one such product, see Bjørn Karmann’s Objectifier.)
Participants were then asked to set up all the required parts on their own workstation. A list can be found on the Useless Butler GitHub page.
We then proceeded to build the circuit. We provided all the components and showed a Fritzing diagram to help people along. The basic idea of this circuit, the eponymous useless butler, was to have a set of inputs and outputs rich enough to play with and suited to all three types of Wekinator output. So we settled on a pair of photoresistors (LDRs) as inputs and an RGB LED as output.
With the prerequisites installed and the circuit built, we were ready to walk through the examples. For regression we mapped the continuous stream of readings from the two LDRs to three outputs, one each for the red, green and blue of the LED. For classification we put the state of both LDRs into one of four categories, each switching the RGB LED to a specific color (cyan, magenta, yellow or white). And finally, for dynamic time warping, we asked Wekinator to recognise one of three gestures and switch the RGB LED to one of three states (red, green or off). A sketch of what the Arduino side of the classification exercise might look like follows below.
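For the curious, here is a minimal sketch along the lines of that classification exercise. Note that this is an illustrative reconstruction rather than the exact workshop code: pin assignments, network credentials and the Wekinator machine's IP address are assumptions, and it presumes a common-cathode RGB LED. The ports and OSC addresses are Wekinator's documented defaults.

```cpp
// Minimal "useless butler" sketch for the Arduino MKR1000 (illustrative
// reconstruction). Sends the two LDR readings to Wekinator as OSC over UDP
// and colors the RGB LED based on the class Wekinator sends back.
#include <WiFi101.h>
#include <WiFiUdp.h>
#include <OSCMessage.h>  // CNMAT's OSC implementation for Arduino

const char* ssid = "your-network";            // placeholder
const char* pass = "your-password";           // placeholder
IPAddress wekinator(192, 168, 1, 10);         // machine running Wekinator

const int LDR_LEFT = A0, LDR_RIGHT = A1;      // assumed wiring
const int RED = 2, GREEN = 3, BLUE = 4;       // assumed PWM pins

WiFiUDP Udp;

void setup() {
  pinMode(RED, OUTPUT); pinMode(GREEN, OUTPUT); pinMode(BLUE, OUTPUT);
  WiFi.begin(ssid, pass);
  while (WiFi.status() != WL_CONNECTED) delay(500);
  Udp.begin(12000);  // Wekinator sends its outputs to port 12000 by default
}

// Handle a /wek/outputs message; Wekinator sends the class as a float (1-4).
void onOutputs(OSCMessage& msg) {
  int cls = (int)msg.getFloat(0);
  analogWrite(RED,   cls == 1 ? 0 : 255);  // off only for cyan
  analogWrite(GREEN, cls == 2 ? 0 : 255);  // off only for magenta
  analogWrite(BLUE,  cls == 3 ? 0 : 255);  // off only for yellow; 4 = white
}

void loop() {
  // Send the two light readings to Wekinator's default input address/port.
  OSCMessage out("/wek/inputs");
  out.add((float)analogRead(LDR_LEFT));
  out.add((float)analogRead(LDR_RIGHT));
  Udp.beginPacket(wekinator, 6448);
  out.send(Udp);
  Udp.endPacket();
  out.empty();

  // Feed any reply from Wekinator into an OSC message and dispatch it.
  OSCMessage in;
  int size = Udp.parsePacket();
  if (size > 0) {
    while (size--) in.fill(Udp.read());
    if (!in.hasError()) in.dispatch("/wek/outputs", onOutputs);
  }
  delay(50);
}
```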
When we reflected on the workshop afterwards, we agreed we now have a proven concept. Participants were able to get the toolchain up and running and could play around with iteratively training and evaluating their model until it behaved as intended.
However, there is still quite a bit of room for improvement. On a practical note, quite a bit of time was taken up by the building of the circuit, which isn’t the point of the workshop. One way of dealing with this is to bring those to a workshop pre-built. Doing so would enable us to get to the machine learning quicker and would open up time and space to also engage with the participants about the point of it all.
We’re keen on bringing this workshop to more settings in future. If we do, I’m sure we’ll find the opportunity to improve on things once more and I will report back here.
Many thanks to Iskander and the rest of the ThingsCon team for inviting us to the conference.
ThingsCon Amsterdam 2017, photo by nunocruzstreet.com
On May 24 of this year, Niels ’t Hooft and I ran a workshop titled ‘Hybrid Writing for Conversational Interfaces’ at TU Delft. Our aim was twofold: to teach students about writing characters and dialogue, and to teach them how to prototype chat interfaces.
We spent a day with roughly thirty industrial design students alternating between bits of theory, writing exercises, instructions on how to use Twine (our prototyping tool of choice) and closed out with a small project and a show and tell.
I was very pleased to see prototypes with quite a high level of complexity and sophistication at the end of the day. And throughout, I could tell students were enjoying themselves writing and building interactive conversations.
Here’s a rough outline of how the workshop was structured.
After briefly introducing ourselves, Niels presented a mini-lecture on interactive fiction. A highlight for me was a two-by-two of the ways in which fiction and software can intersect.
I then took over and did a show and tell of the absolute basics of using Twine. Things like creating passages, linking them, creating branches and testing and publishing your story.
The first exercise after this was for students to take what they just learned about Twine and try to create a very simple interactive story.
After a coffee break, Niels presented his second mini-lecture, on the very basics of writing, with a particular focus on writing characters and dialogue. This included a handy cheat sheet of things to consider while writing.
In our second exercise, students worked in pairs. Each first created a character, which they then described to each other. Next they planned out the structure of an encounter between these two characters, and finally they collaboratively wrote the dialogue for this encounter. They were required to stick to Hollywood formatting. Niels and I then did a reading of a few of them (to the great amusement of all present) to close out the morning section of the workshop.
After lunch Niels presented his third and final mini-lecture of the day, on conversational interfaces, relying heavily on the great work of our friend Alper in his book on the subject.
I then took over for the second show and tell. Here we ramped up the challenge and introduced the Twine Texting Project – a framework for prototyping conversational interfaces in Twine. On GitHub, you can find the starter file I had prepared for this section.
The third and final exercise of the day was for students to take what they had learned about writing dialogue and prototyping chat interfaces, and build an interactive prototype of a conversational interface or a piece of interactive fiction in chat format. They could either build on the dialogue they had created in the previous exercise, or start from scratch.
We finished the day with demos, where we put each Twine story on the big screen and chose as a group which options to select. After each demo, the creator would open up the Twine file and walk us through how they had built it. It was pretty cool to see how many students had put what they had learned to very creative uses.
Reflecting on the workshop afterwards, we felt the structure was nicely balanced between theory and practice. The difficulty level was such that students did learn some new things which they could incorporate into future projects, but still built on skills they had already acquired. The choice for Twine worked out well too since it is highly accessible. Non-technical students managed to create something interactive, and more advanced students could apply what they knew about code to produce more sophisticated prototypes.
For future workshops, we did feel we could do better at building a bridge between writing for interactive fiction and writing for the conversational interfaces of software products and services. This would require some adaptation of the mini-lectures and a slightly different emphasis in the exercises. The key would be to have students imagine existing products and services as characters, then write dialogue for interactions with them and prototype those. This would be worth exploring in a future iteration of the workshop.
On Wednesday, Péter Kun, Holly Robbins and I taught a one-day workshop on machine learning at Delft University of Technology. We had about thirty master's students from the industrial design engineering faculty. The aim was to get them acquainted with the technology through hands-on tinkering, with the Wekinator as the central teaching tool.
Photo credits: Holly Robbins
Background
The reasoning behind this workshop is twofold.
On the one hand, I expect designers will find themselves working on projects involving machine learning more and more often. The technology has certain properties that differ from traditional software. Most importantly, machine learning is probabilistic instead of deterministic. It is important that designers understand this, because otherwise they are likely to make poor decisions about its application.
The second reason is that I have a strong sense machine learning can play a role in the augmentation of the design process itself. So-called intelligent design tools could make designers more efficient and effective. They could also enable the creation of designs that would otherwise be impossible or very hard to achieve.
The workshop explored both ideas.
Photo credits: Holly Robbins
Format
The structure was roughly as follows:
In the morning we started out providing a very broad introduction to the technology. We talked about the very basic premise of (supervised) learning. Namely, providing examples of inputs and desired outputs and training a model based on those examples. To make these concepts tangible we then introduced the Wekinator and walked the students through getting it up and running using basic examples from the website. The final step was to invite them to explore alternative inputs and outputs (such as game controllers and Arduino boards).
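To make that premise tangible outside of any particular tool, consider the smallest possible version of it in code: a "model" that is nothing more than a stored list of input/output examples, where prediction means looking up the most similar example. This is a deliberately naive illustration of the supervised learning idea (nearest neighbour), not a description of Wekinator's internals:

```cpp
// A deliberately naive supervised learner: the "model" is just the stored
// training examples, and prediction returns the output of the nearest one.
#include <cmath>
#include <cstdio>
#include <vector>

struct Example {
    std::vector<float> inputs;  // e.g. sensor readings
    float output;               // the desired output demonstrated for them
};

// Predict by finding the training example closest to the query
// (assumes all input vectors have the same length).
float predict(const std::vector<Example>& model, const std::vector<float>& query) {
    float bestDist = INFINITY, bestOut = 0.0f;
    for (const auto& ex : model) {
        float d = 0.0f;
        for (size_t i = 0; i < query.size(); ++i) {
            const float diff = ex.inputs[i] - query[i];
            d += diff * diff;
        }
        if (d < bestDist) { bestDist = d; bestOut = ex.output; }
    }
    return bestOut;
}

int main() {
    // "Training" is nothing more than recording input/output pairs.
    const std::vector<Example> model = {
        {{0.1f, 0.9f}, 0.0f},  // left sensor dark, right bright: LED off
        {{0.9f, 0.1f}, 1.0f},  // left bright, right dark: LED fully on
    };
    // An unseen input is mapped to the output of the most similar example.
    std::printf("%.2f\n", predict(model, {0.8f, 0.2f}));  // prints 1.00
}
```

Everything Wekinator adds (richer algorithms, real-time OSC plumbing, an interactive training loop) builds on this same premise of examples in, model out.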
In the afternoon we provided a design brief, asking the students to prototype a data-enabled object with the set of tools they had acquired in the morning. We assisted with technical hurdles where necessary (of which there were more than a few) and closed out the day with demos and a group discussion reflecting on their experiences with the technology.
Photo credits: Holly Robbins
Results
As I tweeted on the way home that evening, the results were… interesting.
Not all groups managed to put something together in the admittedly short amount of time they were provided with. They were most often stymied by getting an Arduino to talk to the Wekinator. Max was often picked as a go-between because the Wekinator receives OSC messages over UDP, whereas the quickest way to get an Arduino to talk to a computer is over serial. But Max in my experience is a fickle beast and would more than once crap out on us.
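For what it's worth, the MKR1000 does offer a more direct route, which removes both the serial link and Max from the chain: since the board has WiFi, it can speak OSC over UDP straight to Wekinator (which is what the Remote Control Theremin example listed under the references demonstrates, in the other direction). A minimal sending-side sketch, with the network credentials and the Wekinator machine's address as placeholders:

```cpp
// Sending side only: one analog sensor straight to Wekinator over WiFi/UDP
// from an MKR1000, bypassing serial and Max entirely.
#include <WiFi101.h>
#include <WiFiUdp.h>
#include <OSCMessage.h>

WiFiUDP Udp;
IPAddress wekinator(192, 168, 1, 10);  // placeholder address

void setup() {
  WiFi.begin("your-network", "your-password");  // placeholder credentials
  while (WiFi.status() != WL_CONNECTED) delay(500);
}

void loop() {
  OSCMessage msg("/wek/inputs");     // Wekinator's default input address...
  msg.add((float)analogRead(A0));
  Udp.beginPacket(wekinator, 6448);  // ...and its default input port
  msg.send(Udp);
  Udp.endPacket();
  msg.empty();
  delay(50);
}
```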
The groups that did build something mainly assembled prototypes from the examples on hand. Which is fine, but since we were mostly working with the examples from the Wekinator website, these tended towards the interactive-instrument side of things. We had hoped for explorations of IoT product concepts. That required more hand-rolling, which was only achievable for the students at the higher end of the technical expertise spectrum (and the more tenacious ones).
The discussion yielded some interesting insights into mental models of the technology and how they are affected by hands-on experience. A comment I heard more than once was: Why is this considered learning at all? The Wekinator was not perceived to be learning anything. When challenged on this by reiterating the underlying principles it became clear the black box nature of the Wekinator hampers appreciation of some of the very real achievements of the technology. It seems (for our students at least) machine learning is stuck in a grey area between too-high expectations and too-low recognition of its capabilities.
Next steps
These results, and others, point towards some obvious improvements which can be made to the workshop format, and to teaching design students about machine learning more broadly.
We can improve the toolset so that some of the heavy lifting involved with getting the various parts to talk to each other is made easier and more reliable.
We can build examples that are geared towards the practice of designing IoT products and are ready for adaptation and hacking.
And finally, and probably most challengingly, we can make the workings of machine learning more transparent so that it becomes easier to develop a feel for its capabilities and shortcomings.
We do intend to improve and teach the workshop again. If you’re interested in hosting one (either in an educational or professional context) let me know. And stay tuned for updates on this and other efforts to get designers to work in a hands-on manner with machine learning.
Special thanks to the brilliant Ianus Keller for connecting me to Péter and for allowing us to pilot this crazy idea at IDE Academy.
References
Sources used during preparation and running of the workshop:
The Wekinator – the UI is infuriatingly poor but when it comes to getting started with machine learning this tool is unmatched.
Arduino – I have become particularly fond of the MKR1000 board. Add a lithium-polymer battery and you have everything you need to prototype IoT products.
OSC for Arduino – CNMAT’s implementation of the open sound control (OSC) encoding. Key puzzle piece for getting the above two tools talking to each other.
Remote Control Theremin – an example project I prepared for the workshop demoing how to have the Wekinator talk to an Arduino MKR1000 with OSC over UDP.
Earlier this week I escaped the miserable weather and food of the Netherlands to spend a couple of days in Barcelona, where I attended the ‘Playful Design for Smart Cities’ workshop at RMIT Europe.
I helped Jussi Holopainen run a workshop in which participants from industry, government and academia together defined projects aimed at further exploring this idea of playful design within the context of smart cities, without falling into the trap of solutionism.