At a TU Delft spring symposium on AI education, Hosana and I ran a short workshop titled “AI pedagogy through a design lens.” In it, we identified some of the challenges facing AI teaching, particularly outside of computer science, and explored how design pedagogy, particularly the practices of studios and making, may help to address them. The AI & Society master elective I’ve been developing and teaching over the past five years served as a case study. The session was punctuated by brief brainstorming using an adapted version of the SQUID gamestorming technique. Below are the slides we used.
“Contestable Infrastructures” at Beyond Smart Cities Today
I’ll be at Beyond Smart Cities Today the next couple of days (18–19 September). Below is the abstract I submitted, plus a bibliography of some of the stuff that went into my thinking for this and related matters that I won’t have the time to get into.
In the actually existing smart city, algorithmic systems are increasingly used for the purposes of automated decision-making, including as part of public infrastructure. Algorithmic systems raise a range of ethical concerns, many of which stem from their opacity. As a result, prescriptions for improving the accountability, trustworthiness and legitimacy of algorithmic systems are often based on a transparency ideal. The thinking goes that if the functioning and ownership of an algorithmic system are made perceivable, people can understand the system and are in turn able to supervise it. However, there are limits to this approach. Algorithmic systems are complex and ever-changing socio-technical assemblages. Rendering them visible is not a straightforward design and engineering task. Furthermore, such transparency does not necessarily lead to understanding or, crucially, the ability to act on this understanding. We believe legitimate smart public infrastructure needs to include the possibility for subjects to articulate objections to procedures and outcomes. The resulting “contestable infrastructure” would create spaces that open up the possibility for expressing conflicting views on the smart city. Our project is to explore the design implications of this line of reasoning for the physical assets that citizens encounter in the city. After all, these are the perceivable elements of the larger infrastructural systems that otherwise recede from view.
- Alkhatib, A., & Bernstein, M. (2019). Street-Level Algorithms: A Theory at the Gaps Between Policy and Decisions. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–13. https://doi.org/10.1145/3290605.3300760
- Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media and Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645
- Centivany, A., & Glushko, B. (2016). “Popcorn tastes good”: Participatory policymaking and Reddit’s “AMAgeddon.” Conference on Human Factors in Computing Systems — Proceedings, 1126–1137. https://doi.org/10.1145/2858036.2858516
- Crawford, K. (2016). Can an Algorithm be Agonistic? Ten Scenes from Life in Calculated Publics. Science Technology and Human Values, 41(1), 77–92. https://doi.org/10.1177/0162243915589635
- DiSalvo, C. (2010). Design, Democracy and Agonistic Pluralism. Proceedings of the Design Research Society Conference, 366–371.
- Hildebrandt, M. (2017). Privacy As Protection of the Incomputable Self: Agonistic Machine Learning. SSRN Electronic Journal, 1–33. https://doi.org/10.2139/ssrn.3081776
- Jackson, S. J., Gillespie, T., & Payette, S. (2014). The Policy Knot: Re-integrating Policy, Practice and Design. CSCW Studies of Social Computing, 588–602. https://doi.org/10.1145/2531602.2531674
- Jewell, M. (2018). Contesting the decision: living in (and living with) the smart city. International Review of Law, Computers and Technology. https://doi.org/10.1080/13600869.2018.1457000
- Lindblom, L. (2019). Consent, Contestability, and Unions. Business Ethics Quarterly. https://doi.org/10.1017/beq.2018.25
- Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 2053951716679679. https://doi.org/10.1177/2053951716679679
- Van de Poel, I. (2016). An ethical framework for evaluating experimental technology. Science and Engineering Ethics, 22(3), 667–686. https://doi.org/10.1007/s11948-015-9724-3
“Contestable Infrastructures: Designing for Dissent in Smart Public Objects” at We Make the City 2019
Thijs Turèl of AMS Institute and I presented a version of the talk below at the Cities for Digital Rights conference on June 19 in Amsterdam, during the We Make the City festival. The talk is an attempt to articulate some of the ideas we have both been developing for some time around contestability in smart public infrastructure. As always with this sort of thing, it is intended as a conversation piece, so I welcome any thoughts you may have.
The basic message of the talk is that when we start to do automated decision-making in public infrastructure using algorithmic systems, we need to design for the disagreements that will inevitably arise. Furthermore, we suggest there is an opportunity to focus on designing for such disagreements in the physical objects people encounter in urban space as they make use of infrastructure.
We set the scene by showing a number of examples of smart public infrastructure. A cyclist crossing that adapts to weather conditions: if it’s raining, cyclists get a green light more frequently. A pedestrian crossing in Tilburg where the elderly can use their mobile phone to get more time to cross. And finally, the case we are involved with ourselves: smart EV charging in the city of Amsterdam, about which more later.
We identify three trends in smart public infrastructure: (1) where previously algorithms were used to inform policy, now they are employed to perform automated decision-making on an individual case basis. This raises the stakes; (2) distributed ownership of these systems as the result of public-private partnerships and other complex collaboration schemes leads to unclear responsibility; and finally (3) the increasing use of machine learning leads to opaque decision-making.
These trends, and algorithmic systems more generally, raise a number of ethical concerns. They include but are not limited to: the use of inductive correlations (for example in the case of machine learning) leads to unjustified results; lack of access to and comprehension of a system’s inner workings produces opacity, which in turn leads to a lack of trust in the systems themselves and the organisations that use them; bias is introduced by a number of factors, including development team prejudices, technical flaws, bad data and unforeseen interactions with other systems; and finally the use of profiling, nudging and personalisation leads to diminished human agency. (We highly recommend the article by Mittelstadt et al. for a comprehensive overview of ethical concerns raised by algorithms.)
So for us, the question that emerges from all this is: How do we organise the supervision of smart public infrastructure in a democratic and lawful way?
There are a number of existing approaches to this question. These include legal and regulatory (e.g. the right to explanation in the GDPR); auditing (e.g. KPMG’s “AI in Control” method, BKZ’s transparantielab); procurement (e.g. open source clauses); insourcing (e.g. GOV.UK) and design and engineering (e.g. our own work on the transparent charging station).
We feel there are two important limitations with these existing approaches. The first is a focus on professionals and the second is a focus on prediction. We’ll discuss each in turn.
First of all, many solutions target a professional class: accountants, civil servants, supervisory boards, technologists, designers and so on. But we feel there is a role for the citizen as well, because the supervision of these systems is simply too important to be left to a privileged few. This role would include identifying wrongdoing and suggesting alternatives.
There is a tension here: from the public sector’s perspective, you should only ask citizens for their opinion when you have the intention and the resources to actually act on their suggestions. It can also be a challenge to identify legitimate concerns in the flood of feedback that can sometimes occur. From our point of view, though, such concerns should not be used as an excuse not to engage the public. If citizen participation is considered necessary, the focus should be on freeing up resources and setting up structures that make it feasible and effective.
The second limitation is prediction. This is best illustrated with the Collingridge dilemma: in the early phases of a new technology, when the technology and its social embedding are still malleable, there is uncertainty about its social effects. In later phases, social effects may be clear, but by then the technology has often become so entrenched in society that it is hard to overcome those negative effects. (This summary is taken from an excellent Van de Poel article on the ethics of experimental technology.)
Many solutions disregard the Collingridge dilemma and try to predict and prevent adverse effects of new systems at design-time. One example of this approach is value sensitive design. Our focus instead is on use-time. Considering that smart public infrastructure tends to be developed on an ongoing basis, the question becomes how to make citizens a partner in this process. And more specifically, we are interested in how this can be made part of the design of the “touchpoints” people actually encounter in the streets, as well as their backstage processes.
Why do we focus on these physical objects? Because this is where people actually meet the infrastructural systems, large parts of which recede from view. These objects are where people become aware of the systems’ presence. They are the proverbial tip of the iceberg.
The use of automated decision-making in infrastructure reduces people’s agency. For this reason, resources for agency need to be designed back into these systems. Frequently the answer is premised on a transparency ideal. Transparency may be a prerequisite for agency, but it is not sufficient: it may help you become aware of what is going on, but it will not necessarily help you act on that knowledge. This is why we propose a shift from transparency to contestability. (We can highly recommend Ananny and Crawford’s article for more on why transparency is insufficient.)
To clarify what we mean by contestability, consider the following three examples. When you see the lights on your router blink in the middle of the night when no one in your household is using the internet, you can act on this knowledge by yanking out the device’s power cord. You may never use the emergency brake in a train, but its presence does give you a sense of control. And finally, the cash register receipt provides you with a view into both the procedure and the outcome of the supermarket checkout, and it offers a resource with which you can dispute them if something appears to be wrong.
None of these examples is a perfect illustration of contestability but they hint at something more than transparency, or perhaps even something wholly separate from it. We’ve been investigating what their equivalents would be in the context of smart public infrastructure.
To illustrate this point further, let us come back to the smart EV charging project mentioned earlier. In Amsterdam, public EV charging stations are becoming “smart”, which in this case means they automatically adapt the speed of charging to a number of factors, including grid capacity and the availability of solar energy. Additional factors can be added in the future; one under consideration is giving priority to shared cars over privately owned cars. We are involved in an ongoing effort to consider how such charging stations can be redesigned so that people understand what’s going on behind the scenes and can act on this understanding. The motivation is that if not designed carefully, the opacity of smart EV charging infrastructure may be detrimental to social acceptance of the technology. (A first outcome of these efforts is the Transparent Charging Station designed by The Incredible Machine. A follow-up project is ongoing.)
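To make concrete what “adapting the speed of charging to a number of factors” could look like, here is a deliberately naive sketch. It is emphatically not the actual Amsterdam charging logic; the factor names, weights and the shared-car rule are all invented for illustration.

```python
# Illustrative sketch only; NOT the actual Amsterdam charging policy.
# All names, weights and thresholds here are invented for illustration.

def charging_rate_kw(grid_headroom_kw: float,
                     solar_share: float,
                     is_shared_car: bool,
                     max_rate_kw: float = 11.0) -> float:
    """Pick a charging rate from the kinds of factors named above."""
    # Never draw more than the local grid can currently absorb.
    rate = min(max_rate_kw, grid_headroom_kw)
    # Slow down when little solar energy is available (solar_share in 0..1).
    rate *= 0.5 + 0.5 * solar_share
    # A possible future policy: de-prioritise privately owned cars.
    if not is_shared_car:
        rate *= 0.8
    return round(rate, 1)

# A private car, ample grid capacity, half the energy mix from solar:
print(charging_rate_kw(grid_headroom_kw=20.0, solar_share=0.5,
                       is_shared_car=False))  # -> 6.6
```

The point of even such a toy version is that every constant in it encodes a policy choice, exactly the kind of thing a citizen might want to see, question, or contest.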
We have identified a number of different ways in which people may object to smart EV charging. They are listed in the table below. These types of objections can lead us to feature requirements for making the system contestable.
Because the list is preliminary, we asked the audience whether they could imagine additional objections, whether those represented new categories, and whether they would require additional features for people to be able to act on them. One particularly interesting suggestion was to give local communities control over the policies enacted by the charge points in their vicinity. That’s something whose implications deserve further exploration.
And that’s where we left it. So to summarise:
- Algorithmic systems are becoming part of public infrastructure.
- Smart public infrastructure raises new ethical concerns.
- Many solutions to ethical concerns are premised on a transparency ideal, but do not address the issue of diminished agency.
- There are different categories of objections people may have to an algorithmic system’s workings.
- Making a system contestable means creating resources for people to object, opening up a space for the exploration of meaningful alternatives to its current implementation.
Research Through Design Reading List
After posting the list of engineering ethics readings it occurred to me I also have a really nice collection of things to read from a course on research through design taught by Pieter Jan Stappers, which I took earlier this year. I figured some might get some use out of it and I like having it for my own reference here as well.
The backbone for this course is the chapter on research through design by Stappers and Giaccardi in The Encyclopedia of Human-Computer Interaction, which I highly recommend.
All of the readings below are referenced in that chapter. I’ve read some, quickly gutted others for meaning and the remainder is still on my to-read list. For me personally, the things on annotated portfolios and intermediate-level knowledge by Gaver and Löwgren were the most immediately useful and applicable. I’d read the Zimmerman paper earlier and although it’s pretty concrete in its prescriptions I did not really latch on to it.
- Brandt, Eva, and Thomas Binder. “Experimental design research: genealogy, intervention, argument.” International Association of Societies of Design Research, Hong Kong 10 (2007).
- Gaver, Bill, and John Bowers. “Annotated portfolios.” interactions 19.4 (2012): 40–49.
- Gaver, William. “What should we expect from research through design?” Proceedings of the SIGCHI conference on human factors in computing systems. ACM, 2012.
- Löwgren, Jonas. “Annotated portfolios and other forms of intermediate-level knowledge.” Interactions 20.1 (2013): 30–34.
- Stappers, Pieter Jan, F. Sleeswijk Visser, and A. I. Keller. “The role of prototypes and frameworks for structuring explorations by research through design.” The Routledge Companion to Design Research (2014): 163–174.
- Stappers, Pieter Jan. “Meta-levels in Design Research.”
- Stappers, Pieter Jan. “Prototypes as central vein for knowledge development.” Prototype: Design and craft in the 21st century (2013): 85–97.
- Wensveen, Stephan, and Ben Matthews. “Prototypes and prototyping in design research.” The Routledge Companion to Design Research. Taylor & Francis (2015).
- Zimmerman, John, Jodi Forlizzi, and Shelley Evenson. “Research through design as a method for interaction design research in HCI.” Proceedings of the SIGCHI conference on Human factors in computing systems. ACM, 2007.
Bonus level: several items related to “muddling through”…
- Flach, John M., and Fred Voorhorst. “What matters?: Putting common sense to work.” (2016).
- Lindblom, Charles E. “Still Muddling, Not Yet Through.” Public Administration Review 39.6 (1979): 517–26.
- Lindblom, Charles E. “The science of muddling through.” Public Administration Review 19.2 (1959): 79–88.
Engineering Ethics Reading List
I recently followed an excellent three-day course on engineering ethics. It was offered by the TU Delft graduate school and taught by Behnam Taebi, with guest lectures from several of our faculty.
I found it particularly helpful to get some suggestions for further reading that represent some of the foundational ideas in the field. I figured it would be useful to others as well to have a pointer to them.
So here they are. I’ve quickly gutted these for their meaning. The one by Van de Poel I did read entirely and can highly recommend for anyone who’s doing design of emerging technologies and wants to escape from the informed consent conundrum.
I intend to dig into the Doorn one, not just because she’s one of my promoters but also because resilience is a concept closely related to my own interests. I’ll also get into the Floridi one in detail; I found the concept of information quality, and the care ethics perspective on the problem of information abundance and attention scarcity, immediately applicable to interaction design.
- Stilgoe, Jack, Richard Owen, and Phil Macnaghten. “Developing a framework for responsible innovation.” Research Policy 42.9 (2013): 1568–1580.
- Van den Hoven, Jeroen. “Value sensitive design and responsible innovation.” Responsible innovation (2013): 75–83.
- Hansson, Sven Ove. “Ethical criteria of risk acceptance.” Erkenntnis 59.3 (2003): 291–309.
- Van de Poel, Ibo. “An ethical framework for evaluating experimental technology.” Science and Engineering Ethics 22.3 (2016): 667–686.
- Hansson, Sven Ove. “Philosophical problems in cost–benefit analysis.” Economics & Philosophy 23.2 (2007): 163–183.
- Floridi, Luciano. “Big Data and information quality.” The philosophy of information quality. Springer, Cham, 2014. 303–315.
- Doorn, Neelke, Paolo Gardoni, and Colleen Murphy. “A multidisciplinary definition and evaluation of resilience: The role of social justice in defining resilience.” Sustainable and Resilient Infrastructure (2018): 1–12.
We also got a draft of the intro chapter to a book on engineering and ethics that Behnam is writing. That looks very promising as well but I can’t share yet for obvious reasons.
‘Unboxing’ at Behavior Design Amsterdam #16
Below is a write-up of the talk I gave at the Behavior Design Amsterdam #16 meetup on Thursday, February 15, 2018.
I’d like to talk about the future of our design practice and what I think we should focus our attention on. It is all related to this idea of complexity and opening up black boxes. We’re going to take the scenic route, though. So bear with me.
Software Design
Two years ago I spent about half a year in Singapore.
While there I worked as product strategist and designer at a startup called ARTO, an art recommendation service. It shows you a random sample of artworks, you tell it which ones you like, and it will then start recommending pieces it thinks you like. In case you were wondering: yes, swiping left and right was involved.
We had this interesting problem of ingesting art from many different sources (mostly online galleries) with metadata of wildly varying levels of quality. So, using metadata to figure out which art to show was a bit of a non-starter. It should come as no surprise then, that we started looking into machine learning—image processing in particular.
And so I found myself working with my engineering colleagues on an art recommendation stream which was driven at least in part by machine learning. And I quickly realised we had a problem. In terms of how we worked together on this part of the product, it felt like we had taken a bunch of steps back in time. Back to a way of collaborating that was less integrated and less responsive.
That’s because we have all these nice tools and techniques for designing traditional software products. But software is deterministic. Machine learning is fundamentally different in nature: it is probabilistic.
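To give a feel for that difference, here is a toy contrast. This is not ARTO’s actual code; the learned case assumes a scikit-learn style classifier, and the names are invented for illustration.

```python
# Toy contrast; not ARTO's actual code.

def rule_based_likes(artwork: dict) -> bool:
    # Deterministic: the behaviour is fully specified by its author
    # and identical on every run.
    return artwork["style"] == "abstract"

def model_likes(model, features: list) -> float:
    # Probabilistic: the behaviour is learned from data, arrives as a
    # probability rather than a yes/no, and shifts whenever the model
    # is retrained. `model` is assumed to be a scikit-learn style
    # classifier exposing predict_proba.
    return model.predict_proba([features])[0][1]
```

You can reason about the first function by reading it; for the second, you have to probe the trained model to find out what it will actually do.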
It was hard for me to take the lead in the design of this part of the product for two reasons. First of all, it was challenging to get a first-hand feel of the machine learning feature before it was implemented.
And second of all, it was hard for me to communicate or visualise the intended behaviour of the machine learning feature to the rest of the team.
So when I came back to the Netherlands I decided to dig into this problem of design for machine learning. Turns out I opened up quite the can of worms for myself. But that’s okay.
There are two reasons I care about this:
The first is that I think we need more design-led innovation in the machine learning space. At the moment it is engineering-dominated, which doesn’t necessarily lead to useful outcomes. But if you want to take the lead in the design of machine learning applications, you need a firm handle on the nature of the technology.
The second reason why I think we need to educate ourselves as designers on the nature of machine learning is that we need to take responsibility for the impact the technology has on the lives of people. There is a lot of talk about ethics in the design industry at the moment. Which I consider a positive sign. But I also see a reluctance to really grapple with what ethics is and what the relationship between technology and society is. We seem to want easy answers, which is understandable because we are all very busy people. But having spent some time digging into this stuff myself I am here to tell you: There are no easy answers. That isn’t a bug, it’s a feature. And we should embrace it.
Machine Learning
At the end of 2016 I attended ThingsCon here in Amsterdam and I was introduced by Ianus Keller to TU Delft PhD researcher Péter Kun. It turns out we were both interested in machine learning. So with encouragement from Ianus we decided to put together a workshop that would enable industrial design master students to tangle with it in a hands-on manner.
About a year later, this has grown into a thing we call Prototyping the Useless Butler. During the workshop, you use machine learning algorithms to train a model that takes inputs from a network-connected Arduino’s sensors and drives that same Arduino’s actuators. In effect, you can create interactive behaviour without writing a single line of code. And you get a first-hand feel for how common applications of machine learning work: things like regression, classification and dynamic time warping.
The thing that makes this workshop tick is an open source software application called Wekinator, created by Rebecca Fiebrink. It was originally aimed at performing artists, so that they could build interactive instruments without writing code. But it takes inputs from anything and sends outputs to anything, so we appropriated it towards our own ends.
You can find everything related to Useless Butler on this GitHub repo.
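For a sense of how little code sits around Wekinator, here is a minimal sketch of the OSC plumbing (not the workshop code itself, which lives in the repo linked above). It assumes Wekinator’s default ports: it listens for inputs on 6448 at `/wek/inputs` and sends the trained model’s outputs to 12000 at `/wek/outputs`. Random numbers stand in for the Arduino’s sensors, and the `python-osc` package does the messaging.

```python
# Minimal Wekinator plumbing sketch, assuming Wekinator's default ports.
# Requires the python-osc package.
import random
import threading
import time

from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import ThreadingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

def handle_outputs(address, *values):
    # In the workshop, these model outputs would drive the Arduino's
    # actuators; here we just print them.
    print(address, values)

# Listen for Wekinator's model outputs on its default output port.
dispatcher = Dispatcher()
dispatcher.map("/wek/outputs", handle_outputs)
server = ThreadingOSCUDPServer(("127.0.0.1", 12000), dispatcher)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Stream "sensor" readings to Wekinator; a real setup would forward
# values read from the Arduino instead of random numbers.
client = SimpleUDPClient("127.0.0.1", 6448)
for _ in range(100):
    client.send_message("/wek/inputs", [random.random(), random.random()])
    time.sleep(0.1)
```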
The thinking behind this workshop is that for us designers to be able to think creatively about applications of machine learning, we need a granular understanding of the nature of the technology. The thing with designers is, we can’t really learn about such things from books. A lot of design knowledge is tacit, it emerges from our physical engagement with the world. This is why things like sketching and prototyping are such essential parts of our way of working. And so with useless butler we aim to create an environment in which you as a designer can gain tacit knowledge about the workings of machine learning.
Simply put, for a lot of us, machine learning is a black box. With Useless Butler, we open the black box a bit and let you peer inside. This should improve the odds of design-led innovation happening in the machine learning space. And it should also help with ethics. But it’s definitely not enough. Knowledge about the technology isn’t the only issue here. There are more black boxes to open.
Values
Which brings me back to that other black box: ethics. Like I already mentioned, there is a lot of talk in the tech industry about how we should “be more ethical”. But things are often reduced to the notion that designers should do no harm. As if ethics is a problem to be fixed instead of a thing to be practiced.
So I started to talk about this to people I know in academia and more than once this thing called Value Sensitive Design was mentioned. It should be no surprise to anyone that scholars have been chewing on this stuff for quite a while. One of the earliest references I came across, an essay by Batya Friedman in Interactions is from 1996! This is a lesson to all of us I think. Pay more attention to what the academics are talking about.
So, at the end of last year I dove into this topic. Our host Iskander Smit, Rob Maijers and I coordinate a grassroots community for tech workers called Tech Solidarity NL. We want to build technology that serves the needs of the many, not the few. Value Sensitive Design seemed like a good thing to dig into, and so we did.
I’m not going to dive into the details here. There’s a report on the Tech Solidarity NL website if you’re interested. But I will highlight a few things that value sensitive design asks us to consider that I think help us unpack what it means to practice ethical design.
First of all, values. Here’s how it is commonly defined in the literature:
“A value refers to what a person or group of people consider important in life.”
I like it because it’s common sense, right? But it also makes clear that there can never be one monolithic definition of what ‘good’ is in all cases. As we designers like to say: “it depends” and when it comes to values things are no different.
“Person or group” implies there can be various stakeholders. Value sensitive design distinguishes between direct and indirect stakeholders. The former have direct contact with the technology, the latter don’t but are affected by it nonetheless. Value sensitive design means taking both into account. So this blows up the conventional notion of a single user to design for.
Various stakeholder groups can have competing values and so to design for them means to arrive at some sort of trade-off between values. This is a crucial point. There is no such thing as a perfect or objectively best solution to ethical conundrums. Not in the design of technology and not anywhere else.
Value sensitive design encourages you to map stakeholders and their values. These will be different for every design project. Another approach is to use lists like the one pictured here as an analytical tool to think about how a design impacts various values.
Furthermore, during your design process you might not only think about the short-term impact of a technology, but also think about how it will affect things in the long run.
And similarly, you might think about the effects of a technology not only when a few people are using it, but also when it becomes wildly successful and everybody uses it.
There are tools out there that can help you think through these things. But so far much of the work in this area is happening on the academic side. I think there is an opportunity for us to create tools and case studies that will help us educate ourselves on this stuff.
There’s a lot more to say on this but I’m going to stop here. The point is, as with the nature of the technologies we work with, it helps to dig deeper into the nature of the relationship between technology and society. Yes, it complicates things. But that is exactly the point.
Privileging simple and scalable solutions over those adapted to local needs is socially, economically and ecologically unsustainable. So I hope you will join me in embracing complexity.
Starting a PhD
Today is the first official work day of my new doctoral researcher position at Delft University of Technology. After more than two years of laying the ground work, I’m starting out on a new challenge.
I remember sitting outside a Jewel coffee bar in Singapore [1] and going over the various options for whatever would be next after shutting down Hubbub. I knew I wanted to delve into the impact of machine learning and data science on interaction design. And largely through a process of elimination I felt the best place for me to do so would be inside academia.
Back in the Netherlands, with help from Ianus Keller, I started making inroads at TU Delft, my first choice for this kind of work. I had visited it on and off over the years, coaching students and doing guest lectures. I’d felt at home right away.
There were quite a few twists and turns along the way but now here we are. Starting this month I am a doctoral candidate at Delft University of Technology’s faculty of Industrial Design Engineering.
My research is provisionally titled ‘Intelligibility and Transparency of Smart Public Infrastructures: A Design Oriented Approach’. Its main object of study is the MX3D smart bridge. My supervisors are Gerd Kortuem and Neelke Doorn. And it’s all part of the NWO-funded project ‘BRIdging Data in the built Environment (BRIDE)’.
Below is a first rough abstract of the research. But in the months to come this is likely to change substantially as I start hammering out a proper research plan. I plan to post the occasional update on my work here, so if you’re interested your best bet is probably to do the old RSS thing. There’s social media too, of course. And I might set up a newsletter at some point. We’ll see.
If any of this resonates, do get in touch. I’d love to start a conversation with as many people as possible about this stuff.
Intelligibility and Transparency of Smart Public Infrastructures: A Design Oriented Approach
This PhD will explore how designers, technologists, and citizens can utilize rapid urban manufacturing and IoT technologies for designing urban space that expresses its intelligence from the intersection of people, places, activities and technology, not merely from the presence of cutting-edge technology. The key question is how smart public infrastructure, i.e. data-driven and algorithm-rich public infrastructures, can be understood by laypeople.
The design-oriented research will utilize a ‘research through design’ approach to develop a digital experience around the bridge and the surrounding urban space. During this extended design and making process the PhD student will conduct empirical research to investigate design choices and their implications for (1) new forms of participatory data-informed design processes, (2) the technology-mediated experience of urban space, (3) the emerging relationship between residents and “their” bridge, and (4) new forms of data-informed, citizen-led governance of public space.
[1] My Foursquare history and 750 Words archive tell me this was on Saturday, January 16, 2016.
‘Playful Design for Workplace Change Management’ at PLAYTrack conference 2017 in Aarhus
At the end of last year I was invited to speak at the PLAYTrack conference in Aarhus about the workplace change management games made by Hubbub. It turned out to be a great opportunity to reconnect with the play research community.
I was very much impressed by the program assembled by the organisers. People came from a wide range of disciplines and crucially, there was ample time to discuss and reflect on the materials presented. As I tweeted afterwards, this is a thing that most conference organisers get wrong.
Back in Utrecht after a wonderful time in Århus attending #PLAYTrack. The lectures were uniformly fascinating but the one thing this conference really got right was the ample time to reflect and discuss. Really elevates the experience to something more than the usual info dump.
— Kars Alfrink (@kaeru) December 8, 2017
I was particularly inspired by the work of Benjamin Mardell and Mara Krechevsky at Harvard’s Project Zero – Making Learning Visible looks like a great resource for anyone who teaches. Then there was Reed Stevens from Northwestern University whose project FUSE is one of the most solid examples of playful learning for STEAM I’ve seen thus far. I was also fascinated by Ciara Laverty’s work at PEDAL on observing parent-child play. Miguel Sicart delivered another great provocation on the dark side of playful design. And finally I was delighted to hear about and experience for myself some of Amos Blanton’s work at the LEGO Foundation. I should also call out Ben Fincham’s many provocative contributions from the audience.
The abstract for my talk is below, which covers most of what I talked about. I tried to give people a good sense of:
- what the games consisted of,
- what we were aiming to achieve,
- how both the fiction and the player activities supported these goals,
- how we made learning outcomes visible to our players and clients,
- and finally how we went about designing and developing these games.
Both projects have solid write-ups over at the Hubbub website, so I’ll just point to those here: Code 4 and Ripple Effect.
In the final section of the talk I spent a bit of time reflecting on how I would approach projects like this today. After all, it has been seven years since we made Code 4, and four years since Ripple Effect. That’s ages ago, and my perspective has definitely changed since we made these.
Participatory design
First of all, I would get even more serious about co-designing with players at every step. I would recruit representatives of players and invest them with real influence. In the projects we did, the primary vehicle for player influence was through playtesting. But this is necessarily limited. I also won’t pretend this is at all easy to do in a commercial context.
But, these games are ultimately about improving worker productivity. So how do we make it so that workers share in the real-world profits yielded by a successful culture change?
I know of the existence of participatory design, but in my experience it is not a common approach in the industry. Why?
Value sensitive design
On a related note, I would get more serious about which values are supported by the system, in whose interest they are, and where they come from. Early field research and workshops with the audience do surface some values, but values from customer representatives tend to dominate. Again, the commercial context we work in is a potential challenge.
I know of value sensitive design, but as with participatory design, it has yet to catch on in a big way in the industry. So again, why is that?
Disintermediation
One thing I continue to be interested in is reducing the complexity of a game system’s physical affordances (which include its code), and pushing even more of the substance of the game into the social allowances that make up its non-material aspects. This allows for spontaneous renegotiation of the game by the players. This is disintermediation as a strategy. David Kanaga’s take on games as toys remains hugely inspirational in this regard, as does Bernard De Koven’s book The Well-Played Game.
Gamefulness versus playfulness
Code 4 had more focus on satisfying the need for autonomy. Ripple Effect had more focus on competence, or in any case, it had less emphasis on autonomy. There was less room for ‘play’ around the core digital game. It seems to me that mastering a subjective simulation of a subject is not necessarily what a workplace game for culture change should be aiming for. So, less gameful design, more playful design.
Adaptation
Finally, the agency model does not enable us to stick around for the long haul. But workplace games might be better suited to a setup where things aren’t thought of as a one-off project but more of an ongoing process.
In How Buildings Learn, Stewart Brand talks about how architects should revisit buildings they’ve designed after they are built, to learn how people are actually using them. He also talks about how good buildings are ones that their inhabitants can adapt to their needs. What does that look like in the context of a game for workplace culture change?
Playful Design for Workplace Change Management
Code 4 (2011, commissioned by the Tax Administration of the Netherlands) and Ripple Effect (2013, commissioned by Royal Dutch Shell) are both games for workplace change management designed and developed by Hubbub, a boutique playful design agency which operated from Utrecht, The Netherlands and Berlin, Germany between 2009 and 2015. These games are examples of how a goal-oriented serious game can be used to encourage playful appropriation of workplace infrastructure and social norms, resulting in an open-ended and creative exploration of new and innovative ways of working.
Serious game projects are usually commissioned to solve problems. Solving the problem of cultural change in a straightforward manner means viewing games as a way to persuade workers of a desired future state. They typically take videogame form, simulating the desired new way of working as determined by management. To play the game well, players need to master its system and by extension—it is assumed—learning happens.
These games can be enjoyable experiences and an improvement on previous forms of workplace learning, but in our view they decrease the possibility space of potential workplace cultural change. They diminish worker agency, and they waste the creative and innovative potential of involving workers in the invention of an improved workplace culture.
We instead choose to view workplace games as an opportunity to increase the space of possibility. We resist the temptation to bake the desired new way of working into the game’s physical and digital affordances. Instead, we leave how to play well up to the players. Since these games are team-based and collaborative, players need to negotiate their way of working around the game among themselves. In addition, because the games are distributed in time—running over a number of weeks—and are playable at player discretion during the workday, players are given license to appropriate workplace infrastructure and subvert social norms towards in-game ends.
We tried to make learning tangible in various ways. Because the games at the core are web applications to which players log on with individual accounts we were able to collect data on player behaviour. To guarantee privacy, employers did not have direct access to game databases and only received anonymised reports. We took responsibility for player learning by facilitating coaching sessions in which they could safely reflect on their game experiences. Rounding out these efforts, we conducted surveys to gain insight into the player experience from a more qualitative and subjective perspective.
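As an indication of what “anonymised reports” can mean in practice, here is a hedged sketch. The actual Hubbub pipeline is not public; the data shapes and the suppression threshold below are assumptions. The idea is to aggregate per-player events to team level and suppress any group small enough to identify individuals.

```python
# Hedged sketch of anonymised reporting; not Hubbub's actual pipeline.
from collections import Counter, defaultdict

MIN_GROUP_SIZE = 5  # assumed threshold below which a group is suppressed

def anonymised_report(events):
    """events: iterable of (team, player_id, action) tuples."""
    players = defaultdict(set)
    counts = Counter()
    for team, player_id, action in events:
        players[team].add(player_id)
        counts[(team, action)] += 1
    # Only report on teams large enough that counts can't be traced
    # back to an individual player.
    return {key: n for key, n in counts.items()
            if len(players[key[0]]) >= MIN_GROUP_SIZE}
```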
These games offer a model for a reasonably democratic and ethical way of doing game-based workplace change management. However, we would like to see efforts that further democratise their design and development—involving workers at every step. We also worry about how games can be used to create the illusion of worker influence while at the same time software is deployed throughout the workplace to limit their agency.
Our examples may be inspiring but because of these developments we feel we can’t continue this type of work without seriously reconsidering our current processes, technology stacks and business practices—and ultimately whether we should be making games at all.
Design and machine learning – an annotated reading list
Earlier this year I coached Design for Interaction master students at Delft University of Technology in the course Research Methodology. The students organised three seminars for which I provided the claims and assigned reading. In the seminars they argued about my claims using the Toulmin Model of Argumentation. The readings served as sources for backing and evidence.
The claims and readings were all related to my nascent research project about machine learning. We delved into both designing for machine learning, and using machine learning as a design tool.
Below are the readings I assigned, with some notes on each, which should help you decide if you want to dive into them yourself.
Hebron, Patrick. 2016. Machine Learning for Designers. Sebastopol: O’Reilly.
The only non-academic piece in this list. This served the purpose of getting all students on the same page with regard to what machine learning is, its applications in interaction design, and common challenges encountered. I still can’t think of any other single resource that is as good a starting point for the subject.
Fiebrink, Rebecca. 2016. “Machine Learning as Meta-Instrument: Human-Machine Partnerships Shaping Expressive Instrumental Creation.” In Musical Instruments in the 21st Century, 14:137–51. Singapore: Springer Singapore. doi:10.1007/978-981-10-2951-6_10.
Fiebrink’s Wekinator is groundbreaking, fun and inspiring so I had to include some of her writing in this list. This is mostly of interest for those looking into the use of machine learning for design and other creative and artistic endeavours. An important idea explored here is that tools that make use of (interactive, supervised) machine learning can be thought of as instruments. Using such a tool is like playing or performing, exploring a possibility space, engaging in a dialogue with the tool. For a tool to feel like an instrument requires a tight action-feedback loop.
Dove, Graham, Kim Halskov, Jodi Forlizzi, and John Zimmerman. 2017. “UX Design Innovation: Challenges for Working with Machine Learning as a Design Material.” In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. New York: ACM. doi:10.1145/3025453.3025739.
A really good survey of how designers currently deal with machine learning. Key takeaways include that in most cases, the application of machine learning is still engineering-led as opposed to design-led, which hampers the creation of non-obvious machine learning applications. It also makes it hard for designers to consider ethical implications of design choices. A key reason for this is that at the moment, prototyping with machine learning is prohibitively cumbersome.
Fiebrink, Rebecca, Perry R Cook, and Dan Trueman. 2011. “Human Model Evaluation in Interactive Supervised Learning.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 147. New York: ACM Press. doi:10.1145/1978942.1978965.
The second Fiebrink piece in this list, which is more of a deep dive into how people use Wekinator. As with the chapter listed above this is required reading for those working on design tools which make use of interactive machine learning. An important finding here is that users of intelligent design tools might have very different criteria for evaluating the ‘correctness’ of a trained model than engineers do. Such criteria are likely subjective and evaluation requires first-hand use of the model in real time.
Bostrom, Nick, and Eliezer Yudkowsky. 2014. “The Ethics of Artificial Intelligence.” In The Cambridge Handbook of Artificial Intelligence, edited by Keith Frankish and William M Ramsey, 316–34. Cambridge: Cambridge University Press. doi:10.1017/CBO9781139046855.020.
Bostrom is known for his somewhat crazy but thought-provoking book on superintelligence, and although a large part of this chapter is about the ethics of general artificial intelligence (which at the very least is still some way off), the first section discusses the ethics of current “narrow” artificial intelligence. It makes for a good checklist of things designers should keep in mind when they create new applications of machine learning. Key insight: when a machine learning system takes on work with social dimensions—tasks previously performed by humans—the system inherits its social requirements.
Yang, Qian, John Zimmerman, Aaron Steinfeld, and Anthony Tomasic. 2016. “Planning Adaptive Mobile Experiences When Wireframing.” In Proceedings of the 2016 ACM Conference on Designing Interactive Systems. New York: ACM. doi:10.1145/2901790.2901858.
Finally, a feet-in-the-mud exploration of what it actually means to design for machine learning with the tools most commonly used by designers today: drawings and diagrams of various sorts. In this case the focus is on using machine learning to make an interface adaptive. It includes an interesting discussion of how to balance the use of implicit and explicit user inputs for adaptation, and how to deal with inference errors. Once again the limitations of current sketching and prototyping tools are mentioned, and related to the need for designers to develop tacit knowledge about machine learning. Such tacit knowledge will only be gained when designers can work with machine learning in a hands-on manner.
Supplemental material
Floyd, Christiane. 1984. “A Systematic Look at Prototyping.” In Approaches to Prototyping, 1–18. Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-642-69796-8_1.
I provided this to students so that they get some additional grounding in the various kinds of prototyping that are out there. It helps to prevent reductive notions of prototyping, and it makes for a nice complement to Buxton’s work on sketching.
Blevis, E, Y Lim, and E Stolterman. 2006. “Regarding Software as a Material of Design.”
Some of the papers refer to machine learning as a “design material” and this paper helps to understand what that idea means. Software is a material without qualities (it is extremely malleable, it can simulate nearly anything). Yet, it helps to consider it as a physical material in the metaphorical sense because we can then apply ways of design thinking and doing to software programming.
Status update
This is not exactly a now page, but I thought I would write up what I am doing at the moment since last reporting on my status in my end-of-year report.
The majority of my workdays are spent doing freelance design consulting. My primary gig has been through Eend at the Dutch Victim Support Foundation, where until very recently I was part of a team building online services. I helped out with product strategy, setting up a lean UX design process, and getting an integrated agile design and development team up and running. The first services are now shipping so it is time for me to move on, after 10 months of very gratifying work. I really enjoy working in the public sector and I hope to be doing more of it in future.
So yes, this means I am available and you can hire me to do strategy and design for software products and services. Just send me an email.
Shortly before the Dutch national elections of this year, Iskander and I gathered a group of fellow tech workers under the banner of “Tech Solidarity NL” to discuss the concerning lurch to the right in national politics and what our field can do about it. This has developed into a small but active community who gather monthly to educate ourselves and develop plans for collective action. I am getting a huge boost out of this. Figuring out how to be a leftist in this day and age is not easy. The only way to do it is to practice, and for that, reflection with peers is invaluable. Building and facilitating a group like this is hugely educational too. I have learned a lot about how a community is bootstrapped and nurtured.
If you are in the Netherlands, your politics are left of center, and you work in technology, consider yourself invited to join.
And finally, the last major thing on my plate is a continuing effort to secure a PhD position for myself. I am getting great support from people at Delft University of Technology, in particular Gerd Kortuem. I am focusing on internet of things products that have features driven by machine learning. My ultimate aim is to develop prototyping tools for design and development teams that will help them create more innovative and more ethical solutions. The first step for this will be to conduct field research inside companies who are creating such products right now. So I am reaching out to people to see if I can secure a reasonable number of potential collaborators, which will go a long way towards proving the feasibility of my whole plan.
If you know of any companies that develop consumer-facing products that have a connected hardware component and make use of machine learning to drive features, do let me know.
That’s about it. Freelance UX consulting, leftist tech-worker organising and design-for-machine-learning research. Quite happy with that mix, really.