I was recently in touch with an international “thought leader” in the field of “tech ethics”. He told me he is very grateful that the transparent charging station exists, because it is such a good example of how design can contribute to fair technology.
That is of course wonderful to hear. And it fits a broader trend in the industry towards making algorithms transparent and explainable. By now, legislation even makes explainability mandatory in some cases.
In the documentary you hear several people (myself included) explain why it is important for urban algorithms to be transparent. Thijs neatly identifies two reasons: on the one hand, the collective interest of enabling democratic oversight of the development of urban algorithms; on the other, the individual interest of being able to seek redress when a system makes a decision you disagree with (for whatever reason).
And indeed, in both cases (collective oversight and individual redress) transparency is a precondition. I think that with this project we have solved a lot of the design and engineering problems that come with it. At the same time, a new question looms on the horizon: if we understand how a smart system works, and we disagree with it, what then? How do you then actually gain influence over the workings of the system?
I think we will have to shift our focus from transparency to what I call tegenspraak, or in English, “contestability”.
Designing for contestability means thinking about the means people need to exercise their right to human intervention. Yes, this means providing information about the how and why of individual decisions. Transparency, in other words. But it also means setting up new channels and processes through which people can submit requests to have a decision reviewed. We will have to think about how we assess such requests, and how we make sure the smart system in question “learns” from the signals we pick up from society in this way.
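To make this concrete, here is a minimal sketch of what such a review channel could look like as a data structure and a human-in-the-loop step. All names and fields are hypothetical illustrations of the idea, not a description of any existing system.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class ReviewStatus(Enum):
    SUBMITTED = "submitted"
    UPHELD = "upheld"          # the original automated decision stands
    OVERTURNED = "overturned"  # the decision is revised after human review


@dataclass
class ReviewRequest:
    """A citizen's request to have an automated decision reviewed."""
    decision_id: str
    explanation: str            # the system's account of how and why it decided
    objection: str              # the citizen's grounds for disagreement
    status: ReviewStatus = ReviewStatus.SUBMITTED
    reviewer_notes: Optional[str] = None


def resolve(request: ReviewRequest, overturn: bool, notes: str) -> ReviewRequest:
    """A human reviewer resolves the request. The logged outcome is the 'signal'
    the operators can later use to adjust the system's behaviour."""
    request.status = ReviewStatus.OVERTURNED if overturn else ReviewStatus.UPHELD
    request.reviewer_notes = notes
    return request
```

The point of the sketch is that contestability needs more than the explanation field (transparency): it needs the objection, the reviewer, and a record that feeds back into development.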
You could say that designing for transparency is one-way traffic: information flows from the developing party to the end user. Designing for contestability is about creating a dialogue between developers and citizens.
I say citizens, because it is not only classic end users who are affected by smart systems. All sorts of other groups are affected as well, often indirectly.
That is also a new design challenge. How do you design not only for the end user (in the case of the transparent charging station, the EV driver) but also for so-called indirect stakeholders: residents of streets where charging stations are installed who do not drive an EV, or even a car, but who nevertheless have a stake in how sidewalks and streets are laid out?
This broadening of our field of view means that, when designing for contestability, we can and even must go a step further than enabling redress for individual decisions.
After all, designing for contestability of individual decisions made by an already deployed system is necessarily post hoc and reactive, and limited to a single group of stakeholders.
As Thijs also more or less points out in the documentary, smart urban infrastructure affects all of our lives, and you could argue that the design and engineering choices made in its development are intrinsically political choices as well.
That is why I think we cannot avoid organising the process underlying these systems themselves in such a way that there is room for contestation. In my ideal world, the development of a next generation of smart charging stations would therefore be participatory, pluralistic and inclusive, just as our democracy itself aspires to be.
Exactly how we should shape these “contestable” algorithms, how designing for contestability should work, is an open question. But a number of years ago nobody knew what a transparent charging station should look like either, and we managed to pull that off.
I’ll be at Beyond Smart Cities Today the next couple of days (18–19 September). Below is the abstract I submitted, plus a bibliography of some of the stuff that went into my thinking for this and related matters that I won’t have the time to get into.
In the actually existing smart city, algorithmic systems are increasingly used for the purposes of automated decision-making, including as part of public infrastructure. Algorithmic systems raise a range of ethical concerns, many of which stem from their opacity. As a result, prescriptions for improving the accountability, trustworthiness and legitimacy of algorithmic systems are often based on a transparency ideal. The thinking goes that if the functioning and ownership of an algorithmic system are made perceivable, people can understand it and are in turn able to supervise it. However, there are limits to this approach. Algorithmic systems are complex and ever-changing socio-technical assemblages. Rendering them visible is not a straightforward design and engineering task. Furthermore, such transparency does not necessarily lead to understanding or, crucially, the ability to act on this understanding. We believe legitimate smart public infrastructure needs to include the possibility for subjects to articulate objections to procedures and outcomes. The resulting “contestable infrastructure” would create spaces that open up the possibility for expressing conflicting views on the smart city. Our project is to explore the design implications of this line of reasoning for the physical assets that citizens encounter in the city. After all, these are the perceivable elements of the larger infrastructural systems that recede from view.
Alkhatib, A., & Bernstein, M. (2019). Street-Level Algorithms. 1–13. https://doi.org/10.1145/3290605.3300760
Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media and Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645
Centivany, A., & Glushko, B. (2016). “Popcorn tastes good”: Participatory policymaking and Reddit’s “AMAgeddon.” Conference on Human Factors in Computing Systems — Proceedings, 1126–1137. https://doi.org/10.1145/2858036.2858516
Crawford, K. (2016). Can an Algorithm be Agonistic? Ten Scenes from Life in Calculated Publics. Science Technology and Human Values, 41(1), 77–92. https://doi.org/10.1177/0162243915589635
DiSalvo, C. (2010). Design, Democracy and Agonistic Pluralism. Proceedings of the Design Research Society Conference, 366–371.
Hildebrandt, M. (2017). Privacy As Protection of the Incomputable Self: Agonistic Machine Learning. SSRN Electronic Journal, 1–33. https://doi.org/10.2139/ssrn.3081776
Jackson, S. J., Gillespie, T., & Payette, S. (2014). The Policy Knot: Re-integrating Policy, Practice and Design. CSCW Studies of Social Computing, 588–602. https://doi.org/10.1145/2531602.2531674
Jewell, M. (2018). Contesting the decision: living in (and living with) the smart city. International Review of Law, Computers and Technology. https://doi.org/10.1080/13600869.2018.1457000
Lindblom, L. (2019). Consent, Contestability, and Unions. Business Ethics Quarterly. https://doi.org/10.1017/beq.2018.25
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2). https://doi.org/10.1177/2053951716679679
Van de Poel, I. (2016). An ethical framework for evaluating experimental technology. Science and Engineering Ethics, 22(3), 667–686. https://doi.org/10.1007/s11948-015-9724-3
Thijs Turèl of AMS Institute and I presented a version of the talk below at the Cities for Digital Rights conference on June 19 in Amsterdam during the We Make the City festival. The talk is an attempt to articulate some of the ideas we both have been developing for some time around contestability in smart public infrastructure. As always with this sort of thing, this is intended as a conversation piece, so I welcome any thoughts you may have.
The basic message of the talk is that when we start to do automated decision-making in public infrastructure using algorithmic systems, we need to design for the inevitable disagreements that may arise and furthermore, we suggest there is an opportunity to focus on designing for such disagreements in the physical objects that people encounter in urban space as they make use of infrastructure.
We set the scene by showing a number of examples of smart public infrastructure. A cyclist crossing that adapts to weather conditions: if it's raining, cyclists get a green light more frequently. A pedestrian crossing in Tilburg where elderly people can use their mobile phones to get more time to cross. And finally, the case we are involved with ourselves: smart EV charging in the city of Amsterdam, about which more later.
We identify three trends in smart public infrastructure: (1) where previously algorithms were used to inform policy, now they are employed to perform automated decision-making on an individual case basis. This raises the stakes; (2) distributed ownership of these systems as the result of public-private partnerships and other complex collaboration schemes leads to unclear responsibility; and finally (3) the increasing use of machine learning leads to opaque decision-making.
These trends, and algorithmic systems more generally, raise a number of ethical concerns. They include but are not limited to: the use of inductive correlations (for example in the case of machine learning) leads to unjustified results; lack of access to and comprehension of a system’s inner workings produces opacity, which in turn leads to a lack of trust in the systems themselves and the organisations that use them; bias is introduced by a number of factors, including development team prejudices, technical flaws, bad data and unforeseen interactions with other systems; and finally the use of profiling, nudging and personalisation leads to diminished human agency. (We highly recommend the article by Mittelstadt et al. for a comprehensive overview of ethical concerns raised by algorithms.)
So for us, the question that emerges from all this is: How do we organise the supervision of smart public infrastructure in a democratic and lawful way?
There are a number of existing approaches to this question. These include legal and regulatory (e.g. the right to explanation in the GDPR); auditing (e.g. KPMG’s “AI in Control” method, BKZ’s transparantielab); procurement (e.g. open source clauses); insourcing (e.g. GOV.UK) and design and engineering (e.g. our own work on the transparent charging station).
We feel there are two important limitations with these existing approaches. The first is a focus on professionals and the second is a focus on prediction. We’ll discuss each in turn.
First of all, many solutions target a professional class: accountants, civil servants and supervisory boards, as well as technologists, designers and so on. But we feel there is a role for the citizen as well, because the supervision of these systems is simply too important to be left to a privileged few. This role would include identifying wrongdoing and suggesting alternatives.
There is a tension here: from the perspective of the public sector, one should only ask citizens for their opinion when there is the intention and the resources to actually act on their suggestions. It can also be a challenge to identify legitimate concerns in the flood of feedback that can sometimes occur. From our point of view though, such concerns should not be used as an excuse to not engage the public. If citizen participation is considered necessary, the focus should be on freeing up resources and setting up structures that make it feasible and effective.
The second limitation is prediction. This is best illustrated with the Collingridge dilemma: in the early phases of a new technology, when the technology and its social embedding are still malleable, there is uncertainty about its social effects. In later phases, social effects may be clear, but by then the technology has often become so well entrenched in society that it is hard to overcome negative social effects. (This summary is taken from an excellent article by Van de Poel on the ethics of experimental technology.)
Many solutions disregard the Collingridge dilemma and try to predict and prevent adverse effects of new systems at design-time. One example of this approach would be value-sensitive design. Our focus instead is on use-time. Considering the fact that smart public infrastructure tends to be developed on an ongoing basis, the question becomes how to make citizens a partner in this process. And even more specifically, we are interested in how this can be made part of the design of the “touchpoints” people actually encounter in the streets, as well as their backstage processes.
Why do we focus on these physical objects? Because this is where people actually meet the infrastructural systems, of which large parts recede from view. These are the places where they become aware of their presence. They are the proverbial tip of the iceberg.
The use of automated decision-making in infrastructure reduces people's agency. For this reason, resources for agency need to be designed back into these systems. Frequently, the answer is premised on a transparency ideal. Transparency may be a prerequisite for agency, but it is not sufficient. It may help you become aware of what is going on, but it will not necessarily help you act on that knowledge. This is why we propose a shift from transparency to contestability. (We can highly recommend Ananny and Crawford's article for more on why transparency is insufficient.)
To clarify what we mean by contestability, consider the following three examples: When you see the lights on your router blink in the middle of the night when no-one in your household is using the internet you can act on this knowledge by yanking out the device’s power cord. You may never use the emergency brake in a train but its presence does give you a sense of control. And finally, the cash register receipt provides you with a view into both the procedure and the outcome of the supermarket checkout procedure and it offers a resource with which you can dispute them if something appears to be wrong.
None of these examples is a perfect illustration of contestability but they hint at something more than transparency, or perhaps even something wholly separate from it. We’ve been investigating what their equivalents would be in the context of smart public infrastructure.
To illustrate this point further let us come back to the smart EV charging project we mentioned earlier. In Amsterdam, public EV charging stations are becoming “smart” which in this case means they automatically adapt the speed of charging to a number of factors. These include grid capacity, and the availability of solar energy. Additional factors can be added in future, one of which under consideration is to give priority to shared cars over privately owned cars. We are involved with an ongoing effort to consider how such charging stations can be redesigned so that people understand what’s going on behind the scenes and can act on this understanding. The motivation for this is that if not designed carefully, the opacity of smart EV charging infrastructure may be detrimental to social acceptance of the technology. (A first outcome of these efforts is the Transparent Charging Station designed by The Incredible Machine. A follow-up project is ongoing.)
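As an illustration of the kind of policy such a charging station might enact, here is a rough sketch combining the factors mentioned above (grid capacity, solar availability, and the possible priority for shared cars). The function, the weights and the factor names are all hypothetical; this is not the actual Amsterdam charging logic.

```python
def charging_rate_kw(max_rate_kw: float,
                     grid_headroom: float,  # 0..1, spare capacity on the local grid
                     solar_share: float,    # 0..1, share of demand covered by solar
                     is_shared_car: bool) -> float:
    """Hypothetical policy: scale charging speed with grid headroom,
    boost it when solar energy is abundant, and prioritise shared cars."""
    rate = max_rate_kw * grid_headroom
    rate *= 1.0 + 0.5 * solar_share   # charge faster on sunny days
    if is_shared_car:
        rate *= 1.2                   # priority for shared vehicles
    return min(rate, max_rate_kw)     # never exceed the hardware limit
```

Even a toy policy like this makes clear what a driver might want to contest: the weights, the factors, and who gets to decide on them.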
We have identified a number of different ways in which people may object to smart EV charging. They are listed in the table below. These types of objections can lead us to feature requirements for making the system contestable.
Because the list is preliminary, we asked the audience if they could imagine additional objections, if those examples represented new categories, and if they would require additional features for people to be able to act on them. One particularly interesting suggestion that emerged was to give local communities control over the policies enacted by the charge points in their vicinity. The implications of that are worth exploring further.
And that’s where we left it. So to summarise:
Algorithmic systems are becoming part of public infrastructure.
Smart public infrastructure raises new ethical concerns.
Many solutions to ethical concerns are premised on a transparency ideal, but do not address the issue of diminished agency.
There are different categories of objections people may have to an algorithmic system’s workings.
Making a system contestable means creating resources for people to object, opening up a space for the exploration of meaningful alternatives to its current implementation.
After posting the list of engineering ethics readings it occurred to me I also have a really nice collection of things to read from a course on research through design taught by Pieter Jan Stappers, which I took earlier this year. I figured some might get some use out of it and I like having it for my own reference here as well.
The backbone for this course is the chapter on research through design by Stappers and Giaccardi in the encyclopedia of human-computer interaction, which I highly recommend.
All of the readings below are referenced in that chapter. I’ve read some, quickly gutted others for meaning and the remainder is still on my to-read list. For me personally, the things on annotated portfolios and intermediate-level knowledge by Gaver and Löwgren were the most immediately useful and applicable. I’d read the Zimmerman paper earlier and although it’s pretty concrete in its prescriptions I did not really latch on to it.
Brandt, Eva, and Thomas Binder. “Experimental design research: genealogy, intervention, argument.” International Association of Societies of Design Research, Hong Kong 10 (2007).
Gaver, Bill, and John Bowers. “Annotated portfolios.” Interactions 19.4 (2012): 40–49.
Gaver, William. “What should we expect from research through design?” Proceedings of the SIGCHI conference on human factors in computing systems. ACM, 2012.
Löwgren, Jonas. “Annotated portfolios and other forms of intermediate-level knowledge.” Interactions 20.1 (2013): 30–34.
Stappers, Pieter Jan, F. Sleeswijk Visser, and A. I. Keller. “The role of prototypes and frameworks for structuring explorations by research through design.” The Routledge Companion to Design Research (2014): 163–174.
Stappers, Pieter Jan. “Meta-levels in Design Research.”
Stappers, Pieter Jan. “Prototypes as central vein for knowledge development.” Prototype: Design and craft in the 21st century (2013): 85–97.
Wensveen, Stephan, and Ben Matthews. “Prototypes and prototyping in design research.” The Routledge Companion to Design Research. Taylor & Francis (2015).
Zimmerman, John, Jodi Forlizzi, and Shelley Evenson. “Research through design as a method for interaction design research in HCI.” Proceedings of the SIGCHI conference on Human factors in computing systems. ACM, 2007.
Bonus level: several items related to “muddling through”…
Flach, John M., and Fred Voorhorst. “What matters?: Putting common sense to work.” (2016).
Lindblom, Charles E. “Still Muddling, Not Yet Through.” Public Administration Review 39.6 (1979): 517–26.
Lindblom, Charles E. “The science of muddling through.” Public Administration Review 19.2 (1959): 79–88.
I recently followed an excellent three-day course on engineering ethics. It was offered by the TU Delft graduate school and taught by Behnam Taibi with guest lectures from several of our faculty.
I found it particularly helpful to get some suggestions for further reading that represent some of the foundational ideas in the field. I figured it would be useful to others as well to have a pointer to them.
So here they are. I’ve quickly gutted these for their meaning. The one by Van de Poel I did read entirely and can highly recommend for anyone who’s doing design of emerging technologies and wants to escape from the informed consent conundrum.
I intend to dig into the Doorn one, not just because she’s one of my promoters but also because resilience is a concept that is closely related to my own interests. I’ll also get into the Floridi one in detail but the concept of information quality and the care ethics perspective on the problem of information abundance and attention scarcity I found immediately applicable in interaction design.
Stilgoe, Jack, Richard Owen, and Phil Macnaghten. “Developing a framework for responsible innovation.” Research Policy 42.9 (2013): 1568–1580.
Van den Hoven, Jeroen. “Value sensitive design and responsible innovation.” Responsible innovation (2013): 75–83.
Hansson, Sven Ove. “Ethical criteria of risk acceptance.” Erkenntnis 59.3 (2003): 291–309.
Van de Poel, Ibo. “An ethical framework for evaluating experimental technology.” Science and Engineering Ethics 22.3 (2016): 667–686.
Hansson, Sven Ove. “Philosophical problems in cost–benefit analysis.” Economics & Philosophy 23.2 (2007): 163–183.
Floridi, Luciano. “Big Data and information quality.” The philosophy of information quality. Springer, Cham, 2014. 303–315.
Doorn, Neelke, Paolo Gardoni, and Colleen Murphy. “A multidisciplinary definition and evaluation of resilience: The role of social justice in defining resilience.” Sustainable and Resilient Infrastructure (2018): 1–12.
We also got a draft of the intro chapter to a book on engineering and ethics that Behnam is writing. That looks very promising as well but I can’t share yet for obvious reasons.
In this workshop we will take a deep dive into some of the challenges of designing smart public infrastructure.
Smart city ideas are moving from hype into reality. The everyday things that our contemporary world runs on, such as roads, railways and canals, are not immune to this development. Basic, “hard” infrastructure is being augmented with internet-connected sensing, processing and actuating capabilities. We are involved as practitioners and researchers in one such project: the MX3D smart bridge, a pedestrian bridge 3D printed from stainless steel and equipped with a network of sensors.
The question facing everyone involved with these developments, from citizens to professionals to policy makers is how to reap the potential benefits of these technologies, without degrading the urban fabric. For this to happen, information technology needs to become more like the city: open-ended, flexible and adaptable. And we need methods and tools for the diverse range of stakeholders to come together and collaborate on the design of truly intelligent public infrastructure.
We will explore these questions in this workshop by first walking you through the architecture of the MX3D smart bridge—offering a uniquely concrete and pragmatic view into a cutting edge smart city project. Subsequently we will together explore the question: What should a smart pedestrian bridge that is aware of itself and its surroundings be able to tell us? We will conclude by sharing some of the highlights from our conversation, and make note of particularly thorny questions that require further work.
The workshop’s structure was quite simple. After a round of introductions, Alec introduced the MX3D bridge to the participants. For a sense of what that introduction talk was like, I recommend viewing this recording of a presentation he delivered at a recent Pakhuis de Zwijger event.
We then ran three rounds of group discussion in the style of a world café. Each discussion was guided by one question. Participants were asked to write, draw and doodle on the large sheets of paper covering each table. At the end of each round, people moved to another table while one person remained to share the preceding round's discussion with the new group.
The discussion questions were inspired by value-sensitive design. I was interested to see if people could come up with alternative uses for a sensor-equipped 3D-printed footbridge if they first considered what in their opinion made a city worth living in.
The questions we used were:
What specific things do you like about your town? (Places, things to do, etc. Be specific.)
What values underlie those things? (A value is what a person or group of people consider important in life.)
How would you redesign the bridge to support those values?
At the end of the three discussion rounds we went around to each table and shared the highlights of what was produced. We then had a bit of a back and forth about the outcomes and the workshop approach, after which we wrapped up.
We did get to some interesting values by starting from personal experience. Participants came from a variety of countries and that was reflected in the range of examples and related values. The design ideas for the bridge remained somewhat abstract. It turned out to be quite a challenge to make the jump from values to different types of smart bridges. Despite this, we did get nice ideas such as having the bridge report on water quality of the canal it crosses, derived from the value of care for the environment.
The response from participants afterwards was positive. People found it thought-provoking, which was definitely the point. People were also eager to learn even more about the bridge project. It remains a thing that captures people’s imagination. For that reason alone, it continues to be a very productive case to use for the grounding of these sorts of discussions.
Thought I’d post a quick update on my PhD. Since my previous post almost five months have passed. I’ve been developing my plan further, for which you’ll find an updated description below. I’ve also put together my very first conference paper, co-authored with my supervisor Gerd Kortuem. It’s a case study of the MX3D smart bridge for Designing Interactive Systems 2019. We’ll see if it gets accepted. But in any case, writing something has been hugely educational. And once I finally figured out what the hell I was doing, it was sort of fun as well. Still kind of a trip to be paid to do this kind of work. Looking ahead, I am setting goals for this year and the nearer term as well. It’s all very rough still but it will likely involve research through design as a method and maybe object oriented ontology as a theory. All of which will serve to operationalise and evaluate the usefulness of the “contestability” concept in the context of smart city infrastructure. To be continued—and I welcome all your thoughts!
Designing Smart City Infrastructure for Contestability
The use of information technology in cities increasingly subjects citizens to automated data collection, algorithmic decision making and remote control of physical space. Citizens tend to find these systems and their outcomes hard to understand and predict. Moreover, the opacity of smart urban systems precludes full citizenship and obstructs people's ‘right to the city’.
A commonly proposed solution is to improve citizens' understanding of systems by making them more open and transparent. For example, the GDPR prescribes people's right to an explanation of automated decisions they have been subjected to. For another example, the city of Amsterdam offers a publicly accessible register of urban sensors, and is committed to opening up all the data it collects.
However, it is not clear that openness and transparency in and of themselves will yield the desired improvements in the understanding and governing of smart city infrastructures. We would like to suggest that for a system to be perceived as accountable, people must be able to contest its workings—from the data it collects, to the decisions it makes, all the way through to how those decisions are acted on in the world.
The leading research question for this PhD therefore is how to design smart city infrastructure—urban systems augmented with internet-connected sensing, processing and actuating capabilities—for contestability: the extent to which a system supports the ability of those subjected to it to oppose its workings as wrong or mistaken.
Burrell, Jenna. “How the machine ‘thinks’: Understanding opacity in machine learning algorithms.” Big Data & Society 3.1 (2016): 2053951715622512.
Kitchin, Rob, Paolo Cardullo, and Cesare Di Feliciantonio. “Citizenship, Justice and the Right to the Smart City.” (2018).
Abdul, Ashraf, et al. “Trends and trajectories for explainable, accountable and intelligible systems: An hci research agenda.” Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 2018.
Ananny, Mike, and Kate Crawford. “Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability.” New Media & Society 20.3 (2018): 973–989.
Hirsch, Tad, et al. “Designing contestability: Interaction design, machine learning, and mental health.” Proceedings of the 2017 Conference on Designing Interactive Systems. ACM, 2017.
Goodreads tells me I’ve read 48 books in 2018. I set myself the goal of 36 so it looks like I beat it handily. But included in that count are quite a few roleplaying game books and comics. If I discard those I’m left with 28 titles. Still a decent amount but nothing particularly remarkable. Below are a few lists and some notes to go with them.
Most of the non-fiction is somewhere on the intersection of design, technology and Left politics. A lot of this reading was driven by my desire to develop some kind of mental framework for the work we were doing with Tech Solidarity NL. More recently—since I started my PhD—I've mostly been reading textbooks on research methodology. Hidden from this list are the academic papers I've started consuming as part of this new job. I should figure out a way of sharing some of that here or elsewhere as well.
I took a break from technology and indulged in a deep dive into the history of the Thirty Years' War with a massive non-fiction treatment as well as a classic picaresque set in the same time period. While reading these I was transitioning into my new role as a father of twin boys. Somewhat related was a brief history of The Netherlands, which I've started recommending to foreigners who are struggling to understand our idiosyncratic little nation and want to go beyond superficialities.
Then there’s the fiction, which in the beginning of the year consisted of highbrow weird and historical novels but then ventured into classic fantasy and (utopian) sci-fi territory. Again, mostly because of a justifiable desire for some escapism in the sleep deprived evenings and nights.
Having mentioned the arrival of our boys a few times it should come as no surprise that I also read a couple of parenting books. These were more than enough for me and really to be honest I think parenting is a thing best learned through practice. Especially if you’re raising two babies at once.
So that’s it. I’ve set myself the modest goal of 24 books for this year because I’m quite sure most of my reading will be papers and such. Here’s to a year of what I expect will be many more late night and early morning reading sessions of escapist weird fiction.
I’d like to talk about the future of our design practice and what I think we should focus our attention on. It is all related to this idea of complexity and opening up black boxes. We’re going to take the scenic route, though. So bear with me.
Two years ago I spent about half a year in Singapore.
While there I worked as product strategist and designer at a startup called ARTO, an art recommendation service. It shows you a random sample of artworks, you tell it which ones you like, and it will then start recommending pieces it thinks you like. In case you were wondering: yes, swiping left and right was involved.
We had this interesting problem of ingesting art from many different sources (mostly online galleries) with metadata of wildly varying levels of quality. So, using metadata to figure out which art to show was a bit of a non-starter. It should come as no surprise then, that we started looking into machine learning—image processing in particular.
And so I found myself working with my engineering colleagues on an art recommendation stream which was driven at least in part by machine learning. And I quickly realised we had a problem. In terms of how we worked together on this part of the product, it felt like we had taken a bunch of steps back in time. Back to a way of collaborating that was less integrated and less responsive.
That’s because we have all these nice tools and techniques for designing traditional software products. But traditional software is deterministic: given the same input, it behaves the same way every time. Machine learning is fundamentally different in nature: it is probabilistic.
It was hard for me to take the lead in the design of this part of the product for two reasons. First of all, it was challenging to get a first-hand feel for the machine learning feature before it was implemented.
And second of all, it was hard for me to communicate or visualise the intended behaviour of the machine learning feature to the rest of the team.
So when I came back to the Netherlands I decided to dig into this problem of design for machine learning. Turns out I opened up quite the can of worms for myself. But that’s okay.
There are two reasons I care about this:
The first is that I think we need more design-led innovation in the machine learning space. At the moment it is engineering-dominated, which doesn’t necessarily lead to useful outcomes. But if you want to take the lead in the design of machine learning applications, you need a firm handle on the nature of the technology.
The second reason why I think we need to educate ourselves as designers on the nature of machine learning is that we need to take responsibility for the impact the technology has on the lives of people. There is a lot of talk about ethics in the design industry at the moment. Which I consider a positive sign. But I also see a reluctance to really grapple with what ethics is and what the relationship between technology and society is. We seem to want easy answers, which is understandable because we are all very busy people. But having spent some time digging into this stuff myself I am here to tell you: There are no easy answers. That isn’t a bug, it’s a feature. And we should embrace it.
At the end of 2016 I attended ThingsCon here in Amsterdam and I was introduced by Ianus Keller to TU Delft PhD researcher Péter Kun. It turns out we were both interested in machine learning. So with encouragement from Ianus we decided to put together a workshop that would enable industrial design master students to tangle with it in a hands-on manner.
About a year later now, this has grown into a thing we call Prototyping the Useless Butler. During the workshop, you use machine learning algorithms to train a model that takes inputs from a network-connected Arduino’s sensors and drives that same Arduino’s actuators. In effect, you can create interactive behaviour without writing a single line of code. And you get a first-hand feel for how common applications of machine learning work. Things like regression, classification and dynamic time warping.
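To give a flavour of what “classification” means in this context: during the workshop Wekinator does the training, not hand-written code, but conceptually the simplest version of the idea is a nearest-neighbour classifier that remembers labelled sensor readings and maps new readings to the closest known example. The sketch below is purely illustrative; the sensor values and actuator labels are made up and not taken from the workshop materials.

```python
import math

def train(examples):
    """'Training' a 1-nearest-neighbour model is just storing the
    labelled examples: (sensor_values, actuator_label) pairs."""
    return list(examples)

def classify(model, reading):
    """Return the label of the stored example whose sensor values are
    closest (Euclidean distance) to the incoming reading."""
    return min(model, key=lambda ex: math.dist(ex[0], reading))[1]

# Hypothetical examples: (light_level, distance_cm) -> actuator state.
model = train([
    ((0.9, 10.0), "led_on"),
    ((0.8, 12.0), "led_on"),
    ((0.1, 80.0), "led_off"),
    ((0.2, 95.0), "led_off"),
])

print(classify(model, (0.85, 11.0)))  # bright and close: nearest examples are "led_on"
print(classify(model, (0.15, 90.0)))  # dark and far: nearest examples are "led_off"
```

Real tools like Wekinator use more sophisticated algorithms (and handle regression and dynamic time warping too), but the interaction model is the same: demonstrate a few input–output pairs, then let the model generalise to new sensor readings.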
The thing that makes this workshop tick is an open source software application called Wekinator, created by Rebecca Fiebrink. It was originally aimed at performing artists, so that they could build interactive instruments without writing code. But it takes inputs from anything and sends outputs to anything, so we appropriated it towards our own ends.
The thinking behind this workshop is that for us designers to be able to think creatively about applications of machine learning, we need a granular understanding of the nature of the technology. The thing with designers is, we can’t really learn about such things from books. A lot of design knowledge is tacit, it emerges from our physical engagement with the world. This is why things like sketching and prototyping are such essential parts of our way of working. And so with useless butler we aim to create an environment in which you as a designer can gain tacit knowledge about the workings of machine learning.
Simply put, for a lot of us, machine learning is a black box. With Useless Butler, we open the black box a bit and let you peer inside. This should improve the odds of design-led innovation happening in the machine learning space. And it should also help with ethics. But it’s definitely not enough. Knowledge about the technology isn’t the only issue here. There are more black boxes to open.
Which brings me back to that other black box: ethics. Like I already mentioned, there is a lot of talk in the tech industry about how we should “be more ethical”. But things are often reduced to this notion that designers should do no harm. As if ethics is a problem to be fixed instead of a thing to be practiced.
So I started to talk about this to people I know in academia and more than once this thing called Value Sensitive Design was mentioned. It should be no surprise to anyone that scholars have been chewing on this stuff for quite a while. One of the earliest references I came across, an essay by Batya Friedman in Interactions is from 1996! This is a lesson to all of us I think. Pay more attention to what the academics are talking about.
So, at the end of last year I dove into this topic. Our host Iskander Smit, Rob Maijers and myself coordinate a grassroots community for tech workers called Tech Solidarity NL. We want to build technology that serves the needs of the many, not the few. Value Sensitive Design seemed like a good thing to dig into and so we did.
I’m not going to dive into the details here. There’s a report on the Tech Solidarity NL website if you’re interested. But I will highlight a few things that value sensitive design asks us to consider that I think help us unpack what it means to practice ethical design.
First of all, values. Here’s how the term is commonly defined in the literature:
“A value refers to what a person or group of people consider important in life.”
I like it because it’s common sense, right? But it also makes clear that there can never be one monolithic definition of what ‘good’ is in all cases. As we designers like to say: “it depends” and when it comes to values things are no different.
“Person or group” implies there can be various stakeholders. Value sensitive design distinguishes between direct and indirect stakeholders. The former have direct contact with the technology, the latter don’t but are affected by it nonetheless. Value sensitive design means taking both into account. So this blows up the conventional notion of a single user to design for.
Various stakeholder groups can have competing values and so to design for them means to arrive at some sort of trade-off between values. This is a crucial point. There is no such thing as a perfect or objectively best solution to ethical conundrums. Not in the design of technology and not anywhere else.
Value sensitive design encourages you to map stakeholders and their values. These will be different for every design project. Another approach is to use lists like the one pictured here as an analytical tool to think about how a design impacts various values.
Furthermore, during your design process you might not only think about the short-term impact of a technology, but also think about how it will affect things in the long run.
And similarly, you might think about the effects of a technology not only when a few people are using it, but also when it becomes wildly successful and everybody uses it.
There are tools out there that can help you think through these things. But so far much of the work in this area is happening on the academic side. I think there is an opportunity for us to create tools and case studies that will help us educate ourselves on this stuff.
There’s a lot more to say on this but I’m going to stop here. The point is, as with the nature of the technologies we work with, it helps to dig deeper into the nature of the relationship between technology and society. Yes, it complicates things. But that is exactly the point.
Privileging simple and scalable solutions over those adapted to local needs is socially, economically and ecologically unsustainable. So I hope you will join me in embracing complexity.
Today is the first official work day of my new doctoral researcher position at Delft University of Technology. After more than two years of laying the groundwork, I’m starting out on a new challenge.
I remember sitting outside a Jewel coffee bar in Singapore and going over the various options for whatever would be next after shutting down Hubbub. I knew I wanted to delve into the impact of machine learning and data science on interaction design. And largely through a process of elimination I felt the best place for me to do so would be inside of academia.
Back in the Netherlands, with help from Ianus Keller, I started making inroads at TU Delft, my first choice for this kind of work. I had visited it on and off over the years, coaching students and doing guest lectures. I’d felt at home right away.
There were quite a few twists and turns along the way but now here we are. Starting this month I am a doctoral candidate at Delft University of Technology’s faculty of Industrial Design Engineering.
Below is a first rough abstract of the research. But in the months to come this is likely to change substantially as I start hammering out a proper research plan. I plan to post the occasional update on my work here, so if you’re interested your best bet is probably to do the old RSS thing. There’s social media too, of course. And I might set up a newsletter at some point. We’ll see.
If any of this resonates, do get in touch. I’d love to start a conversation with as many people as possible about this stuff.
Intelligibility and Transparency of Smart Public Infrastructures: A Design Oriented Approach
This PhD will explore how designers, technologists, and citizens can utilize rapid urban manufacturing and IoT technologies for designing urban space that derives its intelligence from the intersection of people, places, activities and technology, not merely from the presence of cutting-edge technology. The key question is how smart public infrastructure (i.e. data-driven and algorithm-rich public infrastructure) can be understood by laypeople.
The design-oriented research will utilize a ‘research through design’ approach to develop a digital experience around the bridge and the surrounding urban space. During this extended design and making process the PhD student will conduct empirical research to investigate design choices and their implications for (1) new forms of participatory data-informed design processes, (2) the technology-mediated experience of urban space, (3) the emerging relationship between residents and “their” bridge, and (4) new forms of data-informed, citizen-led governance of public space.