Hello again, and welcome to another update on my Ph.D. research progress. I will briefly run down the things that happened since the last update, what I am currently working on, and some notable events on the horizon.
CHI 2023 paper
First off, the big news is that the paper I submitted to CHI 2023 was accepted. This is a big deal for me because HCI is the core field I aim to contribute to, and CHI is its flagship conference.
Here’s the full citation:
Alfrink, K., Keller, I., Doorn, N., & Kortuem, G. (2023). Contestable Camera Cars: A Speculative Design Exploration of Public AI That Is Open and Responsive to Dispute. https://doi.org/10/jwrx
I have had several papers rejected in the past (CHI is notoriously hard to get accepted at), so I feel vindicated. The paper is already available as an arXiv preprint, as is the concept video that forms the core of the study I report on (many thanks to my pal Simon for collaborating on this with me). CHI 2023 happens in late April. I will be riding a train over there to present the paper in person. Very much looking forward to that.
Responsible Sensing Lab anniversary event
I briefly presented my research at the Responsible Sensing Lab anniversary event on February 16. The whole event was quite enjoyable, and I got some encouraging responses to my ideas afterward, which is always nice. The event was recorded in full; my appearance starts around the 1:47:00 mark.
Tweeting, tooting, blogging
I have been getting back into the habit of tweeting, tooting, and even the occasional spot of blogging on this website again. As the end of my Ph.D. nears, I figured it might be worth it to engage more actively with “the discourse,” as they say. I mostly share stuff I read that is related to my research and that I find interesting. Although, of course, posts related to my twin sons’ music taste and struggles with university bureaucracy always win out in the end. (Yes, I am aware my timing is terrible, seeing as how we have basically finally concluded social media was a bad idea after all.)
Envisioning Contestability Loops
At the moment, the majority of my time is taken up by conducting a final study (working title: “Envisioning Contestability Loops”). I am excited about this one because I once again get to collaborate with a professional designer on an artifact, in this case a visual explanation of my framework, and use the result as a research instrument, here to dig into the strengths and weaknesses of contestability as a generative metaphor for the design of public AI.
In parallel, I have begun to put together my thesis. It is paper-based, but of course, the introductory and concluding chapters require some thought still.
The aim is to have both the final article and thesis finished by the end of summer and then begin the arduous process of getting a date for my defense, assembling a committee, etc.
Agonistic Machine Vision Development
In the meantime, I am also mentoring Laura, another brilliant master graduation student. Her project, titled “Agonistic Machine Vision Development,” builds on my previous research. In particular, it takes up one of the challenges I identified in Contestable Camera Cars: the differential in information position between citizens and experts when they collaborate in participatory machine learning sessions. It’s very gratifying to see others do design work that pushes these ideas further.
So yeah, like I already mentioned, I will be speaking at CHI 2023, which takes place on 23–28 April in Hamburg. The schedule says I am presenting on April 25 as part of the session on “AI Trust, Transparency and Fairness”, which includes some excellent-looking contributions.
And before that, I will be at ICT.OPEN in Utrecht on April 20 to present briefly on the Contestable AI by Design framework as part of the CHINL track. It should be fun.
That’s it for this update. Maybe, by the time the next one rolls around, I will be able to share a date for my defense. But let’s not jinx it.
Note that these tensions are independent of each other. The diagram does not imply two “sides” of design. At any given moment, a design activity can be plotted on each axis independently. This is also not an exhaustive list of tensions. Finally, Dorst claims these tensions are irreconcilable.
The original passage:
Contemporary developments in design can be described and understood in much the same way. The professional field that we so easily label ‘design’ is complex, and full of inner contradictions. These inner tensions feed the discussions in the field. To name a few: (1) the objectives of design and the motivation of designers can range from commercial success to the common good. (2) The role and position of the designer can be as an autonomous creator, or as a problem solver in-service to the client. (3) The drive of the designer can be idealistic, or it can be more pragmatic (4) The resulting design can be a ‘thing’, but also immaterial (5) The basis for the process of designing can be intuitive, or based on knowledge and research… Etcetera… The development of the design disciplines can be traced along these lines of tension — with designers in different environments and times changing position relative to these fundamental paradoxes, but never resolving them. Ultimately, the real strength and coherence of design as a field of professions comes from recognizing these contradictions, and the dynamics of the field is a result of continuous experimentation along the rifts defined by them. Rather than a common set of practices and skills that designers might have [Cross, 1990] it is these inner contradictions in design that define its culture, its mentality. Design research should be an active force in these discussions, building bridges between them where possible. Not to resolve them into a monolithic Science of Design, but advancing the discussion in this dynamically shifting set of relations.
Dorst, K. (2016, June 27). Design practice and design research: Finally together? Proceedings of DRS 2016. Design Research Society 50th Anniversary Conference, Brighton, UK. https://www.drs2016.org/212
Below are some choice quotes on “citizen participation” from chapter 8 of The End of the End of History, a recommended book on our recent global political history. I feel like many of us in the participatory technology design space are complicit in these practices to some extent. I continue to grapple with alternative models of mass democratic control over technology.
The Center-Left will propose a range of measures designed to promote “civic engagement” or “community participation.”
Citizens’ summits, juries and panels all aim at participation rather than power, at the technocratic incorporation of the people into politics in order to manage away conflict.
Likewise the popularity of deliberative modes of engagement, deliberative stakeholder events or workshops are characteristic tools of technocratic do-gooders as they create the simulacrum of a democratic process in which people are assembled to provide an ostensibly collective solution to a problem, but decisions lack a binding quality or have already been taken in advance.
Though unable to gain traction at a transnational level, the Left may find some success in municipal politics, following the 2010s example of Barcelona.
Sidestepping […] animus toward Big Tech companies, [tech solutionism (Morozov, 2013) and the ideology of ease (Greenfield, 2017)] may come to be applied to non-market activities, such as solving community problems, perhaps at the level of municipal government.
Sovereign, national politics – which neoliberalism was designed to defang – will remain beyond the grasp of the Left. Progressives will prefer instead to operate at the municipal, the everyday or the supranational level – precisely the arena to which neoliberalism sought to displace politics, to where it could do no harm.
Thijs Kleinpaste has written a fine review of Michael Young’s The Rise of the Meritocracy (De opkomst van de meritocratie) in the Nederlandse Boekengids. Below are a few passages I found especially strong.
Young’s great merit is that he makes clear how innocent principles such as “reward according to merit” can derail completely when they are deployed within an otherwise unchanged social and economic system. Concretely: giving some a chosen position in a social hierarchy and instructing others to know their place.
The class interest of the meritocracy is more abstract. The most important thing is first of all to remain a class or caste, so as to keep reaping its advantages. In every modern state, power is exercised, or, to put it more meritocratically, governing must be done, and if there must be a caste to fulfill that task, let it be the caste of the most highly credentialed. The meritocracy reproduces itself by passing this idea on to each new cohort that joins its chosen ranks: that it is the right group, called with good reason, to order the world. Not the working class, not unguided democracy, not the scrum of small interest groups, but them. All the material advantages of the meritocracy flow from maintaining that chosen status.
Too often the assumption seems to be that representation and the serving of interests lie unproblematically in line with one another. To break through that complacency, something more forceful is apparently needed, such as the idea that wherever there are managers and administrators, it must be possible to strike: that wherever power is exercised and directions are given, those who must follow the directions can vote with their feet. That conflict is embraced and not seen as something dangerous to social peace, “the economy,” or even democracy. Conflict is undoubtedly dangerous to the hegemony of the manager and his class of petty dream-kings, and thereby to the sovereignty of the meritocratic order, but that danger is both salutary and necessary. After all, one of the lessons of Young’s book is that you must choose: make a revolution yourself, or wait for one to break out.
Seven months since the last update. Much better than the gap of three years between the previous two. These past months I feel like I have begun to reap the rewards of the grunt work of the last couple of years. Two papers finally saw the light of day, as well as a course syllabus. Read on for some more details.
Things that happened:
First, a pair of talks. In February I presented on “Contestable AI & Civic Co-Design” as part of a panel chaired by Roy Bendor at Reinventing the City. A PDF of my slides is available on the contestable.ai website, here. In March, I presented at the AiTech Agora. The title of that talk was “Meaningful Human Control Through Contestability by Design” and the slides are available here.
In February a short interview was published by Bold Cities, a smart city research center I am loosely affiliated with.
Then, in March, came a big moment for me: the publication of my first journal article, in AI & Society. Here’s the abstract and reference. It’s available open access.
The increasing use of artificial intelligence (AI) by public actors has led to a push for more transparency. Previous research has conceptualized AI transparency as knowledge that empowers citizens and experts to make informed choices about the use and governance of AI. Conversely, in this paper, we critically examine if transparency-as-knowledge is an appropriate concept for a public realm where private interests intersect with democratic concerns. We conduct a practice-based design research study in which we prototype and evaluate a transparent smart electric vehicle charge point, and investigate experts’ and citizens’ understanding of AI transparency. We find that citizens experience transparency as burdensome; experts hope transparency ensures acceptance, while citizens are mostly indifferent to AI; and with absent means of control, citizens question transparency’s relevance. The tensions we identify suggest transparency cannot be reduced to a product feature, but should be seen as a mediator of debate between experts and citizens.
Alfrink, Kars, Ianus Keller, Neelke Doorn, and Gerd Kortuem. “Tensions in Transparent Urban AI: Designing a Smart Electric Vehicle Charge Point.” AI & Society, March 31, 2022. https://doi.org/10/gpszwh.
In April, the Responsible Sensing Lab published a report on “Responsible Drones”, to which I contributed a little as a participant in the workshops that led up to it.
The second milestone was the publication of the syllabus for my master elective. Here’s the course description:
Artificial Intelligence (AI) is increasingly used by a variety of organizations in ways that impact society at scale. This 6 EC master elective course aims to equip students with tools and methods for the responsible design of public AI. During seven weeks students attend a full-day session of lectures and workshops. Students collaborate on a group design project throughout. At the end, students individually deliver a short paper.
The third big milestone was the publication of my second journal article in Minds & Machines. It is the theoretical cornerstone of my thesis, a provisional framework for designing contestability into AI systems. Abstract and reference follow. This one is also open access.
As the use of AI systems continues to increase, so do concerns over their lack of fairness, legitimacy and accountability. Such harmful automated decision-making can be guarded against by ensuring AI systems are contestable by design: responsive to human intervention throughout the system lifecycle. Contestable AI by design is a small but growing field of research. However, most available knowledge requires a significant amount of translation to be applicable in practice. A proven way of conveying intermediate-level, generative design knowledge is in the form of frameworks. In this article we use qualitative-interpretative methods and visual mapping techniques to extract from the literature sociotechnical features and practices that contribute to contestable AI, and synthesize these into a design framework.
Alfrink, Kars, Ianus Keller, Gerd Kortuem, and Neelke Doorn. “Contestable AI by Design: Towards a Framework.” Minds and Machines, August 13, 2022. https://doi.org/10/gqnjcs.
And finally, as these things were going on, I have been quietly chipping away at a third paper that applies the contestable AI by design framework to the phenomenon of camera cars used by municipalities. My aim was to create an example of what I mean by contestable AI, and to use that example to interview civil servants about their views on the challenges facing implementation of contestability in the public AI systems they are involved with. I’ve submitted the manuscript, titled “Contestable Camera Cars: A speculative design exploration of public AI that is open and responsive to dispute”, to CHI, and will hear back in early November. Fingers crossed for that one.
So what’s next? Well, I have a little under a year left on my PhD contract, so I should really begin wrapping up. I am considering a final publication, but have not settled on any topic in particular yet. Current interests include AI system monitoring, visual methods, and more besides. Once that final paper is in the can I will turn my attention to putting together the thesis itself, which is paper-based, so it mostly requires writing an overall introduction and conclusion to bookend the included publications. Should be a piece of cake, right?
And after the PhD? I am not sure yet, but I hope to remain involved in research and teaching, while at the same time perhaps getting a bit more back into design practice besides. If at all possible, hopefully in the domain of public sector applications of AI.
That’s it for this update. I will be back at some point when there is more news to share.
It has been three years since I last wrote an update on my PhD. I guess another post is in order.
My PhD plan was formally green-lit in October 2019. I am now over three years into this thing. There are roughly two more years left on the clock. I update my plans on a rolling basis. By my latest estimation, I should be ready to request a date for my defense in May 2023.
Of course, the pandemic forced me to adjust course. I am lucky enough not to be locked into particular methods or cases that are fundamentally incompatible with our current predicament. But still, I had to change up my methods, and reconsider the sequencing of my planned studies.
The conference paper I mentioned in the previous update, using the MX3D bridge to explore smart cities’ logic of control and cityness, was rejected by DIS. I performed a rewrite, but then came to the conclusion it was kind of a false start. These kinds of things are all in the game, of course.
The second paper I wrote uses the Transparent Charging Station to investigate how notions of transparent AI differ between experts and citizens. It was finally accepted late last year and should see publication in AI & Society soon. It is titled “Tensions in Transparent Urban AI: Designing a Smart Electric Vehicle Charge Point.” This piece went through multiple major revisions and was previously rejected by DIS and CHI.
A third paper, “Contestable AI by Design: Towards a Framework,” which uses a systematic literature review of AI contestability to construct a preliminary design framework, is currently under review at a major philosophy of technology journal. Fingers crossed.
And currently, I am working on my fourth publication, tentatively titled “Contestable Camera Cars: A Speculative Design Exploration of Public AI Systems Responsive to Value Change,” which will be based on empirical work that uses speculative design to develop guidelines and examples for the aforementioned design framework, and to investigate civil servants’ views on the pathways towards contestable AI systems in public administration.
Once that one is done, I intend to do one more study, probably looking into monitoring and traceability as potential leverage points for contestability, after which I will turn my attention to completing my thesis.
Aside from my research, in 2021 I was allowed to develop and teach a master elective centered on my PhD topic, titled AI & Society. In it, students are equipped with technical knowledge of AI and tools for thinking about AI ethics. They apply these to a design studio project focused on conceptualizing a responsible AI-enabled service that addresses a social issue the city of Amsterdam might conceivably struggle with. Students also write a brief paper reflecting on and critiquing their group design work. You can see me on Vimeo give a brief video introduction for students who are considering the course. I will be running the course again this year, starting at the end of February.
I also mentored a number of brilliant master graduation students: Xueyao Wang (with Jacky Bourgeois as chair), Jooyoung Park and Loes Sloetjes (both with Roy Bendor as chair), and currently Fabian Geiser (with Euiyoung Kim as chair). Working with students is one of the best parts of being in academia.
All of the above would not have been possible without the great support from my supervisory team: Ianus Keller, Neelke Doorn and Gerd Kortuem. I should also give special mention to Thijs Turel at AMS Institute’s Responsible Sensing Lab, where most of my empirical work is situated.
If you want to dig a little deeper into some of this, I recently set up a website for my PhD project over at contestable.ai.
I was recently in touch with an international “thought leader” in the field of “tech ethics.” He told me he is very grateful for the existence of the transparent charging station, because it is such a good example of how design can contribute to fair technology.
That is of course wonderful to hear. And it fits a broader industry trend toward making algorithms transparent and explainable. By now, legislation even mandates explainability (in some cases).
In the documentary, you hear several people (myself included) explain why it is important that urban algorithms are transparent. Thijs neatly names two reasons: on the one hand, the collective interest in enabling democratic oversight of the development of urban algorithms; on the other, the individual interest in being able to seek redress when a system makes a decision you (for whatever reason) disagree with.
And indeed, in both cases (collective oversight and individual redress), transparency is a precondition. I think this project solved a lot of the design and engineering problems that come with it. At the same time, a new question looms on the horizon: if we understand how a smart system works, and we disagree with it, then what? How do you actually gain influence over the workings of the system?
I think we will have to shift our focus from transparency to what I call contestability.
Designing for contestability means thinking about the means people need to exercise their right to human intervention. Yes, this means providing information about the how and why of individual decisions. Transparency, in other words. But it also means setting up new channels and processes through which people can submit requests to have a decision revised. We will have to think about how we assess such requests, and how we ensure the smart system in question “learns” from the signals we pick up from society in this way.
You could say that designing for transparency is one-way traffic: information flows from the developing party to the end user. Designing for contestability is about creating a dialogue between developers and citizens.
I say citizens because it is not only classic end users who are affected by smart systems. All sorts of other groups are also, often indirectly, affected.
That is also a new design challenge. How do you design not only for the end user (in the case of the transparent charging station, the EV driver) but also for so-called indirect stakeholders: for example, residents of streets where charging stations are installed, who do not drive an EV, or even a car, but have just as much of a stake in how sidewalks and streets are laid out?
This broadening of scope means that when designing for contestability, we can and even must go a step further than enabling redress for individual decisions.
Because designing for contestability around individual decisions of an already deployed system is necessarily post hoc and reactive, and limited to a single group of stakeholders.
As Thijs also more or less points out in the documentary, smart urban infrastructure affects the lives of all of us, and you could say that the design and engineering choices made during its development are intrinsically political choices.
That is why I think we cannot avoid organizing the very process underlying these systems in such a way that there is room for contestation. In my ideal world, the development of a next generation of smart charging stations is therefore participatory, pluriform, and inclusive, just as our democracy itself strives to be.
Exactly how we should shape such “contestable” algorithms, how designing for contestability should work, is an open question. But a number of years ago, no one knew what a transparent charging station should look like either, and we managed that too.
I’ll be at Beyond Smart Cities Today for the next couple of days (18–19 September). Below is the abstract I submitted, plus a bibliography of some of the stuff that went into my thinking for this and related matters that I won’t have the time to get into.
In the actually existing smart city, algorithmic systems are increasingly used for the purposes of automated decision-making, including as part of public infrastructure. Algorithmic systems raise a range of ethical concerns, many of which stem from their opacity. As a result, prescriptions for improving the accountability, trustworthiness and legitimacy of algorithmic systems are often based on a transparency ideal. The thinking goes that if the functioning and ownership of an algorithmic system are made perceivable, people understand the system and are in turn able to supervise it. However, there are limits to this approach. Algorithmic systems are complex and ever-changing socio-technical assemblages. Rendering them visible is not a straightforward design and engineering task. Furthermore, such transparency does not necessarily lead to understanding or, crucially, the ability to act on this understanding. We believe legitimate smart public infrastructure needs to include the possibility for subjects to articulate objections to procedures and outcomes. The resulting “contestable infrastructure” would create spaces that open up the possibility for expressing conflicting views on the smart city. Our project is to explore the design implications of this line of reasoning for the physical assets that citizens encounter in the city. Because after all, these are the perceivable elements of the larger infrastructural systems that recede from view.
Alkhatib, A., & Bernstein, M. (2019). Street-Level Algorithms. 1–13. https://doi.org/10.1145/3290605.3300760
Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media and Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645
Centivany, A., & Glushko, B. (2016). “Popcorn tastes good”: Participatory policymaking and Reddit’s “AMAgeddon.” Conference on Human Factors in Computing Systems — Proceedings, 1126–1137. https://doi.org/10.1145/2858036.2858516
Crawford, K. (2016). Can an Algorithm be Agonistic? Ten Scenes from Life in Calculated Publics. Science Technology and Human Values, 41(1), 77–92. https://doi.org/10.1177/0162243915589635
DiSalvo, C. (2010). Design, Democracy and Agonistic Pluralism. Proceedings of the Design Research Society Conference, 366–371.
Hildebrandt, M. (2017). Privacy As Protection of the Incomputable Self: Agonistic Machine Learning. SSRN Electronic Journal, 1–33. https://doi.org/10.2139/ssrn.3081776
Jackson, S. J., Gillespie, T., & Payette, S. (2014). The Policy Knot: Re-integrating Policy, Practice and Design. CSCW Studies of Social Computing, 588–602. https://doi.org/10.1145/2531602.2531674
Jewell, M. (2018). Contesting the decision: living in (and living with) the smart city. International Review of Law, Computers and Technology. https://doi.org/10.1080/13600869.2018.1457000
Lindblom, L. (2019). Consent, Contestability, and Unions. Business Ethics Quarterly. https://doi.org/10.1017/beq.2018.25
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 205395171667967. https://doi.org/10.1177/2053951716679679
Van de Poel, I. (2016). An ethical framework for evaluating experimental technology. Science and Engineering Ethics, 22(3), 667–686. https://doi.org/10.1007/s11948-015-9724-3
Thijs Turèl of AMS Institute and I presented a version of the talk below at the Cities for Digital Rights conference on June 19 in Amsterdam during the We Make the City festival. The talk is an attempt to articulate some of the ideas we both have been developing for some time around contestability in smart public infrastructure. As always with this sort of thing, it is intended as a conversation piece, so I welcome any thoughts you may have.
The basic message of the talk is that when we start to do automated decision-making in public infrastructure using algorithmic systems, we need to design for the inevitable disagreements that may arise and furthermore, we suggest there is an opportunity to focus on designing for such disagreements in the physical objects that people encounter in urban space as they make use of infrastructure.
We set the scene by showing a number of examples of smart public infrastructure. A cyclist crossing that adapts to weather conditions: if it’s raining, cyclists get a green light more frequently. A pedestrian crossing in Tilburg where the elderly can use their mobiles to get more time to cross. And finally, the case we are involved with ourselves: smart EV charging in the city of Amsterdam, about which more later.
We identify three trends in smart public infrastructure: (1) where previously algorithms were used to inform policy, now they are employed to perform automated decision-making on an individual case basis. This raises the stakes; (2) distributed ownership of these systems as the result of public-private partnerships and other complex collaboration schemes leads to unclear responsibility; and finally (3) the increasing use of machine learning leads to opaque decision-making.
These trends, and algorithmic systems more generally, raise a number of ethical concerns. They include but are not limited to: the use of inductive correlations (for example in the case of machine learning) leads to unjustified results; lack of access to and comprehension of a system’s inner workings produces opacity, which in turn leads to a lack of trust in the systems themselves and the organisations that use them; bias is introduced by a number of factors, including development team prejudices, technical flaws, bad data and unforeseen interactions with other systems; and finally the use of profiling, nudging and personalisation leads to diminished human agency. (We highly recommend the article by Mittelstadt et al. for a comprehensive overview of ethical concerns raised by algorithms.)
So for us, the question that emerges from all this is: How do we organise the supervision of smart public infrastructure in a democratic and lawful way?
There are a number of existing approaches to this question. These include legal and regulatory (e.g. the right to explanation in the GDPR); auditing (e.g. KPMG’s “AI in Control” method, BKZ’s transparantielab); procurement (e.g. open source clauses); insourcing (e.g. GOV.UK) and design and engineering (e.g. our own work on the transparent charging station).
We feel there are two important limitations with these existing approaches. The first is a focus on professionals and the second is a focus on prediction. We’ll discuss each in turn.
First of all, many solutions target a professional class, be it accountants, civil servants, supervisory boards, as well as technologists, designers and so on. But we feel there is a role for the citizen as well, because the supervision of these systems is simply too important to be left to a privileged few. This role would include identifying wrongdoing, and suggesting alternatives.
There is a tension here, which is that from the perspective of the public sector one should only ask citizens for their opinion when you have the intention and the resources to actually act on their suggestions. It can also be a challenge to identify legitimate concerns in the flood of feedback that can sometimes occur. From our point of view though, such concerns should not be used as an excuse to not engage the public. If citizen participation is considered necessary, the focus should be on freeing up resources and setting up structures that make it feasible and effective.
The second limitation is prediction. This is best illustrated with the Collingridge dilemma: in the early phases of a new technology, when the technology and its social embedding are still malleable, there is uncertainty about its social effects. In later phases, social effects may be clear, but by then the technology has often become so well entrenched in society that it is hard to overcome negative social effects. (This summary is taken from an excellent Van de Poel article on the ethics of experimental technology.)
Many solutions disregard the Collingridge dilemma and try to predict and prevent the adverse effects of new systems at design-time. One example of this approach is value-sensitive design. Our focus is instead on use-time. Because smart public infrastructure tends to be developed on an ongoing basis, the question becomes how to make citizens partners in this process. More specifically, we are interested in how this can be made part of the design of the “touchpoints” people actually encounter in the streets, as well as of their backstage processes.
Why do we focus on these physical objects? Because this is where people actually meet the infrastructural systems, large parts of which recede from view. These are the places where people become aware of their presence. They are the proverbial tip of the iceberg.
The use of automated decision-making in infrastructure reduces people’s agency. For this reason, resources for agency need to be designed back into these systems. Frequently, proposed remedies are premised on a transparency ideal. Transparency may be a prerequisite for agency, but it is not sufficient: it may help you become aware of what is going on, but it will not necessarily help you act on that knowledge. This is why we propose a shift from transparency to contestability. (We can highly recommend Ananny and Crawford’s article for more on why transparency is insufficient.)
To clarify what we mean by contestability, consider the following three examples. When you see the lights on your router blink in the middle of the night, when no one in your household is using the internet, you can act on this knowledge by yanking out the device’s power cord. You may never use the emergency brake in a train, but its presence does give you a sense of control. And finally, the cash register receipt gives you a view into both the procedure and the outcome of the supermarket checkout, and it offers a resource with which you can dispute them if something appears to be wrong.
None of these examples is a perfect illustration of contestability but they hint at something more than transparency, or perhaps even something wholly separate from it. We’ve been investigating what their equivalents would be in the context of smart public infrastructure.
To illustrate this point further, let us come back to the smart EV charging project we mentioned earlier. In Amsterdam, public EV charging stations are becoming “smart,” which in this case means they automatically adapt the charging speed to a number of factors, including grid capacity and the availability of solar energy. Additional factors can be added in the future; one under consideration is giving priority to shared cars over privately owned ones. We are involved in an ongoing effort to consider how such charging stations can be redesigned so that people understand what’s going on behind the scenes and can act on that understanding. The motivation is that, if not designed carefully, the opacity of smart EV charging infrastructure may be detrimental to the social acceptance of the technology. (A first outcome of these efforts is the Transparent Charging Station designed by The Incredible Machine. A follow-up project is ongoing.)
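To make the idea concrete, here is a minimal, entirely hypothetical sketch of what such an adaptive charging policy might look like. The factor names, thresholds, and priority rule below are our own illustrative assumptions, not the actual Amsterdam policy. The point of interest for contestability is that the decision is returned together with the factors that produced it, a machine-readable equivalent of the cash register receipt mentioned above.

```python
from dataclasses import dataclass

@dataclass
class Session:
    car_id: str
    shared: bool  # hypothetical: shared cars may get priority

def charging_rate_kw(session: Session, grid_headroom_kw: float,
                     solar_share: float, max_rate_kw: float = 11.0) -> dict:
    """Compute a charging rate and record the factors behind it.

    All numbers here are made up for illustration.
    """
    rate = min(max_rate_kw, grid_headroom_kw)  # never exceed grid headroom
    if solar_share < 0.2:   # little solar available: slow charging down
        rate *= 0.5
    if session.shared:      # hypothetical priority rule for shared cars
        rate = min(max_rate_kw, rate * 1.5)
    return {
        "rate_kw": round(rate, 1),
        # the "receipt": every factor that shaped this decision
        "factors": {
            "grid_headroom_kw": grid_headroom_kw,
            "solar_share": solar_share,
            "shared_priority": session.shared,
        },
    }

decision = charging_rate_kw(Session("NL-123", shared=False),
                            grid_headroom_kw=8.0, solar_share=0.1)
```

A contestable design could surface the `factors` record at the charge point itself, so a driver who finds their car charging slowly can see which policy caused it, and dispute that policy rather than the opaque outcome.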
We have identified a number of ways in which people may object to smart EV charging, listed in the table below. These types of objections can lead us to feature requirements for making the system contestable.
Because the list is preliminary, we asked the audience whether they could imagine additional objections, whether those represented new categories, and whether those categories would require additional features for people to act on them. One particularly interesting suggestion was to give local communities control over the policies enacted by the charge points in their vicinity. That is something whose implications we want to explore further.
And that’s where we left it. So to summarise:
Algorithmic systems are becoming part of public infrastructure.
Smart public infrastructure raises new ethical concerns.
Many solutions to ethical concerns are premised on a transparency ideal, but do not address the issue of diminished agency.
There are different categories of objections people may have to an algorithmic system’s workings.
Making a system contestable means creating resources for people to object, opening up a space for the exploration of meaningful alternatives to its current implementation.