Democratizing AI Through Continuous Adaptability: The Role of DevOps

Below are the abstract and slides for my contribution to the TILTing Perspectives 2024 panel "The mutual shaping of democratic practices & AI," moderated by Merel Noorman.

Slides

Abstract

Contestability

This presentation delves into democratizing artificial intelligence (AI) systems through contestability. Contestability refers to the ability of AI systems to remain open and responsive to disputes throughout their lifecycle. It approaches AI systems as arenas where groups compete for power over designs and outcomes.

Autonomy, democratic agency, legitimation

We identify contestability as a critical system quality for respecting people's autonomy. This includes their democratic agency: their ability to legitimate policies, including policies enacted by AI systems.

For a decision to be legitimate, it must be democratically willed or rely on "normative authority." The democratic pathway should be constrained by normative bounds to avoid arbitrariness. The appeal to authority should meet the "access constraint," which ensures citizens can form beliefs about policies with a sufficient degree of agency (Peter, 2020 in Rubel et al., 2021).

Contestability is the quality that ensures mechanisms are in place for subjects to exercise their democratic agency. In the case of an appeal to normative authority, contestability mechanisms are how subjects and their representatives gain access to the information that will enable them to evaluate its justifiability. In this way, contestability satisfies the access constraint. In the case of democratic will, contestability-by-design practices are how system development is democratized. The autonomy account of legitimation adds the normative constraints that should bind this democratic pathway.

Himmelreich (2022) similarly argues that only a "thick" conception of democracy will address some of the current shortcomings of AI development. This is a pathway that not only allows for participation but also includes deliberation over justifications.

The agonistic arena

Elsewhere, we have proposed the Agonistic Arena as a metaphor for thinking about the democratization of AI systems (Alfrink et al., 2024). Contestable AI embodies the generative metaphor of the Arena. This metaphor characterizes public AI as a space where interlocutors embrace conflict as productive. Seen through the lens of the Arena, public AI problems stem from a lack of opportunities for adversarial interaction between stakeholders.

This metaphorical framing suggests prescriptions for making the following norms and procedures more contentious and open to dispute:

  1. AI system design decisions on a global level, and
  2. human-AI system output decisions on a local level (i.e., individual decision outcomes), establishing new dialogical feedback loops between stakeholders that ensure continuous monitoring.

The Arena metaphor encourages a design ethos of revisability and reversibility so that AI systems embody the agonistic ideal of contingency.

Post-deployment malleability, feedback-ladenness

Unlike physical systems, AI technologies exhibit a unique malleability post-deployment.

For example, LLM chatbots optimize their performance based on a variety of feedback sources, including interactions with users, as well as feedback collected through crowd-sourced data work.

Because of this open-endedness, democratic control and oversight in the operations phase of the system's lifecycle become a particular concern.

This is a concern because while AI systems are dynamic and feedback-laden (Gilbert et al., 2023), many of the existing oversight and control measures are static, one-off exercises that struggle to track systems as they evolve over time.

DevOps

The field of DevOps is pivotal in this context. DevOps focuses on system instrumentation for enhanced monitoring and control in the service of continuous improvement. Typically, metrics for DevOps and its machine-learning-specific offshoot, MLOps, emphasize technical performance and business objectives.

However, there is scope to expand these to include matters of public concern. The matters-of-concern perspective shifts the focus to issues such as fairness or discrimination, viewing them as challenges that cannot be resolved through universal methods with absolute certainty. Rather, it highlights how standards are locally negotiated within specific institutional contexts, emphasizing that such standards are never guaranteed (Lampland & Star, 2009; Geiger et al., 2023).

MLOps Metrics

In the context of machine learning systems, technical metrics focus on model accuracy. For example, a financial services company might use the area under the receiver operating characteristic curve (AUC-ROC) to continuously monitor and maintain the performance of its fraud detection model in production.

Business metrics focus on cost-benefit analyses. For example, a bank might use a cost-benefit matrix to balance the potential revenue from approving a loan against the risk of default, ensuring that the overall profitability of its loan portfolio is optimized.
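A cost-benefit matrix of this kind amounts to an expected-value calculation over a model's predicted default probability. The sketch below is a minimal illustration of the idea; the payoff figures and decision rule are hypothetical, not actual banking numbers.

```python
# Minimal sketch of a loan-approval cost-benefit matrix.
# All payoff values are hypothetical illustrations.

# Payoffs for (decision, outcome): approve/deny x repays/defaults.
PAYOFF = {
    ("approve", "repays"): 300.0,      # interest earned
    ("approve", "defaults"): -1000.0,  # principal lost
    ("deny", "repays"): 0.0,           # missed revenue not counted here
    ("deny", "defaults"): 0.0,
}

def expected_payoff(decision: str, p_default: float) -> float:
    """Expected payoff of a decision given the model's default probability."""
    return (1 - p_default) * PAYOFF[(decision, "repays")] + \
           p_default * PAYOFF[(decision, "defaults")]

def decide(p_default: float) -> str:
    """Approve only when approval has the higher expected payoff."""
    if expected_payoff("approve", p_default) > expected_payoff("deny", p_default):
        return "approve"
    return "deny"
```

With these payoffs the break-even default probability is 300 / (300 + 1000), roughly 0.23: applicants scored below it are approved, those above are denied.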

Drift

These metrics can be monitored over time to detect "drift" between a model and the world. Training sets are static. Reality is dynamic. It changes over time. Drift occurs when the nature of new input data diverges from the data a model was trained on. A change in performance metrics may be used to alert system operators, who can then investigate and decide on a course of action, e.g., retraining a model on updated data. This, in effect, creates a feedback loop between the system in use and its ongoing development.
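The monitoring loop described above can be sketched in a few lines: a rolling window of a performance metric (such as AUC-ROC) is tracked, and an alert fires when its average falls below a threshold. The window size, threshold, and scores are made-up illustrations, not values from any real deployment.

```python
from collections import deque

class DriftMonitor:
    """Tracks a rolling window of a performance metric (e.g. AUC-ROC)
    and flags suspected drift when the window average drops below a threshold."""

    def __init__(self, threshold: float, window: int = 5):
        self.threshold = threshold
        self.scores = deque(maxlen=window)

    def record(self, score: float) -> bool:
        """Record a new metric value; return True if drift is suspected."""
        self.scores.append(score)
        avg = sum(self.scores) / len(self.scores)
        return avg < self.threshold

# Hypothetical weekly AUC-ROC scores for a fraud model in production.
monitor = DriftMonitor(threshold=0.85, window=3)
alerts = [monitor.record(s) for s in [0.91, 0.90, 0.88, 0.82, 0.79]]
```

Here only the final score pushes the rolling average below the threshold, so `alerts` ends in a single `True`: the signal an operator would investigate before deciding whether to retrain.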

An expansion of these practices in the interest of contestability would require:

  1. setting different metrics,
  2. exposing these metrics to additional audiences, and
  3. establishing feedback loops with the processes that govern models and the systems they are embedded in.
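The three expansions above can be made concrete as a small metric registry in which each metric is tagged with the audiences it is exposed to. This is a minimal sketch under stated assumptions: the metric names, record format, and audience labels are all hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Metric:
    name: str
    compute: Callable[[dict], float]  # derives the metric from system records
    audiences: list = field(default_factory=list)  # who may see it

# Hypothetical registry mixing a technical metric with a public-concern metric.
REGISTRY = [
    Metric("auc_roc", lambda r: r["auc"], audiences=["operators"]),
    Metric("median_days_to_fix", lambda r: r["days_to_fix"],
           audiences=["operators", "council", "citizens"]),
]

def report_for(audience: str, records: dict) -> dict:
    """Select and compute only the metrics exposed to a given audience."""
    return {m.name: m.compute(records) for m in REGISTRY if audience in m.audiences}
```

For example, `report_for("citizens", ...)` would return only the public-concern metric, while operators see the full set, which is one way of operationalizing "exposing these metrics to additional audiences."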

Example 1: Camera Cars

Let’s say a city gov­ern­ment uses a cam­era-equipped vehi­cle and a com­put­er vision mod­el to detect pot­holes in pub­lic roads. In addi­tion to accu­ra­cy and a favor­able cost-ben­e­fit ratio, cit­i­zens, and road users in par­tic­u­lar, may care about the time between a detect­ed pot­hole and its fix­ing. Or, they may care about the dis­tri­b­u­tion of pot­holes across the city. Fur­ther­more, when road main­te­nance appears to be degrad­ing, this should be tak­en up with depart­ment lead­er­ship, the respon­si­ble alder­per­son, and coun­cil members.

Example 2: EV Charging

Or, let's say the same city government uses an algorithmic system to optimize public electric vehicle (EV) charging stations for green energy use by adapting charging speeds to expected sun and wind. EV drivers may want to know how much energy has been shifted to greener time windows and how this trends over time. Without such visibility into a system's actual goal achievement, citizens' ability to legitimate its use suffers. As I have already mentioned, democratic agency, when enacted via the appeal to authority, depends on access to "normative facts" that underpin policies. And finally, professed system functionality must be demonstrated as well (Raji et al., 2022).
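The figure EV drivers might want, how much energy was shifted to greener time windows, could be derived from charging logs along these lines. The log format and the notion of a "green" window are assumptions for the sake of the sketch.

```python
# Hypothetical charging log: (hour of day, kWh delivered, grid was green).
sessions = [
    (2, 4.0, False),
    (13, 6.0, True),   # midday solar surplus
    (14, 5.0, True),
    (19, 3.0, False),  # evening peak
]

def green_share(log) -> float:
    """Fraction of delivered energy that fell within green time windows."""
    total = sum(kwh for _, kwh, _ in log)
    green = sum(kwh for _, kwh, is_green in log if is_green)
    return green / total if total else 0.0
```

Published per month, a number like this would let drivers and citizens see whether the system's professed goal, greener charging, is actually being achieved.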

DevOps as sociotechnical leverage point for democratizing AI

These brief examples show that the DevOps approach is a potential sociotechnical leverage point. It offers pathways for democratizing AI system design, development, and operations.

DevOps can be adapted to further contestability. It creates new channels between human and machine actors. One of DevOps's essential activities is monitoring (Smith, 2020), which presupposes fallibility, a necessary precondition for contestability. Finally, it requires and provides infrastructure for technical flexibility, so that recovery from error is low-cost and continuous improvement becomes practically feasible.

The mutual shaping of democratic practices & AI

Zooming out further, let's reflect on this panel's overall theme, picking out three elements: legitimation, representation of marginalized groups, and dealing with conflict and contestation after implementation and during use.

Contestability is a lever for demanding justifications from operators, which is a necessary input for legitimation by subjects (Henin & Le Métayer, 2022). Contestability frames different actors' stances as adversarial positions on a political field rather than "equally valid" perspectives (Scott, 2023). And finally, relations, monitoring, and revisability are all ways to give voice to and enable responsiveness to contestations (Genus & Stirling, 2018).

And again, all of these things can be furthered in the post-deployment phase by adopting the DevOps lens.

Bibliography

  • Alfrink, K., Keller, I., Kortuem, G., & Doorn, N. (2022). Contestable AI by Design: Towards a Framework. Minds and Machines, 33(4), 613–639. https://doi.org/10/gqnjcs
  • Alfrink, K., Keller, I., Yurrita Semperena, M., Bulygin, D., Kortuem, G., & Doorn, N. (2024). Envisioning Contestability Loops: Evaluating the Agonistic Arena as a Generative Metaphor for Public AI. She Ji: The Journal of Design, Economics, and Innovation, 10(1), 53–93. https://doi.org/10/gtzwft
  • Geiger, R. S., Tandon, U., Gakhokidze, A., Song, L., & Irani, L. (2023). Making Algorithms Public: Reimagining Auditing From Matters of Fact to Matters of Concern. International Journal of Communication, 18(0), Article 0.
  • Genus, A., & Stirling, A. (2018). Collingridge and the dilemma of control: Towards responsible and accountable innovation. Research Policy, 47(1), 61–69. https://doi.org/10/gcs7sn
  • Gilbert, T. K., Lambert, N., Dean, S., Zick, T., Snoswell, A., & Mehta, S. (2023). Reward Reports for Reinforcement Learning. Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 84–130. https://doi.org/10/gs9cnh
  • Henin, C., & Le Métayer, D. (2022). Beyond explainability: Justifiability and contestability of algorithmic decision systems. AI & SOCIETY, 37(4), 1397–1410. https://doi.org/10/gmg8pf
  • Himmelreich, J. (2022). Against "Democratizing AI." AI & SOCIETY. https://doi.org/10/gr95d5
  • Lampland, M., & Star, S. L. (Eds.). (2009). Standards and Their Stories: How Quantifying, Classifying, and Formalizing Practices Shape Everyday Life (1st edition). Cornell University Press.
  • Peter, F. (2020). The Grounds of Political Legitimacy. Journal of the American Philosophical Association, 6(3), 372–390. https://doi.org/10/grqfhn
  • Raji, I. D., Kumar, I. E., Horowitz, A., & Selbst, A. (2022). The Fallacy of AI Functionality. 2022 ACM Conference on Fairness, Accountability, and Transparency, 959–972. https://doi.org/10/gqfvf5
  • Rubel, A., Castro, C., & Pham, A. K. (2021). Algorithms and autonomy: The ethics of automated decision systems. Cambridge University Press.
  • Scott, D. (2023). Diversifying the Deliberative Turn: Toward an Agonistic RRI. Science, Technology, & Human Values, 48(2), 295–318. https://doi.org/10/gpk2pr
  • Smith, J. D. (2020). Operations anti-patterns, DevOps solutions. Manning Publications.
  • Treveil, M. (2020). Introducing MLOps: How to scale machine learning in the enterprise (First edition). O'Reilly.

PhD update – January 2022

It has been three years since I last wrote an update on my PhD. I guess another post is in order.

My PhD plan was formally green-lit in October 2019. I am now over three years into this thing. There are roughly two more years left on the clock. I update my plans on a rolling basis. By my latest estimation, I should be ready to request a date for my defense in May 2023.

Of course, the pandemic forced me to adjust course. I am lucky enough not to be locked into particular methods or cases that are fundamentally incompatible with our current predicament. But still, I had to change up my methods and reconsider the sequencing of my planned studies.

The conference paper I mentioned in the previous update, using the MX3D bridge to explore smart cities' logic of control and cityness, was rejected by DIS. I performed a rewrite, but then concluded it was kind of a false start. These kinds of things are all in the game, of course.

The second paper I wrote uses the Transparent Charging Station to investigate how notions of transparent AI differ between experts and citizens. It was finally accepted late last year and should see publication in AI & Society soon. It is titled Tensions in Transparent Urban AI: Designing a Smart Electric Vehicle Charge Point. This piece went through multiple major revisions and was previously rejected by DIS and CHI.

A third paper, Contestable AI by Design: Towards a Framework, uses a systematic literature review of AI contestability to construct a preliminary design framework. It is currently under review at a major philosophy of technology journal. Fingers crossed.

And currently, I am working on my fourth publication, tentatively titled Contestable Camera Cars: A Speculative Design Exploration of Public AI Systems Responsive to Value Change. It will be based on empirical work that uses speculative design to develop guidelines and examples for the aforementioned design framework, and to investigate civil servants' views on the pathways towards contestable AI systems in public administration.

Once that one is done, I intend to do one more study, probably looking into monitoring and traceability as potential leverage points for contestability, after which I will turn my attention to completing my thesis.

Aside from my research, in 2021 I was allowed to develop and teach a master elective centered around my PhD topic, titled AI & Society. In it, students are equipped with technical knowledge of AI and tools for thinking about AI ethics. They apply these to a design studio project focused on conceptualizing a responsible AI-enabled service that addresses a social issue the city of Amsterdam might conceivably struggle with. Students also write a brief paper reflecting on and critiquing their group design work. You can see me on Vimeo give a brief video introduction for students who are considering the course. I will be running the course again this year, starting at the end of February.

I also mentored a number of brilliant master graduation students: Xueyao Wang (with Jacky Bourgeois as chair), Jooyoung Park, Loes Sloetjes (both with Roy Bendor as chair), and currently Fabian Geiser (with Euiyoung Kim as chair). Working with students is one of the best parts of being in academia.

All of the above would not have been possible without the great support of my supervisory team: Ianus Keller, Neelke Doorn, and Gerd Kortuem. I should also give special mention to Thijs Turel at AMS Institute's Responsible Sensing Lab, where most of my empirical work is situated.

If you want to dig a little deeper into some of this, I recently set up a website for my PhD project over at contestable.ai.

"No transparency without contestation": a speech for the premiere of the transparent charging station documentary

I delivered the short speech below during the online premiere of the documentary about the transparent charging station on Thursday, March 18, 2021.

I was recently in touch with an international "thought leader" in the field of "tech ethics." He told me he is very grateful that the transparent charging station exists, because it is such a good example of how design can contribute to fair technology.

That is, of course, great to hear. And it fits a broader trend in the industry toward making algorithms transparent and explainable. By now, legislation even mandates explainability in some cases.

In the documentary, several people (myself included) explain why it is important for urban algorithms to be transparent. Thijs neatly names two reasons: on the one hand, the collective interest of enabling democratic control over the development of urban algorithms; on the other, the individual interest of being able to seek redress when a system makes a decision you disagree with, for whatever reason.

And indeed, in both cases (collective control and individual redress), transparency is a precondition. I think this project has solved many of the design and engineering problems that this involves. At the same time, a new question looms on the horizon: once we understand how a smart system works, and we disagree with it, what then? How do you actually gain influence over the workings of the system?

I think we will have to shift our focus from transparency to what I call "tegenspraak," or in proper English, "contestability."

Designing for contestability means thinking about the means people need in order to exercise their right to human intervention. Yes, this means providing information about the how and why of individual decisions. Transparency, in other words. But it also means setting up new channels and processes through which people can submit requests for a decision to be reviewed. We will have to think about how to assess such requests, and how to ensure that the smart system in question "learns" from the signals we pick up from society in this way.

You could say that designing for transparency is one-way traffic. Information flows from the developing party to the end user. Designing for contestability is about creating a dialogue between developers and citizens.

I say citizens because it is not only classic end users who are affected by smart systems. All sorts of other groups are also, often indirectly, affected.

That is a new design challenge as well. How do you design not only for the end user (in the case of the transparent charging station, the EV driver) but also for so-called indirect stakeholders: for example, residents of streets where charging stations are installed who do not drive an EV, or do not even own a car, but still have a stake in how sidewalks and streets are laid out?

This widening of our field of view means that, when designing for contestability, we can and even must go a step further than enabling redress for individual decisions.

After all, designing for contestability around the individual decisions of an already deployed system is necessarily post hoc and reactive, and is limited to a single group of stakeholders.

As Thijs also more or less says in the documentary, smart urban infrastructure affects all of our lives, and you could argue that the design and engineering choices made in its development are intrinsically political choices as well.

That is why I think we cannot avoid organizing the process underlying these systems themselves in a way that leaves room for contestation. In my ideal world, the development of the next generation of smart charging stations is therefore participatory, pluriform, and inclusive, just as our democracy itself strives to be.

Exactly how we should shape these "contestable" algorithms, how designing for contestability should work, is an open question. But a few years ago, no one knew what a transparent charging station should look like either, and we managed to pull that off.

Update (2021-03-31 16:43): A recording of the entire event is now available. The speech above starts at around 25:14.

"Contestable Infrastructures" at Beyond Smart Cities Today

I’ll be at Beyond Smart Cities Today the next cou­ple of days (18–19 Sep­tem­ber). Below is the abstract I sub­mit­ted, plus a bib­li­og­ra­phy of some of the stuff that went into my think­ing for this and relat­ed mat­ters that I won’t have the time to get into.

In the actu­al­ly exist­ing smart city, algo­rith­mic sys­tems are increas­ing­ly used for the pur­pos­es of auto­mat­ed deci­sion-mak­ing, includ­ing as part of pub­lic infra­struc­ture. Algo­rith­mic sys­tems raise a range of eth­i­cal con­cerns, many of which stem from their opac­i­ty. As a result, pre­scrip­tions for improv­ing the account­abil­i­ty, trust­wor­thi­ness and legit­i­ma­cy of algo­rith­mic sys­tems are often based on a trans­paren­cy ide­al. The think­ing goes that if the func­tion­ing and own­er­ship of an algo­rith­mic sys­tem is made per­ceiv­able, peo­ple under­stand them and are in turn able to super­vise them. How­ev­er, there are lim­its to this approach. Algo­rith­mic sys­tems are com­plex and ever-chang­ing socio-tech­ni­cal assem­blages. Ren­der­ing them vis­i­ble is not a straight­for­ward design and engi­neer­ing task. Fur­ther­more such trans­paren­cy does not nec­es­sar­i­ly lead to under­stand­ing or, cru­cial­ly, the abil­i­ty to act on this under­stand­ing. We believe legit­i­mate smart pub­lic infra­struc­ture needs to include the pos­si­bil­i­ty for sub­jects to artic­u­late objec­tions to pro­ce­dures and out­comes. The result­ing “con­testable infra­struc­ture” would cre­ate spaces that open up the pos­si­bil­i­ty for express­ing con­flict­ing views on the smart city. Our project is to explore the design impli­ca­tions of this line of rea­son­ing for the phys­i­cal assets that cit­i­zens encounter in the city. Because after all, these are the per­ceiv­able ele­ments of the larg­er infra­struc­tur­al sys­tems that recede from view.

  • Alkhatib, A., & Bernstein, M. (2019). Street-Level Algorithms. 1–13. https://doi.org/10.1145/3290605.3300760
  • Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media and Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645
  • Centivany, A., & Glushko, B. (2016). "Popcorn tastes good": Participatory policymaking and Reddit's "AMAgeddon." Conference on Human Factors in Computing Systems - Proceedings, 1126–1137. https://doi.org/10.1145/2858036.2858516
  • Crawford, K. (2016). Can an Algorithm be Agonistic? Ten Scenes from Life in Calculated Publics. Science Technology and Human Values, 41(1), 77–92. https://doi.org/10.1177/0162243915589635
  • DiSalvo, C. (2010). Design, Democracy and Agonistic Pluralism. Proceedings of the Design Research Society Conference, 366–371.
  • Hildebrandt, M. (2017). Privacy As Protection of the Incomputable Self: Agonistic Machine Learning. SSRN Electronic Journal, 1–33. https://doi.org/10.2139/ssrn.3081776
  • Jackson, S. J., Gillespie, T., & Payette, S. (2014). The Policy Knot: Re-integrating Policy, Practice and Design. CSCW Studies of Social Computing, 588–602. https://doi.org/10.1145/2531602.2531674
  • Jewell, M. (2018). Contesting the decision: living in (and living with) the smart city. International Review of Law, Computers and Technology. https://doi.org/10.1080/13600869.2018.1457000
  • Lindblom, L. (2019). Consent, Contestability, and Unions. Business Ethics Quarterly. https://doi.org/10.1017/beq.2018.25
  • Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 205395171667967. https://doi.org/10.1177/2053951716679679
  • Van de Poel, I. (2016). An ethical framework for evaluating experimental technology. Science and Engineering Ethics, 22(3), 667–686. https://doi.org/10.1007/s11948-015-9724-3

"Contestable Infrastructures: Designing for Dissent in Smart Public Objects" at We Make the City 2019

Thijs Turèl of AMS Institute and I presented a version of the talk below at the Cities for Digital Rights conference on June 19 in Amsterdam, during the We Make the City festival. The talk is an attempt to articulate some of the ideas we have both been developing for some time around contestability in smart public infrastructure. As always with this sort of thing, it is intended as a conversation piece, so I welcome any thoughts you may have.


The basic message of the talk is that when we start to do automated decision-making in public infrastructure using algorithmic systems, we need to design for the inevitable disagreements that may arise. Furthermore, we suggest there is an opportunity to focus on designing for such disagreements in the physical objects that people encounter in urban space as they make use of infrastructure.

We set the scene by showing a number of examples of smart public infrastructure: a cyclist crossing that adapts to weather conditions, where cyclists get a green light more frequently when it rains; a pedestrian crossing in Tilburg where the elderly can use their mobile phone to get more time to cross; and finally, the case we are involved with ourselves, smart EV charging in the city of Amsterdam, about which more later.

Image credits: Vattenfall, Fietsfan010, De Nieuwe Draai

We identify three trends in smart public infrastructure: (1) where previously algorithms were used to inform policy, now they are employed to perform automated decision-making on an individual case basis, which raises the stakes; (2) distributed ownership of these systems as the result of public-private partnerships and other complex collaboration schemes leads to unclear responsibility; and finally (3) the increasing use of machine learning leads to opaque decision-making.

These trends, and algorithmic systems more generally, raise a number of ethical concerns. They include but are not limited to: the use of inductive correlations (for example in the case of machine learning) leads to unjustified results; lack of access to and comprehension of a system's inner workings produces opacity, which in turn leads to a lack of trust in the systems themselves and the organisations that use them; bias is introduced by a number of factors, including development team prejudices, technical flaws, bad data and unforeseen interactions with other systems; and finally the use of profiling, nudging and personalisation leads to diminished human agency. (We highly recommend the article by Mittelstadt et al. for a comprehensive overview of ethical concerns raised by algorithms.)

So for us, the question that emerges from all this is: how do we organise the supervision of smart public infrastructure in a democratic and lawful way?

There are a number of existing approaches to this question. These include legal and regulatory ones (e.g. the right to explanation in the GDPR); auditing (e.g. KPMG's "AI in Control" method, BKZ's transparantielab); procurement (e.g. open source clauses); insourcing (e.g. GOV.UK); and design and engineering (e.g. our own work on the transparent charging station).

We feel there are two important limitations to these existing approaches. The first is a focus on professionals and the second is a focus on prediction. We'll discuss each in turn.

Image credits: Cities Today

First of all, many solutions target a professional class: accountants, civil servants, supervisory boards, technologists, designers and so on. But we feel there is a role for the citizen as well, because the supervision of these systems is simply too important to be left to a privileged few. This role would include identifying wrongdoing and suggesting alternatives.

There is a tension here: from the perspective of the public sector, one should only ask citizens for their opinion when one has the intention and the resources to actually act on their suggestions. It can also be a challenge to identify legitimate concerns in the flood of feedback that can sometimes occur. From our point of view, though, such concerns should not be used as an excuse not to engage the public. If citizen participation is considered necessary, the focus should be on freeing up resources and setting up structures that make it feasible and effective.

The second limitation is prediction. This is best illustrated with the Collingridge dilemma: in the early phases of a new technology, when the technology and its social embedding are still malleable, there is uncertainty about its social effects. In later phases, social effects may be clear, but by then the technology has often become so well entrenched in society that it is hard to overcome negative social effects. (This summary is taken from an excellent van de Poel article on the ethics of experimental technology.)

Many solutions disregard the Collingridge dilemma and try to predict and prevent adverse effects of new systems at design-time. One example of this approach would be value-sensitive design. Our focus instead is on use-time. Considering the fact that smart public infrastructure tends to be developed on an ongoing basis, the question becomes how to make citizens a partner in this process. And even more specifically, we are interested in how this can be made part of the design of the "touchpoints" people actually encounter in the streets, as well as their backstage processes.

Why do we focus on these physical objects? Because this is where people actually meet the infrastructural systems, large parts of which recede from view. These are the places where they become aware of their presence. They are the proverbial tip of the iceberg.

Image credits: Sagar Dani

The use of automated decision-making in infrastructure reduces people's agency. For this reason, resources for agency need to be designed back into these systems. Frequently the answer to this problem is premised on a transparency ideal. Transparency may be a prerequisite for agency, but it is not sufficient. It may help you become aware of what is going on, but it will not necessarily help you to act on that knowledge. This is why we propose a shift from transparency to contestability. (We can highly recommend Ananny and Crawford's article for more on why transparency is insufficient.)

To clar­i­fy what we mean by con­testa­bil­i­ty, con­sid­er the fol­low­ing three exam­ples: When you see the lights on your router blink in the mid­dle of the night when no-one in your house­hold is using the inter­net you can act on this knowl­edge by yank­ing out the device’s pow­er cord. You may nev­er use the emer­gency brake in a train but its pres­ence does give you a sense of con­trol. And final­ly, the cash reg­is­ter receipt pro­vides you with a view into both the pro­ce­dure and the out­come of the super­mar­ket check­out pro­ce­dure and it offers a resource with which you can dis­pute them if some­thing appears to be wrong.

Image credits: Aangiftedoen; source unknown for the remainder

None of these examples is a perfect illustration of contestability, but they hint at something more than transparency, or perhaps even something wholly separate from it. We have been investigating what their equivalents would be in the context of smart public infrastructure.

To illustrate this point further, let us come back to the smart EV charging project we mentioned earlier. In Amsterdam, public EV charging stations are becoming "smart," which in this case means they automatically adapt the speed of charging to a number of factors, including grid capacity and the availability of solar energy. Additional factors can be added in the future; one under consideration is giving priority to shared cars over privately owned cars. We are involved in an ongoing effort to consider how such charging stations can be redesigned so that people understand what is going on behind the scenes and can act on this understanding. The motivation is that, if not designed carefully, the opacity of smart EV charging infrastructure may be detrimental to the social acceptance of the technology. (A first outcome of these efforts is the Transparent Charging Station designed by The Incredible Machine. A follow-up project is ongoing.)
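As a purely hypothetical sketch of the adaptive behaviour described above (the factor names, weights, and speed bounds are invented for illustration and are not Amsterdam's actual charging policy), such a station's decision logic might look something like this:

```python
def charging_speed_kw(grid_headroom: float, solar_share: float,
                      is_shared_car: bool,
                      min_kw: float = 4.0, max_kw: float = 22.0) -> float:
    """Hypothetical policy: scale charging speed by grid headroom and
    solar availability, with an optional boost for shared cars.

    grid_headroom and solar_share are fractions between 0 and 1.
    """
    assert 0.0 <= grid_headroom <= 1.0 and 0.0 <= solar_share <= 1.0
    base = min_kw + (max_kw - min_kw) * grid_headroom  # throttle when the grid is tight
    base *= 0.5 + 0.5 * solar_share                    # prefer charging when solar is available
    if is_shared_car:
        base *= 1.25                                   # invented priority factor for shared cars
    return max(min_kw, min(max_kw, base))              # clamp to the station's speed range
```

The point of writing the policy out like this is that every factor and weight is a site of potential disagreement, and thus a candidate for contestability mechanisms.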

Image credits: The Incredible Machine, Kars Alfrink

We have identified a number of different ways in which people may object to smart EV charging. They are listed in the table below. These types of objections can lead us to feature requirements for making the system contestable.

Because the list is preliminary, we asked the audience whether they could imagine additional objections, whether those examples represented new categories, and whether they would require additional features for people to be able to act on them. One particularly interesting suggestion that emerged was to give local communities control over the policies enacted by the charge points in their vicinity. The implications of this are worth considering further.
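To make the idea of deriving feature requirements from objection types concrete, here is a minimal sketch; the categories and features below are invented for illustration and are not the preliminary table from the presentation.

```python
# Hypothetical mapping from objection types to contestability features.
OBJECTION_FEATURES = {
    "disagrees with data collected": ["view collected data", "correct or delete data"],
    "disagrees with a charging decision": ["explanation of the decision", "request a review"],
    "disagrees with the policy itself": ["policy documentation", "local community vote"],
}

def features_for(objection: str) -> list[str]:
    """Look up which features would let someone act on a given objection."""
    return OBJECTION_FEATURES.get(objection, ["general feedback channel"])
```

Each row of such a table translates a reason to object into a concrete resource for acting on it, which is what distinguishes contestability from transparency alone.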

And that’s where we left it. So to summarise: 

  1. Algorithmic systems are becoming part of public infrastructure.
  2. Smart public infrastructure raises new ethical concerns.
  3. Many solutions to ethical concerns are premised on a transparency ideal, but do not address the issue of diminished agency.
  4. There are different categories of objections people may have to an algorithmic system's workings.
  5. Making a system contestable means creating resources for people to object, opening up a space for the exploration of meaningful alternatives to its current implementation.

PhD update – January 2019

Thought I’d post a quick update on my PhD. Since my pre­vi­ous post almost five months have passed. I’ve been devel­op­ing my plan fur­ther, for which you’ll find an updat­ed descrip­tion below. I’ve also put togeth­er my very first con­fer­ence paper, co-authored with my super­vi­sor Gerd Kortuem. It’s a case study of the MX3D smart bridge for Design­ing Inter­ac­tive Sys­tems 2019. We’ll see if it gets accept­ed. But in any case, writ­ing some­thing has been huge­ly edu­ca­tion­al. And once I final­ly fig­ured out what the hell I was doing, it was sort of fun as well. Still kind of a trip to be paid to do this kind of work. Look­ing ahead, I am set­ting goals for this year and the near­er term as well. It’s all very rough still but it will like­ly involve research through design as a method and maybe object ori­ent­ed ontol­ogy as a the­o­ry. All of which will serve to oper­a­tionalise and eval­u­ate the use­ful­ness of the “con­testa­bil­i­ty” con­cept in the con­text of smart city infra­struc­ture. To be continued—and I wel­come all your thoughts!


Designing Smart City Infrastructure for Contestability

The use of information technology in cities increasingly subjects citizens to automated data collection, algorithmic decision making, and remote control of physical space. Citizens tend to find these systems and their outcomes hard to understand and predict [1]. Moreover, the opacity of smart urban systems precludes full citizenship and obstructs people's 'right to the city' [2].

A commonly proposed solution is to improve citizens' understanding of systems by making them more open and transparent [3]. For example, the GDPR prescribes people's right to an explanation of automated decisions they have been subjected to. For another example, the city of Amsterdam offers a publicly accessible register of urban sensors and is committed to opening up all the data it collects.

However, it is not clear that openness and transparency in and of themselves will yield the desired improvements in the understanding and governing of smart city infrastructures [4]. We would like to suggest that for a system to be perceived as accountable, people must be able to contest its workings: from the data it collects, to the decisions it makes, all the way through to how those decisions are acted on in the world.

The leading research question for this PhD is therefore how to design smart city infrastructure (urban systems augmented with internet-connected sensing, processing, and actuating capabilities) for contestability [5]: the extent to which a system supports the ability of those subjected to it to oppose its workings as wrong or mistaken.

References

  1. Burrell, Jenna. "How the machine 'thinks': Understanding opacity in machine learning algorithms." Big Data & Society 3.1 (2016): 2053951715622512.
  2. Kitchin, Rob, Paolo Cardullo, and Cesare Di Feliciantonio. "Citizenship, Justice and the Right to the Smart City." (2018).
  3. Abdul, Ashraf, et al. "Trends and trajectories for explainable, accountable and intelligible systems: An HCI research agenda." Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 2018.
  4. Ananny, Mike, and Kate Crawford. "Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability." New Media & Society 20.3 (2018): 973–989.
  5. Hirsch, Tad, et al. "Designing contestability: Interaction design, machine learning, and mental health." Proceedings of the 2017 Conference on Designing Interactive Systems. ACM, 2017.