PhD update – September 2023

I'm back again with another PhD update. Five years after I started in Delft, we are nearing the finish line on this whole thing. But before we look ahead, let's review notable events since the previous update in March 2023.

Occurrences

  1. I presented our framework, Contestable AI by Design, at the annual NWO ICT Open conference, which, for the first time, had an entire track dedicated to HCI research in the Netherlands. It was an excellent opportunity to meet fellow researchers from other Dutch institutions. The slides are available as a PDF at contestable.ai.
  2. I visited Hamburg to present our paper, Contestable Camera Cars, at CHI 2023. We also received a Best Paper award, which I am, of course, very pleased with. The conference was equal parts inspiring and overwhelming. The best part of it was meeting researchers who share my interests in person.
  3. Also at CHI, I was interviewed about my research by Mike Green for his podcast Understanding Users. You can listen to it here. It is always good practice to try to lay out some of my arguments spontaneously, live.
  4. In June, I joined a panel at a BOLD Cities "talk show" to discuss the design of smart city systems for contestability. It was quite an honor to be on the same panel as Eefje Cuppen, director of the Rathenau Institute. This event was great because we had technological, design, political, and policy perspectives. Several guests argued for the need to reinvigorate representative democracy and give a more prominent role to elected politicians in setting technology policy. A report is available here.
  5. In August, the BRIDE project had its closing event. This is the NWO research project that partially funded my PhD. The event was an excellent opportunity to reflect on our work together over the past years. I took the opportunity to revisit the work of Saskia Sassen on cityness and to think through some of the implications of my work on contestability for the field of smart urbanism. The slides are available at contestable.ai.
  6. Finally, last week, a short opinion piece that lays out the argument for contestable AI in what I hope is a reasonably accessible manner was published on the TU Delft website.

Eefje Cuppen and me being interviewed by Inge Janse at the BOLD Cities talk show on June 22, 2023. Photo by Tiffany Konings.

Envisioning Contestability Loops

Throughout all this, I have been diligently chipping away at my final publication, "Envisioning Contestability Loops: Evaluating the Agonistic Arena as a Generative Metaphor for Public AI." I had a great time collaborating with Leon de Korte on an infographic of part of my design framework.

We took this infographic on a tour of Dutch interaction design agencies and conducted concept design workshops. I enjoyed returning to practice and sharing the work of the past couple of years with peers in practice. My friends at Eend wrote a nice blog post about it.

The analysis of the outcomes of these workshops forms the basis for the article, in which I explore the degree to which the guiding concept (generative metaphor) behind contestable AI, which I have dubbed the "Agonistic Arena," is a productive one for design practitioners. Spoiler: it is, but competing metaphors are also at play in the public AI design space.

The manuscript is close to completion. As usual, putting something like this together is a heavy but gratifying lift. I look forward to sharing the results and the underlying infographic with the broader world.

Are we there yet?

Looking ahead, I will be on a panel alongside the great Julian Bleecker and a host of others at the annual TU Delft Design & AI symposium in October.

But aside from that, I will keep my head down and focus on completing my thesis. The aim is to hand it in by the end of November. So, two more months on the clock. Will I make it? Let's find out!

PhD update – March 2023

Hello again, and welcome to another update on my PhD research progress. I will briefly run down the things that happened since the last update, what I am currently working on, and some notable events on the horizon.

Recent happenings

CHI 2023 paper

Stills from the Contestable Camera Cars concept video.

First off, the big news is that the paper I submitted to CHI 2023 was accepted. This is a big deal for me because HCI is the core field I aim to contribute to, and CHI is its flagship conference.

Here’s the full citation:

Alfrink, K., Keller, I., Doorn, N., & Kortuem, G. (2023). Contestable Camera Cars: A Speculative Design Exploration of Public AI That Is Open and Responsive to Dispute. https://doi.org/10/jwrx

I have had several papers rejected in the past (CHI is notoriously hard to get accepted at), so I feel vindicated. The paper is already available as an arXiv preprint, as is the concept video that forms the core of the study I report on (many thanks to my pal Simon for collaborating on this with me). CHI 2023 happens in late April. I will be riding a train over there to present the paper in person. Very much looking forward to that.

Contestable Camera Cars concept video.

Responsible Sensing Lab anniversary event

I briefly presented my research at the Responsible Sensing Lab anniversary event on February 16. The whole event was quite enjoyable, and I got some encouraging responses to my ideas afterward, which is always nice. The event was recorded in full. My appearance starts around the 1:47:00 mark.

It me. (Credit: Responsible Sensing Lab.)
Video of my contribution. (Pakhuis de Zwijger / Responsible Sensing Lab.)

Tweeting, tooting, blogging

I have been getting back into the habit of tweeting, tooting, and even the occasional spot of blogging on this website again. As the end of my PhD nears, I figured it might be worth engaging more actively with "the discourse," as they say. I mostly share stuff I read that is related to my research and that I find interesting. Although, of course, posts related to my twin sons' music taste and struggles with university bureaucracy always win out in the end. (Yes, I am aware my timing is terrible, seeing as we have now basically concluded that social media was a bad idea after all.)

Current activities

Envisioning Contestability Loops

At the moment, the majority of my time is taken up by conducting a final study (working title: "Envisioning Contestability Loops"). I am excited about this one because I get to once again collaborate with a professional designer on an artifact, in this case a visual explanation of my framework, and use the result as a research instrument to dig into the strengths and weaknesses of contestability as a generative metaphor for the design of public AI.

Thesis

In parallel, I have begun to put together my thesis. It is paper-based, but of course, the introductory and concluding chapters still require some thought.

The aim is to have both the final article and thesis finished by the end of summer and then begin the arduous process of getting a date for my defense, assembling a committee, etc.

Agonistic Machine Vision Development

In the meantime, I am also mentoring Laura, another brilliant master graduation student. Her project, titled "Agonistic Machine Vision Development," builds on my previous research. In particular, it takes up one of the challenges I identified in Contestable Camera Cars: the differential in information position between citizens and experts when they collaborate in participatory machine learning sessions. It's very gratifying to see others do design work that pushes these ideas further.

Upcoming events

So yeah, like I already mentioned, I will be speaking at CHI 2023, which takes place on 23–28 April in Hamburg. The schedule says I am presenting on April 25 as part of the session on "AI Trust, Transparency and Fairness," which includes some excellent-looking contributions.

And before that, I will be at ICT.OPEN in Utrecht on April 20 to present briefly on the Contestable AI by Design framework as part of the CHI NL track. It should be fun.

That's it for this update. Maybe, by the time the next one rolls around, I will be able to share a date for my defense. But let's not jinx it.

PhD update – September 2022

Seven months since the last update. Much better than the gap of three years between the previous two. These past months I feel like I have begun to reap the rewards of the grunt work of the last couple of years. Two papers finally saw the light of day, as well as a course syllabus. Read on for some more details.

Things that happened:

First, a pair of talks. In February, I presented on "Contestable AI & Civic Co-Design" as part of a panel chaired by Roy Bendor at Reinventing the City. A PDF of my slides is available on the contestable.ai website, here. In March, I presented at the AiTech Agora. The title of that talk was "Meaningful Human Control Through Contestability by Design" and the slides are available here.

In February, a short interview was published by BOLD Cities, a smart city research center I am loosely affiliated with.

Then, in March, came a big moment for me, with the publication of my first journal article in AI & Society. Here's the abstract and reference. It's available open access.

The increasing use of artificial intelligence (AI) by public actors has led to a push for more transparency. Previous research has conceptualized AI transparency as knowledge that empowers citizens and experts to make informed choices about the use and governance of AI. Conversely, in this paper, we critically examine if transparency-as-knowledge is an appropriate concept for a public realm where private interests intersect with democratic concerns. We conduct a practice-based design research study in which we prototype and evaluate a transparent smart electric vehicle charge point, and investigate experts' and citizens' understanding of AI transparency. We find that citizens experience transparency as burdensome; experts hope transparency ensures acceptance, while citizens are mostly indifferent to AI; and with absent means of control, citizens question transparency's relevance. The tensions we identify suggest transparency cannot be reduced to a product feature, but should be seen as a mediator of debate between experts and citizens.

Alfrink, Kars, Ianus Keller, Neelke Doorn, and Gerd Kortuem. "Tensions in Transparent Urban AI: Designing a Smart Electric Vehicle Charge Point." AI & SOCIETY, March 31, 2022. https://doi.org/10/gpszwh.

In April, the Responsible Sensing Lab published a report on "Responsible Drones," to which I contributed a little as a participant in the workshops that led up to it.

A second big milestone for me was making public the syllabus for the industrial design engineering master elective course "AI & Society" (no relation to the journal), which I have been developing under the guidance of my supervisor Gerd Kortuem over the past couple of years. The syllabus contains a reading list, as well as many self-guided design exercises. Here's a short description:

Artificial Intelligence (AI) is increasingly used by a variety of organizations in ways that impact society at scale. This 6 EC master elective course aims to equip students with tools and methods for the responsible design of public AI. During seven weeks, students attend a full-day session of lectures and workshops. Students collaborate on a group design project throughout. At the end, students individually deliver a short paper.

ID5417 Artificial Intelligence and Society

The third big milestone was the publication of my second journal article in Minds & Machines. It is the theoretical cornerstone of my thesis, a provisional framework for designing contestability into AI systems. Abstract and reference follow. This one is also open access.

As the use of AI systems continues to increase, so do concerns over their lack of fairness, legitimacy and accountability. Such harmful automated decision-making can be guarded against by ensuring AI systems are contestable by design: responsive to human intervention throughout the system lifecycle. Contestable AI by design is a small but growing field of research. However, most available knowledge requires a significant amount of translation to be applicable in practice. A proven way of conveying intermediate-level, generative design knowledge is in the form of frameworks. In this article we use qualitative-interpretative methods and visual mapping techniques to extract from the literature sociotechnical features and practices that contribute to contestable AI, and synthesize these into a design framework.

Alfrink, Kars, Ianus Keller, Gerd Kortuem, and Neelke Doorn. "Contestable AI by Design: Towards a Framework." Minds and Machines, August 13, 2022. https://doi.org/10/gqnjcs.

Around the same time in August, Fabian Geiser, whom I had been mentoring for some time, graduated with a fascinating master thesis and project with the title "Reimagining the smart allocation of road space in Amsterdam for fairness".

And finally, as these things were going on, I have been quietly chipping away at a third paper that applies the contestable AI by design framework to the phenomenon of camera cars used by municipalities. My aim was to create an example of what I mean by contestable AI, and to use the example to interview civil servants about their views on the challenges facing the implementation of contestability in the public AI systems they are involved with. I've submitted the manuscript, titled "Contestable Camera Cars: A speculative design exploration of public AI that is open and responsive to dispute," to CHI, and will hear back in early November. Fingers crossed for that one.

Looking ahead:

So what's next? Well, I have a little under a year left on my PhD contract, so I should really begin wrapping up. I am considering a final publication, but have not settled on any topic in particular yet. Current interests include AI system monitoring, visual methods, and more besides. Once that final paper is in the can, I will turn my attention to putting together the thesis itself, which is paper-based, so it mostly requires writing an overall introduction and conclusion to bookend the included publications. Should be a piece of cake, right?

And after the PhD? I am not sure yet, but I hope to remain involved in research and teaching, while perhaps also getting a bit more back into design practice. If at all possible, hopefully in the domain of public-sector applications of AI.

That's it for this update. I will be back at some point when there is more news to share.

PhD update – January 2022

It has been three years since I last wrote an update on my PhD. I guess another post is in order.

My PhD plan was formally green-lit in October 2019. I am now over three years into this thing. There are roughly two more years left on the clock. I update my plans on a rolling basis. By my latest estimation, I should be ready to request a date for my defense in May 2023.

Of course, the pandemic forced me to adjust course. I am lucky enough not to be locked into particular methods or cases that are fundamentally incompatible with our current predicament. But still, I had to change up my methods and reconsider the sequencing of my planned studies.

The conference paper I mentioned in the previous update, using the MX3D bridge to explore smart cities' logic of control and cityness, was rejected by DIS. I performed a rewrite, but then came to the conclusion it was kind of a false start. These kinds of things are all in the game, of course.

The second paper I wrote uses the Transparent Charging Station to investigate how notions of transparent AI differ between experts and citizens. It was finally accepted late last year and should see publication in AI & Society soon. It is titled Tensions in Transparent Urban AI: Designing a Smart Electric Vehicle Charge Point. This piece went through multiple major revisions and was previously rejected by DIS and CHI.

A third paper, Contestable AI by Design: Towards a Framework, which uses a systematic literature review of AI contestability to construct a preliminary design framework, is currently under review at a major philosophy of technology journal. Fingers crossed.

And currently, I am working on my fourth publication, tentatively titled Contestable Camera Cars: A Speculative Design Exploration of Public AI Systems Responsive to Value Change. It will be based on empirical work that uses speculative design to develop guidelines and examples for the aforementioned design framework, and to investigate civil servants' views on the pathways towards contestable AI systems in public administration.

Once that one is done, I intend to do one more study, probably looking into monitoring and traceability as potential leverage points for contestability, after which I will turn my attention to completing my thesis.

Aside from my research, in 2021 I was allowed to develop and teach a master elective centered around my PhD topic, titled AI & Society. In it, students are equipped with technical knowledge of AI and tools for thinking about AI ethics. They apply these to a design studio project focused on conceptualizing a responsible AI-enabled service that addresses a social issue the city of Amsterdam might conceivably struggle with. Students also write a brief paper reflecting on and critiquing their group design work. You can see me on Vimeo do a brief video introduction for students who are considering the course. I will be running the course again this year, starting at the end of February.

I also mentored a number of brilliant master graduation students: Xueyao Wang (with Jacky Bourgeois as chair), Jooyoung Park and Loes Sloetjes (both with Roy Bendor as chair), and currently Fabian Geiser (with Euiyoung Kim as chair). Working with students is one of the best parts of being in academia.

All of the above would not have been possible without the great support from my supervisory team: Ianus Keller, Neelke Doorn and Gerd Kortuem. I should also give special mention to Thijs Turèl at AMS Institute's Responsible Sensing Lab, where most of my empirical work is situated.

If you want to dig a little deeper into some of this, I recently set up a website for my PhD project over at contestable.ai.

"No transparency without contestation": address at the premiere of the transparent charging station documentary

I delivered the short address below during the online premiere of the documentary about the transparent charging station (transparante laadpaal) on Thursday, March 18, 2021.

I was recently in touch with an international "thought leader" in the field of "tech ethics". He told me he is very grateful the transparent charging station exists, because it is such a good example of how design can contribute to fair technology.

That is of course wonderful to hear. And it fits a broader trend in the industry aimed at making algorithms transparent and explainable. By now, legislation even mandates explainability (in some cases).

In the documentary, you hear several people (myself included) explain why it is important for urban algorithms to be transparent. Thijs puts it nicely, naming two reasons: on the one hand, the collective interest in enabling democratic oversight of the development of urban algorithms; on the other, the individual interest in being able to seek redress when a system makes a decision you (for whatever reason) disagree with.

And indeed, in both cases (collective oversight and individual redress), transparency is a precondition. I think that with this project we have solved a lot of the design and engineering problems this involves. At the same time, a new question looms on the horizon: if we understand how a smart system works, and we do not agree with it, what then? How do you then actually gain influence over the workings of the system?

I think we will have to shift our focus from transparency to what I call "tegenspraak", or in good English, contestability.

Designing for contestability means thinking about the means people need to exercise their right to human intervention. Yes, this means providing information about the how and why of individual decisions. Transparency, in other words. But it also means setting up new channels and processes through which people can submit requests to have a decision reviewed. We will have to think about how we assess such requests, and how we make sure the smart system in question "learns" from the signals we pick up from society in this way.

You could say that designing for transparency is one-way traffic. Information flows from the developing party to the end user. Designing for contestability is about creating a dialogue between developers and citizens.

I say citizens because it is not only classic end users who are affected by smart systems. All sorts of other groups are also influenced, often indirectly.

That, too, is a new design challenge. How do you design not only for the end user (in the case of the transparent charging station, the EV driver) but also for so-called indirect stakeholders: for example, residents of streets where charging stations are placed who do not drive an EV, or even a car, but who have a stake all the same in how sidewalks and streets are laid out?

This widening of our field of view means that, when designing for contestability, we can and even must go a step further than enabling redress for individual decisions.

After all, designing for contestability around individual decisions of an already deployed system is necessarily post hoc and reactive, and confines itself to a single group of stakeholders.

As Thijs also more or less points out in the documentary, smart urban infrastructure affects the lives of us all, and you could say that the design and engineering choices made during its development are intrinsically political choices as well.

That is why I think we cannot avoid organizing the process underlying these systems themselves in such a way that there is room for contestation. In my ideal world, the development of the next generation of smart charging stations is therefore participatory, pluriform, and inclusive, just as our democracy itself strives to be.

Exactly how we should shape these kinds of "contestable" algorithms, how designing for contestability should work in practice, is an open question. But a few years ago nobody knew what a transparent charging station should look like either, and we managed to pull that off.

Update (2021-03-31 16:43): A recording of the entire event is now available. The address above starts around the 25:14 mark.

"Contestable Infrastructures" at Beyond Smart Cities Today

I'll be at Beyond Smart Cities Today the next couple of days (18–19 September). Below is the abstract I submitted, plus a bibliography of some of the stuff that went into my thinking for this and related matters that I won't have the time to get into.

In the actually existing smart city, algorithmic systems are increasingly used for the purposes of automated decision-making, including as part of public infrastructure. Algorithmic systems raise a range of ethical concerns, many of which stem from their opacity. As a result, prescriptions for improving the accountability, trustworthiness and legitimacy of algorithmic systems are often based on a transparency ideal. The thinking goes that if the functioning and ownership of an algorithmic system are made perceivable, people understand it and are in turn able to supervise it. However, there are limits to this approach. Algorithmic systems are complex and ever-changing socio-technical assemblages. Rendering them visible is not a straightforward design and engineering task. Furthermore, such transparency does not necessarily lead to understanding or, crucially, the ability to act on this understanding. We believe legitimate smart public infrastructure needs to include the possibility for subjects to articulate objections to procedures and outcomes. The resulting "contestable infrastructure" would create spaces that open up the possibility for expressing conflicting views on the smart city. Our project is to explore the design implications of this line of reasoning for the physical assets that citizens encounter in the city. Because after all, these are the perceivable elements of the larger infrastructural systems that recede from view.

  • Alkhatib, A., & Bernstein, M. (2019). Street-Level Algorithms. 1–13. https://doi.org/10.1145/3290605.3300760
  • Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media and Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645
  • Centivany, A., & Glushko, B. (2016). "Popcorn tastes good": Participatory policymaking and Reddit's "AMAgeddon." Conference on Human Factors in Computing Systems - Proceedings, 1126–1137. https://doi.org/10.1145/2858036.2858516
  • Crawford, K. (2016). Can an Algorithm be Agonistic? Ten Scenes from Life in Calculated Publics. Science Technology and Human Values, 41(1), 77–92. https://doi.org/10.1177/0162243915589635
  • DiSalvo, C. (2010). Design, Democracy and Agonistic Pluralism. Proceedings of the Design Research Society Conference, 366–371.
  • Hildebrandt, M. (2017). Privacy As Protection of the Incomputable Self: Agonistic Machine Learning. SSRN Electronic Journal, 1–33. https://doi.org/10.2139/ssrn.3081776
  • Jackson, S. J., Gillespie, T., & Payette, S. (2014). The Policy Knot: Re-integrating Policy, Practice and Design. CSCW Studies of Social Computing, 588–602. https://doi.org/10.1145/2531602.2531674
  • Jewell, M. (2018). Contesting the decision: living in (and living with) the smart city. International Review of Law, Computers and Technology. https://doi.org/10.1080/13600869.2018.1457000
  • Lindblom, L. (2019). Consent, Contestability, and Unions. Business Ethics Quarterly. https://doi.org/10.1017/beq.2018.25
  • Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 205395171667967. https://doi.org/10.1177/2053951716679679
  • Van de Poel, I. (2016). An ethical framework for evaluating experimental technology. Science and Engineering Ethics, 22(3), 667–686. https://doi.org/10.1007/s11948-015-9724-3

"Contestable Infrastructures: Designing for Dissent in Smart Public Objects" at We Make the City 2019

Thijs Turèl of AMS Institute and I presented a version of the talk below at the Cities for Digital Rights conference on June 19 in Amsterdam during the We Make the City festival. The talk is an attempt to articulate some of the ideas we both have been developing for some time around contestability in smart public infrastructure. As always with this sort of thing, this is intended as a conversation piece, so I welcome any thoughts you may have.


The basic message of the talk is that when we start to do automated decision-making in public infrastructure using algorithmic systems, we need to design for the inevitable disagreements that may arise. Furthermore, we suggest there is an opportunity to focus on designing for such disagreements in the physical objects that people encounter in urban space as they make use of infrastructure.

We set the scene by showing a number of examples of smart public infrastructure. A cyclist crossing that adapts to weather conditions: if it's raining, cyclists get a green light more frequently. A pedestrian crossing in Tilburg where the elderly can use their mobile phones to get more time to cross. And finally, the case we are involved with ourselves: smart EV charging in the city of Amsterdam, about which more later.

Image credits: Vattenfall, Fietsfan010, De Nieuwe Draai

We identify three trends in smart public infrastructure: (1) where previously algorithms were used to inform policy, now they are employed to perform automated decision-making on an individual case basis, which raises the stakes; (2) distributed ownership of these systems as the result of public-private partnerships and other complex collaboration schemes leads to unclear responsibility; and finally (3) the increasing use of machine learning leads to opaque decision-making.

These trends, and algorithmic systems more generally, raise a number of ethical concerns. They include but are not limited to: the use of inductive correlations (for example in the case of machine learning) leads to unjustified results; lack of access to and comprehension of a system's inner workings produces opacity, which in turn leads to a lack of trust in the systems themselves and the organisations that use them; bias is introduced by a number of factors, including development team prejudices, technical flaws, bad data and unforeseen interactions with other systems; and finally the use of profiling, nudging and personalisation leads to diminished human agency. (We highly recommend the article by Mittelstadt et al. for a comprehensive overview of ethical concerns raised by algorithms.)

So for us, the question that emerges from all this is: how do we organise the supervision of smart public infrastructure in a democratic and lawful way?

There are a number of existing approaches to this question. These include legal and regulatory (e.g. the right to explanation in the GDPR); auditing (e.g. KPMG's "AI in Control" method, BZK's transparantielab); procurement (e.g. open source clauses); insourcing (e.g. GOV.UK) and design and engineering (e.g. our own work on the transparent charging station).

We feel there are two important limitations to these existing approaches. The first is a focus on professionals, and the second is a focus on prediction. We'll discuss each in turn.

Image credits: Cities Today

First of all, many solutions target a professional class, be they accountants, civil servants, or supervisory boards, as well as technologists, designers and so on. But we feel there is a role for the citizen as well, because the supervision of these systems is simply too important to be left to a privileged few. This role would include identifying wrongdoing and suggesting alternatives.

There is a tension here: from the perspective of the public sector, you should only ask citizens for their opinion when you have the intention and the resources to actually act on their suggestions. It can also be a challenge to identify legitimate concerns in the flood of feedback that can sometimes occur. From our point of view though, such concerns should not be used as an excuse to not engage the public. If citizen participation is considered necessary, the focus should be on freeing up resources and setting up structures that make it feasible and effective.

The second limitation is prediction. This is best illustrated with the Collingridge dilemma: in the early phases of new technology, when a technology and its social embedding are still malleable, there is uncertainty about the social effects of that technology. In later phases, social effects may be clear, but then often the technology has become so well entrenched in society that it is hard to overcome negative social effects. (This summary is taken from an excellent van de Poel article on the ethics of experimental technology.)

Many solutions disregard the Collingridge dilemma and try to predict and prevent adverse effects of new systems at design-time. One example of this approach would be value-sensitive design. Our focus instead is on use-time. Considering the fact that smart public infrastructure tends to be developed on an ongoing basis, the question becomes how to make citizens a partner in this process. And even more specifically, we are interested in how this can be made part of the design of the "touchpoints" people actually encounter in the streets, as well as their backstage processes.

Why do we focus on these physical objects? Because this is where people actually meet the infrastructural systems, large parts of which recede from view. These are the places where they become aware of their presence. They are the proverbial tip of the iceberg.

Image credits: Sagar Dani

The use of automated decision-making in infrastructure reduces people's agency. For this reason, resources for agency need to be designed back into these systems. Frequently, the answer is premised on a transparency ideal. This may be a prerequisite for agency, but it is not sufficient. Transparency may help you become aware of what is going on, but it will not necessarily help you to act on that knowledge. This is why we propose a shift from transparency to contestability. (We can highly recommend Ananny and Crawford's article for more on why transparency is insufficient.)

To clarify what we mean by contestability, consider the following three examples: When you see the lights on your router blink in the middle of the night when no one in your household is using the internet, you can act on this knowledge by yanking out the device's power cord. You may never use the emergency brake in a train, but its presence does give you a sense of control. And finally, the cash register receipt provides you with a view into both the procedure and the outcome of the supermarket checkout, and it offers a resource with which you can dispute them if something appears to be wrong.

Image credits: Aangiftedoen, source unknown for remainder

None of these examples is a perfect illustration of contestability, but they hint at something more than transparency, or perhaps even something wholly separate from it. We've been investigating what their equivalents would be in the context of smart public infrastructure.

To illustrate this point further, let us come back to the smart EV charging project we mentioned earlier. In Amsterdam, public EV charging stations are becoming "smart", which in this case means they automatically adapt the speed of charging to a number of factors. These include grid capacity and the availability of solar energy. Additional factors can be added in the future, one of which under consideration is giving priority to shared cars over privately owned cars. We are involved with an ongoing effort to consider how such charging stations can be redesigned so that people understand what's going on behind the scenes and can act on this understanding. The motivation for this is that, if not designed carefully, the opacity of smart EV charging infrastructure may be detrimental to social acceptance of the technology. (A first outcome of these efforts is the Transparent Charging Station designed by The Incredible Machine. A follow-up project is ongoing.)

Image credits: The Incredible Machine, Kars Alfrink
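To make this a bit more tangible, here is a minimal sketch of what an adaptive charging policy with transparency and contestability hooks could look like in code. To be clear, this is my own toy illustration, not the actual logic of the Amsterdam charge points: the policy, the numbers, and all the names in it are invented for the sake of the example.

    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import Dict, List

    @dataclass
    class ChargingDecision:
        """One automated decision, bundled with the factors that produced it."""
        timestamp: datetime
        speed_kw: float
        factors: Dict[str, float]
        contests: List[str] = field(default_factory=list)

    class SmartChargePoint:
        """Toy charge point: adapts speed, explains itself, accepts disputes."""

        MAX_SPEED_KW = 22.0  # hypothetical hardware limit

        def __init__(self) -> None:
            self.decisions: List[ChargingDecision] = []

        def decide_speed(self, grid_headroom: float, solar_share: float) -> ChargingDecision:
            # Made-up policy: scale the maximum by available grid headroom,
            # then boost the result when solar energy is plentiful.
            speed = self.MAX_SPEED_KW * grid_headroom * (1.0 + 0.5 * solar_share)
            decision = ChargingDecision(
                timestamp=datetime.now(),
                speed_kw=round(min(speed, self.MAX_SPEED_KW), 1),
                factors={"grid_headroom": grid_headroom, "solar_share": solar_share},
            )
            self.decisions.append(decision)  # the transparency part: a reviewable log
            return decision

        def contest(self, decision: ChargingDecision, objection: str) -> None:
            # The contestability part: objections attach to the specific
            # decision they dispute, so a human reviewer sees them in context.
            decision.contests.append(objection)

    point = SmartChargePoint()
    decision = point.decide_speed(grid_headroom=0.4, solar_share=0.6)
    print(decision.speed_kw, decision.factors)
    point.contest(decision, "Slow overnight charging made me miss my shift this morning.")

The point is the structure rather than the policy: every automated decision carries its own explanation, and objections attach to the decision they dispute, so that a human reviewer (and eventually the policy itself) can respond.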

We have identified a number of different ways in which people may object to smart EV charging. They are listed in the table below. These types of objections can lead us to feature requirements for making the system contestable.

Because the list is preliminary, we asked the audience if they could imagine additional objections, if those examples represented new categories, and if they would require additional features for people to be able to act on them. One particularly interesting suggestion that emerged was to give local communities control over the policies enacted by the charge points in their vicinity. That's something to further consider the implications of.

And that's where we left it. So to summarise:

  1. Algorithmic systems are becoming part of public infrastructure.
  2. Smart public infrastructure raises new ethical concerns.
  3. Many solutions to ethical concerns are premised on a transparency ideal, but do not address the issue of diminished agency.
  4. There are different categories of objections people may have to an algorithmic system's workings.
  5. Making a system contestable means creating resources for people to object, opening up a space for the exploration of meaningful alternatives to its current implementation.

Research Through Design Reading List

After posting the list of engineering ethics readings, it occurred to me I also have a really nice collection of things to read from a course on research through design taught by Pieter Jan Stappers, which I took earlier this year. I figured some might get some use out of it, and I like having it here for my own reference as well.

The backbone for this course is the chapter on research through design by Stappers and Giaccardi in The Encyclopedia of Human-Computer Interaction, which I highly recommend.

All of the readings below are referenced in that chapter. I've read some, quickly gutted others for meaning, and the remainder is still on my to-read list. For me personally, the things on annotated portfolios and intermediate-level knowledge by Gaver and Löwgren were the most immediately useful and applicable. I'd read the Zimmerman paper earlier, and although it's pretty concrete in its prescriptions, I did not really latch on to it.

  1. Brandt, Eva, and Thomas Binder. "Experimental design research: genealogy, intervention, argument." International Association of Societies of Design Research, Hong Kong 10 (2007).
  2. Gaver, Bill, and John Bowers. "Annotated portfolios." Interactions 19.4 (2012): 40–49.
  3. Gaver, William. "What should we expect from research through design?" Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2012.
  4. Löwgren, Jonas. "Annotated portfolios and other forms of intermediate-level knowledge." Interactions 20.1 (2013): 30–34.
  5. Stappers, Pieter Jan, F. Sleeswijk Visser, and A. I. Keller. "The role of prototypes and frameworks for structuring explorations by research through design." The Routledge Companion to Design Research (2014): 163–174.
  6. Stappers, Pieter Jan. "Meta-levels in Design Research."
  7. Stappers, Pieter Jan. "Prototypes as central vein for knowledge development." Prototype: Design and Craft in the 21st Century (2013): 85–97.
  8. Wensveen, Stephan, and Ben Matthews. "Prototypes and prototyping in design research." The Routledge Companion to Design Research. Taylor & Francis (2015).
  9. Zimmerman, John, Jodi Forlizzi, and Shelley Evenson. "Research through design as a method for interaction design research in HCI." Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2007.

Bonus level: several items related to "muddling through"…

  1. Flach, John M., and Fred Voorhorst. "What matters?: Putting common sense to work." (2016).
  2. Lindblom, Charles E. "Still Muddling, Not Yet Through." Public Administration Review 39.6 (1979): 517–26.
  3. Lindblom, Charles E. "The science of muddling through." Public Administration Review 19.2 (1959): 79–88.

Engineering Ethics Reading List

I recently followed an excellent three-day course on engineering ethics. It was offered by the TU Delft graduate school and taught by Behnam Taebi with guest lectures from several of our faculty.

I found it particularly helpful to get some suggestions for further reading that represent some of the foundational ideas in the field. I figured it would be useful to others as well to have a pointer to them.

So here they are. I've quickly gutted these for their meaning. The one by Van de Poel I did read entirely, and I can highly recommend it for anyone who's doing design of emerging technologies and wants to escape from the informed consent conundrum.

I intend to dig into the Doorn one, not just because she's one of my promoters but also because resilience is a concept that is closely related to my own interests. I'll also get into the Floridi one in detail; I found the concept of information quality and the care ethics perspective on the problem of information abundance and attention scarcity immediately applicable in interaction design.

  1. Stilgoe, Jack, Richard Owen, and Phil Macnaghten. "Developing a framework for responsible innovation." Research Policy 42.9 (2013): 1568–1580.
  2. Van den Hoven, Jeroen. "Value sensitive design and responsible innovation." Responsible Innovation (2013): 75–83.
  3. Hansson, Sven Ove. "Ethical criteria of risk acceptance." Erkenntnis 59.3 (2003): 291–309.
  4. Van de Poel, Ibo. "An ethical framework for evaluating experimental technology." Science and Engineering Ethics 22.3 (2016): 667–686.
  5. Hansson, Sven Ove. "Philosophical problems in cost–benefit analysis." Economics & Philosophy 23.2 (2007): 163–183.
  6. Floridi, Luciano. "Big Data and information quality." The Philosophy of Information Quality. Springer, Cham, 2014. 303–315.
  7. Doorn, Neelke, Paolo Gardoni, and Colleen Murphy. "A multidisciplinary definition and evaluation of resilience: The role of social justice in defining resilience." Sustainable and Resilient Infrastructure (2018): 1–12.

We also got a draft of the intro chapter to a book on engineering and ethics that Behnam is writing. That looks very promising as well, but I can't share it yet for obvious reasons.

ThingsCon 2018 workshop ‘Seeing Like a Bridge’

Workshop in progress with a view of Rotterdam's Willemsbrug across the Maas.

In early December of last year, Alec Shuldiner and I ran a workshop at ThingsCon 2018 in Rotterdam.

Here's the description as it was listed on the conference website:

In this workshop we will take a deep dive into some of the challenges of designing smart public infrastructure.

Smart city ideas are moving from hype into reality. The everyday things that our contemporary world runs on, such as roads, railways and canals, are not immune to this development. Basic, "hard" infrastructure is being augmented with internet-connected sensing, processing and actuating capabilities. We are involved as practitioners and researchers in one such project: the MX3D smart bridge, a pedestrian bridge 3D printed from stainless steel and equipped with a network of sensors.

The question facing everyone involved with these developments, from citizens to professionals to policy makers, is how to reap the potential benefits of these technologies without degrading the urban fabric. For this to happen, information technology needs to become more like the city: open-ended, flexible and adaptable. And we need methods and tools for the diverse range of stakeholders to come together and collaborate on the design of truly intelligent public infrastructure.

We will explore these questions in this workshop by first walking you through the architecture of the MX3D smart bridge, offering a uniquely concrete and pragmatic view into a cutting-edge smart city project. Subsequently, we will together explore the question: what should a smart pedestrian bridge that is aware of itself and its surroundings be able to tell us? We will conclude by sharing some of the highlights from our conversation, and make note of particularly thorny questions that require further work.

The workshop's structure was quite simple. After a round of introductions, Alec introduced the MX3D bridge to the participants. For a sense of what that introduction was like, I recommend viewing this recording of a presentation he delivered at a recent Pakhuis de Zwijger event.

We then ran three rounds of group discussion in the style of a world café. Each discussion was guided by one question. Participants were asked to write, draw and doodle on the large sheets of paper covering each table. At the end of each round, people moved to another table while one person remained behind to share the preceding round's discussion with the new group.

The discussion questions were inspired by value-sensitive design. I was interested to see if people could come up with alternative uses for a sensor-equipped 3D-printed footbridge if they first considered what, in their opinion, made a city worth living in.

The questions we used were:

  1. What specific things do you like about your town? (Places, things to do, etc. Be specific.)
  2. What values underlie those things? (A value is what a person or group of people consider important in life.)
  3. How would you redesign the bridge to support those values?

At the end of the three discussion rounds, we went around to each table and shared the highlights of what was produced. We then had a bit of a back and forth about the outcomes and the workshop approach, after which we wrapped up.

We did get to some interesting values by starting from personal experience. Participants came from a variety of countries, and that was reflected in the range of examples and related values. The design ideas for the bridge remained somewhat abstract. It turned out to be quite a challenge to make the jump from values to different types of smart bridges. Despite this, we did get nice ideas, such as having the bridge report on the water quality of the canal it crosses, derived from the value of care for the environment.

The response from participants afterwards was positive. People found it thought-provoking, which was definitely the point. People were also eager to learn even more about the bridge project. It remains a thing that captures people's imagination. For that reason alone, it continues to be a very productive case for grounding these sorts of discussions.