On mapping AI value chains

At CSCW 2024, back in November of last year, we* ran a workshop titled “From Stem to Stern: Contestability Along AI Value Chains.” With it, we wanted to address a gap in contestable AI research. Current work focuses mainly on contesting specific AI decisions or outputs (for example, appealing a decision made by an automated content moderation system). But we should also look at contestability across the entire AI value chain—from raw material extraction to deployment and impact (think, for example, of data center activists opposing the construction of new hyperscale facilities). We aimed to explore how different stakeholders can contest AI systems at various points in this chain, considering issues like labor conditions, environmental impact, and data collection practices that are often overlooked in contestability discussions.

The workshop mixed presentations with hands-on activities. In the morning, researchers shared their work through short talks, both in person and online. The afternoon focused on mapping out where and how people can contest AI systems, from data collection to deployment, followed by detailed discussions of the practical challenges involved. We had both in-person and online participants, requiring careful coordination between facilitators. We wrapped up by synthesizing key insights and outlining future research directions.

I served as a remote facilitator for most of the day. But Mireia and I also prepared and ran the first group activity, in which we mapped a typical AI value chain. I figured I might as well share the canvas we used for that here. It’s not rocket science, but it held up pretty well, so maybe some other people will get some use out of it. The canvas was designed to offer a fair bit of scaffolding for thinking through which decision points along the chain are potentially value-laden.

AI value chain mapping canvas (licensed CC-BY 4.0 Mireia Yurrita & Kars Alfrink, 2024). Download PDF.

Here’s how the activity worked: We spent about 50 minutes on a structured mapping exercise in which participants identified potential contestation points along an AI value chain, using ChatGPT as an example case. The activity used a Miro board with a preliminary map showing different stages of AI development (infrastructure setup, data management, AI development, etc.). Participants first brainstormed individually for 10 minutes, adding value-laden decisions and noting stakeholders, harms, benefits, and values at stake. They then collaborated to reorganize and discuss the map for 15 minutes. The activity concluded with participants using dot voting (3 votes each) to identify the most impactful contestation sites, which were then clustered and named to feed into the next group activity.

The activity design drew from two main influences: typical value chain mapping methodologies (e.g., Mapping Actors along Value Chains, 2017), which usually emphasize tracking actors, flows, and contextual factors, and Wardley mapping (Wardley, 2022), which is characterized by the idea of a structured progression along an x-axis with an additional dimension on the y-axis.

The canvas design aimed to make AI system development more tangible by breaking it into clear phases (from infrastructure through governance) while considering visibility and materiality through the y-axis. We ultimately chose to use a familiar system (ChatGPT). This, combined with the activity’s structured approach, helped participants identify concrete opportunities for intervention and contestation along the AI value chain, which we could build on during the rest of the workshop.

I got a lot out of this workshop. Some of the key takeaways that emerged from the activities and discussions include:

  • There’s a disconnect between legal and technical communities, from basic terminology differences to varying conceptions of key concepts like explainability, highlighting the need for translation work between disciplines.
  • We need to move beyond individual grievance models to consider collective contestation and upstream interventions in the AI supply chain.
  • We also need to shift from reactive contestation to proactive design approaches that build in contestability from the start.
  • By virtue of being hybrid, we were lucky enough to have participants from across the globe. This helped drive home to me the importance of including Global South perspectives and considering contestability beyond Western legal frameworks. We desperately need a more inclusive and globally minded approach to AI governance.

Many thanks to all the workshop co-organizers for having me as part of the team and to Agathe and Yulu, in particular, for leading the effort.


* The full workshop team consisted of Agathe Balayn, Yulu Pi, David Gray Widder, Mireia Yurrita, Sohini Upadhyay, Naveena Karusala, Henrietta Lyons, Cagatay Turkay, Christelle Tessono, Blair Attard-Frost, Ujwal Gadiraju, and myself.

On autonomy, design, and AI

In my thesis, I use autonomy to build the normative case for contestability. It so happens that this year’s theme at the Delft Design for Values Institute is also autonomy. On October 15, 2024, I participated in a panel discussion on autonomy to kick things off. For it, I collected some notes on autonomy that go beyond the conceptualization I used in my thesis. I thought it might be helpful and interesting to share some of them here in adapted form.

The notes I brought included, first of all, a summary of the ecumenical conceptualization of autonomy concerning automated decision-making systems offered by Alan Rubel, Clinton Castro, and Adam Pham (2021). They conceive of autonomy as effective self-governance. To be autonomous, we need authentic beliefs about our circumstances and the agency to act on our plans. Regarding algorithmic systems, they offer the notion of a reasonable endorsement test: the degree to which a system can be said to respect autonomy depends on its reliability, the stakes of its outputs, the degree to which subjects can be held responsible for inputs, and the distribution of burdens across groups.

Second, I collected some notes from several pieces by James Muldoon (2020, 2021a, 2021b), which get into notions of freedom and autonomy developed in socialist republican thought by the likes of Luxemburg, Kautsky, and Castoriadis. This story of autonomy is sociopolitical rather than moral, which makes the approach quite appealing to someone like me with an interest in non-ideal theory in a realist mode. The account of autonomy Muldoon offers is one where individual autonomy hinges on greater group autonomy and stronger bonds of association between those producing and consuming technologies. Freedom is conceived of as collective self-determination.

And then third and finally, there’s the connected idea of relational autonomy, which is to a degree part of the account offered by Rubel et al., but which in the conceptions cited here is more radical in the distance it seeks to create from liberal individualism (e.g., Christman, 2004; Mhlambi & Tiribelli, 2023; Westlund, 2009). On this view, the individual capacity for autonomous choice is shaped by social structures, so freedom becomes realized through networks of care, responsibility, and interdependence.

That’s what I am interested in: accounts of autonomy that are not premised on liberal individualism and that give us some alternative handle on the problem of the social control of technology in general and of AI in particular.

From my point of view, the implications of all this for design and AI include the following.

First, to make a fairly obvious but often overlooked point, the degree to which a given system impacts people’s autonomy depends on various factors. It makes little sense to make blanket statements about AI destroying our autonomy and so on.

Second, in value-sensitive design terms, you can think about autonomy as a value to be balanced against others—if you take the position that all values can be considered equally important, at least in principle. Or you can consider autonomy more like a precondition for people to live with technology in concordance with their values, making autonomy take precedence over other values. The sociopolitical and relational accounts above point in this direction.

Third, if you buy into the radical democratic idea of technology and autonomy, it makes little sense to admonish individual designers about respecting others’ autonomy. They may be asked to privilege technologies in their designs that afford individual and group autonomy. But more often than not, designers also need organization and emancipation. So it’s about building power: the power of workers inside the organizations that develop technologies, and the power of communities that “consume” those same technologies.

With AI, the reality in the cases I look at is that the communities AI is brought to bear on have little say in the matter. The buyers and deployers of AI could and should be made more accountable to the people subjected to it.

Towards a realist AI design practice?

This is a version of the opening statement I contributed to the panel “Evolving Perspectives on AI and Design” at the Design & AI symposium that was part of Dutch Design Week 2024. I had the pleasure of joining Iohanna Nicenboim and Jesse Benjamin on stage to explore what could be called the post-GenAI possibility space for design. Thanks also to Mathias Funk for moderating.

The slide I displayed:

My statement:

  1. There’s a lot of magical thinking in the AI field today. It assumes intelligence is latent in the structure of the internet. Metaphors like AGI and superintelligence are magical in nature. AI practice is also very secretive. It relies on demonstrations. This leads to a lack of rigor and political accountability (cf. Gilbert & Lambert in VentureBeat, 2023).
  2. Design in its idealist mode is easily fooled by such magic. For example, in a recent report, the Dutch Court of Audit states that 35% of government AI systems are not known to meet expectations (cf. Raji et al., 2022).
  3. What is needed is design in a realist mode. Realism focuses on who does what to whom in whose interest (cf. Geuss, 2008, 23 in von Busch & Palmås, 2023). Applied to AI, the question becomes: who gets to do AI to whom? This isn’t to say we should consider AI technologies completely inert. AI mediates our being in the world (Verbeek, 2021). But we should also not consider it an independent force that’s just dragging us along.
  4. The challenge is to steer a path between wholesale cynical rejection on the one hand and naive, optimistic, unconditional embrace on the other.
  5. In my own work, that looks like using design to make things that allow me to go into situations where people are building and using AI systems, and to use those things as instruments to ask questions related to human autonomy, social control, and collective freedom in the face of AI.
  6. The example shown is an animated short depicting a design fiction scenario involving intelligent camera cars used for policy execution in urban public space. I used this video to talk to civil servants about the challenges facing governments that want to ensure citizens remain in control of the AI systems they deploy (cf. Alfrink et al., 2023).
  7. Why is this realist? Because the work looks at how some groups of people use particular forms of actually existing AI to do things to other people. The work also foregrounds the competing interests that are at stake. And it frames AI as neither fully autonomous nor fully passive, but as a thing that mediates people’s perceptions and actions.
  8. There are more examples besides this, but I will stop here. I just want to reiterate that I think we need a realist approach to the design of AI.

Participatory AI and ML engineering

In the first half of this year, I’ve presented several versions of a brief talk on participatory AI. I figured I would post an amalgam of these to the blog for future reference. (Previously, on the blog, I posted a brief lit review on the same topic; this talk builds on that.)

So, to start, the main point of this talk is that many participatory approaches to AI don’t engage deeply with the specifics of the technology. One such specific is the translation work engineers do to make a problem “learnable” by a machine (Kang, 2023). From this perspective, the main questions become: How does translation happen in our specific projects? Should citizens be involved in this translation work? And if so, how do we achieve this?

Before we dig into the state of participatory AI, let’s begin by clarifying why we might want to enable participation in the first place. A common motivation is a lack of democratic control over AI systems. (This is particularly concerning when AI systems are used for government policy execution. These are the systems I mostly look at in my own research.) And so the response is to bring the people into the development process and to let them co-decide matters.

In these cases, participation can be understood as an enabler of democratic agency, i.e., a way for subjects to legitimate the use of AI systems (cf. Peter, 2020 in Rubel et al., 2021). Peter distinguishes two pathways to legitimation: a normative one and a democratic one. Participation can be seen as an example of the democratic pathway. A crucial detail Peter mentions here, which is often overlooked in the participatory AI literature, is that normative constraints must limit the democratic pathway to avoid arbitrariness.

So, what is the state of participatory AI research and practice? I will look at each in turn.

As mentioned, I previously posted on the state of participatory AI research, so I won’t repeat that in full here. (For the record, I reviewed Birhane et al. (2022), Bratteteig & Verne (2018), Delgado et al. (2023), Ehsan & Riedl (2020), Feffer et al. (2023), Gerdes (2022), Groves et al. (2023), Robertson et al. (2023), Sloane et al. (2020), and Zytko et al. (2022).) Elements that jump out include:

  • Superficial and unrepresentative involvement.
  • Piecemeal approaches that have minimal impact on decision-making.
  • Participants with a consultative role rather than that of active decision-makers.
  • A lack of bridge-builders between stakeholder perspectives.
  • Participation washing and exploitative community involvement.
  • Struggles with the dynamic nature of technology over time.
  • Discrepancies between the time scales at which users can evaluate design ideas and the pace at which systems are developed.
  • A demand for participation to enhance community knowledge and to actually empower communities.

Taking a step back, if I were to evaluate the state of the scientific literature on participatory AI, it strikes me that many of these issues are not new to AI. They have been present in participatory design more broadly for some time already, and many are not necessarily specific to AI. The ones I would call out include the issues related to AI system dynamism, time scales of participation versus development, and knowledge gaps between various actors in participatory processes (and, relatedly, the lack of bridge-builders).

So, what about practice? Let’s look at two reports that I feel are a good representation of the broader field: Framework for Meaningful Stakeholder Involvement by ECNL & SocietyInside, and Democratizing AI: Principles for Meaningful Public Participation by Data & Society.

Framework for Meaningful Stakeholder Involvement is aimed at businesses, organizations, and institutions that use AI. It focuses on human rights, ethical assessment, and compliance. It aims to be a tool for planning, delivering, and evaluating stakeholder engagement effectively, emphasizing three core elements: Shared Purpose, Trustworthy Process, and Visible Impact.

Democratizing AI frames public participation in AI development as a way to add legitimacy and accountability and to help prevent harmful impacts. It outlines risks associated with AI, including biased outcomes, opaque decision-making processes, and designers lacking real-world impact awareness. Causes of ineffective participation include unidirectional communication, socioeconomic barriers, superficial engagement, and ineffective third-party involvement. The report uses environmental law as a reference point and offers eight guidelines for meaningful public participation in AI.

Taking stock of these reports, we can say that the building blocks for the overall process are available to those seriously looking. The challenges facing participatory AI are, on the one hand, economic and political. On the other hand, they are related to the specifics of the technology at hand. For the remainder of this piece, let’s dig into the latter a bit more.

Let’s focus on the translation work done by engineers during model development.

For this, I build on work by Kang (2023), which focuses on the qualitative analysis of how phenomena are translated into ML-compatible forms, paying specific attention to the ontological translations that occur in making a problem learnable. Translation in ML means transforming complex qualitative phenomena into quantifiable and computable forms. Multifaceted problems are converted into a “usable quantitative reference” or “ground truth.” This translation is not a mere representation of reality but a reformulation of a problem into mathematical terms, making it understandable and processable by ML algorithms. This transformation involves a significant amount of “ontological dissonance,” as it mediates and often simplifies the complexity of real-world phenomena into a taxonomy or set of classes for ML prediction. The process of translating rests on assumptions and standards that may alter the nature of the ML task and introduce new social and technical problems.
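
To make this less abstract, here is a deliberately toy Python sketch of such a translation step. The complaint texts, keyword taxonomy, and matching rule are all invented for illustration; each commented decision marks a point where values can enter the ground truth.

```python
# Toy sketch of "translation" in ML (after Kang, 2023): turning a messy,
# qualitative phenomenon (resident complaints) into a computable label.
# All categories, keywords, and rules below are invented for illustration.

RAW_COMPLAINTS = [
    "Loud music from the bar until 3am, kids can't sleep",
    "Delivery scooters racing down the sidewalk every evening",
    "Neighbors arguing loudly, but they seem to be in trouble",
]

# Decision 1: the taxonomy. Collapsing lived experience into a fixed set
# of classes decides what the model can "see" at all.
TAXONOMY = {"noise": ["music", "loud"], "traffic": ["scooter", "racing"]}

def translate(complaint: str) -> dict:
    """Reformulate one qualitative report as a class plus a binary flag."""
    text = complaint.lower()
    # Decision 2: the matching rule. Keyword matching discards context
    # (e.g., a neighbor possibly in distress becomes plain "noise").
    label = next(
        (cls for cls, kws in TAXONOMY.items() if any(k in text for k in kws)),
        "other",  # Decision 3: everything unmatched lands in a residual class.
    )
    return {"text": complaint, "class": label, "actionable": label != "other"}

ground_truth = [translate(c) for c in RAW_COMPLAINTS]
for row in ground_truth:
    print(row)
```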

So what? I propose we can use the notion of translation as a frame for ML engineering. Understanding ML model engineering as translation is a potentially useful way to analyze what happens at each step of the process: what gets selected for translation, how the translation is performed, and what the resulting translation consists of.

So, if we seek to make participatory AI engage more with the technical particularities of ML, we could begin by identifying translations that have happened or might happen in our projects. We could then ask to what extent these acts of translation are value-laden. For those that are, we could think about how to communicate these translations to a lay audience. A particular challenge I expect we will be faced with is what the meaningful level of abstraction for citizen participation during AI development is. We should also ask what the appropriate ‘vehicle’ for citizen participation will be. And we should seek to move beyond small-scale, one-off, often unrepresentative forms of direct participation.

Bibliography

  • Birhane, A., Isaac, W., Prabhakaran, V., Diaz, M., Elish, M. C., Gabriel, I., & Mohamed, S. (2022). Power to the People? Opportunities and Challenges for Participatory AI. Equity and Access in Algorithms, Mechanisms, and Optimization, 1–8. https://doi.org/10/grnj99
  • Bratteteig, T., & Verne, G. (2018). Does AI make PD obsolete?: Exploring challenges from artificial intelligence to participatory design. Proceedings of the 15th Participatory Design Conference: Short Papers, Situated Actions, Workshops and Tutorial — Volume 2, 1–5. https://doi.org/10/ghsn84
  • Delgado, F., Yang, S., Madaio, M., & Yang, Q. (2023). The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice. Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, 1–23. https://doi.org/10/gs8kvm
  • Ehsan, U., & Riedl, M. O. (2020). Human-Centered Explainable AI: Towards a Reflective Sociotechnical Approach. In C. Stephanidis, M. Kurosu, H. Degen, & L. Reinerman-Jones (Eds.), HCI International 2020—Late Breaking Papers: Multimodality and Intelligence (pp. 449–466). Springer International Publishing. https://doi.org/10/gskmgf
  • Feffer, M., Skirpan, M., Lipton, Z., & Heidari, H. (2023). From Preference Elicitation to Participatory ML: A Critical Survey & Guidelines for Future Research. Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 38–48. https://doi.org/10/gs8kvx
  • Gerdes, A. (2022). A participatory data-centric approach to AI Ethics by Design. Applied Artificial Intelligence, 36(1), 2009222. https://doi.org/10/gs8kt4
  • Groves, L., Peppin, A., Strait, A., & Brennan, J. (2023). Going public: The role of public participation approaches in commercial AI labs. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 1162–1173. https://doi.org/10/gs8kvs
  • Kang, E. B. (2023). Ground truth tracings (GTT): On the epistemic limits of machine learning. Big Data & Society, 10(1), 1–12. https://doi.org/10/gtfgvx
  • Peter, F. (2020). The Grounds of Political Legitimacy. Journal of the American Philosophical Association, 6(3), 372–390. https://doi.org/10/grqfhn
  • Robertson, S., Nguyen, T., Hu, C., Albiston, C., Nikzad, A., & Salehi, N. (2023). Expressiveness, Cost, and Collectivism: How the Design of Preference Languages Shapes Participation in Algorithmic Decision-Making. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1–16. https://doi.org/10/gr6q2t
  • Rubel, A., Castro, C., & Pham, A. K. (2021). Algorithms and autonomy: The ethics of automated decision systems. Cambridge University Press.
  • Sloane, M., Moss, E., Awomolo, O., & Forlano, L. (2020). Participation is not a Design Fix for Machine Learning. arXiv:2007.02423 [Cs]. http://arxiv.org/abs/2007.02423
  • Zytko, D., Wisniewski, P. J., Guha, S., Baumer, E. P. S., & Lee, M. K. (2022). Participatory Design of AI Systems: Opportunities and Challenges Across Diverse Users, Relationships, and Application Domains. Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems, 1–4. https://doi.org/10/gs8kv6

Democratizing AI Through Continuous Adaptability: The Role of DevOps

Below are the abstract and slides for my contribution to the TILTing Perspectives 2024 panel “The mutual shaping of democratic practices & AI,” moderated by Merel Noorman.

Slides

Abstract

Contestability

This presentation delves into democratizing artificial intelligence (AI) systems through contestability. Contestability refers to the ability of AI systems to remain open and responsive to disputes throughout their lifecycle. It approaches AI systems as arenas where groups compete for power over designs and outcomes.

Autonomy, democratic agency, legitimation

We identify contestability as a critical system quality for respecting people’s autonomy, including their democratic agency: their ability to legitimate policies, including those enacted by AI systems.

For a decision to be legitimate, it must be democratically willed or rely on “normative authority.” The democratic pathway should be constrained by normative bounds to avoid arbitrariness. The appeal to authority should meet the “access constraint,” which ensures citizens can form beliefs about policies with a sufficient degree of agency (Peter, 2020 in Rubel et al., 2021).

Contestability is the quality that ensures mechanisms are in place for subjects to exercise their democratic agency. In the case of an appeal to normative authority, contestability mechanisms are how subjects and their representatives gain access to the information that will enable them to evaluate its justifiability. In this way, contestability satisfies the access constraint. In the case of democratic will, contestability-by-design practices are how system development is democratized. The autonomy account of legitimation adds the normative constraints that should bind this democratic pathway.

Himmelreich (2022) similarly argues that only a “thick” conception of democracy will address some of the current shortcomings of AI development. This is a pathway that not only allows for participation but also includes deliberation over justifications.

The agonistic arena

Elsewhere, we have proposed the Agonistic Arena as a metaphor for thinking about the democratization of AI systems (Alfrink et al., 2024). Contestable AI embodies the generative metaphor of the Arena. This metaphor characterizes public AI as a space where interlocutors embrace conflict as productive. Seen through the lens of the Arena, public AI problems stem from a lack of opportunities for adversarial interaction between stakeholders.

This metaphorical framing suggests prescriptions for making the norms and procedures that shape the following more contentious and open to dispute:

  1. AI system design decisions on a global level, and
  2. human-AI system output decisions on a local level (i.e., individual decision outcomes), establishing new dialogical feedback loops between stakeholders that ensure continuous monitoring.

The Arena metaphor encourages a design ethos of revisability and reversibility so that AI systems embody the agonistic ideal of contingency.

Post-deployment malleability, feedback-ladenness

Unlike physical systems, AI technologies exhibit a unique malleability post-deployment.

For example, LLM chatbots optimize their performance based on a variety of feedback sources, including interactions with users, as well as feedback collected through crowd-sourced data work.

Because of this open-endedness, democratic control and oversight in the operations phase of the system’s lifecycle become a particular concern.

This is a concern because while AI systems are dynamic and feedback-laden (Gilbert et al., 2023), many of the existing oversight and control measures are static, one-off exercises that struggle to track systems as they evolve over time.

DevOps

The field of DevOps is pivotal in this context. DevOps focuses on system instrumentation for enhanced monitoring and control in the service of continuous improvement. Typically, metrics for DevOps and its machine learning-specific offshoot MLOps emphasize technical performance and business objectives.

However, there is scope to expand these to include matters of public concern. The matters-of-concern perspective shifts the focus to issues such as fairness or discrimination, viewing them as challenges that cannot be resolved through universal methods with absolute certainty. Rather, it highlights how standards are locally negotiated within specific institutional contexts, emphasizing that such standards are never guaranteed (Lampland & Star, 2009; Geiger et al., 2023).

MLOps Metrics

In the context of machine learning systems, technical metrics focus on model accuracy. For example, a financial services company might use the area under the receiver operating characteristic curve (AUC-ROC) to continuously monitor and maintain the performance of its fraud detection model in production.
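
As a minimal sketch of what such monitoring might look like, assuming scikit-learn is available; the alert threshold and the stand-in batch of labels and scores are invented for illustration:

```python
from sklearn.metrics import roc_auc_score

AUC_ALERT_THRESHOLD = 0.85  # invented service-level target

def check_fraud_model(y_true, y_score) -> float:
    """Compute AUC-ROC on a recent batch of scored traffic and flag degradation."""
    auc = roc_auc_score(y_true, y_score)
    if auc < AUC_ALERT_THRESHOLD:
        # A real MLOps setup would page an operator or open an incident;
        # here we just print.
        print(f"ALERT: fraud model AUC dropped to {auc:.3f}")
    return auc

# Stand-in batch: 1 = fraud, scores from the deployed model.
check_fraud_model([0, 0, 1, 1, 0, 1], [0.1, 0.4, 0.35, 0.8, 0.2, 0.9])
```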

Business metrics focus on cost-benefit analyses. For example, a bank might use a cost-benefit matrix to balance the potential revenue from approving a loan against the risk of default, ensuring that the overall profitability of its loan portfolio is optimized.
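
A sketch of how such a matrix might drive decisions; the monetary values are invented stand-ins for a bank’s real portfolio numbers:

```python
# Illustrative cost-benefit matrix for loan approval (values invented).
COST_BENEFIT = {
    ("approve", "repays"): 1_000,     # interest earned
    ("approve", "defaults"): -10_000, # principal lost
    ("deny", "repays"): -1_000,       # opportunity cost
    ("deny", "defaults"): 0,
}

def expected_value(p_default: float, action: str) -> float:
    """Expected profit of an action given the model's estimated default risk."""
    return ((1 - p_default) * COST_BENEFIT[(action, "repays")]
            + p_default * COST_BENEFIT[(action, "defaults")])

p = 0.08  # model-estimated probability of default
decision = "approve" if expected_value(p, "approve") > expected_value(p, "deny") else "deny"
print(decision, expected_value(p, "approve"))
```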

Drift

These metrics can be monitored over time to detect “drift” between a model and the world. Training sets are static; reality is dynamic and changes over time. Drift occurs when the nature of new input data diverges from the data a model was trained on. A change in performance metrics can be used to alert system operators, who can then investigate and decide on a course of action, e.g., retraining the model on updated data. This, in effect, creates a feedback loop between the system in use and its ongoing development.
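
One common heuristic for detecting input drift is the population stability index (PSI). The sketch below uses synthetic stand-in data, and the 0.2 alert threshold is a conventional rule of thumb rather than a settled standard:

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between training-time and live data."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    # Convert to fractions, clipping to avoid division by zero and log(0).
    # Note: live values outside the training range are dropped here; a
    # fuller version would widen the outer bins to catch them.
    e_frac = np.clip(e_counts / e_counts.sum(), 1e-6, None)
    a_frac = np.clip(a_counts / a_counts.sum(), 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

train_scores = np.random.normal(0.0, 1.0, 10_000)  # stand-in training data
live_scores = np.random.normal(0.4, 1.2, 10_000)   # stand-in live data
if psi(train_scores, live_scores) > 0.2:
    print("Drift detected: flag for investigation and possible retraining.")
```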

An expansion of these practices in the interest of contestability would require:

  1. setting different metrics,
  2. exposing these metrics to additional audiences, and
  3. establishing feedback loops with the processes that govern models and the systems they are embedded in.

Example 1: Camera Cars

Let’s say a city government uses a camera-equipped vehicle and a computer vision model to detect potholes in public roads. In addition to accuracy and a favorable cost-benefit ratio, citizens, and road users in particular, may care about the time between a pothole’s detection and its repair, or about the distribution of potholes across the city. Furthermore, when road maintenance appears to be degrading, this should be taken up with department leadership, the responsible alderperson, and council members.
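
To illustrate, here is a sketch of how two such public-concern metrics might be computed; the log format and the district names are invented:

```python
# Two "matters of public concern" metrics for the pothole example:
# median time-to-repair and open potholes per district.
from datetime import datetime
from statistics import median
from collections import Counter

detections = [  # hypothetical detection log
    {"district": "Noord", "detected": datetime(2024, 3, 1), "fixed": datetime(2024, 3, 9)},
    {"district": "Zuid", "detected": datetime(2024, 3, 2), "fixed": datetime(2024, 3, 4)},
    {"district": "Noord", "detected": datetime(2024, 3, 5), "fixed": None},  # still open
]

repair_days = [(d["fixed"] - d["detected"]).days for d in detections if d["fixed"]]
print("Median days to repair:", median(repair_days))
print("Open potholes per district:", Counter(d["district"] for d in detections if not d["fixed"]))
```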

Example 2: EV Charging

Or, let’s say the same city government uses an algorithmic system to optimize public electric vehicle (EV) charging stations for green energy use by adapting charging speeds to expected sun and wind. EV drivers may want to know how much energy has been shifted to greener time windows, and how that amount trends over time. Without such visibility into a system’s actual goal achievement, citizens’ ability to legitimate its use suffers. As I have already mentioned, democratic agency, when enacted via the appeal to authority, depends on access to “normative facts” that underpin policies. And finally, professed system functionality must be demonstrated as well (Raji et al., 2022).
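
A sketch of what such a metric might look like; the charging sessions and the definition of “green” hours are both assumptions for illustration (a real system would derive green windows from grid carbon-intensity forecasts):

```python
# "Energy shifted to greener windows" as a citizen-facing metric.
GREEN_HOURS = set(range(11, 16))  # assumed midday solar peak

sessions = [  # (hour of day, kWh delivered) for one station, one day
    (9, 4.0), (12, 7.5), (13, 6.0), (18, 3.5),
]

green_kwh = sum(kwh for hour, kwh in sessions if hour in GREEN_HOURS)
total_kwh = sum(kwh for _, kwh in sessions)
print(f"Share of energy delivered in green windows: {green_kwh / total_kwh:.0%}")
```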

DevOps as sociotechnical leverage point for democratizing AI

These brief examples show that the DevOps approach is a potential sociotechnical leverage point. It offers pathways for democratizing AI system design, development, and operations.

DevOps can be adapted to further contestability. It creates new channels between human and machine actors. One of DevOps’s essential activities is monitoring (Smith, 2020), which presupposes fallibility, a necessary precondition for contestability. Finally, it requires and provides infrastructure for technical flexibility, so that recovery from error is low-cost and continuous improvement becomes practically feasible.

The mutual shaping of democratic practices & AI

Zooming out further, let’s reflect on this panel’s overall theme, picking out three elements: legitimation, representation of marginalized groups, and dealing with conflict and contestation after implementation and during use.

Contestability is a lever for demanding justifications from operators, which is a necessary input for legitimation by subjects (Henin & Le Métayer, 2022). Contestability frames different actors’ stances as adversarial positions on a political field rather than “equally valid” perspectives (Scott, 2023). And finally, relations, monitoring, and revisability are all ways to give voice to and enable responsiveness to contestations (Genus & Stirling, 2018).

And again, all of these things can be furthered in the post-deployment phase by adapting the DevOps lens.

Bibliography

  • Alfrink, K., Keller, I., Kortuem, G., & Doorn, N. (2022). Contestable AI by Design: Towards a Framework. Minds and Machines, 33(4), 613–639. https://doi.org/10/gqnjcs
  • Alfrink, K., Keller, I., Yurrita Semperena, M., Bulygin, D., Kortuem, G., & Doorn, N. (2024). Envisioning Contestability Loops: Evaluating the Agonistic Arena as a Generative Metaphor for Public AI. She Ji: The Journal of Design, Economics, and Innovation, 10(1), 53–93. https://doi.org/10/gtzwft
  • Geiger, R. S., Tandon, U., Gakhokidze, A., Song, L., & Irani, L. (2023). Making Algorithms Public: Reimagining Auditing From Matters of Fact to Matters of Concern. International Journal of Communication, 18(0), Article 0.
  • Genus, A., & Stirling, A. (2018). Collingridge and the dilemma of control: Towards responsible and accountable innovation. Research Policy, 47(1), 61–69. https://doi.org/10/gcs7sn
  • Gilbert, T. K., Lambert, N., Dean, S., Zick, T., Snoswell, A., & Mehta, S. (2023). Reward Reports for Reinforcement Learning. Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 84–130. https://doi.org/10/gs9cnh
  • Henin, C., & Le Métayer, D. (2022). Beyond explainability: Justifiability and contestability of algorithmic decision systems. AI & SOCIETY, 37(4), 1397–1410. https://doi.org/10/gmg8pf
  • Himmelreich, J. (2022). Against “Democratizing AI.” AI & SOCIETY. https://doi.org/10/gr95d5
  • Lampland, M., & Star, S. L. (Eds.). (2009). Standards and Their Stories: How Quantifying, Classifying, and Formalizing Practices Shape Everyday Life (1st edition). Cornell University Press.
  • Peter, F. (2020). The Grounds of Political Legitimacy. Journal of the American Philosophical Association, 6(3), 372–390. https://doi.org/10/grqfhn
  • Raji, I. D., Kumar, I. E., Horowitz, A., & Selbst, A. (2022). The Fallacy of AI Functionality. 2022 ACM Conference on Fairness, Accountability, and Transparency, 959–972. https://doi.org/10/gqfvf5
  • Rubel, A., Castro, C., & Pham, A. K. (2021). Algorithms and autonomy: The ethics of automated decision systems. Cambridge University Press.
  • Scott, D. (2023). Diversifying the Deliberative Turn: Toward an Agonistic RRI. Science, Technology, & Human Values, 48(2), 295–318. https://doi.org/10/gpk2pr
  • Smith, J. D. (2020). Operations anti-patterns, DevOps solutions. Manning Publications.
  • Treveil, M. (2020). Introducing MLOps: How to scale machine learning in the enterprise (First edition). O’Reilly.

PhD update – June 2024

I am writing this final PhD update as a freshly minted doctor. On Thursday, May 23, 2024, I successfully defended my thesis, ‘Contestable Artificial Intelligence: Constructive Design Research for Public Artificial Intelligence Systems that are Open and Responsive to Dispute.’

I started the PhD on September 1, 2018 (read the very first update posted on that day here). So, that’s five years, eight months, 23 days from start to finish. It has been quite the journey, and I feel happy and relieved to have completed it. I am proud of the work embodied in the thesis. Most of all, I am thankful for the transformative learning experience, none of which would have been possible without the support of my supervisors Gerd, Ianus, and Neelke.

On the day itself, I was honored to have as my external committee members professors Dignum, Löwgren, van Zoonen, and van de Poel, professor Voûte as the chair, and Joost and Mireia as my paranymphs.

The thesis PDF can be downloaded from the TU Delft repository, and a video of the proceedings is available on YouTube.

Me, with a copy of the thesis, shortly before starting the layperson’s talk. Photo: Roy Borghouts.

Recent events

Reviewing my notes since the last update, below are some of the more notable things that happened in the past eight months.

  • I ran a short workshop on AI Pedagogy Through A Design Lens, together with Hosana Morales, at the TU Delft spring symposium on AI education. Read the post.
  • A story about my research was published on the TU Delft industrial design engineering website in the run-up to my defense, on May 14, 2024. Read the story.
  • I updated and ran the fifth and final iteration of the AI & Society industrial design engineering master elective course from February 28 through April 10, 2024. A previous version is documented here; I plan to update the documentation sometime in the near future.
  • I gave a talk titled Contestable AI: Designing for Human Autonomy at the Amsterdam UX meetup on February 21, 2024. Download the slides.
  • The outcomes of a design sprint on tools for third-party scrutiny, organized by the Responsible Sensing Lab and inspired in part by my research, were published on December 7, 2023. Read the report.
  • I was interviewed by Mireia Yurrita Semperena for a DCODE podcast episode titled Beyond Values in Algorithmic Design, published November 6, 2023. Listen to the episode.
  • Together with Claudio Sarra and Marco Almada, I hosted an online seminar titled Building Contestable Systems on October 26, 2023. Read the thread.
  • I was a panelist at the Design & AI Symposium 2023 on October 18, 2023.
  • A paper I co-authored, titled When ‘Doing Ethics’ Meets Public Procurement of Smart City Technology – an Amsterdam Case Study, was presented by first author Mike de Kreek at IASDR 2023 on October 9–13. Read the paper.

Looking ahead

I will continue at TU Delft as a postdoctoral researcher, staying focused on design, AI, and politics, but I will try to evolve my research into something that builds on my thesis work while adding a new angle.

The Envisioning Contestability Loops article mentioned in previous updates is now in press with She Ji, which I am very pleased about. It should be published “soon.”

Upcoming appearances include a brief talk on participatory AI at a Cities Coalition for Digital Rights event and a presentation as part of a panel on The Mutual Shaping of Democratic Practices and AI at TILTing Perspectives 2024.

That’s it for this final PhD update. I will probably continue these posts under a new title. We’ll see.

AI pedagogy through a design lens

At a TU Delft spring symposium on AI education, Hosana and I ran a short workshop titled “AI pedagogy through a design lens.” In it, we identified some of the challenges facing AI teaching, particularly outside of computer science, and explored how design pedagogy, particularly the practices of studios and making, may help to address them. The AI & Society master elective I’ve been developing and teaching over the past five years served as a case study. The session was punctuated by brief brainstorming using an adapted version of the SQUID gamestorming technique. Below are the slides we used.

Participatory AI literature review

I’ve been thinking a lot about civic participation in machine learning systems development, in particular about involving non-experts in the potentially value-laden translation work engineers do when they turn specifications into models. Below is a summary of a selection of literature I found on the topic, which may serve as a jumping-off point for future research.

Abstract

The literature on participatory artificial intelligence (AI) reveals a complex landscape marked by challenges and evolving methodologies. Feffer et al. (2023) critique the reduction of participation to computational mechanisms that only approximate narrow moral values. They also note that engagements with stakeholders are often superficial and unrepresentative. Groves et al. (2023) identify significant barriers in commercial AI labs, including high costs, fragmented approaches, exploitation concerns, lack of transparency, and contextual complexities. These barriers lead to a piecemeal approach to participation with minimal impact on decision-making in AI labs. Delgado et al. (2023) observe that participatory AI involves stakeholders mostly in a consultative role without integrating them as active decision-makers throughout the AI design lifecycle.

Gerdes (2022) proposes a data-centric approach to AI ethics and underscores the need for interdisciplinary bridge builders to reconcile different stakeholder perspectives. Robertson et al. (2023) explore participatory algorithm design, emphasizing the need for preference languages that balance expressiveness, cost, and collectivism. Sloane et al. (2020) caution against “participation washing” and the potential for exploitative community involvement. Bratteteig & Verne (2018) highlight AI’s challenges to traditional participatory design (PD) methods, including unpredictable technological changes and a lack of user-oriented evaluation. Birhane et al. (2022) call for a clearer understanding of meaningful participation, advocating for a shift towards vibrant, continuous engagement that enhances community knowledge and empowerment. The literature suggests a pressing need for more effective, inclusive, and empowering participatory approaches in AI development.

Bibliography

  1. Birhane, A., Isaac, W., Prabhakaran, V., Diaz, M., Elish, M. C., Gabriel, I., & Mohamed, S. (2022). Power to the People? Opportunities and Challenges for Participatory AI. Equity and Access in Algorithms, Mechanisms, and Optimization, 1–8. https://doi.org/10/grnj99
  2. Bratteteig, T., & Verne, G. (2018). Does AI make PD obsolete?: Exploring challenges from artificial intelligence to participatory design. Proceedings of the 15th Participatory Design Conference: Short Papers, Situated Actions, Workshops and Tutorial — Volume 2, 1–5. https://doi.org/10/ghsn84
  3. Delgado, F., Yang, S., Madaio, M., & Yang, Q. (2023). The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice. Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, 1–23. https://doi.org/10/gs8kvm
  4. Ehsan, U., & Riedl, M. O. (2020). Human-Centered Explainable AI: Towards a Reflective Sociotechnical Approach. In C. Stephanidis, M. Kurosu, H. Degen, & L. Reinerman-Jones (Eds.), HCI International 2020—Late Breaking Papers: Multimodality and Intelligence (pp. 449–466). Springer International Publishing. https://doi.org/10/gskmgf
  5. Feffer, M., Skirpan, M., Lipton, Z., & Heidari, H. (2023). From Preference Elicitation to Participatory ML: A Critical Survey & Guidelines for Future Research. Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 38–48. https://doi.org/10/gs8kvx
  6. Gerdes, A. (2022). A participatory data-centric approach to AI Ethics by Design. Applied Artificial Intelligence, 36(1), 2009222. https://doi.org/10/gs8kt4
  7. Groves, L., Peppin, A., Strait, A., & Brennan, J. (2023). Going public: The role of public participation approaches in commercial AI labs. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 1162–1173. https://doi.org/10/gs8kvs
  8. Robertson, S., Nguyen, T., Hu, C., Albiston, C., Nikzad, A., & Salehi, N. (2023). Expressiveness, Cost, and Collectivism: How the Design of Preference Languages Shapes Participation in Algorithmic Decision-Making. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1–16. https://doi.org/10/gr6q2t
  9. Sloane, M., Moss, E., Awomolo, O., & Forlano, L. (2020). Participation is not a Design Fix for Machine Learning. arXiv:2007.02423 [Cs]. http://arxiv.org/abs/2007.02423
  10. Zytko, D., Wisniewski, P. J., Guha, S., Baumer, E. P. S., & Lee, M. K. (2022). Participatory Design of AI Systems: Opportunities and Challenges Across Diverse Users, Relationships, and Application Domains. Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems, 1–4. https://doi.org/10/gs8kv6

PhD update – September 2023

I’m back again with another PhD update. Five years after I started in Delft, we are nearing the finish line on this whole thing. But before we look ahead, let’s review notable events since the previous update in March 2023.

Occurrences

  1. I presented our framework, Contestable AI by Design, at the annual NWO ICT.OPEN conference, which, for the first time, had an entire track dedicated to HCI research in the Netherlands. It was an excellent opportunity to meet fellow researchers from other Dutch institutions. The slides are available as a PDF at contestable.ai.
  2. I visited Hamburg to present our paper, Contestable Camera Cars, at CHI 2023. We also received a Best Paper award, which I am, of course, very pleased with. The conference was equal parts inspiring and overwhelming. The best part was meeting in person the researchers who share my interests.
  3. Also at CHI, I was interviewed about my research by Mike Green for his podcast Understanding Users. You can listen to it here. It is always good practice to try to lay out my arguments live and unscripted.
  4. In June, I joined a panel at a BOLD Cities “talk show” to discuss the design of smart city systems for contestability. It was quite an honor to be on the same panel as Eefje Cuppen, director of the Rathenau Institute. This event was great because we had technological, design, political, and policy perspectives at the table. Several guests argued for the need to reinvigorate representative democracy and give a more prominent role to elected politicians in setting technology policy. A report is available here.
  5. In August, the BRIDE project, the NWO research project that partially funded my PhD, had its closing event. It was an excellent opportunity to reflect on our work together over the past years. I took the opportunity to revisit the work of Saskia Sassen on cityness and to think through some of the implications of my work on contestability for the field of smart urbanism. The slides are available at contestable.ai.
  6. Finally, last week, a short opinion piece that lays out the argument for contestable AI in what I hope is a reasonably accessible manner was published on the TU Delft website.

Eefje Cuppen and me being interviewed by Inge Janse at the BOLD Cities talk show on June 22, 2023. Photo by Tiffany Konings.

Envisioning Contestability Loops

Throughout this, I have been diligently chipping away at my final publication, “Envisioning Contestability Loops: Evaluating the Agonistic Arena as a Generative Metaphor for Public AI.” I had a great time collaborating with Leon de Korte on an infographic of part of my design framework.

We took this infographic on a tour of Dutch interaction design agencies and conducted concept design workshops. I enjoyed returning to practice and sharing the work of the past couple of years with peers in practice. My friends at Eend wrote a nice blog post about it.

The analysis of the outcomes of these workshops forms the basis for the article, in which I explore the degree to which the guiding concept (generative metaphor) behind contestable AI, which I have dubbed the “Agonistic Arena,” is a productive one for design practitioners. Spoilers: it is, but competing metaphors are also at play in the public AI design space.

The manuscript is close to completion. As usual, putting something like this together is a heavy but gratifying lift. I look forward to sharing the results and the underlying infographic with the broader world.

Are we there yet?

Looking ahead, I will be on a panel alongside the great Julian Bleecker and a host of others at the annual TU Delft Design & AI symposium in October.

But aside from that, I will keep my head down and focus on completing my thesis. The aim is to hand it in by the end of November. So, two more months on the clock. Will I make it? Let’s find out!

PhD update – March 2023

Hello again, and welcome to another update on my PhD research progress. I will briefly run down the things that have happened since the last update, what I am currently working on, and some notable events on the horizon.

Recent happenings

CHI 2023 paper

Stills from the Contestable Camera Cars concept video.

First off, the big news is that the paper I submitted to CHI 2023 was accepted. This is a big deal for me because HCI is the core field I aim to contribute to, and CHI is its flagship conference.

Here’s the full citation:

Alfrink, K., Keller, I., Doorn, N., & Kortuem, G. (2023). Contestable Camera Cars: A Speculative Design Exploration of Public AI That Is Open and Responsive to Dispute. https://doi.org/10/jwrx

I have had several papers rejected in the past (CHI is notoriously hard to get accepted at), so I feel vindicated. The paper is already available as an arXiv preprint, as is the concept video that forms the core of the study I report on (many thanks to my pal Simon for collaborating on this with me). CHI 2023 happens in late April. I will be riding a train over there to present the paper in person. Very much looking forward to that.

Contestable Camera Cars concept video.

Responsible Sensing Lab anniversary event

I briefly presented my research at the Responsible Sensing Lab anniversary event on February 16. The whole event was quite enjoyable, and I got some encouraging responses to my ideas afterward, which is always nice. The event was recorded in full; my appearance starts around the 1:47:00 mark.

It me. (Credit: Responsible Sensing Lab.)
Video of my contribution. (Pakhuis de Zwijger / Responsible Sensing Lab.)

Tweeting, tooting, blogging

I have been getting back into the habit of tweeting, tooting, and even the occasional spot of blogging on this website again. As the end of my PhD nears, I figured it might be worth it to engage more actively with “the discourse,” as they say. I mostly share stuff I read that is related to my research and that I find interesting. Although, of course, posts related to my twin sons’ music taste and struggles with university bureaucracy always win out in the end. (Yes, I am aware my timing is terrible, seeing as how we have basically finally concluded social media was a bad idea after all.)

Current activities

Envisioning Contestability Loops

At the moment, the majority of my time is taken up by conducting a final study (working title: “Envisioning Contestability Loops”). I am excited about this one because I get to once again collaborate with a professional designer on an artifact, in this case a visual explanation of my framework, and use the result as a research instrument to dig into the strengths and weaknesses of contestability as a generative metaphor for the design of public AI.

Thesis

In parallel, I have begun to put together my thesis. It is paper-based, but of course, the introductory and concluding chapters still require some thought.

The aim is to have both the final article and the thesis finished by the end of summer and then begin the arduous process of getting a date for my defense, assembling a committee, and so on.

Agonistic Machine Vision Development

In the meantime, I am also mentoring Laura, another brilliant master graduation student. Her project, titled “Agonistic Machine Vision Development,” builds on my previous research, in particular on one of the challenges I identified in Contestable Camera Cars: the differential in information position between citizens and experts when they collaborate in participatory machine learning sessions. It’s very gratifying to see others do design work that pushes these ideas further.

Upcoming events

So yeah, as I already mentioned, I will be speaking at CHI 2023, which takes place April 23–28 in Hamburg. The schedule says I am presenting on April 25 as part of the session on “AI Trust, Transparency and Fairness,” which includes some excellent-looking contributions.

And before that, I will be at ICT.OPEN in Utrecht on April 20 to present briefly on the Contestable AI by Design framework as part of the CHI NL track. It should be fun.

That’s it for this update. Maybe, by the time the next one rolls around, I will be able to share a date for my defense. But let’s not jinx it.