On mapping AI value chains

At CSCW 2024, back in November of last year, we* ran a workshop titled “From Stem to Stern: Contestability Along AI Value Chains.” With it, we wanted to address a gap in contestable AI research. Current work focuses mainly on contesting specific AI decisions or outputs (for example, appealing a decision made by an automated content moderation system). But we should also look at contestability across the entire AI value chain—from raw material extraction to deployment and impact (think, for example, of data center activists opposing the construction of new hyperscale data centers). We aimed to explore how different stakeholders can contest AI systems at various points in this chain, considering issues like labor conditions, environmental impact, and data collection practices that are often overlooked in contestability discussions.

The workshop mixed presentations with hands-on activities. In the morning, researchers shared their work through short talks, both in person and online. The afternoon focused on mapping out where and how people can contest AI systems, from data collection to deployment, followed by detailed discussions of the practical challenges involved. We had both in-person and online participants, which required careful coordination between facilitators. We wrapped up by synthesizing key insights and outlining future research directions.

I served as a remote facilitator for most of the day. But Mireia and I also prepared and ran the first group activity, in which we mapped a typical AI value chain. I figured I might as well share the canvas we used for that here. It’s not rocket science, but it held up pretty well, so maybe some other people will get some use out of it. The canvas was designed to offer a fair bit of scaffolding for thinking through which decision points along the chain are potentially value-laden.

AI value chain mapping canvas (licensed CC-BY 4.0 Mireia Yurrita & Kars Alfrink, 2024). Download PDF.

Here’s how the activity worked: we spent about 50 minutes on a structured mapping exercise in which participants identified potential contestation points along an AI value chain, using ChatGPT as an example case. The activity used a Miro board with a preliminary map showing different stages of AI development (infrastructure setup, data management, AI development, etc.). Participants first brainstormed individually for 10 minutes, adding value-laden decisions and noting stakeholders, harms, benefits, and values at stake. They then collaborated for 15 minutes to reorganize and discuss the map. The activity concluded with dot voting (3 votes each) to identify the most impactful contestation sites, which were then clustered and named to feed into the next group activity.

The activity design drew on two main influences: typical value chain mapping methodologies (e.g., Mapping Actors along Value Chains, 2017), which usually emphasize tracking actors, flows, and contextual factors; and Wardley mapping (Wardley, 2022), which is characterized by a structured progression along an x-axis with an additional dimension on the y-axis.

The canvas design aimed to make AI system development more tangible by breaking it into clear phases (from infrastructure through governance) while considering visibility and materiality through the y-axis. We ultimately chose to use a familiar system (ChatGPT). This, combined with the activity’s structured approach, helped participants identify concrete opportunities for intervention and contestation along the AI value chain, which we could build on during the rest of the workshop.

I got a lot out of this workshop. Some of the key takeaways that emerged from the activities and discussions include:

  • There’s a disconnect between legal and technical communities, from basic terminology differences to varying conceptions of key concepts like explainability, highlighting the need for translation work between disciplines.
  • We need to move beyond individual grievance models to consider collective contestation and upstream interventions in the AI supply chain.
  • We also need to shift from reactive contestation to proactive design approaches that build in contestability from the start.
  • By virtue of being hybrid, we were lucky enough to have participants from across the globe. This helped drive home to me the importance of including Global South perspectives and considering contestability beyond Western legal frameworks. We desperately need a more inclusive and globally minded approach to AI governance.

Many thanks to all the workshop co-organizers for having me as part of the team, and to Agathe and Yulu, in particular, for leading the effort.


* The full workshop team consisted of Agathe Balayn, Yulu Pi, David Gray Widder, Mireia Yurrita, Sohini Upadhyay, Naveena Karusala, Henrietta Lyons, Cagatay Turkay, Christelle Tessono, Blair Attard-Frost, Ujwal Gadiraju, and myself.