Participatory AI and ML engineering

In the first half of this year, I’ve presented several versions of a brief talk on participatory AI. I figured I would post an amalgam of these to the blog for future reference. (Previously, on the blog, I posted a brief lit review on the same topic; this talk builds on that.)

So, to start, the main point of this talk is that many participatory approaches to AI don’t engage deeply with the specifics of the technology. One such specific is the translation work engineers do to make a problem “learnable” by a machine (Kang, 2023). From this perspective, the main question to ask becomes: how does translation happen in our specific projects? Should citizens be involved in this translation work? And if so, how might we achieve this?

Before we dig into the state of participatory AI, let’s begin by clarifying why we might want to enable participation in the first place. A common motivation is a lack of democratic control over AI systems. (This is particularly concerning when AI systems are used for government policy execution. These are the systems I mostly look at in my own research.) And so the response is to bring the people into the development process and to let them co-decide matters.

In these cases, participation can be understood as an enabler of democratic agency, i.e., a way for subjects to legitimate the use of AI systems (cf. Peter, 2020 in Rubel et al., 2021). Peter distinguishes two pathways: a normative one and a democratic one. Participation can be seen as an example of the democratic pathway to legitimation. A crucial detail Peter mentions here, which is often overlooked in participatory AI literature, is that normative constraints must limit the democratic pathway to avoid arbitrariness.

So, what is the state of participatory AI research and practice? I will look at each in turn.

As mentioned, I previously posted on the state of participatory AI research, so I won’t repeat that in full here. (For the record, I reviewed Birhane et al. (2022), Bratteteig & Verne (2018), Delgado et al. (2023), Ehsan & Riedl (2020), Feffer et al. (2023), Gerdes (2022), Groves et al. (2023), Robertson et al. (2023), Sloane et al. (2020), and Zytko et al. (2022).) Elements that jump out include:

  • Superficial and unrepresentative involvement.
  • Piecemeal approaches that have minimal impact on decision-making.
  • Participants with a consultative role rather than that of active decision-makers.
  • A lack of bridge-builders between stakeholder perspectives.
  • Participation washing and exploitative community involvement.
  • Struggles with the dynamic nature of technology over time.
  • Discrepancies between the time scales for users to evaluate design ideas versus the pace at which systems are developed.
  • A demand for participation that enhances community knowledge and actually empowers communities.

Taking a step back, if I were to evaluate the state of the scientific literature on participatory AI, it strikes me that many of these issues are not new to AI. They have been present in participatory design more broadly for some time. Nor are many of them necessarily specific to AI. The ones I would call out as more distinctly AI-related include the issues around AI system dynamism, the mismatch between the time scales of participation and development, and knowledge gaps between the various actors in participatory processes (and, relatedly, the lack of bridge-builders).

So, what about practice? Let’s look at two reports that I feel are a good representation of the broader field: Framework for Meaningful Stakeholder Involvement by ECNL & SocietyInside, and Democratizing AI: Principles for Meaningful Public Participation by Data & Society.

Framework for Meaningful Stakeholder Involvement is aimed at businesses, organizations, and institutions that use AI. It focuses on human rights, ethical assessment, and compliance. It aims to be a tool for planning, delivering, and evaluating stakeholder engagement effectively, emphasizing three core elements: Shared Purpose, Trustworthy Process, and Visible Impact.

Democratizing AI frames public participation in AI development as a way to add legitimacy and accountability and to help prevent harmful impacts. It outlines risks associated with AI, including biased outcomes, opaque decision-making processes, and designers lacking real-world impact awareness. Causes of ineffective participation include unidirectional communication, socioeconomic barriers, superficial engagement, and ineffective third-party involvement. The report uses environmental law as a reference point and offers eight guidelines for meaningful public participation in AI.

Taking stock of these reports, we can say that the building blocks for the overall process are available to those seriously looking. The challenges facing participatory AI are, on the one hand, economic and political. On the other hand, they are related to the specifics of the technology at hand. For the remainder of this piece, let’s dig into the latter a bit more.

Let’s focus on the translation work done by engineers during model development.

For this, I build on work by Kang (2023), which focuses on the qualitative analysis of how phenomena are translated into ML-compatible forms, paying specific attention to the ontological translations that occur in making a problem learnable. Translation in ML means transforming complex qualitative phenomena into quantifiable and computable forms. Multifaceted problems are converted into a “usable quantitative reference” or “ground truth.” This translation is not a mere representation of reality but a reformulation of a problem into mathematical terms, making it understandable and processable by ML algorithms. This transformation involves a significant amount of “ontological dissonance,” as it mediates and often simplifies the complexity of real-world phenomena into a taxonomy or set of classes for ML prediction. The process of translating is based on assumptions and standards that may alter the nature of the ML task and introduce new social and technical problems.
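To make this tangible, here is a minimal sketch of what such a translation step can look like in code. The domain (citizen nuisance reports), the taxonomy, and the severity scale are hypothetical illustrations of my own, not taken from Kang; the point is only to show how a multifaceted phenomenon gets collapsed into a fixed set of classes, with the value-laden choices hiding in plain sight:

```python
# Hypothetical example: translating free-text nuisance reports into a
# "usable quantitative reference" (ground truth) for an ML classifier.
# The taxonomy, keyword rules, and severity scale are all assumptions
# made by the engineer; each one simplifies the underlying phenomenon.

reports = [
    {"text": "loud music past midnight, third time this week", "repeat": True},
    {"text": "overflowing bins near the playground", "repeat": False},
    {"text": "group of teenagers hanging around the square", "repeat": False},
]

# The translation step: a fixed taxonomy replaces the rich phenomenon.
TAXONOMY = ("noise", "waste", "other")

def translate(report):
    """Map one multifaceted report onto a class label and a severity score."""
    text = report["text"].lower()
    if "music" in text or "loud" in text:
        label = "noise"
    elif "bins" in text or "litter" in text:
        label = "waste"
    else:
        label = "other"  # residual class: remaining complexity is discarded
    severity = 2 if report["repeat"] else 1  # arbitrary ordinal scale
    return {"label": label, "severity": severity}

ground_truth = [translate(r) for r in reports]
```

Note how the third report ends up in the residual “other” class, and how “severity” is reduced to a binary repeat-or-not rule: exactly the kind of ontological dissonance the paragraph above describes, and exactly the kind of choice participants could contest.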

So what? I propose we can use the notion of translation as a frame for ML engineering. Understanding ML model engineering as translation is a potentially useful way to analyze what happens at each step of the process: what gets selected for translation, how the translation is performed, and what the resulting translation consists of.
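One way to operationalize this frame would be to record each translation decision explicitly, along the three dimensions just named, so it can later be surfaced for participatory review. A minimal sketch; the field names and the example entries are my own hypothetical illustrations:

```python
# Sketch: logging translation decisions during ML engineering so that
# value-laden ones can be reviewed with participants. Field names and
# example entries are hypothetical.
from dataclasses import dataclass

@dataclass
class TranslationDecision:
    what: str          # what part of the phenomenon was selected
    how: str           # how the translation was performed
    result: str        # what the resulting representation consists of
    value_laden: bool  # flag decisions that participants might contest
    rationale: str = ""

log = [
    TranslationDecision(
        what="citizen complaints about noise",
        how="mapped free text onto a fixed three-class taxonomy",
        result="one categorical label per report",
        value_laden=True,
        rationale="taxonomy chosen by the engineering team",
    ),
    TranslationDecision(
        what="time of report",
        how="copied timestamp from the intake system",
        result="datetime feature",
        value_laden=False,
    ),
]

# Decisions flagged as value-laden become the agenda for participation.
to_review = [d for d in log if d.value_laden]
```

The design choice here is simply to make translation inspectable: once the what/how/result of each step is written down, asking which steps are value-laden (the question raised next) becomes a concrete filtering exercise rather than an abstract one.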

So, if we seek to make participatory AI engage more with the technical particularities of ML, we could begin by identifying translations that have happened or might happen in our projects. We could then ask to what extent these acts of translation are value-laden. For those that are, we could think about how to communicate these translations to a lay audience. A particular challenge I expect we will face is determining the meaningful level of abstraction for citizen participation during AI development. We should also ask what the appropriate ‘vehicle’ for citizen participation will be. And we should seek to move beyond small-scale, one-off, often unrepresentative forms of direct participation.

Bibliography

  • Birhane, A., Isaac, W., Prabhakaran, V., Diaz, M., Elish, M. C., Gabriel, I., & Mohamed, S. (2022). Power to the People? Opportunities and Challenges for Participatory AI. Equity and Access in Algorithms, Mechanisms, and Optimization, 1–8. https://doi.org/10/grnj99
  • Bratteteig, T., & Verne, G. (2018). Does AI make PD obsolete?: Exploring challenges from artificial intelligence to participatory design. Proceedings of the 15th Participatory Design Conference: Short Papers, Situated Actions, Workshops and Tutorial - Volume 2, 1–5. https://doi.org/10/ghsn84
  • Delgado, F., Yang, S., Madaio, M., & Yang, Q. (2023). The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice. Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, 1–23. https://doi.org/10/gs8kvm
  • Ehsan, U., & Riedl, M. O. (2020). Human-Centered Explainable AI: Towards a Reflective Sociotechnical Approach. In C. Stephanidis, M. Kurosu, H. Degen, & L. Reinerman-Jones (Eds.), HCI International 2020 - Late Breaking Papers: Multimodality and Intelligence (pp. 449–466). Springer International Publishing. https://doi.org/10/gskmgf
  • Feffer, M., Skirpan, M., Lipton, Z., & Heidari, H. (2023). From Preference Elicitation to Participatory ML: A Critical Survey & Guidelines for Future Research. Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 38–48. https://doi.org/10/gs8kvx
  • Gerdes, A. (2022). A participatory data-centric approach to AI Ethics by Design. Applied Artificial Intelligence, 36(1), 2009222. https://doi.org/10/gs8kt4
  • Groves, L., Peppin, A., Strait, A., & Brennan, J. (2023). Going public: The role of public participation approaches in commercial AI labs. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 1162–1173. https://doi.org/10/gs8kvs
  • Kang, E. B. (2023). Ground truth tracings (GTT): On the epistemic limits of machine learning. Big Data & Society, 10(1), 1–12. https://doi.org/10/gtfgvx
  • Peter, F. (2020). The Grounds of Political Legitimacy. Journal of the American Philosophical Association, 6(3), 372–390. https://doi.org/10/grqfhn
  • Rubel, A., Castro, C., & Pham, A. K. (2021). Algorithms and autonomy: The ethics of automated decision systems. Cambridge University Press.
  • Sloane, M., Moss, E., Awomolo, O., & Forlano, L. (2020). Participation is not a Design Fix for Machine Learning. arXiv:2007.02423 [Cs]. http://arxiv.org/abs/2007.02423
  • Zytko, D., Wisniewski, P. J., Guha, S., Baumer, E. P. S., & Lee, M. K. (2022). Participatory Design of AI Systems: Opportunities and Challenges Across Diverse Users, Relationships, and Application Domains. Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems, 1–4. https://doi.org/10/gs8kv6