On autonomy, design, and AI

In my thesis, I use autonomy to build the normative case for contestability. It so happens that this year's theme at the Delft Design for Values Institute is also autonomy. On October 15, 2024, I participated in a panel discussion on autonomy to kick things off. For the occasion, I collected some notes on autonomy that go beyond the conceptualization I used in my thesis, and I thought it might be helpful and interesting to share some of them here in adapted form.

The notes I brought included, first of all, a summary of the ecumenical conceptualization of autonomy concerning automated decision-making systems offered by Alan Rubel, Clinton Castro, and Adam Pham (2021). They conceive of autonomy as effective self-governance. To be autonomous, we need authentic beliefs about our circumstances and the agency to act on our plans. Regarding algorithmic systems, they offer the notion of a reasonable endorsement test: the degree to which a system can be said to respect autonomy depends on its reliability, the stakes of its outputs, the degree to which subjects can be held responsible for inputs, and the distribution of burdens across groups.

Second, I collected some notes from several pieces by James Muldoon (2020, 2021a, 2021b), which get into notions of freedom and autonomy that were developed in socialist republican thought by the likes of Luxemburg, Kautsky, and Castoriadis. This story of autonomy is sociopolitical rather than moral. For someone like myself, interested in non-ideal theory in a realist mode, this approach is quite appealing. The account of autonomy Muldoon offers is one where individual autonomy hinges on greater group autonomy and stronger bonds of association between those producing and consuming technologies. Freedom is conceived of as collective self-determination.

And then, third and finally, there is the connected idea of relational autonomy, which to a degree is part of the account offered by Rubel et al., but which in the conceptions cited here is more radical in the distance it seeks to create from liberal individualism (e.g., Christman, 2004; Mhlambi & Tiribelli, 2023; Westlund, 2009). On this view, the individual capacity for autonomous choice is shaped by social structures, and freedom becomes realized through networks of care, responsibility, and interdependence.

That's what I am interested in: accounts of autonomy that are not premised on liberal individualism and that give us some alternative handle on the problem of the social control of technology in general and of AI in particular.

From my point of view, the implications of all this for design and AI include the following.

First, to make a fairly obvious but often overlooked point, the degree to which a given system impacts people's autonomy depends on multiple factors, such as those named by the reasonable endorsement test above. It therefore makes little sense to make blanket statements about AI destroying our autonomy and so on.

Second, in value-sensitive design terms, you can think about autonomy as a value to be balanced against others, at least if you take the position that all values can be considered equally important in principle. Or you can consider autonomy more like a precondition for people to live with technology in concordance with their values, in which case autonomy takes precedence over other values. The sociopolitical and relational accounts above point in this direction.

Third, if you buy into the radical democratic idea of technology and autonomy, it follows that it makes little sense to admonish individual designers about respecting others' autonomy. They may be asked to privilege technologies in their designs that afford individual and group autonomy. But more often than not, designers themselves also need organization and emancipation. So it's about building power: the power of workers inside the organizations that develop technologies, and the power of the communities that "consume" those same technologies.

With AI, the reality in the cases I look at is that the communities on which AI is brought to bear have little say in the matter. The buyers and deployers of AI could and should be made more accountable to the people subjected to it.