In my thesis, I use autonomy to build the normative case for contestability. It so happens that this year’s theme at the Delft Design for Values Institute is also autonomy. On October 15, 2024, I participated in a panel discussion on autonomy to kick things off. For the occasion, I prepared some notes on autonomy that go beyond the conceptualization I used in my thesis. I thought it might be helpful and interesting to share some of them here in adapted form.
The notes I brought included, first of all, a summary of the ecumenical conceptualization of autonomy in relation to automated decision-making systems offered by Alan Rubel, Clinton Castro, and Adam Pham (2021). They conceive of autonomy as effective self-governance. To be autonomous, we need authentic beliefs about our circumstances and the agency to act on our plans. For algorithmic systems, they propose a reasonable endorsement test: the degree to which a system can be said to respect autonomy depends on its reliability, the stakes of its outputs, the degree to which subjects can be held responsible for inputs, and the distribution of burdens across groups.
Second, I collected some notes from several pieces by James Muldoon (2020, 2021a, 2021b), which get into notions of freedom and autonomy developed in socialist republican thought by the likes of Luxemburg, Kautsky, and Castoriadis. This account of autonomy is sociopolitical rather than moral, which makes it quite appealing for someone like me who is interested in non-ideal theory in a realist mode. On Muldoon’s account, individual autonomy hinges on greater group autonomy and stronger bonds of association between those producing and consuming technologies. Freedom is conceived of as collective self-determination.
Third and finally, there’s the connected idea of relational autonomy, which is to a degree part of the account offered by Rubel et al., though the conceptions cited here are more radical in how they seek to create distance from liberal individualism (e.g., Christman, 2004; Mhlambi & Tiribelli, 2023; Westlund, 2009). On this view, an individual’s capacity for autonomous choice is shaped by social structures, and freedom is realized through networks of care, responsibility, and interdependence.
That’s what I am interested in: accounts of autonomy that are not premised on liberal individualism and that give us some alternative handle on the problem of the social control of technology in general and of AI in particular.
From my point of view, the implications of all this for design and AI include the following.
First, to make a fairly obvious but often overlooked point, the degree to which a given system impacts people’s autonomy depends on the particulars of the case: what the system does, how reliable it is, what is at stake, and who bears the burdens. It makes little sense to make blanket statements about AI destroying our autonomy and so on.
Second, in value-sensitive design terms, you can treat autonomy as one value to be balanced against others, if you take the position that all values are, at least in principle, equally important. Or you can treat autonomy as a precondition for people to live with technology in concordance with their values, in which case autonomy takes precedence over other values. The sociopolitical and relational accounts above point in the latter direction.
Third, if you buy into the radical democratic idea of technology and autonomy, it makes little sense to admonish individual designers to respect others’ autonomy. Designers can be asked to privilege technologies in their designs that afford individual and group autonomy. But more often than not, designers themselves also need organization and emancipation. So it’s about building power: the power of workers inside the organizations that develop technologies, and the power of the communities that “consume” those same technologies.
With AI, in the cases I look at, the communities that AI is brought to bear on in reality have little say in the matter. The buyers and deployers of AI could and should be made more accountable to the people subjected to it.