On the design and regulation of technology

The following is a section from a manuscript in press on the similarities and differences in approaches to explainable and contestable AI in design and law (Schmude et al., 2025). It ended up on the cutting room floor, but it is the kind of thing I find handy to refer back to, so I chose to share it here.

The responsible design of AI, including practices that seek to make AI systems more explainable and contestable, must somehow relate to legislation and regulations. Writing about responsible research and innovation (RRI) more broadly, Stilgoe et al. (2013) assert that RRI, which we would say includes design, must be embedded in regulation. But does it really make sense to think of the relationship between design and regulation in this way? Understood abstractly, there are in fact at least four ways in which we can think about the relationship between the design and regulation of technology (Figure 1).

Figure 1: We see four possible ways that the relationship between (D) the design and (R) regulation of technology can be conceptualized: (1) design and regulation are independent spheres, (2) design and regulation partially overlap, (3) design is embedded inside of regulation, or (4) regulation is embedded inside design. In all cases, we assume an interactional relation between the two spheres.

To establish the relationship between design and regulation, we first need to establish how we should think about regulation, and about related concepts such as governance and policymaking more generally. One straightforward definition would be that regulation entails formal rules and enforcement mechanisms that constrain behavior. These rules are backed by authority, typically state authority, but increasingly also by non-state actors. Regulation and governance are interactive and mutually constitutive: regulation is one mechanism within governance systems, while governance frameworks establish the contexts in which regulation operates. Policymaking connects politics to governance by translating political contestation into actionable frameworks. Politics, then, influences all of these domains: policymaking, governance, and regulation. And they, in turn, operate within and reciprocally shape society writ large. See Table 1 for working definitions of ‘regulation’ and associated concepts.

| Concept | Definition |
| --- | --- |
| Regulation | Formal rules and enforcement mechanisms that constrain behavior, typically state-backed but increasingly emerging from non-state actors (industry self-regulation, transnational regulatory bodies). |
| Governance | Broader arrangements for coordinating social action across public, private, and civil society spheres through both formal and informal mechanisms. |
| Policymaking | Process of formulating courses of action to address public problems. |
| Politics | Contestation of power, interests, and values that shapes governance arrangements. |
| Society | Broader context of social relations, norms, and institutions. |
Table 1: Working definitions of ‘regulation’ and associated concepts.

What about design? Scholars of regulation have adopted the notion of ‘regulation by design’ (RBD) to refer to the inscribing of rules into the world through the creation and implementation of technological artifacts. Prifti (2024) identifies two prevailing approaches to RBD: The essentialist view treats RBD as policy enactments, or “rules for design.” By contrast, the functionalist view treats design as a mere instrument, or “ruling by design.” We agree with Prifti when he states that both approaches are limited. Essentialism neglects the complexity of regulatory environments, while functionalism neglects the autonomy and complexity of design as a practice.

Prifti proposes a pragmatist reconstruction that views regulation as a rule-making activity (“regulativity”) performed through social practices including design (the “rule of design”). Design is conceptualized as a contextual, situated social practice that performs changes in the environment, rather than just a tool or set of rules. Unlike the law, markets, or social norms, which rely on incentives and sanctions, design can simply disable the possibility of non-compliance, making it a uniquely powerful form of regulation. The pragmatist approach distinguishes between regulation and governance, with governance being a meta-regulative activity that steers how other practices (like design) regulate. This reconceptualization helps address legitimacy concerns by allowing for greater accountability for design practices that might bypass democratic processes.

Returning to the opening question, then: out of the four basic ways in which the relationship between design and regulation can be drawn (Figure 1), if we adopt Prifti’s pragmatist view, Type 3 most accurately captures the relationship, with design being one of a variety of more specific ways in which regulation (understood as regulativity) actually makes changes in the world. (These other forms of regulatory practice are not depicted in the figure.) This aligns with Stilgoe et al.’s aforementioned position that responsible design must be embedded within regulation. There is a slight nuance to our position, though: design is always a form of regulation, regardless of any active work on the part of designers to ‘embed’ their work inside regulatory practices. Stilgoe et al.’s admonition is therefore better understood as a normative claim: responsible designers would do well to understand and align their design work with extant laws and regulations. Furthermore, following Prifti, design is beholden to governance and must be reflexively aware of how governance steers its practices (cf. Figure 2).

Figure 2: Conceptual model of the relationship between design, ‘classical’ regulation (i.e., law-making), and governance. Both design and law-making are forms of regulation (i.e., ‘regulativity’). Governance steers how design and law-making regulate, and design and law-making are both accountable to (and reflexively aware of) governance.

Bibliography

  • Prifti, K. (2024). The theory of ‘Regulation By Design’: Towards a pragmatist reconstruction. Technology and Regulation, 2024, 152–166. https://doi.org/10/g9dr24
  • Stilgoe, J., Owen, R., & Macnaghten, P. (2013). Developing a framework for responsible innovation. Research Policy, 42(9), 1568–1580. https://doi.org/10/f5gv8h

On mapping AI value chains

At CSCW 2024, back in November of last year, we* ran a workshop titled “From Stem to Stern: Contestability Along AI Value Chains.” With it, we wanted to address a gap in contestable AI research. Current work focuses mainly on contesting specific AI decisions or outputs (for example, appealing a decision made by an automated content moderation system). But we should also look at contestability across the entire AI value chain, from raw material extraction to deployment and impact (think, for example, of data center activists opposing the construction of new hyperscale data centers). We aimed to explore how different stakeholders can contest AI systems at various points in this chain, considering issues like labor conditions, environmental impact, and data collection practices that are often overlooked in contestability discussions.

The workshop mixed presentations with hands-on activities. In the morning, researchers shared their work through short talks, both in person and online. The afternoon focused on mapping out where and how people can contest AI systems, from data collection to deployment, followed by detailed discussions of the practical challenges involved. We had both in-person and online participants, requiring careful coordination between facilitators. We wrapped up by synthesizing key insights and outlining future research directions.

I acted as a remote facilitator for most of the day. But Mireia and I also prepared and ran the first group activity, in which we mapped a typical AI value chain. I figured I might as well share the canvas we used for that here. It’s not rocket science, but it held up pretty well, so maybe some other people will get some use out of it. The canvas was designed to offer a fair bit of scaffolding for thinking through which potentially value-laden decision points exist along the chain.

AI value chain mapping canvas (licensed CC-BY 4.0 Mireia Yurrita & Kars Alfrink, 2024). Download PDF.

Here’s how the activity worked: We spent about 50 minutes on a structured mapping exercise in which participants identified potential contestation points along an AI value chain, using ChatGPT as an example case. The activity used a Miro board with a preliminary map showing different stages of AI development (infrastructure setup, data management, AI development, etc.). Participants first brainstormed individually for 10 minutes, adding value-laden decisions and noting stakeholders, harms, benefits, and values at stake. They then collaborated to reorganize and discuss the map for 15 minutes. The activity concluded with participants using dot voting (3 votes each) to identify the most impactful contestation sites, which were then clustered and named to feed into the next group activity.

The activity design drew from two main influences: typical value chain mapping methodologies (e.g., Mapping Actors along Value Chains, 2017), which usually emphasize tracking actors, flows, and contextual factors, and Wardley mapping (Wardley, 2022), which is characterized by a structured progression along an x-axis with an additional dimension on the y-axis.

The canvas design aimed to make AI system development more tangible by breaking it into clear phases (from infrastructure through governance) while considering visibility and materiality through the y-axis. We ultimately chose to use a familiar system (ChatGPT). This, combined with the activity’s structured approach, helped participants identify concrete opportunities for intervention and contestation along the AI value chain, which we could build on during the rest of the workshop.
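If you would rather recreate the canvas digitally than print the PDF, here is a minimal sketch of how its contents could be modeled as a simple data structure. To be clear, this is an illustrative reconstruction, not part of the published canvas: the phase names follow the preliminary map described above, while the visibility labels, field names, and the example entry are assumptions of mine.

```python
from dataclasses import dataclass, field

# Hypothetical x-axis: phases of the AI value chain, following the preliminary
# map used in the workshop (the exact set of phases may differ on the canvas).
PHASES = [
    "infrastructure setup",
    "data management",
    "AI development",
    "deployment",
    "governance",
]

# Hypothetical y-axis labels for how visible/material a decision is to
# affected stakeholders; the canvas frames this axis as visibility/materiality.
VISIBILITY_LEVELS = ["backstage", "partially visible", "publicly visible"]


@dataclass
class DecisionPoint:
    """A value-laden decision noted on the canvas (illustrative field names)."""
    description: str
    phase: str                       # position on the x-axis
    visibility: str                  # position on the y-axis
    stakeholders: list[str] = field(default_factory=list)
    harms: list[str] = field(default_factory=list)
    benefits: list[str] = field(default_factory=list)
    values_at_stake: list[str] = field(default_factory=list)
    votes: int = 0                   # dot votes (3 per participant)


# An invented example entry, loosely based on the data center theme above.
example = DecisionPoint(
    description="Siting and energy sourcing of new data centers",
    phase="infrastructure setup",
    visibility="backstage",
    stakeholders=["local residents", "cloud providers"],
    harms=["strain on local water and energy supplies"],
    benefits=["added compute capacity"],
    values_at_stake=["sustainability", "procedural justice"],
)


def most_contested(points: list[DecisionPoint], top_n: int = 3) -> list[DecisionPoint]:
    """Return the decision points with the most dot votes, mirroring the
    clustering step that fed into the next group activity."""
    return sorted(points, key=lambda p: p.votes, reverse=True)[:top_n]
```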

I got a lot out of this workshop. Some of the key takeaways that emerged out of the activities and discussions include:

  • There’s a disconnect between legal and technical communities, from basic terminology differences to varying conceptions of key concepts like explainability, highlighting the need for translation work between disciplines.
  • We need to move beyond individual grievance models to consider collective contestation and upstream interventions in the AI supply chain.
  • We also need to shift from reactive contestation to proactive design approaches that build in contestability from the start.
  • By virtue of being hybrid, we were lucky enough to have participants from across the globe. This helped drive home to me the importance of including Global South perspectives and considering contestability beyond Western legal frameworks. We desperately need a more inclusive and globally minded approach to AI governance.

Many thanks to all the workshop co-organizers for having me as part of the team and to Agathe and Yulu, in particular, for leading the effort.


* The full workshop team consisted of Agathe Balayn, Yulu Pi, David Gray Widder, Mireia Yurrita, Sohini Upadhyay, Naveena Karusala, Henrietta Lyons, Cagatay Turkay, Christelle Tessono, Blair Attard-Frost, Ujwal Gadiraju, and myself.