At CSCW 2024, back in November of last year, we* ran a workshop titled “From Stem to Stern: Contestability Along AI Value Chains.” With it, we wanted to address a gap in contestable AI research. Current work focuses mainly on contesting specific AI decisions or outputs (for example, appealing a decision made by an automated content moderation system). But we should also look at contestability across the entire AI value chain—from raw material extraction to deployment and impact (think, for example, of data center activists opposing the construction of new hyperscale facilities). We aimed to explore how different stakeholders can contest AI systems at various points in this chain, considering issues like labor conditions, environmental impact, and data collection practices often overlooked in contestability discussions.
The workshop mixed presentations with hands-on activities. In the morning, researchers shared their work through short talks, both in person and online. The afternoon focused on mapping out where and how people can contest AI systems, from data collection to deployment, followed by detailed discussions of the practical challenges involved. We had both in-person and online participants, requiring careful coordination between facilitators. We wrapped up by synthesizing key insights and outlining future research directions.
I served as a remote facilitator for most of the day. But Mireia and I also prepared and ran the first group activity, in which we mapped a typical AI value chain. I figured I might as well share the canvas we used for that here. It’s not rocket science, but it held up pretty well, so maybe some other people will get some use out of it. The canvas was designed to offer a fair bit of scaffolding for thinking through which decision points along the chain are potentially value-laden.
Here’s how the activity worked: We spent about 50 minutes on a structured mapping exercise in which participants identified potential contestation points along an AI value chain, using ChatGPT as an example case. The activity used a Miro board with a preliminary map showing different stages of AI development (infrastructure setup, data management, AI development, etc.). Participants first brainstormed individually for 10 minutes, adding value-laden decisions and noting stakeholders, harms, benefits, and values at stake. They then collaborated to reorganize and discuss the map for 15 minutes. The activity concluded with participants using dot voting (3 votes each) to identify the most impactful contestation sites, which were then clustered and named to feed into the next group activity.
The activity design drew from two main influences: typical value chain mapping methodologies (e.g., Mapping Actors along Value Chains, 2017), which usually emphasize tracking actors, flows, and contextual factors, and Wardley mapping (Wardley, 2022), which is characterized by the idea of a structured progression along an x‑axis with an additional dimension on the y‑axis.
The canvas design aimed to make AI system development more tangible by breaking it into clear phases (from infrastructure through governance) while considering visibility and materiality through the y‑axis. We ultimately chose to use a familiar system (ChatGPT). This, combined with the activity’s structured approach, helped participants identify concrete opportunities for intervention and contestation along the AI value chain, which we could build on during the rest of the workshop.
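For anyone who wants to reuse or adapt the canvas, here is a minimal sketch of the structure it scaffolds, written as code purely for precision. The stage names and fields are illustrative; they are not the exact wording we used on the Miro board.

```python
from dataclasses import dataclass, field

# Illustrative stages along the canvas x-axis, roughly upstream to downstream.
STAGES = [
    "infrastructure setup",
    "data management",
    "AI development",
    "deployment",
    "governance",
]

@dataclass
class DecisionPoint:
    """A potentially value-laden decision that participants place on the canvas."""
    description: str
    stage: str                  # x-axis: position along the value chain
    visibility: float           # y-axis: 0 = invisible/material substrate, 1 = user-facing
    stakeholders: list = field(default_factory=list)
    harms: list = field(default_factory=list)
    benefits: list = field(default_factory=list)
    values_at_stake: list = field(default_factory=list)
    votes: int = 0              # dot votes received in the prioritisation round

# An example sticky note for the ChatGPT case (content invented for illustration).
note = DecisionPoint(
    description="Choice of data centre location for model training",
    stage="infrastructure setup",
    visibility=0.1,
    stakeholders=["local residents", "cloud provider"],
    harms=["water and energy use"],
    benefits=["lower latency and cost"],
    values_at_stake=["environmental sustainability"],
)
```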
I got a lot out of this workshop. Some of the key takeaways that emerged out of the activities and discussions include:
There’s a disconnect between legal and technical communities, from basic terminology differences to varying conceptions of key concepts like explainability, highlighting the need for translation work between disciplines.
We need to move beyond individual grievance models to consider collective contestation and upstream interventions in the AI supply chain.
We also need to shift from reactive contestation to proactive design approaches that build in contestability from the start.
By virtue of being hybrid, we were lucky enough to have participants from across the globe. This helped drive home to me the importance of including Global South perspectives and considering contestability beyond Western legal frameworks. We desperately need a more inclusive and globally-minded approach to AI governance.
Many thanks to all the workshop co-organizers for having me as part of the team and to Agathe and Yulu, in particular, for leading the effort.
Thijs Turèl of AMS Institute and I presented a version of the talk below at the Cities for Digital Rights conference on June 19 in Amsterdam, during the We Make the City festival. The talk is an attempt to articulate some of the ideas we have both been developing for some time around contestability in smart public infrastructure. As always with this sort of thing, it is intended as a conversation piece, so I welcome any thoughts you may have.
The basic message of the talk is that when we start to do automated decision-making in public infrastructure using algorithmic systems, we need to design for the disagreements that will inevitably arise. Furthermore, we suggest there is an opportunity to focus on designing for such disagreements in the physical objects that people encounter in urban space as they make use of infrastructure.
We set the scene by showing a number of examples of smart public infrastructure. A bicycle crossing that adapts to weather conditions: if it’s raining, cyclists get a green light more frequently. A pedestrian crossing in Tilburg where elderly people can use their mobile phone to get more time to cross. And finally, the case we are involved with ourselves: smart EV charging in the city of Amsterdam, about which more later.
We identify three trends in smart public infrastructure: (1) where previously algorithms were used to inform policy, now they are employed to perform automated decision-making on an individual case basis. This raises the stakes; (2) distributed ownership of these systems as the result of public-private partnerships and other complex collaboration schemes leads to unclear responsibility; and finally (3) the increasing use of machine learning leads to opaque decision-making.
These trends, and algorithmic systems more generally, raise a number of ethical concerns. They include but are not limited to: the use of inductive correlations (for example in the case of machine learning) leads to unjustified results; lack of access to and comprehension of a system’s inner workings produces opacity, which in turn leads to a lack of trust in the systems themselves and the organisations that use them; bias is introduced by a number of factors, including development team prejudices, technical flaws, bad data and unforeseen interactions with other systems; and finally the use of profiling, nudging and personalisation leads to diminished human agency. (We highly recommend the article by Mittelstadt et al. for a comprehensive overview of ethical concerns raised by algorithms.)
So for us, the question that emerges from all this is: How do we organise the supervision of smart public infrastructure in a democratic and lawful way?
There are a number of existing approaches to this question. These include legal and regulatory (e.g. the right to explanation in the GDPR); auditing (e.g. KPMG’s “AI in Control” method, BKZ’s transparantielab); procurement (e.g. open source clauses); insourcing (e.g. GOV.UK) and design and engineering (e.g. our own work on the transparent charging station).
We feel there are two important limitations with these existing approaches. The first is a focus on professionals and the second is a focus on prediction. We’ll discuss each in turn.
First of all, many solutions target a professional class, be they accountants, civil servants, supervisory boards, technologists, designers and so on. But we feel there is a role for the citizen as well, because the supervision of these systems is simply too important to be left to a privileged few. This role would include identifying wrongdoing and suggesting alternatives.
There is a tension here, which is that from the perspective of the public sector one should only ask citizens for their opinion when you have the intention and the resources to actually act on their suggestions. It can also be a challenge to identify legitimate concerns in the flood of feedback that can sometimes occur. From our point of view though, such concerns should not be used as an excuse to not engage the public. If citizen participation is considered necessary, the focus should be on freeing up resources and setting up structures that make it feasible and effective.
The second limitation is prediction. This is best illustrated with the Collingridge dilemma: in the early phases of a new technology, when the technology and its social embedding are still malleable, there is uncertainty about its social effects. In later phases, the social effects may be clear, but by then the technology has often become so entrenched in society that it is hard to overcome those negative effects. (This summary is taken from an excellent van de Poel article on the ethics of experimental technology.)
Many solutions disregard the Collingridge dilemma and try to predict and prevent adverse effects of new systems at design-time. One example of this approach would be value-sensitive design. Our focus instead is on use-time. Considering the fact that smart public infrastructure tends to be developed on an ongoing basis, the question becomes how to make citizens a partner in this process. And even more specifically we are interested in how this can be made part of the design of the “touchpoints” people actually encounter in the streets, as well as their backstage processes.
Why do we focus on these physical objects? Because this is where people actually meet the infrastructural systems, of which large parts recede from view. These are the places where they become aware of their presence. They are the proverbial tip of the iceberg.
The use of automated decision-making in infrastructure reduces people’s agency. For this reason, resources for agency need to be designed back into these systems. Frequently, the proposed answer is premised on a transparency ideal. Transparency may be a prerequisite for agency, but it is not sufficient: it may help you become aware of what is going on, but it will not necessarily help you act on that knowledge. This is why we propose a shift from transparency to contestability. (We can highly recommend Ananny and Crawford’s article for more on why transparency is insufficient.)
To clarify what we mean by contestability, consider the following three examples: When you see the lights on your router blink in the middle of the night when no-one in your household is using the internet you can act on this knowledge by yanking out the device’s power cord. You may never use the emergency brake in a train but its presence does give you a sense of control. And finally, the cash register receipt provides you with a view into both the procedure and the outcome of the supermarket checkout procedure and it offers a resource with which you can dispute them if something appears to be wrong.
None of these examples is a perfect illustration of contestability but they hint at something more than transparency, or perhaps even something wholly separate from it. We’ve been investigating what their equivalents would be in the context of smart public infrastructure.
To illustrate this point further let us come back to the smart EV charging project we mentioned earlier. In Amsterdam, public EV charging stations are becoming “smart” which in this case means they automatically adapt the speed of charging to a number of factors. These include grid capacity, and the availability of solar energy. Additional factors can be added in future, one of which under consideration is to give priority to shared cars over privately owned cars. We are involved with an ongoing effort to consider how such charging stations can be redesigned so that people understand what’s going on behind the scenes and can act on this understanding. The motivation for this is that if not designed carefully, the opacity of smart EV charging infrastructure may be detrimental to social acceptance of the technology. (A first outcome of these efforts is the Transparent Charging Station designed by The Incredible Machine. A follow-up project is ongoing.)
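To make the link between smart charging and contestability a bit more concrete, here is a minimal sketch, emphatically not the actual Amsterdam system, of a charging-speed policy that records the factors behind each decision. The factor names, thresholds and the shared-car rule are invented for illustration; the point is that the stored factors play the role of the cash register receipt from the earlier example, a resource with which a driver could check and, if needed, dispute how their charging speed was set.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ChargingDecision:
    """One charging-speed decision plus the factors that produced it."""
    timestamp: datetime
    charge_rate_kw: float
    factors: dict  # the inputs the policy looked at, kept so the decision can be disputed later

def decide_charge_rate(grid_capacity_kw: float,
                       solar_share: float,
                       is_shared_car: bool,
                       max_rate_kw: float = 11.0) -> ChargingDecision:
    """Illustrative policy: slow down when the grid is constrained, speed up
    when solar is abundant, and (hypothetically) prioritise shared cars."""
    rate = min(max_rate_kw, grid_capacity_kw)
    if solar_share > 0.5:           # plenty of solar: charge at full speed
        rate = max_rate_kw
    if grid_capacity_kw < 4.0:      # grid congestion: cap at what the grid allows
        rate = grid_capacity_kw
    if is_shared_car:               # possible future rule mentioned in the talk
        rate = min(rate * 1.2, max_rate_kw)
    return ChargingDecision(
        timestamp=datetime.now(),
        charge_rate_kw=round(rate, 1),
        factors={
            "grid_capacity_kw": grid_capacity_kw,
            "solar_share": solar_share,
            "is_shared_car": is_shared_car,
        },
    )

# The factors dict is the "receipt" for this charging session.
decision = decide_charge_rate(grid_capacity_kw=3.0, solar_share=0.2, is_shared_car=False)
```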
We have identified a number of different ways in which people may object to smart EV charging. They are listed in the table below. These types of objections can lead us to feature requirements for making the system contestable.
Because the list is preliminary, we asked the audience if they could imagine additional objections, if those represented new categories, and if they would require additional features for people to be able to act on them. One particularly interesting suggestion that emerged was to give local communities control over the policies enacted by the charge points in their vicinity. The implications of that are worth exploring further.
And that’s where we left it. So to summarise:
Algorithmic systems are becoming part of public infrastructure.
Smart public infrastructure raises new ethical concerns.
Many solutions to ethical concerns are premised on a transparency ideal, but do not address the issue of diminished agency.
There are different categories of objections people may have to an algorithmic system’s workings.
Making a system contestable means creating resources for people to object, opening up a space for the exploration of meaningful alternatives to its current implementation.
I recently followed an excellent three-day course on engineering ethics. It was offered by the TU Delft graduate school and taught by Behnam Taibi with guest lectures from several of our faculty.
I found it particularly helpful to get some suggestions for further reading that represent some of the foundational ideas in the field. I figured it would be useful to others as well to have a pointer to them.
So here they are. I’ve quickly gutted these for their meaning. The one by Van de Poel I did read entirely and can highly recommend for anyone who’s doing design of emerging technologies and wants to escape from the informed consent conundrum.
I intend to dig into the Doorn one, not just because she’s one of my promoters but also because resilience is a concept closely related to my own interests. I’ll also get into the Floridi one in detail, but I found the concept of information quality and the care ethics perspective on the problem of information abundance and attention scarcity immediately applicable in interaction design.
Stilgoe, Jack, Richard Owen, and Phil Macnaghten. “Developing a framework for responsible innovation.” Research Policy 42.9 (2013): 1568–1580.
Van den Hoven, Jeroen. “Value sensitive design and responsible innovation.” Responsible innovation (2013): 75–83.
Hansson, Sven Ove. “Ethical criteria of risk acceptance.” Erkenntnis 59.3 (2003): 291–309.
Van de Poel, Ibo. “An ethical framework for evaluating experimental technology.” Science and Engineering Ethics 22.3 (2016): 667–686.
Hansson, Sven Ove. “Philosophical problems in cost–benefit analysis.” Economics & Philosophy 23.2 (2007): 163–183.
Floridi, Luciano. “Big Data and information quality.” The philosophy of information quality. Springer, Cham, 2014. 303–315.
Doorn, Neelke, Paolo Gardoni, and Colleen Murphy. “A multidisciplinary definition and evaluation of resilience: The role of social justice in defining resilience.” Sustainable and Resilient Infrastructure (2018): 1–12.
We also got a draft of the intro chapter to a book on engineering and ethics that Behnam is writing. That looks very promising as well but I can’t share yet for obvious reasons.
I’d like to talk about the future of our design practice and what I think we should focus our attention on. It is all related to this idea of complexity and opening up black boxes. We’re going to take the scenic route, though. So bear with me.
Software Design
Two years ago I spent about half a year in Singapore.
While there I worked as product strategist and designer at a startup called ARTO, an art recommendation service. It shows you a random sample of artworks, you tell it which ones you like, and it will then start recommending pieces it thinks you like. In case you were wondering: yes, swiping left and right was involved.
We had this interesting problem of ingesting art from many different sources (mostly online galleries) with metadata of wildly varying levels of quality. So, using metadata to figure out which art to show was a bit of a non-starter. It should come as no surprise then, that we started looking into machine learning—image processing in particular.
And so I found myself working with my engineering colleagues on an art recommendation stream which was driven at least in part by machine learning. And I quickly realised we had a problem. In terms of how we worked together on this part of the product, it felt like we had taken a bunch of steps back in time. Back to a way of collaborating that was less integrated and less responsive.
That’s because we have all these nice tools and techniques for designing traditional software products. But software is deterministic. Machine learning is fundamentally different in nature: it is probabilistic.
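A toy contrast, just to illustrate what I mean (this is not ARTO’s code): a hand-written rule returns the same answer for the same input every time, while a learned model returns a score whose behaviour depends on weights estimated from data, and which shifts every time the model is retrained.

```python
import math

# Deterministic: a hand-written rule. Same input, same output, always.
def rule_based_likes(artwork: dict) -> bool:
    return artwork["style"] == "impressionism" and artwork["year"] > 1870

# Probabilistic: a model score. The weights here are hard-coded for the sake of
# the example; in a real system they come out of training, so the feature's
# behaviour can only be described in terms of likelihoods.
def model_likes(artwork_features: list, weights: list) -> float:
    score = sum(f * w for f, w in zip(artwork_features, weights))
    return 1 / (1 + math.exp(-score))  # probability that this user likes the artwork

print(rule_based_likes({"style": "impressionism", "year": 1874}))  # True, every time
print(model_likes([0.3, 1.2, -0.5], [0.8, 0.1, 0.4]))              # ~0.54, entirely dependent on the weights
```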
It was hard for me to take the lead in the design of this part of the product for two reasons. First of all, it was challenging to get a first-hand feel of the machine learning feature before it was implemented.
And second of all, it was hard for me to communicate or visualise the intended behaviour of the machine learning feature to the rest of the team.
So when I came back to the Netherlands I decided to dig into this problem of design for machine learning. Turns out I opened up quite the can of worms for myself. But that’s okay.
There are two reasons I care about this:
The first is that I think we need more design-led innovation in the machine learning space. At the moment it is engineering-dominated, which doesn’t necessarily lead to useful outcomes. But if you want to take the lead in the design of machine learning applications, you need a firm handle on the nature of the technology.
The second reason why I think we need to educate ourselves as designers on the nature of machine learning is that we need to take responsibility for the impact the technology has on the lives of people. There is a lot of talk about ethics in the design industry at the moment. Which I consider a positive sign. But I also see a reluctance to really grapple with what ethics is and what the relationship between technology and society is. We seem to want easy answers, which is understandable because we are all very busy people. But having spent some time digging into this stuff myself I am here to tell you: There are no easy answers. That isn’t a bug, it’s a feature. And we should embrace it.
Machine Learning
At the end of 2016 I attended ThingsCon here in Amsterdam and I was introduced by Ianus Keller to TU Delft PhD researcher Péter Kun. It turns out we were both interested in machine learning. So with encouragement from Ianus we decided to put together a workshop that would enable industrial design master students to tangle with it in a hands-on manner.
About a year later now, this has grown into a thing we call Prototyping the Useless Butler. During the workshop, you use machine learning algorithms to train a model that takes inputs from a network-connected Arduino’s sensors and drives that same Arduino’s actuators. In effect, you can create interactive behaviour without writing a single line of code. And you get a first-hand feel for how common applications of machine learning work. Things like regression, classification and dynamic time warping.
The thing that makes this workshop tick is an open source software application called Wekinator, created by Rebecca Fiebrink. It was originally aimed at performing artists so that they could build interactive instruments without writing code. But it takes inputs from anything and sends outputs to anything. So we appropriated it towards our own ends.
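Wekinator itself asks for no code at all, but to give a sense of the kind of mapping it learns: from a handful of recorded examples it infers a function from input values (sensor readings) to output values (actuator settings). Below is a rough sketch of that idea using a plain nearest-neighbour regression; the numbers are made up and this is not Wekinator’s actual implementation, just the flavour of input-to-output mapping it produces.

```python
import math

# Training examples recorded by demonstration: (sensor readings, desired actuator outputs).
# The values are invented; in the workshop they would arrive from the Arduino over the network.
examples = [
    ([0.10, 0.90], [0.0]),   # e.g. light low, distance high -> servo at rest
    ([0.80, 0.20], [0.7]),   # light high, distance low -> servo mostly open
    ([0.95, 0.05], [1.0]),
]

def predict(sensor_reading, k=2):
    """k-nearest-neighbour regression: average the outputs of the k recorded
    examples whose inputs are most similar to the current sensor reading."""
    by_distance = sorted(examples, key=lambda ex: math.dist(ex[0], sensor_reading))
    nearest = by_distance[:k]
    n_outputs = len(nearest[0][1])
    return [sum(outputs[i] for _, outputs in nearest) / k for i in range(n_outputs)]

print(predict([0.85, 0.15]))  # -> [0.85], an average of the two closest examples
```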
The thinking behind this workshop is that for us designers to be able to think creatively about applications of machine learning, we need a granular understanding of the nature of the technology. The thing with designers is, we can’t really learn about such things from books. A lot of design knowledge is tacit, it emerges from our physical engagement with the world. This is why things like sketching and prototyping are such essential parts of our way of working. And so with useless butler we aim to create an environment in which you as a designer can gain tacit knowledge about the workings of machine learning.
Simply put, for a lot of us, machine learning is a black box. With Useless Butler, we open the black box a bit and let you peer inside. This should improve the odds of design-led innovation happening in the machine learning space. And it should also help with ethics. But it’s definitely not enough. Knowledge about the technology isn’t the only issue here. There are more black boxes to open.
Values
Which brings me back to that other black box: ethics. Like I already mentioned there is a lot of talk in the tech industry about how we should “be more ethical”. But things are often reduced to this notion that designers should do no harm. As if ethics is a problem to be fixed instead of a thing to be practiced.
So I started to talk about this to people I know in academia and more than once this thing called Value Sensitive Design was mentioned. It should be no surprise to anyone that scholars have been chewing on this stuff for quite a while. One of the earliest references I came across, an essay by Batya Friedman in Interactions is from 1996! This is a lesson to all of us I think. Pay more attention to what the academics are talking about.
So, at the end of last year I dove into this topic. Our host Iskander Smit, Rob Maijers and myself coordinate a grassroots community for tech workers called Tech Solidarity NL. We want to build technology that serves the needs of the many, not the few. Value Sensitive Design seemed like a good thing to dig into and so we did.
I’m not going to dive into the details here. There’s a report on the Tech Solidarity NL website if you’re interested. But I will highlight a few things that value sensitive design asks us to consider that I think help us unpack what it means to practice ethical design.
First of all, values. Here’s how it is commonly defined in the literature:
“A value refers to what a person or group of people consider important in life.”
I like it because it’s common sense, right? But it also makes clear that there can never be one monolithic definition of what ‘good’ is in all cases. As we designers like to say: “it depends” and when it comes to values things are no different.
“Person or group” implies there can be various stakeholders. Value sensitive design distinguishes between direct and indirect stakeholders. The former have direct contact with the technology, the latter don’t but are affected by it nonetheless. Value sensitive design means taking both into account. So this blows up the conventional notion of a single user to design for.
Various stakeholder groups can have competing values and so to design for them means to arrive at some sort of trade-off between values. This is a crucial point. There is no such thing as a perfect or objectively best solution to ethical conundrums. Not in the design of technology and not anywhere else.
Value sensitive design encourages you to map stakeholders and their values. These will be different for every design project. Another approach is to use lists like the one pictured here as an analytical tool to think about how a design impacts various values.
Furthermore, during your design process you might not only think about the short-term impact of a technology, but also think about how it will affect things in the long run.
And similarly, you might think about the effects of a technology not only when a few people are using it, but also when it becomes wildly successful and everybody uses it.
There are tools out there that can help you think through these things. But so far much of the work in this area is happening on the academic side. I think there is an opportunity for us to create tools and case studies that will help us educate ourselves on this stuff.
There’s a lot more to say on this but I’m going to stop here. The point is, as with the nature of the technologies we work with, it helps to dig deeper into the nature of the relationship between technology and society. Yes, it complicates things. But that is exactly the point.
Privileging simple and scalable solutions over those adapted to local needs is socially, economically and ecologically unsustainable. So I hope you will join me in embracing complexity.
At a recent Tech Solidarity NL meetup we dove into Value Sensitive Design. This approach had been on my radar for a while so when we concluded that for our community it would be useful to talk about how to practice ethical design and development of technology, I figured we should check it out.
Value Sensitive Design has been around for ages. The earliest article I came across is by Batya Friedman in a 1996 edition of Interactions magazine. Ironically, or tragically, I must say I have only heard about the approach from academics and design theory nerds. In industry at large, Value Sensitive Design appears to be—to me at least—basically unknown. (A recent exception would be this interesting marriage of design sprints with Value Sensitive Design by Cennydd Bowles.)
For the meetup, I read a handful of papers and cobbled together a deck which attempts to summarise this ‘framework’—the term favoured by its main proponents. I went through it and then we had a spirited discussion of how its ideas apply to our daily practice. A report of all of that can be found over at the Tech Solidarity NL website.
Below, I have attempted to pull together the most salient points from what is a rather dense twenty-plus-slide deck. I hope it is of some use to those professional designers and developers who are looking for better ways of building technology that serves the interests of the many, not the few.
What follows is mostly adapted from the chapter “Value Sensitive Design and Information Systems” in Human–computer interaction in management information systems: Foundations. All quotes are from there unless otherwise noted.
Background
The departure point is the observation that “there is a need for an overarching theoretical and methodological framework with which to handle the value dimensions of design work.” In other words, something that accounts for what we already know about how to deal with values in design work in terms of theory and concepts, as well as methods and techniques.
This is of course not a new concern. For example, famed cyberneticist Norbert Wiener argued that technology could help make us better human beings, and create a more just society. But for it to do so, he argued, we have to take control of the technology.
We have to reject the “worshiping [of] the new gadgets which are our own creation as if they were our masters.” (Wiener 1953)
We can find many more similar arguments throughout the history of information technology. Recently such concerns have flared up in industry as well as society at large. (Not always for the right reasons in my opinion, but that is something we will set aside for now.)
To address these concerns, Value Sensitive Design was developed. It is “a theoretically grounded approach to the design of technology that accounts for human values in a principled and comprehensive manner throughout the design process.” It has been applied successfully for over 20 years.
Defining Values
But what is a value? In the literature it is defined as “what a person or group of people consider important in life.” I like this definition because it is easy to grasp but also underlines the slippery nature of values. Some things to keep in mind when talking about values:
In a narrow sense, the word “value” refers simply to the economic worth of an object. This is not the meaning employed by Value Sensitive Design.
Values should not be conflated with facts (the “fact/value distinction”) especially insofar as facts do not logically entail value.
“Is” does not imply “ought” (the naturalistic fallacy).
Values cannot be motivated only by an empirical account of the external world, but depend substantively on the interests and desires of human beings within a cultural milieu. (So contrary to what some right-wingers like to say: “Facts do care about your feelings.”)
Investigations
Let’s dig into the way this all works. “Value Sensitive Design is an iterative methodology that integrates conceptual, empirical, and technical investigations.” So it distinguishes between three types of activities (“investigations”) and it prescribes cycling through these activities multiple times. Below are listed questions and notes that are relevant to each type of investigation. But in brief, this is how I understand them:
Defining the specific values at play in a project;
Observing, measuring, and documenting people’s behaviour and the context of use;
Analysing the ways in which a particular technology supports or hinders particular values.
Conceptual Investigations
Who are the direct and indirect stakeholders affected by the design at hand?
How are both classes of stakeholders affected?
What values are implicated?
How should we engage in trade-offs among competing values in the design, implementation, and use of information systems (e.g., autonomy vs. security, or anonymity vs. trust)?
Should moral values (e.g., a right to privacy) have greater weight than, or even trump, non-moral values (e.g., aesthetic preferences)?
Empirical Investigations
How do stakeholders apprehend individual values in the interactive context?
How do they prioritise competing values in design trade-offs?
How do they prioritise individual values and usability considerations?
Are there differences between espoused practice (what people say) compared with actual practice (what people do)?
And, specifically focusing on organisations:
What are organisations’ motivations, methods of training and dissemination, reward structures, and economic incentives?
Technical Investigations
Not a list of questions here, but some notes:
Value Sensitive Design takes the position that technologies in general, and information and computer technologies in particular, have properties that make them more or less suitable for certain activities. A given technology more readily supports certain values while rendering other activities and values more difficult to realise.
Technical investigations involve the proactive design of systems to support values identified in the conceptual investigation.
Technical investigations focus on the technology itself. Empirical investigations focus on the individuals, groups, or larger social systems that configure, use, or are otherwise affected by the technology.
So what is unique about Value Sensitive Design? The authors summarise it in eight points:
1. Value Sensitive Design seeks to be proactive: to influence the design of technology early in and throughout the design process.
2. Value Sensitive Design enlarges the arena in which values arise to include not only the workplace, but also education, the home, commerce, online communities, and public life.
3. Value Sensitive Design contributes a unique methodology that employs conceptual, empirical, and technical investigations, applied iteratively and integratively.
4. Value Sensitive Design enlarges the scope of human values beyond those of cooperation (CSCW) and participation and democracy (Participatory Design) to include all values, especially those with moral import.
5. Value Sensitive Design distinguishes between usability and human values with ethical import.
6. Value Sensitive Design identifies and takes seriously two classes of stakeholders: direct and indirect.
7. Value Sensitive Design is an interactional theory.
8. Value Sensitive Design builds from the psychological proposition that certain values are universally held, although how such values play out in a particular culture at a particular point in time can vary considerably.
[ad 4] “By moral, we refer to issues that pertain to fairness, justice, human welfare and virtue, […] Value Sensitive Design also accounts for conventions (e.g., standardisation of protocols) and personal values”
[ad 5] “Usability refers to characteristics of a system that make it work in a functional sense, […] not all highly usable systems support ethical values”
[ad 6] “Often, indirect stakeholders are ignored in the design process.”
[ad 7] “values are viewed neither as inscribed into technology (an endogenous theory), nor as simply transmitted by social forces (an exogenous theory). […] the interactional position holds that while the features or properties that people design into technologies more readily support certain values and hinder others, the technology’s actual use depends on the goals of the people interacting with it. […] through human interaction, technology itself changes over time.”
[ad 8] “the more concretely (act-based) one conceptualises a value, the more one will be led to recognising cultural variation; conversely, the more abstractly one conceptualises a value, the more one will be led to recognising universals”
How-To
Value Sensitive Design doesn’t prescribe a particular process, which is fine by me, because I believe strongly in tailoring your process to the particular project at hand. Part of being a thoughtful designer is designing a project’s process as well. However, some guidance is offered for how to proceed in most cases. Here’s a list, plus some notes, and after the notes a small sketch that makes a few of the steps more concrete.
1. Start with a value, technology, or context of use
2. Identify direct and indirect stakeholders
3. Identify benefits and harms for each stakeholder group
4. Map benefits and harms onto corresponding values
5. Conduct a conceptual investigation of key values
6. Identify potential value conflicts
7. Integrate value considerations into one’s organisational structure
[ad 1] “We suggest starting with the aspect that is most central to your work and interests.”
[ad 2] “direct stakeholders are those individuals who interact directly with the technology or with the technology’s output. Indirect stakeholders are those individuals who are also impacted by the system, though they never interact directly with it. […] Within each of these two overarching categories of stakeholders, there may be several subgroups. […] A single individual may be a member of more than one stakeholder group or subgroup. […] An organisational power structure is often orthogonal to the distinction between direct and indirect stakeholders.”
[ad 3] “one rule of thumb in the conceptual investigation is to give priority to indirect stakeholders who are strongly affected, or to large groups that are somewhat affected […] Attend to issues of technical, cognitive, and physical competency. […] personas have a tendency to lead to stereotypes because they require a list of “socially coherent” attributes to be associated with the “imagined individual.” […] we have deviated from the typical use of personas that maps a single persona onto a single user group, to allow for a single persona to map onto to multiple stakeholder groups”
[ad 4] “In some cases, the corresponding values will be obvious, but not always.”
[ad 5] “the philosophical ontological literature can help provide criteria for what a value is, and thereby how to assess it empirically.”
[ad 6] “value conflicts should usually not be conceived of as “either/or” situations, but as constraints on the design space.”
[ad 7] “In the real world, of course, human values (especially those with ethical import) may collide with economic objectives, power, and other factors. However, even in such situations, Value Sensitive Design should be able to make positive contributions, by showing alternate designs that better support enduring human values.”
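To make steps 2 through 4 of the list above a bit more tangible, here is a minimal sketch of how benefits and harms per stakeholder group might be captured and mapped onto values. The stakeholder groups, benefits, harms and value labels are invented examples, not anything prescribed by Value Sensitive Design.

```python
from dataclasses import dataclass, field

@dataclass
class StakeholderGroup:
    """One direct or indirect stakeholder group and what the design does to them."""
    name: str
    direct: bool                      # direct = interacts with the technology itself
    benefits: list = field(default_factory=list)
    harms: list = field(default_factory=list)

# Steps 2 and 3: invented example groups for something like the smart charging case.
groups = [
    StakeholderGroup("EV drivers", direct=True,
                     benefits=["cheaper charging"], harms=["unpredictable charge times"]),
    StakeholderGroup("local residents", direct=False,
                     benefits=["less grid congestion"], harms=["kerb space taken up"]),
]

# Step 4: map benefits and harms onto the values they implicate. Step 5, the
# conceptual investigation, would then define and analyse these values.
value_map = {
    "cheaper charging": ["welfare"],
    "unpredictable charge times": ["autonomy", "trust"],
    "less grid congestion": ["environmental sustainability"],
    "kerb space taken up": ["fairness"],
}

for group in groups:
    for item in group.benefits + group.harms:
        print(group.name, "->", item, "->", value_map.get(item, ["(no value identified yet)"]))
```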
Considering Values
Human values with ethical import often implicated in system design
This table is a useful heuristic tool for values that might be considered. The authors note that it is not intended as a complete list of human values that might be implicated. Another, more elaborate tool of a similar sort is the Envisioning Cards.
For the ethics nerds, it may be interesting to note that most of the values in this table hinge on the deontological and consequentialist moral orientations. In addition, the authors have chosen several other values related to system design.
Interviewing Stakeholders
When doing the empirical investigations you’ll probably rely on stakeholder interviews quite heavily. Stakeholder interviews shouldn’t be a new thing to any design professional worth their salt. But the authors do offer some practical pointers to keep in mind.
First of all, keep the interview somewhat open-ended. This means conducting a semi-structured interview. This will allow you to ask the things you want to know, but also creates the opportunity for new and unexpected insights to emerge.
Laddering, repeatedly asking the question “Why?”, can get you quite far.
The most important thing, before interviewing stakeholders, is to have a good understanding of the subject at hand. Demarcate it using criteria that can be explained to outsiders. Use descriptions of issues or tasks for participants to engage in, so that the subject of the investigation becomes more concrete.
Technical Investigations
Two things I find interesting here. First of all, we are encouraged to map the relationship between design trade-offs, value conflicts and stakeholder groups. The goal of this exercise is to be able to see how stakeholder groups are affected in different ways.
The second useful suggestion for technical investigations is to build flexibility into a product or service’s technical infrastructure. The reason for this is that over time, new values and value conflicts can emerge. As designers we are not always around anymore once a system is deployed so it is good practice to enable the stakeholders to adapt our design to their evolving needs. (I was very much reminded of the approach advocated by Stewart Brand in How Buildings Learn.)
Conclusion
When discussing matters of ethics in design with peers I often notice a reluctance to widen the scope of our practice to include these issues. Frequently, folks argue that since it is impossible to foresee all the potential consequences of design choices, we can’t possibly be held accountable for all the terrible things that can happen as a result of a new technology being introduced into society.
I think that’s a misunderstanding of what ethical design is about. We may not always be directly responsible for the consequences of our design (both good and bad). But we are responsible for what we choose to make part of our concerns as we practice design. This should include the values considered important by the people impacted by our designs.
In the 1996 article mentioned at the start of this post, Friedman concludes as follows:
“As with the traditional criteria of reliability, efficiency, and correctness, we do not require perfection in value-sensitive design, but a commitment. And progress.” (Friedman 1996)
I think that is an apt place to end it here as well.
Friedman, Batya, Peter Kahn, and Alan Borning. “Value sensitive design: Theory and methods.” University of Washington technical report (2002): 02–12.
Le Dantec, Christopher A., Erika Shehan Poole, and Susan P. Wyche. “Values as lived experience: evolving value sensitive design in support of value discovery.” Proceedings of the SIGCHI conference on human factors in computing systems. ACM, 2009.
Borning, Alan, and Michael Muller. “Next steps for value sensitive design.” Proceedings of the SIGCHI conference on human factors in computing systems. ACM, 2012.
Friedman, B., P. Kahn, and A. Borning. “Value sensitive design and information systems.” Human–computer interaction in management information systems: Foundations (2006): 348–372.
Nobody does thoroughly argued presentations quite like Sebastian. This is good stuff on ethics and design.
I decided to share some thoughts it sparked via Twitter and ended up ranting a bit:
I recently talked about ethics to a bunch of “behavior designers” and found myself concluding that any designed system that does not allow for user appropriation is fundamentally unethical because as you rightly point out what is the good life is a personal matter. Imposing it is an inherently violent act. A lot of design is a form of technologically mediated violence. Getting people to do your bidding, however well intended. Which given my own vocation and work in the past is a kind of troubling thought to arrive at… Help?
Sebastian makes his best point on slides 113–114. Ethical design isn’t about doing the least harm, but about doing the most good. And, to come back to my Twitter rant, for me the ultimate good is for others to be free. Hence non-prescriptive design.