“Contestable Infrastructures: Designing for Dissent in Smart Public Objects” at We Make the City 2019

Thijs Turèl of AMS Institute and I presented a version of the talk below at the Cities for Digital Rights conference on June 19 in Amsterdam, during the We Make the City festival. The talk is an attempt to articulate some of the ideas we have both been developing for some time around contestability in smart public infrastructure. As always with this sort of thing, it is intended as a conversation piece, so I welcome any thoughts you may have.


The basic message of the talk is that when we start to use algorithmic systems for automated decision-making in public infrastructure, we need to design for the disagreements that will inevitably arise. Furthermore, we suggest there is an opportunity to focus on designing for such disagreements in the physical objects people encounter in urban space as they make use of that infrastructure.

We set the scene by showing a number of examples of smart public infrastructure: a cyclist crossing that adapts to weather conditions, so that when it rains cyclists more frequently get a green light; a pedestrian crossing in Tilburg where elderly people can use their mobile phone to get more time to cross; and finally the case we are involved with ourselves, smart EV charging in the city of Amsterdam, about which more later.

Image cred­its: Vat­ten­fall, Fietsfan010, De Nieuwe Draai

We iden­ti­fy three trends in smart pub­lic infra­struc­ture: (1) where pre­vi­ous­ly algo­rithms were used to inform pol­i­cy, now they are employed to per­form auto­mat­ed deci­sion-mak­ing on an indi­vid­ual case basis. This rais­es the stakes; (2) dis­trib­uted own­er­ship of these sys­tems as the result of pub­lic-pri­vate part­ner­ships and oth­er com­plex col­lab­o­ra­tion schemes leads to unclear respon­si­bil­i­ty; and final­ly (3) the increas­ing use of machine learn­ing leads to opaque decision-making.

These trends, and algo­rith­mic sys­tems more gen­er­al­ly, raise a num­ber of eth­i­cal con­cerns. They include but are not lim­it­ed to: the use of induc­tive cor­re­la­tions (for exam­ple in the case of machine learn­ing) leads to unjus­ti­fied results; lack of access to and com­pre­hen­sion of a system’s inner work­ings pro­duces opac­i­ty, which in turn leads to a lack of trust in the sys­tems them­selves and the organ­i­sa­tions that use them; bias is intro­duced by a num­ber of fac­tors, includ­ing devel­op­ment team prej­u­dices, tech­ni­cal flaws, bad data and unfore­seen inter­ac­tions with oth­er sys­tems; and final­ly the use of pro­fil­ing, nudg­ing and per­son­al­i­sa­tion leads to dimin­ished human agency. (We high­ly rec­om­mend the arti­cle by Mit­tel­stadt et al. for a com­pre­hen­sive overview of eth­i­cal con­cerns raised by algorithms.)

So for us, the ques­tion that emerges from all this is: How do we organ­ise the super­vi­sion of smart pub­lic infra­struc­ture in a demo­c­ra­t­ic and law­ful way?

There are a number of existing approaches to this question. These include legal and regulatory (e.g. the right to explanation in the GDPR); auditing (e.g. KPMG’s “AI in Control” method, BKZ’s transparantielab); procurement (e.g. open source clauses); insourcing (e.g. GOV.UK); and design and engineering (e.g. our own work on the transparent charging station).

We feel there are two impor­tant lim­i­ta­tions with these exist­ing approach­es. The first is a focus on pro­fes­sion­als and the sec­ond is a focus on pre­dic­tion. We’ll dis­cuss each in turn.

Image cred­its: Cities Today

First of all, many solutions target a professional class, be they accountants, civil servants, or supervisory boards, as well as technologists, designers and so on. But we feel there is a role for citizens as well, because the supervision of these systems is simply too important to be left to a privileged few. This role would include identifying wrongdoing and suggesting alternatives.

There is a tension here: from the perspective of the public sector, you should only ask citizens for their opinion when you have the intention and the resources to actually act on their suggestions. It can also be a challenge to identify legitimate concerns in the flood of feedback that can sometimes occur. From our point of view, though, such concerns should not be used as an excuse not to engage the public. If citizen participation is considered necessary, the focus should be on freeing up resources and setting up structures that make it feasible and effective.

The second limitation is prediction. This is best illustrated with the Collingridge dilemma: in the early phases of a new technology, when the technology and its social embedding are still malleable, there is uncertainty about its social effects. In later phases, the social effects may be clear, but by then the technology has often become so entrenched in society that it is hard to overcome the negative ones. (This summary is taken from an excellent article by Van de Poel on the ethics of experimental technology.)

Many solutions disregard the Collingridge dilemma and try to predict and prevent adverse effects of new systems at design-time. One example of this approach is value-sensitive design. Our focus instead is on use-time. Given that smart public infrastructure tends to be developed on an ongoing basis, the question becomes how to make citizens a partner in this process. More specifically, we are interested in how this can be made part of the design of the “touchpoints” people actually encounter in the streets, as well as their backstage processes.

Why do we focus on these physical objects? Because this is where people actually meet the infrastructural systems, large parts of which recede from view. These objects are where people become aware of the systems’ presence. They are the proverbial tip of the iceberg.

Image cred­its: Sagar Dani

The use of automated decision-making in infrastructure reduces people’s agency. For this reason, resources for agency need to be designed back into these systems. Frequently, the answer to this problem is premised on a transparency ideal. Transparency may be a prerequisite for agency, but it is not sufficient: it may help you become aware of what is going on, but it will not necessarily help you act on that knowledge. This is why we propose a shift from transparency to contestability. (We can highly recommend Ananny and Crawford’s article for more on why transparency is insufficient.)

To clarify what we mean by contestability, consider the following three examples. When you see the lights on your router blink in the middle of the night, when no one in your household is using the internet, you can act on this knowledge by yanking out the device’s power cord. You may never use the emergency brake in a train, but its presence does give you a sense of control. And finally, the cash register receipt provides a view into both the procedure and the outcome of the supermarket checkout, and it offers a resource with which you can dispute them if something appears to be wrong.

Image cred­its: Aangifte­doen, source unknown for remainder

None of these exam­ples is a per­fect illus­tra­tion of con­testa­bil­i­ty but they hint at some­thing more than trans­paren­cy, or per­haps even some­thing whol­ly sep­a­rate from it. We’ve been inves­ti­gat­ing what their equiv­a­lents would be in the con­text of smart pub­lic infrastructure.

To illustrate this point further, let us come back to the smart EV charging project we mentioned earlier. In Amsterdam, public EV charging stations are becoming “smart”, which in this case means they automatically adapt the speed of charging to a number of factors, including grid capacity and the availability of solar energy. Additional factors can be added in the future; one currently under consideration is giving priority to shared cars over privately owned cars. We are involved in an ongoing effort to consider how such charging stations can be redesigned so that people understand what is going on behind the scenes and can act on this understanding. The motivation for this is that, if not designed carefully, the opacity of smart EV charging infrastructure may be detrimental to social acceptance of the technology. (A first outcome of these efforts is the Transparent Charging Station designed by The Incredible Machine. A follow-up project is ongoing.)

Image cred­its: The Incred­i­ble Machine, Kars Alfrink
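To make the kind of logic at play here a bit more concrete, below is a minimal sketch, in Python, of what such an adaptive charging policy might look like. The factor names, weights and thresholds are hypothetical illustrations of my own, not the actual policy of the Amsterdam charge points.

```python
from dataclasses import dataclass

# All names, weights and thresholds below are hypothetical illustrations,
# not the actual policy of the Amsterdam charge points.
@dataclass
class ChargingContext:
    grid_capacity_kw: float   # headroom currently available on the local grid
    solar_share: float        # fraction of current supply that is solar (0..1)
    is_shared_car: bool       # whether the connected vehicle is a shared car

MAX_RATE_KW = 22.0  # nominal maximum charging speed
MIN_RATE_KW = 4.0   # never throttle below this floor

def charging_rate(ctx: ChargingContext) -> float:
    """Return a charging rate in kW adapted to grid and solar conditions."""
    # Never draw more than the grid can currently absorb.
    rate = min(MAX_RATE_KW, ctx.grid_capacity_kw)
    # Slow down when little solar energy is available.
    rate *= 0.5 + 0.5 * ctx.solar_share
    # A possible future factor: give shared cars priority over private ones.
    if not ctx.is_shared_car:
        rate *= 0.8
    return max(MIN_RATE_KW, rate)

# A privately owned car charging on a cloudy day with limited grid headroom.
print(charging_rate(ChargingContext(grid_capacity_kw=18.0, solar_share=0.3, is_shared_car=False)))
```

Even a toy policy like this shows where disagreement can creep in: every factor, weight and threshold encodes a value judgement that someone may want to contest.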

We have iden­ti­fied a num­ber of dif­fer­ent ways in which peo­ple may object to smart EV charg­ing. They are list­ed in the table below. These types of objec­tions can lead us to fea­ture require­ments for mak­ing the sys­tem contestable. 

Because the list is preliminary, we asked the audience whether they could imagine additional objections, whether those examples represented new categories, and whether they would require additional features for people to be able to act on them. One particularly interesting suggestion that emerged was to give local communities control over the policies enacted by the charge points in their vicinity. That is something whose implications deserve further consideration.

And that’s where we left it. So to summarise: 

  1. Algo­rith­mic sys­tems are becom­ing part of pub­lic infrastructure.
  2. Smart pub­lic infra­struc­ture rais­es new eth­i­cal concerns.
  3. Many solu­tions to eth­i­cal con­cerns are premised on a trans­paren­cy ide­al, but do not address the issue of dimin­ished agency.
  4. There are dif­fer­ent cat­e­gories of objec­tions peo­ple may have to an algo­rith­mic system’s workings.
  5. Mak­ing a sys­tem con­testable means cre­at­ing resources for peo­ple to object, open­ing up a space for the explo­ration of mean­ing­ful alter­na­tives to its cur­rent implementation.

Engineering Ethics Reading List

I recent­ly fol­lowed an excel­lent three-day course on engi­neer­ing ethics. It was offered by the TU Delft grad­u­ate school and taught by Behnam Taibi with guest lec­tures from sev­er­al of our faculty.

I found it par­tic­u­lar­ly help­ful to get some sug­ges­tions for fur­ther read­ing that rep­re­sent some of the foun­da­tion­al ideas in the field. I fig­ured it would be use­ful to oth­ers as well to have a point­er to them. 

So here they are. I’ve quick­ly gut­ted these for their mean­ing. The one by Van de Poel I did read entire­ly and can high­ly rec­om­mend for any­one who’s doing design of emerg­ing tech­nolo­gies and wants to escape from the informed con­sent conundrum. 

I intend to dig into the Doorn one, not just because she is one of my promoters but also because resilience is a concept closely related to my own interests. I’ll also get into the Floridi one in detail, but I found the concept of information quality, and the care ethics perspective on the problem of information abundance and attention scarcity, immediately applicable to interaction design.

  1. Stil­goe, Jack, Richard Owen, and Phil Mac­naght­en. “Devel­op­ing a frame­work for respon­si­ble inno­va­tion.” Research Pol­i­cy 42.9 (2013): 1568–1580.
  2. Van den Hov­en, Jeroen. “Val­ue sen­si­tive design and respon­si­ble inno­va­tion.” Respon­si­ble inno­va­tion (2013): 75–83.
  3. Hans­son, Sven Ove. “Eth­i­cal cri­te­ria of risk accep­tance.” Erken­nt­nis 59.3 (2003): 291–309.
  4. Van de Poel, Ibo. “An ethical framework for evaluating experimental technology.” Science and Engineering Ethics 22.3 (2016): 667–686.
  5. Hans­son, Sven Ove. “Philo­soph­i­cal prob­lems in cost–benefit analy­sis.” Eco­nom­ics & Phi­los­o­phy 23.2 (2007): 163–183.
  6. Flori­di, Luciano. “Big Data and infor­ma­tion qual­i­ty.” The phi­los­o­phy of infor­ma­tion qual­i­ty. Springer, Cham, 2014. 303–315.
  7. Doorn, Neelke, Pao­lo Gar­doni, and Colleen Mur­phy. “A mul­ti­dis­ci­pli­nary def­i­n­i­tion and eval­u­a­tion of resilience: The role of social jus­tice in defin­ing resilience.” Sus­tain­able and Resilient Infra­struc­ture (2018): 1–12.

We also got a draft of the intro chap­ter to a book on engi­neer­ing and ethics that Behnam is writ­ing. That looks very promis­ing as well but I can’t share yet for obvi­ous reasons.

‘Unboxing’ at Behavior Design Amsterdam #16

Below is a write-up of the talk I gave at the Behav­ior Design Ams­ter­dam #16 meet­up on Thurs­day, Feb­ru­ary 15, 2018.

‘Pandora’ by John William Waterhouse (1896)

I’d like to talk about the future of our design prac­tice and what I think we should focus our atten­tion on. It is all relat­ed to this idea of com­plex­i­ty and open­ing up black box­es. We’re going to take the scenic route, though. So bear with me.

Software Design

Two years ago I spent about half a year in Singapore.

While there I worked as prod­uct strate­gist and design­er at a start­up called ARTO, an art rec­om­men­da­tion ser­vice. It shows you a ran­dom sam­ple of art­works, you tell it which ones you like, and it will then start rec­om­mend­ing pieces it thinks you like. In case you were won­der­ing: yes, swip­ing left and right was involved.

We had this inter­est­ing prob­lem of ingest­ing art from many dif­fer­ent sources (most­ly online gal­leries) with meta­da­ta of wild­ly vary­ing lev­els of qual­i­ty. So, using meta­da­ta to fig­ure out which art to show was a bit of a non-starter. It should come as no sur­prise then, that we start­ed look­ing into machine learning—image pro­cess­ing in particular.

And so I found myself work­ing with my engi­neer­ing col­leagues on an art rec­om­men­da­tion stream which was dri­ven at least in part by machine learn­ing. And I quick­ly realised we had a prob­lem. In terms of how we worked togeth­er on this part of the prod­uct, it felt like we had tak­en a bunch of steps back in time. Back to a way of col­lab­o­rat­ing that was less inte­grat­ed and less responsive.

That’s because we have all these nice tools and techniques for designing traditional software products. But traditional software is deterministic. Machine learning is fundamentally different in nature: it is probabilistic.

It was hard for me to take the lead in the design of this part of the prod­uct for two rea­sons. First of all, it was chal­leng­ing to get a first-hand feel of the machine learn­ing fea­ture before it was implemented.

And sec­ond of all, it was hard for me to com­mu­ni­cate or visu­alise the intend­ed behav­iour of the machine learn­ing fea­ture to the rest of the team.

So when I came back to the Nether­lands I decid­ed to dig into this prob­lem of design for machine learn­ing. Turns out I opened up quite the can of worms for myself. But that’s okay.

There are two rea­sons I care about this:

The first is that I think we need more design-led inno­va­tion in the machine learn­ing space. At the moment it is engi­neer­ing-dom­i­nat­ed, which doesn’t nec­es­sar­i­ly lead to use­ful out­comes. But if you want to take the lead in the design of machine learn­ing appli­ca­tions, you need a firm han­dle on the nature of the technology.

The sec­ond rea­son why I think we need to edu­cate our­selves as design­ers on the nature of machine learn­ing is that we need to take respon­si­bil­i­ty for the impact the tech­nol­o­gy has on the lives of peo­ple. There is a lot of talk about ethics in the design indus­try at the moment. Which I con­sid­er a pos­i­tive sign. But I also see a reluc­tance to real­ly grap­ple with what ethics is and what the rela­tion­ship between tech­nol­o­gy and soci­ety is. We seem to want easy answers, which is under­stand­able because we are all very busy peo­ple. But hav­ing spent some time dig­ging into this stuff myself I am here to tell you: There are no easy answers. That isn’t a bug, it’s a fea­ture. And we should embrace it.

Machine Learning

At the end of 2016 I attend­ed ThingsCon here in Ams­ter­dam and I was intro­duced by Ianus Keller to TU Delft PhD researcher Péter Kun. It turns out we were both inter­est­ed in machine learn­ing. So with encour­age­ment from Ianus we decid­ed to put togeth­er a work­shop that would enable indus­tri­al design mas­ter stu­dents to tan­gle with it in a hands-on manner.

About a year later, this has grown into a thing we call Prototyping the Useless Butler. During the workshop, you use machine learning algorithms to train a model that takes inputs from a network-connected Arduino’s sensors and drives that same Arduino’s actuators. In effect, you can create interactive behaviour without writing a single line of code. And you get a first-hand feel for how common applications of machine learning work: things like regression, classification and dynamic time warping.

The thing that makes this workshop tick is an open source software application called Wekinator, created by Rebecca Fiebrink. It was originally aimed at performing artists, so that they could build interactive instruments without writing code. But it takes inputs from anything and sends outputs to anything, so we appropriated it towards our own ends.

You can find every­thing relat­ed to Use­less But­ler on this GitHub repo.
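To give a sense of the plumbing involved, here is a minimal sketch of how a script might relay sensor readings to Wekinator and listen for the trained model’s outputs over OSC, using the python-osc library. The ports and OSC addresses are what I understand to be Wekinator’s defaults, and the sensor-reading function is a hypothetical stand-in for whatever the Arduino actually provides.

```python
import random
import time
import threading
from pythonosc import udp_client, dispatcher, osc_server

# Ports and OSC addresses below are what I understand to be Wekinator's
# defaults; adjust them to your own setup if they differ.
wekinator = udp_client.SimpleUDPClient("127.0.0.1", 6448)

def read_sensors():
    # Hypothetical stand-in for readings coming from the Arduino's sensors.
    return [random.random(), random.random()]

def on_output(address, *values):
    # Wekinator sends the trained model's outputs here; in the workshop these
    # values would drive the Arduino's actuators rather than being printed.
    print("model output:", values)

disp = dispatcher.Dispatcher()
disp.map("/wek/outputs", on_output)
server = osc_server.BlockingOSCUDPServer(("127.0.0.1", 12000), disp)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Stream inputs to Wekinator; it records them while you train and runs them
# through the trained model afterwards.
for _ in range(100):
    wekinator.send_message("/wek/inputs", read_sensors())
    time.sleep(0.1)
```

In the workshop the real inputs and outputs travel to and from the Arduino, but the pattern is the same: Wekinator sits in the middle, mapping whatever comes in to whatever goes out with the model you train it on.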

The thinking behind this workshop is that for us designers to be able to think creatively about applications of machine learning, we need a granular understanding of the nature of the technology. The thing with designers is, we can’t really learn about such things from books. A lot of design knowledge is tacit; it emerges from our physical engagement with the world. This is why things like sketching and prototyping are such essential parts of our way of working. And so with Useless Butler we aim to create an environment in which you as a designer can gain tacit knowledge about the workings of machine learning.

Sim­ply put, for a lot of us, machine learn­ing is a black box. With Use­less But­ler, we open the black box a bit and let you peer inside. This should improve the odds of design-led inno­va­tion hap­pen­ing in the machine learn­ing space. And it should also help with ethics. But it’s def­i­nite­ly not enough. Knowl­edge about the tech­nol­o­gy isn’t the only issue here. There are more black box­es to open.

Values

Which brings me back to that other black box: ethics. As I already mentioned, there is a lot of talk in the tech industry about how we should “be more ethical”. But things are often reduced to the notion that designers should do no harm. As if ethics is a problem to be fixed instead of a thing to be practiced.

So I started to talk about this to people I know in academia, and more than once this thing called Value Sensitive Design was mentioned. It should be no surprise to anyone that scholars have been chewing on this stuff for quite a while. One of the earliest references I came across, an essay by Batya Friedman in Interactions, is from 1996! This is a lesson to all of us, I think: pay more attention to what the academics are talking about.

So, at the end of last year I dove into this topic. Our host Iskander Smit, Rob Maijers, and I coordinate a grassroots community for tech workers called Tech Solidarity NL. We want to build technology that serves the needs of the many, not the few. Value Sensitive Design seemed like a good thing to dig into, and so we did.

I’m not going to dive into the details here. There’s a report on the Tech Sol­i­dar­i­ty NL web­site if you’re inter­est­ed. But I will high­light a few things that val­ue sen­si­tive design asks us to con­sid­er that I think help us unpack what it means to prac­tice eth­i­cal design.

First of all, values. Here’s how a value is commonly defined in the literature:

“A value refers to what a person or group of people consider important in life.”

I like it because it’s common sense, right? But it also makes clear that there can never be one monolithic definition of what ‘good’ is in all cases. As we designers like to say, “it depends”, and when it comes to values things are no different.

“Person or group” implies there can be various stakeholders. Value sensitive design distinguishes between direct and indirect stakeholders. The former have direct contact with the technology, the latter don’t but are affected by it nonetheless. Value sensitive design means taking both into account. So this blows up the conventional notion of a single user to design for.

Var­i­ous stake­hold­er groups can have com­pet­ing val­ues and so to design for them means to arrive at some sort of trade-off between val­ues. This is a cru­cial point. There is no such thing as a per­fect or objec­tive­ly best solu­tion to eth­i­cal conun­drums. Not in the design of tech­nol­o­gy and not any­where else.

Val­ue sen­si­tive design encour­ages you to map stake­hold­ers and their val­ues. These will be dif­fer­ent for every design project. Anoth­er approach is to use lists like the one pic­tured here as an ana­lyt­i­cal tool to think about how a design impacts var­i­ous values.

Fur­ther­more, dur­ing your design process you might not only think about the short-term impact of a tech­nol­o­gy, but also think about how it will affect things in the long run.

And sim­i­lar­ly, you might think about the effects of a tech­nol­o­gy not only when a few peo­ple are using it, but also when it becomes wild­ly suc­cess­ful and every­body uses it.

There are tools out there that can help you think through these things. But so far much of the work in this area is hap­pen­ing on the aca­d­e­m­ic side. I think there is an oppor­tu­ni­ty for us to cre­ate tools and case stud­ies that will help us edu­cate our­selves on this stuff.

There’s a lot more to say on this but I’m going to stop here. The point is, as with the nature of the tech­nolo­gies we work with, it helps to dig deep­er into the nature of the rela­tion­ship between tech­nol­o­gy and soci­ety. Yes, it com­pli­cates things. But that is exact­ly the point.

Priv­i­leg­ing sim­ple and scal­able solu­tions over those adapt­ed to local needs is social­ly, eco­nom­i­cal­ly and eco­log­i­cal­ly unsus­tain­able. So I hope you will join me in embrac­ing complexity.

An Introduction to Value Sensitive Design

Phnom Bakheng

At a recent Tech Sol­i­dar­i­ty NL meet­up we dove into Val­ue Sen­si­tive Design. This approach had been on my radar for a while so when we con­clud­ed that for our com­mu­ni­ty it would be use­ful to talk about how to prac­tice eth­i­cal design and devel­op­ment of tech­nol­o­gy, I fig­ured we should check it out. 

Val­ue Sen­si­tive Design has been around for ages. The ear­li­est arti­cle I came across is by Batya Fried­man in a 1996 edi­tion of Inter­ac­tions mag­a­zine. Iron­i­cal­ly, or trag­i­cal­ly, I must say I have only heard about the approach from aca­d­e­mics and design the­o­ry nerds. In indus­try at large, Val­ue Sen­si­tive Design appears to be—to me at least—basically unknown. (A recent excep­tion would be this inter­est­ing mar­riage of design sprints with Val­ue Sen­si­tive Design by Cen­ny­dd Bowles.)

For the meetup, I read a handful of papers and cobbled together a deck which attempts to summarise this ‘framework’—the term favoured by its main proponents. I went through it and then we had a spirited discussion of how its ideas apply to our daily practice. A report of all of that can be found over at the Tech Solidarity NL website.

Below, I have attempted to pull together the most salient points from what is a rather dense twenty-plus-slide deck. I hope it is of some use to those professional designers and developers who are looking for better ways of building technology that serves the interests of the many, not the few.

What fol­lows is most­ly adapt­ed from the chap­ter “Val­ue Sen­si­tive Design and Infor­ma­tion Sys­tems” in Human–computer inter­ac­tion in man­age­ment infor­ma­tion sys­tems: Foun­da­tions. All quotes are from there unless oth­er­wise noted.

Background

The depar­ture point is the obser­va­tion that “there is a need for an over­ar­ch­ing the­o­ret­i­cal and method­olog­i­cal frame­work with which to han­dle the val­ue dimen­sions of design work.” In oth­er words, some­thing that accounts for what we already know about how to deal with val­ues in design work in terms of the­o­ry and con­cepts, as well as meth­ods and techniques. 

This is of course not a new con­cern. For exam­ple, famed cyber­neti­cist Nor­bert Wiener argued that tech­nol­o­gy could help make us bet­ter human beings, and cre­ate a more just soci­ety. But for it to do so, he argued, we have to take con­trol of the technology.

We have to reject the “wor­ship­ing [of] the new gad­gets which are our own cre­ation as if they were our mas­ters.” (Wiener 1953)

We can find many more sim­i­lar argu­ments through­out the his­to­ry of infor­ma­tion tech­nol­o­gy. Recent­ly such con­cerns have flared up in indus­try as well as soci­ety at large. (Not always for the right rea­sons in my opin­ion, but that is some­thing we will set aside for now.) 

To address these con­cerns, Val­ue Sen­si­tive Design was devel­oped. It is “a the­o­ret­i­cal­ly ground­ed approach to the design of tech­nol­o­gy that accounts for human val­ues in a prin­ci­pled and com­pre­hen­sive man­ner through­out the design process.” It has been applied suc­cess­ful­ly for over 20 years. 

Defining Values

But what is a val­ue? In the lit­er­a­ture it is defined as “what a per­son or group of peo­ple con­sid­er impor­tant in life.” I like this def­i­n­i­tion because it is easy to grasp but also under­lines the slip­pery nature of val­ues. Some things to keep in mind when talk­ing about values:

  • In a nar­row sense, the word “val­ue” refers sim­ply to the eco­nom­ic worth of an object. This is not the mean­ing employed by Val­ue Sen­si­tive Design.
  • Values should not be conflated with facts (the “fact/value distinction”), especially insofar as facts do not logically entail value.
  • “Is” does not imply “ought” (the naturalistic fallacy).
  • Val­ues can­not be moti­vat­ed only by an empir­i­cal account of the exter­nal world, but depend sub­stan­tive­ly on the inter­ests and desires of human beings with­in a cul­tur­al milieu. (So con­trary to what some right-wingers like to say: “Facts do care about your feelings.”)

Investigations

Let’s dig into the way this all works. “Val­ue Sen­si­tive Design is an iter­a­tive method­ol­o­gy that inte­grates con­cep­tu­al, empir­i­cal, and tech­ni­cal inves­ti­ga­tions.” So it dis­tin­guish­es between three types of activ­i­ties (“inves­ti­ga­tions”) and it pre­scribes cycling through these activ­i­ties mul­ti­ple times. Below are list­ed ques­tions and notes that are rel­e­vant to each type of inves­ti­ga­tion. But in brief, this is how I under­stand them: 

  1. Defin­ing the spe­cif­ic val­ues at play in a project;
  2. Observ­ing, mea­sur­ing, and doc­u­ment­ing people’s behav­iour and the con­text of use;
  3. Analysing the ways in which a par­tic­u­lar tech­nol­o­gy sup­ports or hin­ders par­tic­u­lar values.

Conceptual Investigations

  • Who are the direct and indi­rect stake­hold­ers affect­ed by the design at hand?
  • How are both class­es of stake­hold­ers affected?
  • What val­ues are implicated?
  • How should we engage in trade-offs among com­pet­ing val­ues in the design, imple­men­ta­tion, and use of infor­ma­tion sys­tems (e.g., auton­o­my vs. secu­ri­ty, or anonymi­ty vs. trust)?
  • Should moral val­ues (e.g., a right to pri­va­cy) have greater weight than, or even trump, non-moral val­ues (e.g., aes­thet­ic preferences)?

Empirical Investigations

  • How do stake­hold­ers appre­hend indi­vid­ual val­ues in the inter­ac­tive context?
  • How do they pri­ori­tise com­pet­ing val­ues in design trade-offs?
  • How do they pri­ori­tise indi­vid­ual val­ues and usabil­i­ty considerations?
  • Are there dif­fer­ences between espoused prac­tice (what peo­ple say) com­pared with actu­al prac­tice (what peo­ple do)?

And, specif­i­cal­ly focus­ing on organisations:

  • What are organ­i­sa­tions’ moti­va­tions, meth­ods of train­ing and dis­sem­i­na­tion, reward struc­tures, and eco­nom­ic incentives?

Technical Investigations

Not a list of ques­tions here, but some notes:

Val­ue Sen­si­tive Design takes the posi­tion that tech­nolo­gies in gen­er­al, and infor­ma­tion and com­put­er tech­nolo­gies in par­tic­u­lar, have prop­er­ties that make them more or less suit­able for cer­tain activ­i­ties. A giv­en tech­nol­o­gy more read­i­ly sup­ports cer­tain val­ues while ren­der­ing oth­er activ­i­ties and val­ues more dif­fi­cult to realise.

Tech­ni­cal inves­ti­ga­tions involve the proac­tive design of sys­tems to sup­port val­ues iden­ti­fied in the con­cep­tu­al investigation.

Tech­ni­cal inves­ti­ga­tions focus on the tech­nol­o­gy itself. Empir­i­cal inves­ti­ga­tions focus on the indi­vid­u­als, groups, or larg­er social sys­tems that con­fig­ure, use, or are oth­er­wise affect­ed by the technology. 

Significance

Below is a list of things that make Val­ue Sen­si­tive Design dif­fer­ent from oth­er approach­es, par­tic­u­lar­ly ones that pre­ced­ed it such as Com­put­er-Sup­port­ed Coop­er­a­tive Work and Par­tic­i­pa­to­ry Design.

  1. Val­ue Sen­si­tive Design seeks to be proac­tive
  2. Value Sensitive Design enlarges the arena in which values arise to include not only the workplace
  3. Val­ue Sen­si­tive Design con­tributes a unique method­ol­o­gy that employs con­cep­tu­al, empir­i­cal, and tech­ni­cal inves­ti­ga­tions, applied iter­a­tive­ly and integratively
  4. Val­ue Sen­si­tive Design enlarges the scope of human val­ues beyond those of coop­er­a­tion (CSCW) and par­tic­i­pa­tion and democ­ra­cy (Par­tic­i­pa­to­ry Design) to include all val­ues, espe­cial­ly those with moral import.
  5. Val­ue Sen­si­tive Design dis­tin­guish­es between usabil­i­ty and human val­ues with eth­i­cal import.
  6. Val­ue Sen­si­tive Design iden­ti­fies and takes seri­ous­ly two class­es of stake­hold­ers: direct and indirect.
  7. Val­ue Sen­si­tive Design is an inter­ac­tion­al theory
  8. Val­ue Sen­si­tive Design builds from the psy­cho­log­i­cal propo­si­tion that cer­tain val­ues are uni­ver­sal­ly held, although how such val­ues play out in a par­tic­u­lar cul­ture at a par­tic­u­lar point in time can vary considerably

[ad 4] “By moral, we refer to issues that per­tain to fair­ness, jus­tice, human wel­fare and virtue, […] Val­ue Sen­si­tive Design also accounts for con­ven­tions (e.g., stan­dard­i­s­a­tion of pro­to­cols) and per­son­al values”

[ad 5] “Usabil­i­ty refers to char­ac­ter­is­tics of a sys­tem that make it work in a func­tion­al sense, […] not all high­ly usable sys­tems sup­port eth­i­cal values”

[ad 6] “Often, indi­rect stake­hold­ers are ignored in the design process.”

[ad 7] “val­ues are viewed nei­ther as inscribed into tech­nol­o­gy (an endoge­nous the­o­ry), nor as sim­ply trans­mit­ted by social forces (an exoge­nous the­o­ry). […] the inter­ac­tion­al posi­tion holds that while the fea­tures or prop­er­ties that peo­ple design into tech­nolo­gies more read­i­ly sup­port cer­tain val­ues and hin­der oth­ers, the technology’s actu­al use depends on the goals of the peo­ple inter­act­ing with it. […] through human inter­ac­tion, tech­nol­o­gy itself changes over time.”

[ad 8] “the more con­crete­ly (act-based) one con­cep­tu­alis­es a val­ue, the more one will be led to recog­nis­ing cul­tur­al vari­a­tion; con­verse­ly, the more abstract­ly one con­cep­tu­alis­es a val­ue, the more one will be led to recog­nis­ing universals”

How-To

Val­ue Sen­si­tive Design doesn’t pre­scribe a par­tic­u­lar process, which is fine by me, because I believe strong­ly in tai­lor­ing your process to the par­tic­u­lar project at hand. Part of being a thought­ful design­er is design­ing a project’s process as well. How­ev­er, some guid­ance is offered for how to pro­ceed in most cas­es. Here’s a list, plus some notes.

  1. Start with a val­ue, tech­nol­o­gy, or con­text of use
  2. Iden­ti­fy direct and indi­rect stakeholders
  3. Iden­ti­fy ben­e­fits and harms for each stake­hold­er group
  4. Map ben­e­fits and harms onto cor­re­spond­ing values
  5. Con­duct a con­cep­tu­al inves­ti­ga­tion of key values
  6. Iden­ti­fy poten­tial val­ue conflicts
  7. Inte­grate val­ue con­sid­er­a­tions into one’s organ­i­sa­tion­al structure

[ad 1] “We sug­gest start­ing with the aspect that is most cen­tral to your work and interests.” 

[ad 2] “direct stake­hold­ers are those indi­vid­u­als who inter­act direct­ly with the tech­nol­o­gy or with the technology’s out­put. Indi­rect stake­hold­ers are those indi­vid­u­als who are also impact­ed by the sys­tem, though they nev­er inter­act direct­ly with it. […] With­in each of these two over­ar­ch­ing cat­e­gories of stake­hold­ers, there may be sev­er­al sub­groups. […] A sin­gle indi­vid­ual may be a mem­ber of more than one stake­hold­er group or sub­group. […] An organ­i­sa­tion­al pow­er struc­ture is often orthog­o­nal to the dis­tinc­tion between direct and indi­rect stakeholders.”

[ad 3] “one rule of thumb in the con­cep­tu­al inves­ti­ga­tion is to give pri­or­i­ty to indi­rect stake­hold­ers who are strong­ly affect­ed, or to large groups that are some­what affect­ed […] Attend to issues of tech­ni­cal, cog­ni­tive, and phys­i­cal com­pe­ten­cy. […] per­sonas have a ten­den­cy to lead to stereo­types because they require a list of “social­ly coher­ent” attrib­ut­es to be asso­ci­at­ed with the “imag­ined indi­vid­ual.” […] we have devi­at­ed from the typ­i­cal use of per­sonas that maps a sin­gle per­sona onto a sin­gle user group, to allow for a sin­gle per­sona to map onto to mul­ti­ple stake­hold­er groups”

[ad 4] “In some cas­es, the cor­re­spond­ing val­ues will be obvi­ous, but not always.”

[ad 5] “the philo­soph­i­cal onto­log­i­cal lit­er­a­ture can help pro­vide cri­te­ria for what a val­ue is, and there­by how to assess it empirically.”

[ad 6] “val­ue con­flicts should usu­al­ly not be con­ceived of as “either/or” sit­u­a­tions, but as con­straints on the design space.”

[ad 7] “In the real world, of course, human val­ues (espe­cial­ly those with eth­i­cal import) may col­lide with eco­nom­ic objec­tives, pow­er, and oth­er fac­tors. How­ev­er, even in such sit­u­a­tions, Val­ue Sen­si­tive Design should be able to make pos­i­tive con­tri­bu­tions, by show­ing alter­nate designs that bet­ter sup­port endur­ing human values.”

Considering Values

Human values with ethical import often implicated in system design

This table is a useful heuristic tool for values that might be considered. The authors note that it is not intended as a complete list of human values that might be implicated. Another, more elaborate, tool of a similar sort is the Envisioning Cards.

For the ethics nerds, it may be interesting to note that most of the values in this table hinge on deontological and consequentialist moral orientations. In addition, the authors have chosen several other values related to system design.

Interviewing Stakeholders

When doing the empir­i­cal inves­ti­ga­tions you’ll prob­a­bly rely on stake­hold­er inter­views quite heav­i­ly. Stake­hold­er inter­views shouldn’t be a new thing to any design pro­fes­sion­al worth their salt. But the authors do offer some prac­ti­cal point­ers to keep in mind.

First of all, keep the inter­view some­what open-end­ed. This means con­duct­ing a semi-struc­tured inter­view. This will allow you to ask the things you want to know, but also cre­ates the oppor­tu­ni­ty for new and unex­pect­ed insights to emerge. 

Laddering (repeatedly asking the question “Why?”) can get you quite far.

The most impor­tant thing, before inter­view­ing stake­hold­ers, is to have a good under­stand­ing of the sub­ject at hand. Demar­cate it using cri­te­ria that can be explained to out­siders. Use descrip­tions of issues or tasks for par­tic­i­pants to engage in, so that the sub­ject of the inves­ti­ga­tion becomes more concrete. 

Technical Investigations

Two things I find inter­est­ing here. First of all, we are encour­aged to map the rela­tion­ship between design trade-offs, val­ue con­flicts and stake­hold­er groups. The goal of this exer­cise is to be able to see how stake­hold­er groups are affect­ed in dif­fer­ent ways.

The sec­ond use­ful sug­ges­tion for tech­ni­cal inves­ti­ga­tions is to build flex­i­bil­i­ty into a prod­uct or service’s tech­ni­cal infra­struc­ture. The rea­son for this is that over time, new val­ues and val­ue con­flicts can emerge. As design­ers we are not always around any­more once a sys­tem is deployed so it is good prac­tice to enable the stake­hold­ers to adapt our design to their evolv­ing needs. (I was very much remind­ed of the approach advo­cat­ed by Stew­art Brand in How Build­ings Learn.)

Conclusion

When dis­cussing mat­ters of ethics in design with peers I often notice a reluc­tance to widen the scope of our prac­tice to include these issues. Fre­quent­ly, folks argue that since it is impos­si­ble to fore­see all the poten­tial con­se­quences of design choic­es, we can’t pos­si­bly be held account­able for all the ter­ri­ble things that can hap­pen as a result of a new tech­nol­o­gy being intro­duced into society.

I think that’s a mis­un­der­stand­ing of what eth­i­cal design is about. We may not always be direct­ly respon­si­ble for the con­se­quences of our design (both good and bad). But we are respon­si­ble for what we choose to make part of our con­cerns as we prac­tice design. This should include the val­ues con­sid­ered impor­tant by the peo­ple impact­ed by our designs. 

In the 1996 arti­cle men­tioned at the start of this post, Fried­man con­cludes as follows:

“As with the traditional criteria of reliability, efficiency, and correctness, we do not require perfection in value-sensitive design, but a commitment. And progress.” (Friedman 1996)

I think that is an apt place to end it here as well.

References

  • Friedman, Batya. “Value-sensitive design.” Interactions 3.6 (1996): 16–23.
  • Fried­man, Batya, Peter Kahn, and Alan Born­ing. “Val­ue sen­si­tive design: The­o­ry and meth­ods.” Uni­ver­si­ty of Wash­ing­ton tech­ni­cal report (2002): 02–12.
  • Le Dan­tec, Christo­pher A., Eri­ka She­han Poole, and Susan P. Wyche. “Val­ues as lived expe­ri­ence: evolv­ing val­ue sen­si­tive design in sup­port of val­ue dis­cov­ery.” Pro­ceed­ings of the SIGCHI con­fer­ence on human fac­tors in com­put­ing sys­tems. ACM, 2009.
  • Born­ing, Alan, and Michael Muller. “Next steps for val­ue sen­si­tive design.” Pro­ceed­ings of the SIGCHI con­fer­ence on human fac­tors in com­put­ing sys­tems. ACM, 2012.
  • Friedman, B., P. Kahn, and A. Borning. “Value sensitive design and information systems.” Human–computer interaction in management information systems: Foundations (2006): 348–372.

Nobody does thor­ough­ly argued pre­sen­ta­tions quite like Sebas­t­ian. This is good stuff on ethics and design.

I decid­ed to share some thoughts it sparked via Twit­ter and end­ed up rant­i­ng a bit:

I recently talked about ethics to a bunch of “behavior designers” and found myself concluding that any designed system that does not allow for user appropriation is fundamentally unethical, because, as you rightly point out, what the good life is is a personal matter. Imposing it is an inherently violent act. A lot of design is a form of technologically mediated violence: getting people to do your bidding, however well intended. Which, given my own vocation and work in the past, is a kind of troubling thought to arrive at… Help?

Sebas­t­ian makes his best point on slides 113–114. Eth­i­cal design isn’t about doing the least harm, but about doing the most good. And, to come back to my Twit­ter rant, for me the ulti­mate good is for oth­ers to be free. Hence non-pre­scrip­tive design.

(via Design­ing the Good Life: Ethics and User Expe­ri­ence Design)