‘Unboxing’ at Behavior Design Amsterdam #16

Below is a write-up of the talk I gave at the Behavior Design Amsterdam #16 meetup on Thursday, February 15, 2018.

‘Pandora’ by John William Waterhouse (1896)

I’d like to talk about the future of our design practice and what I think we should focus our attention on. It is all related to this idea of complexity and opening up black boxes. We’re going to take the scenic route, though. So bear with me.

Software Design

Two years ago I spent about half a year in Singapore.

While there I worked as product strategist and designer at a startup called ARTO, an art recommendation service. It shows you a random sample of artworks, you tell it which ones you like, and it will then start recommending pieces it thinks you like. In case you were wondering: yes, swiping left and right was involved.

We had this interesting problem of ingesting art from many different sources (mostly online galleries) with metadata of wildly varying levels of quality. So using metadata to figure out which art to show was a bit of a non-starter. It should come as no surprise, then, that we started looking into machine learning, image processing in particular.

And so I found myself working with my engineering colleagues on an art recommendation stream which was driven at least in part by machine learning. And I quickly realised we had a problem. In terms of how we worked together on this part of the product, it felt like we had taken a bunch of steps back in time. Back to a way of collaborating that was less integrated and less responsive.

That’s because we have all these nice tools and techniques for designing traditional software products. But traditional software is deterministic. Machine learning is fundamentally different in nature: it is probabilistic.
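
To make that contrast concrete, here is a minimal sketch in Python. Everything in it is hypothetical and purely illustrative: a traditional rule returns the same answer for the same input every single time, while a trained model returns a degree of belief that depends on whatever data it happened to learn from.

```python
# Deterministic: the same input always yields the same output.
def is_landscape(width_px: int, height_px: int) -> bool:
    return width_px > height_px

# Probabilistic: a model learned from examples yields a degree of
# belief, and that belief shifts if the training data shifts.
from sklearn.linear_model import LogisticRegression

# Toy stand-in for image features (e.g. two colour-histogram bins).
features = [[0.9, 0.1], [0.8, 0.3], [0.2, 0.7], [0.1, 0.9]]
liked = [1, 1, 0, 0]  # 1 = user liked the artwork, 0 = disliked

model = LogisticRegression().fit(features, liked)
print(model.predict_proba([[0.5, 0.5]]))  # a probability, not a verdict
```

Designing against the first kind of function is familiar territory. Designing against the second means the behaviour you are designing for is a distribution, not a guarantee.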

It was hard for me to take the lead in the design of this part of the product for two reasons. First of all, it was challenging to get a first-hand feel of the machine learning feature before it was implemented.

And second of all, it was hard for me to communicate or visualise the intended behaviour of the machine learning feature to the rest of the team.

So when I came back to the Netherlands I decided to dig into this problem of design for machine learning. Turns out I opened up quite the can of worms for myself. But that’s okay.

There are two reasons I care about this:

The first is that I think we need more design-led innovation in the machine learning space. At the moment it is engineering-dominated, which doesn’t necessarily lead to useful outcomes. But if you want to take the lead in the design of machine learning applications, you need a firm handle on the nature of the technology.

The second reason why I think we need to educate ourselves as designers on the nature of machine learning is that we need to take responsibility for the impact the technology has on the lives of people. There is a lot of talk about ethics in the design industry at the moment, which I consider a positive sign. But I also see a reluctance to really grapple with what ethics is and what the relationship between technology and society is. We seem to want easy answers, which is understandable because we are all very busy people. But having spent some time digging into this stuff myself I am here to tell you: there are no easy answers. That isn’t a bug, it’s a feature. And we should embrace it.

Machine Learning

At the end of 2016 I attended ThingsCon here in Amsterdam and I was introduced by Ianus Keller to TU Delft PhD researcher Péter Kun. It turns out we were both interested in machine learning. So with encouragement from Ianus we decided to put together a workshop that would enable industrial design master students to tangle with it in a hands-on manner.

About a year later now, this has grown into a thing we call Prototyping the Useless Butler. During the workshop, you use machine learning algorithms to train a model that takes inputs from a network-connected Arduino’s sensors and drives that same Arduino’s actuators. In effect, you can create interactive behaviour without writing a single line of code. And you get a first-hand feel for how common applications of machine learning work: things like regression, classification and dynamic time warping.
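
In the workshop this all happens without writing code, but it may help to see roughly what the trained model is doing underneath. Here is a minimal Python sketch of the classification case, with every sensor name, value and mapping invented for illustration; it is not how Wekinator is implemented.

```python
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical examples recorded while demonstrating desired behaviour:
# [light_level, proximity] readings paired with an actuator state.
sensor_samples = [[0.1, 0.8], [0.2, 0.9], [0.7, 0.2], [0.9, 0.1], [0.5, 0.5]]
led_states = [0, 0, 1, 1, 2]  # 0 = off, 1 = on, 2 = blink

model = KNeighborsClassifier(n_neighbors=3).fit(sensor_samples, led_states)

# At runtime, fresh sensor readings are mapped to an actuator command.
print(model.predict([[0.8, 0.15]]))  # -> [1], i.e. switch the LED on
```

The point of the workshop is that you experience this whole loop of demonstrating, training and running as a direct physical thing, rather than as code.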

The thing that makes this workshop tick is an open source software application called Wekinator, created by Rebecca Fiebrink. It was originally aimed at performing artists, so that they could build interactive instruments without writing code. But it takes inputs from anything and sends outputs to anything, so we appropriated it towards our own ends.

You can find everything related to Useless Butler on this GitHub repo.

The thinking behind this workshop is that for us designers to be able to think creatively about applications of machine learning, we need a granular understanding of the nature of the technology. The thing with designers is, we can’t really learn about such things from books. A lot of design knowledge is tacit; it emerges from our physical engagement with the world. This is why things like sketching and prototyping are such essential parts of our way of working. And so with Useless Butler we aim to create an environment in which you, as a designer, can gain tacit knowledge about the workings of machine learning.

Simply put, for a lot of us, machine learning is a black box. With Useless Butler, we open the black box a bit and let you peer inside. This should improve the odds of design-led innovation happening in the machine learning space. And it should also help with ethics. But it’s definitely not enough. Knowledge about the technology isn’t the only issue here. There are more black boxes to open.

Values

Which brings me back to that other black box: ethics. Like I already mentioned, there is a lot of talk in the tech industry about how we should “be more ethical”. But things are often reduced to this notion that designers should do no harm. As if ethics is a problem to be fixed instead of a thing to be practiced.

So I started to talk about this to people I know in academia, and more than once this thing called Value Sensitive Design was mentioned. It should be no surprise to anyone that scholars have been chewing on this stuff for quite a while. One of the earliest references I came across, an essay by Batya Friedman in Interactions, is from 1996! This is a lesson to all of us, I think: pay more attention to what the academics are talking about.

So, at the end of last year I dove into this topic. Our host Iskander Smit, Rob Maijers and I coordinate a grassroots community for tech workers called Tech Solidarity NL. We want to build technology that serves the needs of the many, not the few. Value Sensitive Design seemed like a good thing to dig into, and so we did.

I’m not going to dive into the details here. There’s a report on the Tech Solidarity NL website if you’re interested. But I will highlight a few things that value sensitive design asks us to consider that I think help us unpack what it means to practice ethical design.

First of all, values. Here’s how the term is commonly defined in the literature:

“A value refers to what a person or group of people consider important in life.”

I like it because it’s common sense, right? But it also makes clear that there can never be one monolithic definition of what ‘good’ is in all cases. As we designers like to say: “it depends”, and when it comes to values things are no different.

“Person or group” implies there can be various stakeholders. Value sensitive design distinguishes between direct and indirect stakeholders: the former have direct contact with the technology; the latter don’t, but are affected by it nonetheless. Value sensitive design means taking both into account. So this blows up the conventional notion of a single user to design for.

Various stakeholder groups can have competing values and so to design for them means to arrive at some sort of trade-off between values. This is a crucial point. There is no such thing as a perfect or objectively best solution to ethical conundrums. Not in the design of technology and not anywhere else.

Value sensitive design encourages you to map stakeholders and their values. These will be different for every design project. Another approach is to use lists like the one pictured here as an analytical tool to think about how a design impacts various values.
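
To make such a mapping tangible, here is one way you could sketch it as a simple data structure. This is a thinking aid of my own, not a formal part of value sensitive design, and all the stakeholders and values below are invented for the sake of the example:

```python
# A hypothetical stakeholder/value map for an art recommendation service.
# Direct stakeholders use the product; indirect ones are affected by it.
stakeholder_values = {
    "art buyers (direct)": ["autonomy", "trust", "privacy"],
    "gallery owners (direct)": ["income", "fair exposure"],
    "artists (indirect)": ["attribution", "fair exposure"],
    "the art public (indirect)": ["cultural diversity"],
}

for stakeholder, values in stakeholder_values.items():
    print(f"{stakeholder}: {', '.join(values)}")
```

Even a crude map like this makes trade-offs visible: a recommender tuned purely for buyer engagement, for instance, can quietly erode the artists’ value of fair exposure.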

Furthermore, during your design process you might not only think about the short-term impact of a technology, but also think about how it will affect things in the long run.

And similarly, you might think about the effects of a technology not only when a few people are using it, but also when it becomes wildly successful and everybody uses it.

There are tools out there that can help you think through these things. But so far much of the work in this area is happening on the academic side. I think there is an opportunity for us to create tools and case studies that will help us educate ourselves on this stuff.

There’s a lot more to say on this but I’m going to stop here. The point is, as with the nature of the technologies we work with, it helps to dig deeper into the nature of the relationship between technology and society. Yes, it complicates things. But that is exactly the point.

Privileging simple and scalable solutions over those adapted to local needs is socially, economically and ecologically unsustainable. So I hope you will join me in embracing complexity.

John Boyd for designers

The first time I came across military strategist John Boyd’s ideas was probably through Venkatesh Rao’s writing. For example, I remember enjoying Be Somebody or Do Something.

Boyd was clearly a contrarian person. I tend to have a soft spot for such figures, so I read a highly entertaining biography by Robert Coram. Getting more interested in his theories, I then read an application of Boyd’s ideas to business by Chet Richards. Still not satisfied, I decided to finally buckle down and read the comprehensive survey of his martial and scientific influences, plus transcripts of all his briefings, by Frans Osinga.

It’s been a hugely enjoyable and rewarding intellectual trip. I feel like Boyd has given me some pretty sharp new tools-to-think-with. From his background you might think these tools are limited to warfare. But in fact they can be applied much more broadly, to any field in which we need to make decisions under uncertain circumstances.

As we go about our daily lives we are actually always dealing with this dynamic. But the stakes are usually low, so we mostly don’t really care about having a thorough understanding of how to do what we want to do. In warfare the stakes are obviously unusually high, so it makes sense for some of the most articulate thinking on the subject to emerge from it.

As a designer I have always been interested in how my profession makes decisions. Designers usually deal with high levels of uncertainty too. Although lives are rarely at stake, the continued viability of businesses and the quality of people’s lives usually are, at least in some way. Furthermore, there is always a leap of faith involved in any design decision. When we suggest a path forward with our sketches and prototypes, and we choose to proceed to development, we can never be entirely sure if our intended outcomes will pan out as we had hoped.

This uncertainty has always been present in any design act, but an argument could be made that technology has increased the amount of uncertainty in our world.

The way I see it, the methods of user-centred design, interaction design, user experience, etc. are all attempts to “deal with” uncertainty in various ways. The same can be said for the techniques of agile software development.

These methods can be divided into roughly two categories, which more or less correspond to the upper two quadrants of this two-by-two by Venkatesh. Borrowing the diagram’s labels, one is called Spore. It is risk-averse and focuses on sustainability. The other is called Hydra and it is risk-savvy and about anti-fragility. Spore tries to limit the negative consequences of unexpected events, and Hydra tries to maximise their positive consequences.

An example of a Spore-like design move would be to insist on thorough user research at the start of a project. We expend significant resources to reduce the number of unknowns about our target audience. An example of a Hydra-like design move is the kind of playtesting employed by many game designers. We leave open the possibility of surprising acts from our target audience and hope to subsequently use those as the basis for new design directions.

It is interesting to note that these upper two quadrants are strategies for dealing with uncertainty based on synthesis. The other two rely on analysis. We typically associate synthesis with creativity and by extension with design. But as Boyd frequently points out, invention requires both analysis and synthesis, which he liked to call destruction and creation. When I reflect on my own way of working, particularly in the early stages of a project, the so-called fuzzy front end, I too rely on a cycle of destruction and creation to make progress.

I do not see either of the two approaches, Spore or Hydra, as inherently superior. But my personal preference is most definitely the Hydra approach. I think this is because a risk-savvy stance is most helpful when trying to invent new things, and when trying to design for play and playfulness.

The main thing I learned from Boyd for my own design practice is to be aware of uncertainty in the first place, and to know how to deal with it in an agile way. You might not be willing to do all the reading I did, but I would recommend at least reading the one long-form essay Boyd wrote, titled Destruction and Creation (PDF), about how to be creative and decisive in the face of uncertainty.