‘Unboxing’ at Behavior Design Amsterdam #16

Below is a write-up of the talk I gave at the Behavior Design Amsterdam #16 meetup on Thursday, February 15, 2018.

‘Pandora’ by John William Waterhouse (1896)

I’d like to talk about the future of our design practice and what I think we should focus our attention on. It is all related to this idea of complexity and opening up black boxes. We’re going to take the scenic route, though. So bear with me.

Software Design

Two years ago I spent about half a year in Singapore.

While there I worked as product strategist and designer at a startup called ARTO, an art recommendation service. It shows you a random sample of artworks, you tell it which ones you like, and it will then start recommending pieces it thinks you like. In case you were wondering: yes, swiping left and right was involved.

We had this interesting problem of ingesting art from many different sources (mostly online galleries) with metadata of wildly varying levels of quality. So, using metadata to figure out which art to show was a bit of a non-starter. It should come as no surprise then, that we started looking into machine learning—image processing in particular.

And so I found myself working with my engineering colleagues on an art recommendation stream which was driven at least in part by machine learning. And I quickly realised we had a problem. In terms of how we worked together on this part of the product, it felt like we had taken a bunch of steps back in time. Back to a way of collaborating that was less integrated and less responsive.

That’s because we have all these nice tools and techniques for designing traditional software products. But software is deterministic. Machine learning is fundamentally different in nature: it is probabilistic.

It was hard for me to take the lead in the design of this part of the product for two reasons. First of all, it was challenging to get a first-hand feel of the machine learning feature before it was implemented.

And second of all, it was hard for me to communicate or visualise the intended behaviour of the machine learning feature to the rest of the team.

So when I came back to the Netherlands I decided to dig into this problem of design for machine learning. Turns out I opened up quite the can of worms for myself. But that’s okay.

There are two reasons I care about this:

The first is that I think we need more design-led innovation in the machine learning space. At the moment it is engineering-dominated, which doesn’t necessarily lead to useful outcomes. But if you want to take the lead in the design of machine learning applications, you need a firm handle on the nature of the technology.

The second reason why I think we need to educate ourselves as designers on the nature of machine learning is that we need to take responsibility for the impact the technology has on the lives of people. There is a lot of talk about ethics in the design industry at the moment. Which I consider a positive sign. But I also see a reluctance to really grapple with what ethics is and what the relationship between technology and society is. We seem to want easy answers, which is understandable because we are all very busy people. But having spent some time digging into this stuff myself I am here to tell you: there are no easy answers. That isn’t a bug, it’s a feature. And we should embrace it.

Machine Learning

At the end of 2016 I attended ThingsCon here in Amsterdam and I was introduced by Ianus Keller to TU Delft PhD researcher Péter Kun. It turned out we were both interested in machine learning. So with encouragement from Ianus we decided to put together a workshop that would enable industrial design master students to tangle with it in a hands-on manner.

About a year later now, this has grown into a thing we call Prototyping the Useless Butler. During the workshop, you use machine learning algorithms to train a model that takes inputs from a network-connected Arduino’s sensors and drives that same Arduino’s actuators. In effect, you can create interactive behaviour without writing a single line of code. And you get a first-hand feel for how common applications of machine learning work. Things like regression, classification and dynamic time warping.

The thing that makes this workshop tick is an open source software application called Wekinator, created by Rebecca Fiebrink. It was originally aimed at performing artists so that they could build interactive instruments without writing code. But it takes inputs from anything and sends outputs to anything. So we appropriated it towards our own ends.
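To make that “inputs from anything” concrete: Wekinator speaks OSC over UDP, listening by default on port 6448 for messages at the address /wek/inputs. Here is a minimal Python sketch using the python-osc library; the port and address are Wekinator’s documented defaults, while the sine and cosine values are made-up stand-ins for whatever sensor readings you would actually feed it.

```python
# Minimal sketch: feeding two input values to a running Wekinator instance.
# Assumes Wekinator's default input settings: UDP port 6448, address /wek/inputs.
# pip install python-osc
import math
import time

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 6448)  # Wekinator's default listening port

t = 0.0
while True:
    # Two invented input values; in the workshop these would be sensor readings.
    inputs = [math.sin(t), math.cos(t)]
    client.send_message("/wek/inputs", inputs)
    t += 0.1
    time.sleep(0.05)
```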

You can find everything related to Useless Butler in this GitHub repo.

The thinking behind this workshop is that for us designers to be able to think creatively about applications of machine learning, we need a granular understanding of the nature of the technology. The thing with designers is, we can’t really learn about such things from books. A lot of design knowledge is tacit, it emerges from our physical engagement with the world. This is why things like sketching and prototyping are such essential parts of our way of working. And so with Useless Butler we aim to create an environment in which you as a designer can gain tacit knowledge about the workings of machine learning.

Simply put, for a lot of us, machine learning is a black box. With Useless Butler, we open the black box a bit and let you peer inside. This should improve the odds of design-led innovation happening in the machine learning space. And it should also help with ethics. But it’s definitely not enough. Knowledge about the technology isn’t the only issue here. There are more black boxes to open.

Values

Which brings me back to that other black box: ethics. Like I already mentioned, there is a lot of talk in the tech industry about how we should “be more ethical”. But things are often reduced to this notion that designers should do no harm. As if ethics is a problem to be fixed instead of a thing to be practiced.

So I started to talk about this to people I know in academia and more than once this thing called Value Sensitive Design was mentioned. It should be no surprise to anyone that scholars have been chewing on this stuff for quite a while. One of the earliest references I came across, an essay by Batya Friedman in Interactions, is from 1996! This is a lesson to all of us I think. Pay more attention to what the academics are talking about.

So, at the end of last year I dove into this topic. Our host Iskander Smit, Rob Maijers and myself coordinate a grassroots community for tech workers called Tech Solidarity NL. We want to build technology that serves the needs of the many, not the few. Value Sensitive Design seemed like a good thing to dig into and so we did.

I’m not going to dive into the details here. There’s a report on the Tech Solidarity NL website if you’re interested. But I will highlight a few things that value sensitive design asks us to consider that I think help us unpack what it means to practice ethical design.

First of all, values. Here’s how it is commonly defined in the literature:

“A value refers to what a person or group of people consider important in life.”

I like it because it’s common sense, right? But it also makes clear that there can never be one monolithic definition of what ‘good’ is in all cases. As we designers like to say: “it depends” and when it comes to values things are no different.

“Person or group” implies there can be various stakeholders. Value sensitive design distinguishes between direct and indirect stakeholders. The former have direct contact with the technology, the latter don’t but are affected by it nonetheless. Value sensitive design means taking both into account. So this blows up the conventional notion of a single user to design for.

Various stakeholder groups can have competing values and so to design for them means to arrive at some sort of trade-off between values. This is a crucial point. There is no such thing as a perfect or objectively best solution to ethical conundrums. Not in the design of technology and not anywhere else.

Value sensitive design encourages you to map stakeholders and their values. These will be different for every design project. Another approach is to use lists like the one pictured here as an analytical tool to think about how a design impacts various values.

Furthermore, during your design process you might not only think about the short-term impact of a technology, but also think about how it will affect things in the long run.

And similarly, you might think about the effects of a technology not only when a few people are using it, but also when it becomes wildly successful and everybody uses it.

There are tools out there that can help you think through these things. But so far much of the work in this area is happening on the academic side. I think there is an opportunity for us to create tools and case studies that will help us educate ourselves on this stuff.

There’s a lot more to say on this but I’m going to stop here. The point is, as with the nature of the technologies we work with, it helps to dig deeper into the nature of the relationship between technology and society. Yes, it complicates things. But that is exactly the point.

Privileging simple and scalable solutions over those adapted to local needs is socially, economically and ecologically unsustainable. So I hope you will join me in embracing complexity.

Starting a PhD

Today is the first official work day of my new doctoral researcher position at Delft University of Technology. After more than two years of laying the groundwork, I’m starting out on a new challenge.

I remember sitting outside a Jewel coffee bar in Singapore [1] and going over the various options for whatever would be next after shutting down Hubbub. I knew I wanted to delve into the impact of machine learning and data science on interaction design. And largely through a process of elimination I felt the best place for me to do so would be inside of academia.

Back in the Netherlands, with help from Ianus Keller, I started making inroads at TU Delft, my first choice for this kind of work. I had visited it on and off over the years, coaching students and doing guest lectures. I’d felt at home right away.

There were quite a few twists and turns along the way but now here we are. Starting this month I am a doctoral candidate at Delft University of Technology’s faculty of Industrial Design Engineering.

My research is provisionally titled ‘Intelligibility and Transparency of Smart Public Infrastructures: A Design Oriented Approach’. Its main object of study is the MX3D smart bridge. My supervisors are Gerd Kortuem and Neelke Doorn. And it’s all part of the NWO-funded project ‘BRIdging Data in the built Environment (BRIDE)’.

Below is a first rough abstract of the research. But in the months to come this is likely to change substantially as I start hammering out a proper research plan. I plan to post the occasional update on my work here, so if you’re interested your best bet is probably to do the old RSS thing. There’s social media too, of course. And I might set up a newsletter at some point. We’ll see.

If any of this resonates, do get in touch. I’d love to start a conversation with as many people as possible about this stuff.

Intelligibility and Transparency of Smart Public Infrastructures: A Design Oriented Approach

This PhD will explore how designers, technologists, and citizens can utilize rapid urban manufacturing and IoT technologies for designing urban space that expresses its intelligence from the intersection of people, places, activities and technology, not merely from the presence of cutting-edge technology. The key question is how smart public infrastructure, i.e. data-driven and algorithm-rich public infrastructures, can be understood by lay people.

The design-oriented research will utilize a ‘research through design’ approach to develop a digital experience around the bridge and the surrounding urban space. During this extended design and making process the PhD student will conduct empirical research to investigate design choices and their implications on (1) new forms of participatory data-informed design processes, (2) the technology-mediated experience of urban space, (3) the emerging relationship between residents and “their” bridge, and (4) new forms of data-informed, citizen-led governance of public space.

  1. My Foursquare history and 750 Words archive tell me this was on Saturday, January 16, 2016.

‘Playful Design for Workplace Change Management’ at PLAYTrack conference 2017 in Aarhus

Laser defender collab at FUSE

At the end of last year I was invited to speak at the PLAYTrack conference in Aarhus about the workplace change management games made by Hubbub. It turned out to be a great opportunity to reconnect with the play research community.

I was very much impressed by the program assembled by the organisers. People came from a wide range of disciplines and crucially, there was ample time to discuss and reflect on the materials presented. As I tweeted afterwards, this is a thing that most conference organisers get wrong.

I was particularly inspired by the work of Benjamin Mardell and Mara Krechevsky at Harvard’s Project Zero. Making Learning Visible looks like a great resource for anyone who teaches. Then there was Reed Stevens from Northwestern University whose project FUSE is one of the most solid examples of playful learning for STEAM I’ve seen thus far. I was also fascinated by Ciara Laverty’s work at PEDAL on observing parent-child play. Miguel Sicart delivered another great provocation on the dark side of playful design. And finally I was delighted to hear about and experience for myself some of Amos Blanton’s work at the LEGO Foundation. I should also call out Ben Fincham’s many provocative contributions from the audience.

The abstract for my talk is below, which covers most of what I talked about. I tried to give people a good sense of:

  • what the games consisted of,
  • what we were aiming to achieve,
  • how both the fiction and the player activities supported these goals,
  • how we made learning outcomes visible to our players and clients,
  • and finally how we went about designing and developing these games.

Both projects have solid write-ups over at the Hubbub website, so I’ll just point to those here: Code 4 and Ripple Effect.

In the final section of the talk I spent a bit of time reflecting on how I would approach projects like this today. After all, it has been seven years since we made Code 4, and four years since Ripple Effect. That’s ages ago and my perspective has definitely changed since we made these.

Participatory design

First of all, I would get even more serious about co-designing with players at every step. I would recruit representatives of players and invest them with real influence. In the projects we did, the primary vehicle for player influence was through playtesting. But this is necessarily limited. I also won’t pretend this is at all easy to do in a commercial context.

But, these games are ultimately about improving worker productivity. So how do we make it so that workers share in the real-world profits yielded by a successful culture change?

I know of the existence of participatory design but from my experience it is not a common approach in the industry. Why?

Value sensitive design

On a related note, I would get more serious about what values are supported by the system, in whose interest they are and where they come from. Early field research and workshops with the audience do surface some values, but values from customer representatives tend to dominate. Again, the commercial context we work in is a potential challenge.

I know of value sensitive design, but as with participatory design, it has yet to catch on in a big way in the industry. So again, why is that?

Disintermediation

One thing I continue to be interested in is to reduce the complexity of a game system’s physical affordances (which include its code), and to push even more of the substance of the game into those social allowances that make up the non-material aspects of the game. This allows for spontaneous renegotiation of the game by the players. This is disintermediation as a strategy. David Kanaga’s take on games as toys remains hugely inspirational in this regard, as does Bernard De Koven’s book The Well Played Game.

Gamefulness versus playfulness

Code 4 had more focus on satisfying the need for autonomy. Ripple Effect had more focus on competence, or in any case, it had less emphasis on autonomy. There was less room for ‘play’ around the core digital game. It seems to me that mastering a subjective simulation of a subject is not necessarily what a workplace game for culture change should be aiming for. So, less gameful design, more playful design.

Adaptation

Finally, the agency model does not enable us to stick around for the long haul. But workplace games might be better suited to a setup where things aren’t thought of as a one-off project but more of an ongoing process.

In How Buildings Learn, Stewart Brand talks about how architects should revisit buildings they’ve designed after they are built to learn about how people are actually using them. He also talks about how good buildings are buildings that their inhabitants can adapt to their needs. What does that look like in the context of a game for workplace culture change?


Playful Design for Workplace Change Management

Code 4 (2011, commissioned by the Tax Administration of the Netherlands) and Ripple Effect (2013, commissioned by Royal Dutch Shell) are both games for workplace change management designed and developed by Hubbub, a boutique playful design agency which operated from Utrecht, The Netherlands and Berlin, Germany between 2009 and 2015. These games are examples of how a goal-oriented serious game can be used to encourage playful appropriation of workplace infrastructure and social norms, resulting in an open-ended and creative exploration of new and innovative ways of working.

Serious game projects are usually commissioned to solve problems. Solving the problem of cultural change in a straightforward manner means viewing games as a way to persuade workers of a desired future state. They typically take videogame form, simulating the desired new way of working as determined by management. To play the game well, players need to master its system and by extension—it is assumed—learning happens.

These games can be enjoyable experiences and an improvement on previous forms of workplace learning, but in our view they decrease the possibility space of potential workplace cultural change. They diminish worker agency, and they waste the creative and innovative potential of involving workers in the invention of an improved workplace culture.

We instead choose to view workplace games as an opportunity to increase the space of possibility. We resist the temptation to bake the desired new way of working into the game’s physical and digital affordances. Instead, we leave how to play well up to the players. Since these games are team-based and collaborative, players need to negotiate their way of working around the game among themselves. In addition, because the games are distributed in time—running over a number of weeks—and are playable at player discretion during the workday, players are given license to appropriate workplace infrastructure and subvert social norms towards in-game ends.

We tried to make learning tangible in various ways. Because the games at the core are web applications to which players log on with individual accounts we were able to collect data on player behaviour. To guarantee privacy, employers did not have direct access to game databases and only received anonymised reports. We took responsibility for player learning by facilitating coaching sessions in which they could safely reflect on their game experiences. Rounding out these efforts, we conducted surveys to gain insight into the player experience from a more qualitative and subjective perspective.

These games offer a model for a reasonably democratic and ethical way of doing game-based workplace change management. However, we would like to see efforts that further democratise their design and development—involving workers at every step. We also worry about how games can be used to create the illusion of worker influence while at the same time software is deployed throughout the workplace to limit their agency.

Our examples may be inspiring but because of these developments we feel we can’t continue this type of work without seriously reconsidering our current processes, technology stacks and business practices—and ultimately whether we should be making games at all.

Design and machine learning – an annotated reading list

Earlier this year I coached Design for Interaction master students at Delft University of Technology in the course Research Methodology. The students organised three seminars for which I provided the claims and assigned reading. In the seminars they argued about my claims using the Toulmin Model of Argumentation. The readings served as sources for backing and evidence.

The claims and readings were all related to my nascent research project about machine learning. We delved into both designing for machine learning, and using machine learning as a design tool.

Below are the readings I assigned, with some notes on each, which should help you decide if you want to dive into them yourself.

Hebron, Patrick. 2016. Machine Learning for Designers. Sebastopol: O’Reilly.

The only non-academic piece in this list. This served the purpose of getting all students on the same page with regards to what machine learning is, its applications in interaction design, and common challenges encountered. I still can’t think of any other single resource that is as good a starting point for the subject as this one.

Fiebrink, Rebecca. 2016. “Machine Learning as Meta-Instrument: Human-Machine Partnerships Shaping Expressive Instrumental Creation.” In Musical Instruments in the 21st Century, 14:137–51. Singapore: Springer Singapore. doi:10.1007/978-981-10-2951-6_10.

Fiebrink’s Wekinator is groundbreaking, fun and inspiring so I had to include some of her writing in this list. This is mostly of interest for those looking into the use of machine learning for design and other creative and artistic endeavours. An important idea explored here is that tools that make use of (interactive, supervised) machine learning can be thought of as instruments. Using such a tool is like playing or performing, exploring a possibility space, engaging in a dialogue with the tool. For a tool to feel like an instrument requires a tight action-feedback loop.

Dove, Graham, Kim Halskov, Jodi Forlizzi, and John Zimmerman. 2017. “UX Design Innovation: Challenges for Working with Machine Learning as a Design Material.” In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. New York, New York, USA: ACM. doi:10.1145/3025453.3025739.

A really good survey of how designers currently deal with machine learning. Key takeaways include that in most cases, the application of machine learning is still engineering-led as opposed to design-led, which hampers the creation of non-obvious machine learning applications. It also makes it hard for designers to consider ethical implications of design choices. A key reason for this is that at the moment, prototyping with machine learning is prohibitively cumbersome.

Fiebrink, Rebecca, Perry R Cook, and Dan Trueman. 2011. “Human Model Evaluation in Interactive Supervised Learning.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 147. New York, New York, USA: ACM Press. doi:10.1145/1978942.1978965.

The second Fiebrink piece in this list, which is more of a deep dive into how people use Wekinator. As with the chapter listed above this is required reading for those working on design tools which make use of interactive machine learning. An important finding here is that users of intelligent design tools might have very different criteria for evaluating the ‘correctness’ of a trained model than engineers do. Such criteria are likely subjective and evaluation requires first-hand use of the model in real time.

Bostrom, Nick, and Eliezer Yudkowsky. 2014. “The Ethics of Artificial Intelligence.” In The Cambridge Handbook of Artificial Intelligence, edited by Keith Frankish and William M Ramsey, 316–34. Cambridge: Cambridge University Press. doi:10.1017/CBO9781139046855.020.

Bostrom is known for his somewhat crazy but thought-provoking book on superintelligence and although a large part of this chapter is about the ethics of general artificial intelligence (which at the very least is still a way off), the first section discusses the ethics of current “narrow” artificial intelligence. It makes for a good checklist of things designers should keep in mind when they create new applications of machine learning. Key insight: when a machine learning system takes on work with social dimensions—tasks previously performed by humans—the system inherits its social requirements.

Yang, Qian, John Zimmerman, Aaron Steinfeld, and Anthony Tomasic. 2016. “Planning Adaptive Mobile Experiences When Wireframing.” In Proceedings of the 2016 ACM Conference on Designing Interactive Systems. New York, New York, USA: ACM. doi:10.1145/2901790.2901858.

Finally, a feet-in-the-mud exploration of what it actually means to design for machine learning with the tools most commonly used by designers today: drawings and diagrams of various sorts. In this case the focus is on using machine learning to make an interface adaptive. It includes an interesting discussion of how to balance the use of implicit and explicit user inputs for adaptation, and how to deal with inference errors. Once again the limitations of current sketching and prototyping tools are mentioned, and related to the need for designers to develop tacit knowledge about machine learning. Such tacit knowledge will only be gained when designers can work with machine learning in a hands-on manner.
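To make the inference-error point a little more tangible, here is one possible policy, sketched in Python with entirely hypothetical names and thresholds (not the paper’s method): adapt the interface implicitly only when the model is confident, and fall back to asking the user explicitly otherwise.

```python
# Sketch of one policy for dealing with inference errors in an adaptive UI:
# act implicitly only above a confidence threshold, otherwise ask the user.
# Assumes a scikit-learn-style classifier; names and numbers are invented.
CONFIDENCE_THRESHOLD = 0.8  # tuning this trades wrong guesses against nagging

def adapt_interface(model, user_features):
    probabilities = model.predict_proba([user_features])[0]
    best = probabilities.argmax()
    if probabilities[best] >= CONFIDENCE_THRESHOLD:
        return ("adapt", model.classes_[best])  # implicit: just do it
    return ("ask", model.classes_[best])        # explicit: suggest, let the user confirm
```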

Supplemental material

Floyd, Christiane. 1984. “A Systematic Look at Prototyping.” In Approaches to Prototyping, 1–18. Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-642-69796-8_1.

I provided this to students so that they get some additional grounding in the various kinds of prototyping that are out there. It helps to prevent reductive notions of prototyping, and it makes for a nice complement to Buxton’s work on sketching.

Blevis, E, Y Lim, and E Stolterman. 2006. “Regarding Software as a Material of Design.”

Some of the papers refer to machine learning as a “design material” and this paper helps to understand what that idea means. Software is a material without qualities (it is extremely malleable, it can simulate nearly anything). Yet, it helps to consider it as a physical material in the metaphorical sense because we can then apply ways of design thinking and doing to software programming.

Status update

This is not exactly a now page, but I thought I would write up what I am doing at the moment since last reporting on my status in my end-of-year report.

The majority of my workdays are spent doing freelance design consulting. My primary gig has been through Eend at the Dutch Victim Support Foundation, where until very recently I was part of a team building online services. I helped out with product strategy, setting up a lean UX design process, and getting an integrated agile design and development team up and running. The first services are now shipping so it is time for me to move on, after 10 months of very gratifying work. I really enjoy working in the public sector and I hope to be doing more of it in future.

So yes, this means I am available and you can hire me to do strategy and design for software products and services. Just send me an email.

Shortly before the Dutch national elections of this year, Iskander and I gathered a group of fellow tech workers under the banner of “Tech Solidarity NL” to discuss the concerning lurch to the right in national politics and what our field can do about it. This has developed into a small but active community who gather monthly to educate ourselves and develop plans for collective action. I am getting a huge boost out of this. Figuring out how to be a leftist in this day and age is not easy. The only way to do it is to practice and for that reflection with peers is invaluable. Building and facilitating a group like this is hugely educational too. I have learned a lot about how a community is bootstrapped and nurtured.

If you are in the Netherlands, your politics are left of center, and you work in technology, consider yourself invited to join.

And finally, the last major thing on my plate is a continuing effort to secure a PhD position for myself. I am getting great support from people at Delft University of Technology, in particular Gerd Kortuem. I am focusing on internet of things products that have features driven by machine learning. My ultimate aim is to develop prototyping tools for design and development teams that will help them create more innovative and more ethical solutions. The first step for this will be to conduct field research inside companies who are creating such products right now. So I am reaching out to people to see if I can secure a reasonable amount of potential collaborators for this, which will go a long way in proving the feasibility of my whole plan.

If you know of any companies that develop consumer-facing products that have a connected hardware component and make use of machine learning to drive features, do let me know.

That’s about it. Freelance UX consulting, leftist tech-worker organising and design-for-machine-learning research. Quite happy with that mix, really.

‘Machine Learning for Designers’ workshop

On Wednesday Péter Kun, Holly Robbins and myself taught a one-day workshop on machine learning at Delft University of Technology. We had about thirty master’s students from the industrial design engineering faculty. The aim was to get them acquainted with the technology through hands-on tinkering, with the Wekinator as the central teaching tool.

Photo credits: Holly Robbins

Background

The reasoning behind this workshop is twofold.

On the one hand I expect designers will find themselves working on projects involving machine learning more and more often. The technology has certain properties that differ from traditional software. Most importantly, machine learning is probabilistic instead of deterministic. It is important that designers understand this because otherwise they are likely to make bad decisions about its application.
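A toy illustration of that difference, with invented data and scikit-learn standing in for any particular stack: a deterministic rule always returns the same hard answer, while a trained classifier returns odds that shift with the examples it was trained on.

```python
# Toy contrast between a deterministic rule and a probabilistic classifier.
# Data and threshold are made up for illustration. pip install scikit-learn
from sklearn.linear_model import LogisticRegression

def deterministic_rule(x):
    # Traditional software: a hard-coded cutoff, the same answer every time.
    return "like" if x >= 0.5 else "dislike"

# Machine learning: behaviour is induced from examples, and the answer
# comes with a degree of (un)certainty attached.
X = [[0.1], [0.3], [0.4], [0.6], [0.8], [0.9]]  # e.g. some image feature
y = ["dislike", "dislike", "like", "dislike", "like", "like"]
model = LogisticRegression().fit(X, y)

print(deterministic_rule(0.45))        # always "dislike"
print(model.predict_proba([[0.45]]))   # a probability per class: odds, not certainty
print(model.classes_)                  # ['dislike' 'like']
```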

The second reason is that I have a strong sense machine learning can play a role in the augmentation of the design process itself. So-called intelligent design tools could make designers more efficient and effective. They could also enable the creation of designs that would otherwise be impossible or very hard to achieve.

The workshop explored both ideas.

Photo credits: Holly Robbins

Format

The structure was roughly as follows:

In the morning we started out providing a very broad introduction to the technology. We talked about the very basic premise of (supervised) learning. Namely, providing examples of inputs and desired outputs and training a model based on those examples. To make these concepts tangible we then introduced the Wekinator and walked the students through getting it up and running using basic examples from the website. The final step was to invite them to explore alternative inputs and outputs (such as game controllers and Arduino boards).
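That premise, in code form: a minimal sketch with scikit-learn standing in for what Wekinator does internally (its actual models differ), and invented numbers throughout. You record inputs alongside the outputs you want, fit a model, and new readings then produce interpolated outputs.

```python
# The supervised learning premise of the workshop morning, as a sketch.
# Numbers are invented; Wekinator's actual algorithms differ. pip install scikit-learn
from sklearn.neighbors import KNeighborsRegressor

# Step 1: demonstrate examples -- sensor inputs paired with desired outputs.
sensor_examples = [[0.0, 0.1], [0.2, 0.4], [0.5, 0.5], [0.9, 0.8]]
desired_outputs = [0.0, 0.25, 0.5, 1.0]  # e.g. a servo position, normalised

# Step 2: train a model on those examples.
model = KNeighborsRegressor(n_neighbors=2).fit(sensor_examples, desired_outputs)

# Step 3: new, unseen sensor readings now yield outputs interpolated
# from the nearest demonstrated examples.
print(model.predict([[0.4, 0.45]]))
```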

In the afternoon we provided a design brief, asking the students to prototype a data-enabled object with the set of tools they had acquired in the morning. We assisted with technical hurdles where necessary (of which there were more than a few) and closed out the day with demos and a group discussion reflecting on their experiences with the technology.

Photo credits: Holly Robbins

Results

As I tweeted on the way home that evening, the results were… interesting.

Not all groups managed to put something together in the admittedly short amount of time they were provided with. They were most often stymied by getting an Arduino to talk to the Wekinator. Max was often picked as a go-between because the Wekinator receives OSC messages over UDP, whereas the quickest way to get an Arduino to talk to a computer is over serial. But Max in my experience is a fickle beast and would more than once crap out on us.
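For what it’s worth, Max isn’t the only possible go-between. A small Python bridge using pyserial and python-osc can do the same job. This is a hedged sketch: the serial port name, the baud rate, and the comma-separated line format the Arduino is assumed to print are all placeholders that would differ per setup.

```python
# Sketch of a serial-to-OSC bridge, standing in for Max as the go-between.
# Assumes the Arduino prints comma-separated sensor values, one line at a time,
# e.g. "0.42,0.13\n". Port name and baud rate will differ per machine.
# pip install pyserial python-osc
import serial
from pythonosc.udp_client import SimpleUDPClient

ser = serial.Serial("/dev/ttyACM0", 9600, timeout=1)  # hypothetical port name
wekinator = SimpleUDPClient("127.0.0.1", 6448)        # Wekinator's default input port

while True:
    line = ser.readline().decode("ascii", errors="ignore").strip()
    if not line:
        continue
    try:
        values = [float(v) for v in line.split(",")]
    except ValueError:
        continue  # skip malformed lines rather than crash
    wekinator.send_message("/wek/inputs", values)
```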

The groups that did build something mainly assembled prototypes from the examples on hand. Which is fine, but since we were mainly working with the examples from the Wekinator website they tended towards the interactive instrument side of things. We were hoping for explorations of IoT product concepts. For that more hand-rolling was required and this was only achievable for the students on the higher end of the technical expertise spectrum (and the more tenacious ones).

The discussion yielded some interesting insights into mental models of the technology and how they are affected by hands-on experience. A comment I heard more than once was: Why is this considered learning at all? The Wekinator was not perceived to be learning anything. When challenged on this by reiterating the underlying principles it became clear the black box nature of the Wekinator hampers appreciation of some of the very real achievements of the technology. It seems (for our students at least) machine learning is stuck in a grey area between too-high expectations and too-low recognition of its capabilities.

Next steps

These results, and others, point towards some obvious improvements which can be made to the workshop format, and to teaching design students about machine learning more broadly.

  1. We can improve the toolset so that some of the heavy lifting involved with getting the various parts to talk to each other is made easier and more reliable.
  2. We can build examples that are geared towards the practice of designing IoT products and are ready for adaptation and hacking.
  3. And finally, and probably most challengingly, we can make the workings of machine learning more transparent so that it becomes easier to develop a feel for its capabilities and shortcomings.

We do intend to improve and teach the workshop again. If you’re interested in hosting one (either in an educational or professional context) let me know. And stay tuned for updates on this and other efforts to get designers to work in a hands-on manner with machine learning.

Special thanks to the brilliant Ianus Keller for connecting me to Péter and for allowing us to pilot this crazy idea at IDE Academy.

References

Sources used during preparation and running of the workshop:

  • The Wekinator – the UI is infuriatingly poor but when it comes to getting started with machine learning this tool is unmatched.
  • Arduino – I have become particularly fond of the MKR1000 board. Add a lithium-polymer battery and you have everything you need to prototype IoT products.
  • OSC for Arduino – CNMAT’s implementation of the open sound control (OSC) encoding. Key puzzle piece for getting the above two tools talking to each other.
  • Machine Learning for Designers – my preferred introduction to the technology from a designerly perspective.
  • A Visual Introduction to Machine Learning – a very accessible visual explanation of the basic underpinnings of computers applying statistical learning.
  • Remote Control Theremin – an example project I prepared for the workshop demoing how to have the Wekinator talk to an Arduino MKR1000 with OSC over UDP (see the sketch below for a quick way to inspect what Wekinator sends).
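If you want to see what Wekinator is sending before pointing it at an Arduino, a few lines of Python will print the output messages. Hedged sketch: port 12000 and the /wek/outputs address are Wekinator’s defaults, everything else is illustrative.

```python
# Sketch: print whatever a running Wekinator instance sends out.
# Assumes Wekinator's default output settings: UDP port 12000, address /wek/outputs.
# pip install python-osc
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_outputs(address, *values):
    print(address, values)  # e.g. /wek/outputs (0.73,)

dispatcher = Dispatcher()
dispatcher.map("/wek/outputs", on_outputs)

server = BlockingOSCUDPServer(("127.0.0.1", 12000), dispatcher)
server.serve_forever()  # Ctrl-C to stop
```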

Design × AI coffee meetup

If you work in the field of design or artificial intelligence and are interested in exploring the opportunities at their intersection, consider yourself invited to an informal coffee meetup on February 15, 10am at Brix in Amsterdam.

Erik van der Pluijm and myself have for a while now been carrying on a conversation about AI and design and we felt it was time to expand the circle a bit. We are very curious who else out there shares our excitement.

Questions we are mulling over include: How does the design process change when creating intelligent products? And: How can teams collaborate with intelligent design tools to solve problems in new and interesting ways?

Anyway, lots to chew on.

No need to sign up or anything, just show up and we’ll see what happens.

Move 37

Designers make choices. They should be able to provide rationales for those choices. (Although sometimes they can’t.) Being able to explain the thinking that went into a design move to yourself, your teammates and clients is part of being a professional.

Move 37. This was the move AlphaGo made which took everyone by surprise because it appeared so wrong at first.

The interesting thing is that in hindsight it appeared AlphaGo had good reasons for this move. Based on a calculation of odds, basically.

If asked at the time, would AlphaGo have been able to provide this rationale?

It’s a thing that pops up in a lot of the reading I am doing around AI. This idea of transparency. In some fields you don’t just want an AI to provide you with a decision, but also with the arguments supporting that decision. Obvious examples would include a system that helps diagnose disease. You want it to provide more than just the diagnosis. Because if it turns out to be wrong, you want to be able to say why at the time you thought it was right. This is a social, cultural and also legal requirement.
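Simple models can already do a version of this. A hedged sketch with a toy, invented dataset: a decision tree classifier whose learned rules can be printed as human-readable if-then statements alongside each prediction.

```python
# Sketch: a model that can show its reasoning, not just its answer.
# Toy symptom data, invented for illustration. pip install scikit-learn
from sklearn.tree import DecisionTreeClassifier, export_text

features = ["fever", "cough"]
X = [[1, 1], [1, 0], [0, 1], [0, 0]]
y = ["flu", "flu", "cold", "healthy"]

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

print(model.predict([[1, 0]]))                     # the decision: ['flu']
print(export_text(model, feature_names=features))  # the rationale, as rules
```

Most of today’s deep models are nothing like this tree, of course, which is exactly why transparency is an open problem rather than a solved one.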

It’s interesting.

Although lives don’t depend on it, the same might apply to intelligent design tools. If I am working with a system and it is offering me design directions or solutions, I want to know why it is suggesting these things as well. Because my reason for picking one over the other depends not just on the surface level properties of the design but also the underlying reasons. It might be important because I need to be able to tell stakeholders about it.

An added side effect of this is that a designer working with such a system would be exposed to machine reasoning about design choices. This could inform their own future thinking too.

Transparent AI might help people improve themselves. A black box can’t teach you much about the craft it’s performing. Looking at outcomes can be inspirational or helpful, but the processes that lead up to them can be equally informative. If not more so.

Imagine working with an intelligent design tool and getting the equivalent of an AlphaGo move 37 moment. Hugely inspirational. Game changer.

This idea gets me much more excited than automating design tasks does.

Waiting for the smart city

Nowadays when we talk about the smart city we don’t necessarily talk about smartness or cities.

I feel like when the term is used it often obscures more than it reveals.

Here are a few reasons why.

To begin with, the term suggests something that is yet to arrive. Some kind of tech-enabled utopia. But actually, current day cities are already smart to a greater or lesser degree depending on where and how you look.

This is important because too often we postpone action as we wait for the smart city to arrive. We don’t have to wait. We can act to improve things right now.

Furthermore, ‘smart city’ suggests something monolithic that can be designed as a whole. But a smart city, like any city, is a huge mess of interconnected things. It resists top-down design.

History is littered with failed attempts at authoritarian high-modernist city design. Just stop it.

Smartness should not be an end but a means.

I read ‘smart’ as a shorthand for ‘technologically augmented’. A smart city is a city eaten by software. All cities are being eaten (or have been eaten) by software to a greater or lesser extent. Uber and Airbnb are obvious examples. Smaller more subtle ones abound.

The question is, smart to what end? Efficiency? Legibility? Controllability? Anti-fragility? Playability? Liveability? Sustainability? The answer depends on your outlook.

These are ways in which the smart city label obscures. It obscures agency. It obscures networks. It obscures intent.

I’m not saying don’t ever use it. But in many cases you can get by without it. You can talk about specific parts that make up the whole of a city, specific technologies and specific aims.


Postscript 1

We can do the same exercise with the ‘city’ part of the meme.

The same process that is making cities smart (software eating the world) is also making everything else smart. Smart towns. Smart countrysides. The ends are different. The networks are different. The processes play out in different ways.

It’s okay to think about cities but don’t think they have a monopoly on ‘disruption’.

Postscript 2

Some of this was inspired by clever things I heard Sebastian Quack say at Playful Design for Smart Cities and Usman Haque at ThingsCon Amsterdam.

Playful Design for Smart Cities

Earlier this week I escaped the miserable weather and food of the Netherlands to spend a couple of days in Barcelona, where I attended the ‘Playful Design for Smart Cities’ workshop at RMIT Europe.

I helped Jussi Holopainen run a workshop in which participants from industry, government and academia together defined projects aimed at further exploring this idea of playful design within the context of smart cities, without falling into the trap of solutionism.

Before the workshop I presented a summary of my chapter in The Gameful World, along with some of my current thinking on it. There were also great talks by Judith Ackermann, Florian ‘Floyd’ Müller, and Gilly Karjevsky and Sebastian Quack.

Below are the slides for my talk and links to all the articles, books and examples I explicitly and implicitly referenced throughout.