“Contestable Infrastructures” at Beyond Smart Cities Today

I’ll be at Beyond Smart Cities Today the next couple of days (18–19 September). Below is the abstract I submitted, plus a bibliography of some of the stuff that went into my thinking for this and related matters that I won’t have the time to get into.

In the actually existing smart city, algorithmic systems are increasingly used for the purposes of automated decision-making, including as part of public infrastructure. Algorithmic systems raise a range of ethical concerns, many of which stem from their opacity. As a result, prescriptions for improving the accountability, trustworthiness and legitimacy of algorithmic systems are often based on a transparency ideal. The thinking goes that if the functioning and ownership of an algorithmic system are made perceivable, people will understand the system and in turn be able to supervise it. However, there are limits to this approach. Algorithmic systems are complex and ever-changing socio-technical assemblages. Rendering them visible is not a straightforward design and engineering task. Furthermore, such transparency does not necessarily lead to understanding or, crucially, the ability to act on this understanding. We believe legitimate smart public infrastructure needs to include the possibility for subjects to articulate objections to procedures and outcomes. The resulting “contestable infrastructure” would create spaces that open up the possibility for expressing conflicting views on the smart city. Our project is to explore the design implications of this line of reasoning for the physical assets that citizens encounter in the city. After all, these are the perceivable elements of the larger infrastructural systems that recede from view.

  • Alkhatib, A., & Bernstein, M. (2019). Street-Level Algorithms. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–13. https://doi.org/10.1145/3290605.3300760
  • Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media and Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645
  • Centivany, A., & Glushko, B. (2016). “Popcorn tastes good”: Participatory policymaking and Reddit’s “AMAgeddon.” Conference on Human Factors in Computing Systems — Proceedings, 1126–1137. https://doi.org/10.1145/2858036.2858516
  • Crawford, K. (2016). Can an Algorithm be Agonistic? Ten Scenes from Life in Calculated Publics. Science, Technology & Human Values, 41(1), 77–92. https://doi.org/10.1177/0162243915589635
  • DiSalvo, C. (2010). Design, Democracy and Agonistic Pluralism. Proceedings of the Design Research Society Conference, 366–371.
  • Hildebrandt, M. (2017). Privacy as Protection of the Incomputable Self: Agonistic Machine Learning. SSRN Electronic Journal, 1–33. https://doi.org/10.2139/ssrn.3081776
  • Jackson, S. J., Gillespie, T., & Payette, S. (2014). The Policy Knot: Re-integrating Policy, Practice and Design. CSCW Studies of Social Computing, 588–602. https://doi.org/10.1145/2531602.2531674
  • Jewell, M. (2018). Contesting the decision: living in (and living with) the smart city. International Review of Law, Computers and Technology. https://doi.org/10.1080/13600869.2018.1457000
  • Lindblom, L. (2019). Consent, Contestability, and Unions. Business Ethics Quarterly. https://doi.org/10.1017/beq.2018.25
  • Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2). https://doi.org/10.1177/2053951716679679
  • Van de Poel, I. (2016). An ethical framework for evaluating experimental technology. Science and Engineering Ethics, 22(3), 667–686. https://doi.org/10.1007/s11948-015-9724-3

“Contestable Infrastructures: Designing for Dissent in Smart Public Objects” at We Make the City 2019

Thijs Turèl of AMS Institute and I presented a version of the talk below at the Cities for Digital Rights conference on June 19 in Amsterdam during the We Make the City festival. The talk is an attempt to articulate some of the ideas we have both been developing for some time around contestability in smart public infrastructure. As always with this sort of thing, it is intended as a conversation piece, so I welcome any thoughts you may have.


The basic message of the talk is that when we start to do automated decision-making in public infrastructure using algorithmic systems, we need to design for the disagreements that will inevitably arise. Furthermore, we suggest there is an opportunity to focus on designing for such disagreements in the physical objects that people encounter in urban space as they make use of infrastructure.

We set the scene by showing a number of examples of smart public infrastructure: a cyclist crossing that adapts to weather conditions, so that when it rains cyclists get a green light more frequently; a pedestrian crossing in Tilburg where the elderly can use their mobile phones to get more time to cross; and finally, the case we are involved with ourselves: smart EV charging in the city of Amsterdam, about which more later.

Image credits: Vattenfall, Fietsfan010, De Nieuwe Draai

We identify three trends in smart public infrastructure: (1) where previously algorithms were used to inform policy, now they are employed to perform automated decision-making on an individual case basis. This raises the stakes; (2) distributed ownership of these systems as the result of public-private partnerships and other complex collaboration schemes leads to unclear responsibility; and finally (3) the increasing use of machine learning leads to opaque decision-making.

These trends, and algorithmic systems more generally, raise a number of ethical concerns. They include but are not limited to: the use of inductive correlations (for example in the case of machine learning) leads to unjustified results; lack of access to and comprehension of a system’s inner workings produces opacity, which in turn leads to a lack of trust in the systems themselves and the organisations that use them; bias is introduced by a number of factors, including development team prejudices, technical flaws, bad data and unforeseen interactions with other systems; and finally the use of profiling, nudging and personalisation leads to diminished human agency. (We highly recommend the article by Mittelstadt et al. for a comprehensive overview of ethical concerns raised by algorithms.)

So for us, the question that emerges from all this is: How do we organise the supervision of smart public infrastructure in a democratic and lawful way?

There are a number of existing approaches to this question. These include legal and regulatory ones (e.g. the right to explanation in the GDPR); auditing (e.g. KPMG’s “AI in Control” method, BZK’s transparantielab); procurement (e.g. open source clauses); insourcing (e.g. GOV.UK); and design and engineering (e.g. our own work on the transparent charging station).

We feel there are two important limitations with these existing approaches. The first is a focus on professionals and the second is a focus on prediction. We’ll discuss each in turn.

Image credits: Cities Today

First of all, many solutions target a professional class, be they accountants, civil servants, supervisory boards, technologists, designers and so on. But we feel there is a role for the citizen as well, because the supervision of these systems is simply too important to be left to a privileged few. This role would include identifying wrongdoing and suggesting alternatives.

There is a tension here: from the perspective of the public sector, one should only ask citizens for their opinion when one has the intention and the resources to actually act on their suggestions. It can also be a challenge to identify legitimate concerns in the flood of feedback that can sometimes occur. From our point of view, though, such concerns should not be used as an excuse to not engage the public. If citizen participation is considered necessary, the focus should be on freeing up resources and setting up structures that make it feasible and effective.

The second limitation is prediction. This is best illustrated with the Collingridge dilemma: in the early phases of a new technology, when the technology and its social embedding are still malleable, there is uncertainty about its social effects. In later phases, social effects may be clear, but by then the technology has often become so entrenched in society that negative social effects are hard to overcome. (This summary is taken from an excellent Van de Poel article on the ethics of experimental technology.)

Many solutions disregard the Collingridge dilemma and try to predict and prevent adverse effects of new systems at design-time. One example of this approach would be value-sensitive design. Our focus is instead on use-time. Given that smart public infrastructure tends to be developed on an ongoing basis, the question becomes how to make citizens a partner in this process. And even more specifically, we are interested in how this can be made part of the design of the “touchpoints” people actually encounter in the streets, as well as their backstage processes.

Why do we focus on these physical objects? Because this is where people actually meet the infrastructural systems, of which large parts recede from view. These are the places where they become aware of their presence. They are the proverbial tip of the iceberg.

Image credits: Sagar Dani

The use of automated decision-making in infrastructure reduces people’s agency. For this reason, resources for agency need to be designed back into these systems. Frequently the answer to this challenge is premised on a transparency ideal. Transparency may be a prerequisite for agency, but it is not sufficient: it may help you become aware of what is going on, but it will not necessarily help you act on that knowledge. This is why we propose a shift from transparency to contestability. (We can highly recommend Ananny and Crawford’s article for more on why transparency is insufficient.)

To clarify what we mean by contestability, consider the following three examples. When you see the lights on your router blink in the middle of the night, when no one in your household is using the internet, you can act on this knowledge by yanking out the device’s power cord. You may never use the emergency brake in a train, but its presence does give you a sense of control. And finally, the cash register receipt provides you with a view into both the procedure and the outcome of the supermarket checkout, and offers a resource with which you can dispute them if something appears to be wrong.

Image credits: Aangiftedoen, source unknown for remainder

None of these examples is a perfect illustration of contestability, but they hint at something more than transparency, or perhaps even something wholly separate from it. We’ve been investigating what their equivalents would be in the context of smart public infrastructure.

To illustrate this point further, let us come back to the smart EV charging project we mentioned earlier. In Amsterdam, public EV charging stations are becoming “smart”, which in this case means they automatically adapt the speed of charging to a number of factors, including grid capacity and the availability of solar energy. Additional factors can be added in the future, one of which under consideration is giving priority to shared cars over privately owned ones. We are involved in an ongoing effort to consider how such charging stations can be redesigned so that people understand what’s going on behind the scenes and can act on this understanding. The motivation for this is that if not designed carefully, the opacity of smart EV charging infrastructure may be detrimental to social acceptance of the technology. (A first outcome of these efforts is the Transparent Charging Station designed by The Incredible Machine. A follow-up project is ongoing.)
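To make the adaptive policy concrete, here is a toy sketch of what such a charging-speed rule could look like. To be clear, this is my own illustration and not the actual policy used in Amsterdam; the factors, weights and numbers are all invented.

```python
def charging_rate_kw(grid_headroom_kw, solar_share, is_shared_car, max_rate_kw=11.0):
    """Toy charging policy: scale speed with grid headroom and solar
    availability, with a hypothetical priority boost for shared cars.
    All factors and weights are invented for illustration."""
    rate = min(max_rate_kw, grid_headroom_kw)  # never draw more than the grid allows
    rate *= 0.5 + 0.5 * solar_share            # slow down when solar energy is scarce
    if is_shared_car:
        rate *= 1.2                            # hypothetical shared-car priority
    return min(rate, max_rate_kw)

# A contestable design would also record *why* a session was slowed down,
# so that a driver has a resource with which to dispute the outcome.
print(charging_rate_kw(grid_headroom_kw=8.0, solar_share=0.3, is_shared_car=False))
```

Even a rule this simple shows what is at stake: every line encodes a policy choice that someone might reasonably object to.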

Image credits: The Incredible Machine, Kars Alfrink

We have identified a number of different ways in which people may object to smart EV charging. They are listed in the table below. These types of objections can lead us to feature requirements for making the system contestable.

Because the list is preliminary, we asked the audience if they could imagine additional objections, if those examples represented new categories, and if they would require additional features for people to be able to act on them. One particularly interesting suggestion that emerged was to give local communities control over the policies enacted by the charge points in their vicinity. That’s something whose implications deserve further consideration.

And that’s where we left it. So to summarise:

  1. Algorithmic systems are becoming part of public infrastructure.
  2. Smart public infrastructure raises new ethical concerns.
  3. Many solutions to ethical concerns are premised on a transparency ideal, but do not address the issue of diminished agency.
  4. There are different categories of objections people may have to an algorithmic system’s workings.
  5. Making a system contestable means creating resources for people to object, opening up a space for the exploration of meaningful alternatives to its current implementation.

Research Through Design Reading List

After posting the list of engineering ethics readings it occurred to me I also have a really nice collection of things to read from a course on research through design taught by Pieter Jan Stappers, which I took earlier this year. I figured some might get some use out of it, and I like having it here for my own reference as well.

The backbone for this course is the chapter on research through design by Stappers and Giaccardi in The Encyclopedia of Human-Computer Interaction, which I highly recommend.

All of the readings below are referenced in that chapter. I’ve read some, quickly gutted others for meaning, and the remainder is still on my to-read list. For me personally, the things on annotated portfolios and intermediate-level knowledge by Gaver and Löwgren were the most immediately useful and applicable. I’d read the Zimmerman paper earlier, and although it’s pretty concrete in its prescriptions I did not really latch on to it.

  1. Brandt, Eva, and Thomas Binder. “Experimental design research: genealogy, intervention, argument.” International Association of Societies of Design Research, Hong Kong 10 (2007).
  2. Gaver, Bill, and John Bowers. “Annotated portfolios.” Interactions 19.4 (2012): 40–49.
  3. Gaver, William. “What should we expect from research through design?” Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2012.
  4. Löwgren, Jonas. “Annotated portfolios and other forms of intermediate-level knowledge.” Interactions 20.1 (2013): 30–34.
  5. Stappers, Pieter Jan, F. Sleeswijk Visser, and A. I. Keller. “The role of prototypes and frameworks for structuring explorations by research through design.” The Routledge Companion to Design Research (2014): 163–174.
  6. Stappers, Pieter Jan. “Meta-levels in Design Research.”
  7. Stappers, Pieter Jan. “Prototypes as central vein for knowledge development.” Prototype: Design and Craft in the 21st Century (2013): 85–97.
  8. Wensveen, Stephan, and Ben Matthews. “Prototypes and prototyping in design research.” The Routledge Companion to Design Research. Taylor & Francis (2015).
  9. Zimmerman, John, Jodi Forlizzi, and Shelley Evenson. “Research through design as a method for interaction design research in HCI.” Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2007.

Bonus level: several items related to “muddling through”…

  1. Flach, John M., and Fred Voorhorst. “What matters?: Putting common sense to work.” (2016).
  2. Lindblom, Charles E. “Still Muddling, Not Yet Through.” Public Administration Review 39.6 (1979): 517–526.
  3. Lindblom, Charles E. “The science of muddling through.” Public Administration Review 19.2 (1959): 79–88.

Engineering Ethics Reading List

I recently followed an excellent three-day course on engineering ethics. It was offered by the TU Delft graduate school and taught by Behnam Taebi, with guest lectures from several of our faculty.

I found it particularly helpful to get some suggestions for further reading that represent some of the foundational ideas in the field. I figured it would be useful to others as well to have a pointer to them.

So here they are. I’ve quickly gutted these for their meaning. The one by Van de Poel I did read entirely, and I can highly recommend it for anyone who’s doing design of emerging technologies and wants to escape from the informed consent conundrum.

I intend to dig into the Doorn one, not just because she’s one of my promoters but also because resilience is a concept that is closely related to my own interests. I’ll also get into the Floridi one in detail; the concept of information quality and the care ethics perspective on the problem of information abundance and attention scarcity struck me as immediately applicable to interaction design.

  1. Stilgoe, Jack, Richard Owen, and Phil Macnaghten. “Developing a framework for responsible innovation.” Research Policy 42.9 (2013): 1568–1580.
  2. Van den Hoven, Jeroen. “Value sensitive design and responsible innovation.” Responsible Innovation (2013): 75–83.
  3. Hansson, Sven Ove. “Ethical criteria of risk acceptance.” Erkenntnis 59.3 (2003): 291–309.
  4. Van de Poel, Ibo. “An ethical framework for evaluating experimental technology.” Science and Engineering Ethics 22.3 (2016): 667–686.
  5. Hansson, Sven Ove. “Philosophical problems in cost–benefit analysis.” Economics & Philosophy 23.2 (2007): 163–183.
  6. Floridi, Luciano. “Big Data and information quality.” The Philosophy of Information Quality. Springer, Cham, 2014. 303–315.
  7. Doorn, Neelke, Paolo Gardoni, and Colleen Murphy. “A multidisciplinary definition and evaluation of resilience: The role of social justice in defining resilience.” Sustainable and Resilient Infrastructure (2018): 1–12.

We also got a draft of the intro chapter to a book on engineering and ethics that Behnam is writing. That looks very promising as well, but I can’t share it yet for obvious reasons.

‘Unboxing’ at Behavior Design Amsterdam #16

Below is a write-up of the talk I gave at the Behavior Design Amsterdam #16 meetup on Thursday, February 15, 2018.

‘Pandora’ by John William Waterhouse (1896)

I’d like to talk about the future of our design practice and what I think we should focus our attention on. It is all related to this idea of complexity and opening up black boxes. We’re going to take the scenic route, though. So bear with me.

Software Design

Two years ago I spent about half a year in Singapore.

While there, I worked as product strategist and designer at a startup called ARTO, an art recommendation service. It shows you a random sample of artworks, you tell it which ones you like, and it then starts recommending pieces it thinks you’ll like. In case you were wondering: yes, swiping left and right was involved.

We had this interesting problem of ingesting art from many different sources (mostly online galleries) with metadata of wildly varying levels of quality. So using metadata to figure out which art to show was a bit of a non-starter. It should come as no surprise, then, that we started looking into machine learning—image processing in particular.

And so I found myself working with my engineering colleagues on an art recommendation stream that was driven at least in part by machine learning. And I quickly realised we had a problem. In terms of how we worked together on this part of the product, it felt like we had taken a bunch of steps back in time. Back to a way of collaborating that was less integrated and less responsive.

That’s because we have all these nice tools and techniques for designing traditional software products. But software is deterministic. Machine learning is fundamentally different in nature: it is probabilistic.
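To make that distinction tangible, consider a small, hypothetical example (sketched with scikit-learn; this is not ARTO’s actual system): a hand-written rule maps the same input to the same output every time, while a trained classifier outputs a likelihood that depends entirely on the examples it was trained on.

```python
from sklearn.linear_model import LogisticRegression

# Deterministic software: a hand-written rule. Same input in, same answer out.
def rule_likes(brightness):
    return brightness > 0.5  # hand-tuned threshold

# Probabilistic machine learning: the answer is a likelihood, learned from
# examples, and it shifts whenever the training data shifts.
X = [[0.9], [0.8], [0.7], [0.2], [0.1], [0.3]]  # brightness of artworks seen
y = [1, 1, 1, 0, 0, 0]                          # 1 = liked, 0 = disliked
model = LogisticRegression().fit(X, y)

print(rule_likes(0.6))                  # always True
print(model.predict_proba([[0.6]])[0])  # [P(dislike), P(like)]
```

Designing for the first kind of system is well-trodden ground; designing for the second means reasoning about distributions of behaviour rather than fixed outcomes.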

It was hard for me to take the lead in the design of this part of the product for two reasons. First of all, it was challenging to get a first-hand feel of the machine learning feature before it was implemented.

And second of all, it was hard for me to communicate or visualise the intended behaviour of the machine learning feature to the rest of the team.

So when I came back to the Netherlands I decided to dig into this problem of design for machine learning. Turns out I opened up quite the can of worms for myself. But that’s okay.

There are two reasons I care about this:

The first is that I think we need more design-led innovation in the machine learning space. At the moment it is engineering-dominated, which doesn’t necessarily lead to useful outcomes. But if you want to take the lead in the design of machine learning applications, you need a firm handle on the nature of the technology.

The second reason why I think we need to educate ourselves as designers on the nature of machine learning is that we need to take responsibility for the impact the technology has on the lives of people. There is a lot of talk about ethics in the design industry at the moment, which I consider a positive sign. But I also see a reluctance to really grapple with what ethics is and what the relationship between technology and society is. We seem to want easy answers, which is understandable because we are all very busy people. But having spent some time digging into this stuff myself, I am here to tell you: there are no easy answers. That isn’t a bug, it’s a feature. And we should embrace it.

Machine Learning

At the end of 2016 I attended ThingsCon here in Amsterdam and was introduced by Ianus Keller to TU Delft PhD researcher Péter Kun. It turned out we were both interested in machine learning. So with encouragement from Ianus we decided to put together a workshop that would enable industrial design master students to tangle with it in a hands-on manner.

About a year later, this has grown into a thing we call Prototyping the Useless Butler. During the workshop, you use machine learning algorithms to train a model that takes inputs from a network-connected Arduino’s sensors and drives that same Arduino’s actuators. In effect, you can create interactive behaviour without writing a single line of code. And you get a first-hand feel for how common applications of machine learning work: things like regression, classification and dynamic time warping.

The thing that makes this workshop tick is an open source software application called Wekinator, created by Rebecca Fiebrink. It was originally aimed at performing artists so that they could build interactive instruments without writing code. But it takes inputs from anything and sends outputs to anything. So we appropriated it towards our own ends.
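To give a sense of the plumbing: Wekinator speaks OSC over UDP. By default it listens for inputs as /wek/inputs messages on port 6448 and sends model outputs as /wek/outputs messages on port 12000. A minimal round trip in Python, using the python-osc package and a simulated sensor in place of the Arduino, might look roughly like this:

```python
import random
import threading
import time

from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

# Wekinator defaults: it listens for inputs on 6448 and sends outputs on 12000.
wekinator = SimpleUDPClient("127.0.0.1", 6448)

def on_outputs(address, *values):
    # In the workshop, this is where you would drive an actuator.
    print(address, values)

dispatcher = Dispatcher()
dispatcher.map("/wek/outputs", on_outputs)
server = BlockingOSCUDPServer(("127.0.0.1", 12000), dispatcher)
threading.Thread(target=server.serve_forever, daemon=True).start()

while True:  # simulated single-sensor reading, ten times per second
    wekinator.send_message("/wek/inputs", [random.random()])
    time.sleep(0.1)
```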

You can find everything related to Useless Butler on this GitHub repo.

The thinking behind this workshop is that for us designers to be able to think creatively about applications of machine learning, we need a granular understanding of the nature of the technology. The thing with designers is, we can’t really learn about such things from books. A lot of design knowledge is tacit; it emerges from our physical engagement with the world. This is why things like sketching and prototyping are such essential parts of our way of working. And so with Useless Butler we aim to create an environment in which you as a designer can gain tacit knowledge about the workings of machine learning.

Simply put, for a lot of us, machine learning is a black box. With Useless Butler, we open the black box a bit and let you peer inside. This should improve the odds of design-led innovation happening in the machine learning space. And it should also help with ethics. But it’s definitely not enough. Knowledge about the technology isn’t the only issue here. There are more black boxes to open.

Values

Which brings me back to that other black box: ethics. As I already mentioned, there is a lot of talk in the tech industry about how we should “be more ethical”. But things are often reduced to this notion that designers should do no harm. As if ethics is a problem to be fixed instead of a thing to be practiced.

So I started to talk about this to people I know in academia, and more than once this thing called Value Sensitive Design was mentioned. It should be no surprise to anyone that scholars have been chewing on this stuff for quite a while. One of the earliest references I came across, an essay by Batya Friedman in Interactions, is from 1996! This is a lesson to all of us, I think. Pay more attention to what the academics are talking about.

So, at the end of last year I dove into this topic. Our host Iskander Smit, Rob Maijers and I coordinate a grassroots community for tech workers called Tech Solidarity NL. We want to build technology that serves the needs of the many, not the few. Value Sensitive Design seemed like a good thing to dig into, and so we did.

I’m not going to dive into the details here. There’s a report on the Tech Solidarity NL website if you’re interested. But I will highlight a few things that value sensitive design asks us to consider, which I think help us unpack what it means to practice ethical design.

First of all, values. Here’s how a value is commonly defined in the literature:

“A value refers to what a person or group of people consider important in life.”

I like it because it’s common sense, right? But it also makes clear that there can never be one monolithic definition of what ‘good’ is in all cases. As we designers like to say: “it depends”, and when it comes to values things are no different.

“Person or group” implies there can be various stakeholders. Value sensitive design distinguishes between direct and indirect stakeholders: the former have direct contact with the technology, the latter don’t but are affected by it nonetheless. Value sensitive design means taking both into account. So this blows up the conventional notion of a single user to design for.

Various stakeholder groups can have competing values, and so to design for them means to arrive at some sort of trade-off between values. This is a crucial point. There is no such thing as a perfect or objectively best solution to ethical conundrums. Not in the design of technology and not anywhere else.

Value sensitive design encourages you to map stakeholders and their values. These will be different for every design project. Another approach is to use lists like the one pictured here as an analytical tool to think about how a design impacts various values.

Furthermore, during your design process you might not only think about the short-term impact of a technology, but also about how it will affect things in the long run.

And similarly, you might think about the effects of a technology not only when a few people are using it, but also when it becomes wildly successful and everybody uses it.

There are tools out there that can help you think through these things. But so far much of the work in this area is happening on the academic side. I think there is an opportunity for us to create tools and case studies that will help us educate ourselves on this stuff.

There’s a lot more to say on this, but I’m going to stop here. The point is, as with the nature of the technologies we work with, it helps to dig deeper into the nature of the relationship between technology and society. Yes, it complicates things. But that is exactly the point.

Privileging simple and scalable solutions over those adapted to local needs is socially, economically and ecologically unsustainable. So I hope you will join me in embracing complexity.

Starting a PhD

Today is the first official work day of my new doctoral researcher position at Delft University of Technology. After more than two years of laying the groundwork, I’m starting out on a new challenge.

I remember sitting outside a Jewel coffee bar in Singapore [1] and going over the various options for whatever would be next after shutting down Hubbub. I knew I wanted to delve into the impact of machine learning and data science on interaction design. And largely through a process of elimination I felt the best place for me to do so would be inside of academia.

Back in the Netherlands, with help from Ianus Keller, I started making inroads at TU Delft, my first choice for this kind of work. I had visited it on and off over the years, coaching students and doing guest lectures. I’d felt at home right away.

There were quite a few twists and turns along the way, but now here we are. Starting this month, I am a doctoral candidate at Delft University of Technology’s faculty of Industrial Design Engineering.

My research is provisionally titled ‘Intelligibility and Transparency of Smart Public Infrastructures: A Design Oriented Approach’. Its main object of study is the MX3D smart bridge. My supervisors are Gerd Kortuem and Neelke Doorn. And it’s all part of the NWO-funded project ‘BRIdging Data in the built Environment (BRIDE)’.

Below is a first rough abstract of the research. But in the months to come this is likely to change substantially as I start hammering out a proper research plan. I plan to post the occasional update on my work here, so if you’re interested your best bet is probably to do the old RSS thing. There’s social media too, of course. And I might set up a newsletter at some point. We’ll see.

If any of this resonates, do get in touch. I’d love to start a conversation with as many people as possible about this stuff.

Intelligibility and Transparency of Smart Public Infrastructures: A Design Oriented Approach

This PhD will explore how designers, technologists, and citizens can utilize rapid urban manufacturing and IoT technologies for designing urban space that expresses its intelligence from the intersection of people, places, activities and technology, not merely from the presence of cutting-edge technology. The key question is how smart public infrastructure, i.e. data-driven and algorithm-rich public infrastructure, can be understood by laypeople.

The design-oriented research will utilize a ‘research through design’ approach to develop a digital experience around the bridge and the surrounding urban space. During this extended design and making process the PhD student will conduct empirical research to investigate design choices and their implications for (1) new forms of participatory data-informed design processes, (2) the technology-mediated experience of urban space, (3) the emerging relationship between residents and “their” bridge, and (4) new forms of data-informed, citizen-led governance of public space.

  1. My Foursquare history and 750 Words archive tell me this was on Saturday, January 16, 2016.

‘Playful Design for Workplace Change Management’ at PLAYTrack conference 2017 in Aarhus

Laser defender collab at FUSE

At the end of last year I was invited to speak at the PLAYTrack conference in Aarhus about the workplace change management games made by Hubbub. It turned out to be a great opportunity to reconnect with the play research community.

I was very much impressed by the program assembled by the organisers. People came from a wide range of disciplines and, crucially, there was ample time to discuss and reflect on the materials presented. As I tweeted afterwards, this is a thing most conference organisers get wrong.

I was particularly inspired by the work of Benjamin Mardell and Mara Krechevsky at Harvard’s Project Zero. Making Learning Visible looks like a great resource for anyone who teaches. Then there was Reed Stevens from Northwestern University, whose project FUSE is one of the most solid examples of playful learning for STEAM I’ve seen thus far. I was also fascinated by Ciara Laverty’s work at PEDAL on observing parent-child play. Miguel Sicart delivered another great provocation on the dark side of playful design. And finally I was delighted to hear about, and experience for myself, some of Amos Blanton’s work at the LEGO Foundation. I should also call out Ben Fincham’s many provocative contributions from the audience.

The abstract for my talk is below, which covers most of what I talked about. I tried to give people a good sense of:

  • what the games consisted of,
  • what we were aiming to achieve,
  • how both the fiction and the player activities supported these goals,
  • how we made learning outcomes visible to our players and clients,
  • and finally how we went about designing and developing these games.

Both projects have solid write-ups over at the Hubbub website, so I’ll just point to those here: Code 4 and Ripple Effect.

In the final section of the talk I spent a bit of time reflecting on how I would approach projects like this today. After all, it has been seven years since we made Code 4, and four years since Ripple Effect. That’s ages ago, and my perspective has definitely changed since we made these.

Participatory design

First of all, I would get even more serious about co-designing with players at every step. I would recruit representatives of players and invest them with real influence. In the projects we did, the primary vehicle for player influence was playtesting. But this is necessarily limited. I also won’t pretend this is at all easy to do in a commercial context.

But these games are ultimately about improving worker productivity. So how do we make it so that workers share in the real-world profits yielded by a successful culture change?

I know of the existence of participatory design, but from my experience it is not a common approach in the industry. Why?

Value sensitive design

On a related note, I would get more serious about which values are supported by the system, in whose interest they are, and where they come from. Early field research and workshops with the audience do surface some values, but values from customer representatives tend to dominate. Again, the commercial context we work in is a potential challenge.

I know of value sensitive design, but as with participatory design, it has yet to catch on in a big way in the industry. So again, why is that?

Disintermediation

One thing I continue to be interested in is reducing the complexity of a game system’s physical affordances (which include its code), and pushing even more of the substance of the game into the social allowances that make up its non-material aspects. This allows for spontaneous renegotiation of the game by the players. This is disintermediation as a strategy. David Kanaga’s take on games as toys remains hugely inspirational in this regard, as does Bernard De Koven’s book The Well-Played Game.

Gamefulness versus playfulness

Code 4 had more focus on satisfying the need for autonomy. Ripple Effect had more focus on competence, or in any case, it had less emphasis on autonomy. There was less room for ‘play’ around the core digital game. It seems to me that mastering a subjective simulation of a subject is not necessarily what a workplace game for culture change should be aiming for. So, less gameful design, more playful design.

Adaptation

Finally, the agency model does not enable us to stick around for the long haul. But workplace games might be better suited to a setup where things aren’t thought of as a one-off project but as an ongoing process.

In How Buildings Learn, Stewart Brand talks about how architects should revisit buildings they’ve designed after they are built, to learn how people are actually using them. He also talks about how good buildings are buildings that their inhabitants can adapt to their needs. What does that look like in the context of a game for workplace culture change?


Playful Design for Workplace Change Management

Code 4 (2011, commissioned by the Tax Administration of the Netherlands) and Ripple Effect (2013, commissioned by Royal Dutch Shell) are both games for workplace change management designed and developed by Hubbub, a boutique playful design agency which operated from Utrecht, The Netherlands and Berlin, Germany between 2009 and 2015. These games are examples of how a goal-oriented serious game can be used to encourage playful appropriation of workplace infrastructure and social norms, resulting in an open-ended and creative exploration of new and innovative ways of working.

Serious game projects are usually commissioned to solve problems. Solving the problem of cultural change in a straightforward manner means viewing games as a way to persuade workers of a desired future state. They typically take videogame form, simulating the desired new way of working as determined by management. To play the game well, players need to master its system, and by extension—it is assumed—learning happens.

These games can be enjoyable experiences and an improvement on previous forms of workplace learning, but in our view they decrease the possibility space of potential workplace cultural change. They diminish worker agency, and they waste the creative and innovative potential of involving workers in the invention of an improved workplace culture.

We instead choose to view workplace games as an opportunity to increase the space of possibility. We resist the temptation to bake the desired new way of working into the game’s physical and digital affordances. Instead, we leave how to play well up to the players. Since these games are team-based and collaborative, players need to negotiate their way of working around the game among themselves. In addition, because the games are distributed in time—running over a number of weeks—and are playable at player discretion during the workday, players are given license to appropriate workplace infrastructure and subvert social norms towards in-game ends.

We tried to make learning tangible in various ways. Because the games are at their core web applications to which players log on with individual accounts, we were able to collect data on player behaviour. To guarantee privacy, employers did not have direct access to game databases and only received anonymised reports. We took responsibility for player learning by facilitating coaching sessions in which players could safely reflect on their game experiences. Rounding out these efforts, we conducted surveys to gain insight into the player experience from a more qualitative and subjective perspective.
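As an aside, the anonymisation step is simple to sketch. The toy example below (with invented field names, not Hubbub’s actual pipeline) shows the idea: raw events carry player identifiers, but the report handed to the employer only ever contains aggregates per team.

```python
from statistics import mean

# Toy event log; the field names are invented for illustration.
events = [
    {"player_id": "a13", "team": "red", "missions_done": 4},
    {"player_id": "b07", "team": "red", "missions_done": 2},
    {"player_id": "c22", "team": "blue", "missions_done": 5},
]

def anonymised_report(events):
    """Aggregate per team; no player identifiers leave the game database."""
    report = {}
    for team in {e["team"] for e in events}:
        team_events = [e for e in events if e["team"] == team]
        report[team] = {
            "players": len(team_events),
            "avg_missions": mean(e["missions_done"] for e in team_events),
        }
    return report

print(anonymised_report(events))  # per-team aggregates only
```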

These games offer a model for a reasonably democratic and ethical way of doing game-based workplace change management. However, we would like to see efforts that further democratise their design and development—involving workers at every step. We also worry about how games can be used to create the illusion of worker influence while at the same time software is deployed throughout the workplace to limit their agency.

Our examples may be inspiring, but because of these developments we feel we can’t continue this type of work without seriously reconsidering our current processes, technology stacks and business practices—and ultimately whether we should be making games at all.

Design and machine learning – an annotated reading list

Earlier this year I coached Design for Interaction master students at Delft University of Technology in the course Research Methodology. The students organised three seminars for which I provided the claims and assigned reading. In the seminars they argued about my claims using the Toulmin Model of Argumentation. The readings served as sources for backing and evidence.

The claims and readings were all related to my nascent research project about machine learning. We delved into both designing for machine learning, and using machine learning as a design tool.

Below are the readings I assigned, with some notes on each, which should help you decide if you want to dive into them yourself.

Hebron, Patrick. 2016. Machine Learning for Designers. Sebastopol: O’Reilly.

The only non-academic piece in this list. This served the purpose of getting all students on the same page with regards to what machine learning is, its applications in interaction design, and common challenges encountered. I still can’t think of any other single resource that is as good a starting point for the subject as this one.

Fiebrink, Rebecca. 2016. “Machine Learning as Meta-Instrument: Human-Machine Partnerships Shaping Expressive Instrumental Creation.” In Musical Instruments in the 21st Century, 14:137–51. Singapore: Springer Singapore. doi:10.1007/978-981-10-2951-6_10.

Fiebrink’s Wekinator is groundbreaking, fun and inspiring, so I had to include some of her writing in this list. This is mostly of interest to those looking into the use of machine learning for design and other creative and artistic endeavours. An important idea explored here is that tools that make use of (interactive, supervised) machine learning can be thought of as instruments. Using such a tool is like playing or performing: exploring a possibility space, engaging in a dialogue with the tool. For a tool to feel like an instrument requires a tight action-feedback loop.

Dove, Graham, Kim Halskov, Jodi Forlizzi, and John Zimmerman. 2017. “UX Design Innovation: Challenges for Working with Machine Learning as a Design Material.” The 2017 CHI Conference. New York: ACM. doi:10.1145/3025453.3025739.

A really good survey of how designers currently deal with machine learning. Key takeaways include that in most cases, the application of machine learning is still engineering-led as opposed to design-led, which hampers the creation of non-obvious machine learning applications. It also makes it hard for designers to consider the ethical implications of design choices. A key reason for this is that, at the moment, prototyping with machine learning is prohibitively cumbersome.

Fiebrink, Rebecca, Perry R. Cook, and Dan Trueman. 2011. “Human Model Evaluation in Interactive Supervised Learning.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 147. New York: ACM Press. doi:10.1145/1978942.1978965.

The second Fiebrink piece in this list, which is more of a deep dive into how people use Wekinator. As with the chapter listed above, this is required reading for those working on design tools which make use of interactive machine learning. An important finding here is that users of intelligent design tools might have very different criteria for evaluating the ‘correctness’ of a trained model than engineers do. Such criteria are likely subjective, and evaluation requires first-hand use of the model in real time.

Bostrom, Nick, and Eliezer Yudkowsky. 2014. “The Ethics of Artificial Intelligence.” In The Cambridge Handbook of Artificial Intelligence, edited by Keith Frankish and William M. Ramsey, 316–34. Cambridge: Cambridge University Press. doi:10.1017/CBO9781139046855.020.

Bostrom is known for his somewhat crazy but thought-provoking book on superintelligence, and although a large part of this chapter is about the ethics of general artificial intelligence (which at the very least is still a long way off), the first section discusses the ethics of current “narrow” artificial intelligence. It makes for a good checklist of things designers should keep in mind when they create new applications of machine learning. Key insight: when a machine learning system takes on work with social dimensions—tasks previously performed by humans—the system inherits its social requirements.

Yang, Qian, John Zimmerman, Aaron Steinfeld, and Anthony Tomasic. 2016. “Planning Adaptive Mobile Experiences When Wireframing.” The 2016 ACM Conference. New York: ACM. doi:10.1145/2901790.2901858.

Finally, a feet-in-the-mud exploration of what it actually means to design for machine learning with the tools most commonly used by designers today: drawings and diagrams of various sorts. In this case the focus is on using machine learning to make an interface adaptive. It includes an interesting discussion of how to balance the use of implicit and explicit user inputs for adaptation, and how to deal with inference errors. Once again the limitations of current sketching and prototyping tools are mentioned, and related to the need for designers to develop tacit knowledge about machine learning. Such tacit knowledge will only be gained when designers can work with machine learning in a hands-on manner.

Supplemental material

Floyd, Christiane. 1984. “A Systematic Look at Prototyping.” In Approaches to Prototyping, 1–18. Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-642-69796-8_1.

I provided this to students so that they get some additional grounding in the various kinds of prototyping that are out there. It helps to prevent reductive notions of prototyping, and it makes for a nice complement to Buxton’s work on sketching.

Blevis, E., Y. Lim, and E. Stolterman. 2006. “Regarding Software as a Material of Design.”

Some of the papers refer to machine learning as a “design material”, and this paper helps to understand what that idea means. Software is a material without qualities (it is extremely malleable, it can simulate nearly anything). Yet it helps to consider it as a physical material in the metaphorical sense, because we can then apply ways of design thinking and doing to software programming.

Status update

This is not exactly a now page, but I thought I would write up what I am doing at the moment, since last reporting on my status in my end-of-year report.

The majority of my workdays are spent doing freelance design consulting. My primary gig has been through Eend at the Dutch Victim Support Foundation, where until very recently I was part of a team building online services. I helped out with product strategy, setting up a lean UX design process, and getting an integrated agile design and development team up and running. The first services are now shipping, so it is time for me to move on, after ten months of very gratifying work. I really enjoy working in the public sector and hope to be doing more of it in future.

So yes, this means I am available, and you can hire me to do strategy and design for software products and services. Just send me an email.

Shortly before the Dutch national elections of this year, Iskander and I gathered a group of fellow tech workers under the banner of “Tech Solidarity NL” to discuss the concerning lurch to the right in national politics and what our field can do about it. This has developed into a small but active community who gather monthly to educate ourselves and develop plans for collective action. I am getting a huge boost out of this. Figuring out how to be a leftist in this day and age is not easy. The only way to do it is to practice, and for that, reflection with peers is invaluable. Building and facilitating a group like this is hugely educational too. I have learned a lot about how a community is bootstrapped and nurtured.

If you are in the Netherlands, your politics are left of center, and you work in technology, consider yourself invited to join.

And finally, the last major thing on my plate is a continuing effort to secure a PhD position for myself. I am getting great support from people at Delft University of Technology, in particular Gerd Kortuem. I am focusing on internet of things products that have features driven by machine learning. My ultimate aim is to develop prototyping tools for design and development teams that will help them create more innovative and more ethical solutions. The first step for this will be to conduct field research inside companies who are creating such products right now. So I am reaching out to people to see if I can secure a reasonable number of potential collaborators for this, which will go a long way in proving the feasibility of my whole plan.

If you know of any companies that develop consumer-facing products that have a connected hardware component and make use of machine learning to drive features, do let me know.

That’s about it. Freelance UX consulting, leftist tech-worker organising and design-for-machine-learning research. Quite happy with that mix, really.

‘Machine Learning for Designers’ workshop

On Wednesday, Péter Kun, Holly Robbins and I taught a one-day workshop on machine learning at Delft University of Technology. We had about thirty master’s students from the industrial design engineering faculty. The aim was to get them acquainted with the technology through hands-on tinkering, with the Wekinator as the central teaching tool.

Photo credits: Holly Robbins

Background

The reasoning behind this workshop is twofold.

On the one hand, I expect designers will find themselves working on projects involving machine learning more and more often. The technology has certain properties that differ from traditional software. Most importantly, machine learning is probabilistic instead of deterministic. It is important that designers understand this, because otherwise they are likely to make bad decisions about its application.

The second reason is that I have a strong sense machine learning can play a role in the augmentation of the design process itself. So-called intelligent design tools could make designers more efficient and effective. They could also enable the creation of designs that would otherwise be impossible or very hard to achieve.

The workshop explored both ideas.

Photo credits: Holly Robbins

Format

The structure was roughly as follows:

In the morning we started out by providing a very broad introduction to the technology. We talked about the very basic premise of (supervised) learning: namely, providing examples of inputs and desired outputs, and training a model based on those examples. To make these concepts tangible, we then introduced the Wekinator and walked the students through getting it up and running using basic examples from the website. The final step was to invite them to explore alternative inputs and outputs (such as game controllers and Arduino boards).
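The premise itself fits in a few lines of code. As a sketch of the same idea outside the Wekinator (the workshop itself involved no coding), here it is with scikit-learn: record example input/output pairs, fit a model, then query it with inputs it has never seen.

```python
from sklearn.neighbors import KNeighborsRegressor

# Training examples: a sensor reading in, a desired actuator setting out.
examples_in = [[0.0], [0.2], [0.5], [0.8], [1.0]]  # e.g. light sensor values
examples_out = [0, 40, 128, 210, 255]              # e.g. LED brightness

model = KNeighborsRegressor(n_neighbors=2).fit(examples_in, examples_out)

# The trained model now produces outputs for inputs it was never shown.
print(model.predict([[0.35]]))
```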

In the afternoon we provided a design brief, asking the students to prototype a data-enabled object with the set of tools they had acquired in the morning. We assisted with technical hurdles where necessary (of which there were more than a few) and closed out the day with demos and a group discussion reflecting on their experiences with the technology.

Photo credits: Holly Robbins

Results

As I tweeted on the way home that evening, the results were… interesting.

Not all groups managed to put something together in the admittedly short amount of time they were provided with. They were most often stymied by getting an Arduino to talk to the Wekinator. Max was often picked as a go-between, because the Wekinator receives OSC messages over UDP, whereas the quickest way to get an Arduino to talk to a computer is over serial. But Max, in my experience, is a fickle beast and would more than once crap out on us.
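In hindsight, the Max hop could plausibly be replaced by a small bridge script. Here is a sketch, assuming pyserial and python-osc, Wekinator’s default input port, and an Arduino that prints one comma-separated line of sensor readings per loop (the serial port name will differ per machine):

```python
import serial  # pyserial
from pythonosc.udp_client import SimpleUDPClient

SERIAL_PORT = "/dev/ttyACM0"  # adjust per machine, e.g. "COM3" on Windows
ser = serial.Serial(SERIAL_PORT, 9600, timeout=1)
wekinator = SimpleUDPClient("127.0.0.1", 6448)  # Wekinator's default input port

while True:
    line = ser.readline().decode("ascii", errors="ignore").strip()
    if not line:
        continue
    try:
        values = [float(v) for v in line.split(",")]
    except ValueError:
        continue  # skip partial or garbled serial lines
    wekinator.send_message("/wek/inputs", values)
```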

The groups that did build something mainly assembled prototypes from the examples on hand. Which is fine, but since we were mainly working with the examples from the Wekinator website, they tended towards the interactive instrument side of things. We were hoping for explorations of IoT product concepts. For that, more hand-rolling was required, and this was only achievable for the students on the higher end of the technical expertise spectrum (and the more tenacious ones).

The discussion yielded some interesting insights into mental models of the technology and how they are affected by hands-on experience. A comment I heard more than once was: why is this considered learning at all? The Wekinator was not perceived to be learning anything. When challenged on this by reiterating the underlying principles, it became clear that the black box nature of the Wekinator hampers appreciation of some of the very real achievements of the technology. It seems (for our students at least) machine learning is stuck in a grey area between too-high expectations and too-low recognition of its capabilities.

Next steps

These results, and others, point towards some obvious improvements that can be made to the workshop format, and to teaching design students about machine learning more broadly.

  1. We can improve the toolset so that some of the heavy lifting involved with getting the various parts to talk to each other is made easier and more reliable.
  2. We can build examples that are geared towards the practice of designing IoT products and are ready for adaptation and hacking.
  3. And finally, and probably most challengingly, we can make the workings of machine learning more transparent so that it becomes easier to develop a feel for its capabilities and shortcomings.

We do intend to improve and teach the workshop again. If you’re interested in hosting one (either in an educational or professional context), let me know. And stay tuned for updates on this and other efforts to get designers to work in a hands-on manner with machine learning.

Special thanks to the brilliant Ianus Keller for connecting me to Péter and for allowing us to pilot this crazy idea at IDE Academy.

References

Sources used during preparation and running of the workshop:

  • The Wekinator – the UI is infuriatingly poor, but when it comes to getting started with machine learning this tool is unmatched.
  • Arduino – I have become particularly fond of the MKR1000 board. Add a lithium-polymer battery and you have everything you need to prototype IoT products.
  • OSC for Arduino – CNMAT’s implementation of the open sound control (OSC) encoding. Key puzzle piece for getting the above two tools talking to each other.
  • Machine Learning for Designers – my preferred introduction to the technology from a designerly perspective.
  • A Visual Introduction to Machine Learning – a very accessible visual explanation of the basic underpinnings of computers applying statistical learning.
  • Remote Control Theremin – an example project I prepared for the workshop, demoing how to have the Wekinator talk to an Arduino MKR1000 with OSC over UDP.