Participatory AI and ML engineering

In the first half of this year, I’ve pre­sent­ed sev­er­al ver­sions of a brief talk on par­tic­i­pa­to­ry AI. I fig­ured I would post an amal­gam of these to the blog for future ref­er­ence. (Pre­vi­ous­ly, on the blog, I post­ed a brief lit review on the same top­ic; this talk builds on that.)

So, to start, the main point of this talk is that many participatory approaches to AI don't engage deeply with the specifics of the technology. One such specific is the translation work engineers do to make a problem "learnable" by a machine (Kang, 2023). From this perspective, the main questions to ask become: How does translation happen in our specific projects? Should citizens be involved in this translation work? And if so, how might we achieve this?

Before we dig into the state of par­tic­i­pa­to­ry AI, let’s begin by clar­i­fy­ing why we might want to enable par­tic­i­pa­tion in the first place. A com­mon moti­va­tion is a lack of demo­c­ra­t­ic con­trol over AI sys­tems. (This is par­tic­u­lar­ly con­cern­ing when AI sys­tems are used for gov­ern­ment pol­i­cy exe­cu­tion. These are the sys­tems I most­ly look at in my own research.) And so the response is to bring the peo­ple into the devel­op­ment process, and to let them co-decide matters.

In these cas­es, par­tic­i­pa­tion can be under­stood as an enabler of demo­c­ra­t­ic agency, i.e., a way for sub­jects to legit­i­mate the use of AI sys­tems (cf. Peter, 2020 in Rubel et al., 2021). Peter dis­tin­guish­es two path­ways: a nor­ma­tive one and a demo­c­ra­t­ic one. Par­tic­i­pa­tion can be seen as an exam­ple of the demo­c­ra­t­ic path­way to legit­i­ma­tion. A cru­cial detail Peter men­tions here, which is often over­looked in par­tic­i­pa­to­ry AI lit­er­a­ture, is that nor­ma­tive con­straints must lim­it the demo­c­ra­t­ic path­way to avoid arbitrariness.

So, what is the state of par­tic­i­pa­to­ry AI research and prac­tice? I will look at each in turn next.

As men­tioned, I pre­vi­ous­ly post­ed on the state of par­tic­i­pa­to­ry AI research, so I won’t repeat that in full here. (For the record, I reviewed Birhane et al. (2022), Brat­teteig & Verne (2018), Del­ga­do et al. (2023), Ehsan & Riedl (2020), Fef­fer et al. (2023), Gerdes (2022), Groves et al. (2023), Robert­son et al. (2023), Sloane et al. (2020), and Zytko et al. (2022).) Ele­ments that jump out include: 

  • Super­fi­cial and unrep­re­sen­ta­tive involvement.
  • Piece­meal approach­es that have min­i­mal impact on decision-making.
  • Par­tic­i­pants with a con­sul­ta­tive role rather than that of active decision-makers.
  • A lack of bridge-builders between stake­hold­er perspectives.
  • Par­tic­i­pa­tion wash­ing and exploita­tive com­mu­ni­ty involvement.
  • Strug­gles with the dynam­ic nature of tech­nol­o­gy over time.
  • Dis­crep­an­cies between the time scales for users to eval­u­ate design ideas ver­sus the pace at which sys­tems are developed.
  • A demand for par­tic­i­pa­tion to enhance com­mu­ni­ty knowl­edge and to actu­al­ly empow­er them.

Taking a step back, if I were to evaluate the state of the scientific literature on participatory AI, it strikes me that many of these issues are not new to AI. They have been present in participatory design more broadly for some time, and many are not specific to AI at all. The ones I would call out as more particular to AI include the issues related to AI system dynamism, the time scales of participation versus development, and the knowledge gaps between the various actors in participatory processes (and, relatedly, the lack of bridge-builders).

So, what about prac­tice? Let’s look at two reports that I feel are a good rep­re­sen­ta­tion of the broad­er field: Frame­work for Mean­ing­ful Stake­hold­er Involve­ment by ECNL & Soci­etyIn­side, and Democ­ra­tiz­ing AI: Prin­ci­ples for Mean­ing­ful Pub­lic Par­tic­i­pa­tion by Data & Society.

Frame­work for Mean­ing­ful Stake­hold­er Involve­ment is aimed at busi­ness­es, orga­ni­za­tions, and insti­tu­tions that use AI. It focus­es on human rights, eth­i­cal assess­ment, and com­pli­ance. It aims to be a tool for plan­ning, deliv­er­ing, and eval­u­at­ing stake­hold­er engage­ment effec­tive­ly, empha­siz­ing three core ele­ments: Shared Pur­pose, Trust­wor­thy Process, and Vis­i­ble Impact.

Democ­ra­tiz­ing AI frames pub­lic par­tic­i­pa­tion in AI devel­op­ment as a way to add legit­i­ma­cy and account­abil­i­ty and to help pre­vent harm­ful impacts. It out­lines risks asso­ci­at­ed with AI, includ­ing biased out­comes, opaque deci­sion-mak­ing process­es, and design­ers lack­ing real-world impact aware­ness. Caus­es for inef­fec­tive par­tic­i­pa­tion include uni­di­rec­tion­al com­mu­ni­ca­tion, socioe­co­nom­ic bar­ri­ers, super­fi­cial engage­ment, and inef­fec­tive third-par­ty involve­ment. The report uses envi­ron­men­tal law as a ref­er­ence point and offers eight guide­lines for mean­ing­ful pub­lic par­tic­i­pa­tion in AI.

Tak­ing stock of these reports, we can say that the build­ing blocks for the over­all process are avail­able to those seri­ous­ly look­ing. The chal­lenges fac­ing par­tic­i­pa­to­ry AI are, on the one hand, eco­nom­ic and polit­i­cal. On the oth­er hand, they are relat­ed to the specifics of the tech­nol­o­gy at hand. For the remain­der of this piece, let’s dig into the lat­ter a bit more.

Let’s focus on trans­la­tion work done by engi­neers dur­ing mod­el development.

For this, I build on work by Kang (2023), which focus­es on the qual­i­ta­tive analy­sis of how phe­nom­e­na are trans­lat­ed into ML-com­pat­i­ble forms, pay­ing spe­cif­ic atten­tion to the onto­log­i­cal trans­la­tions that occur in mak­ing a prob­lem learn­able. Trans­la­tion in ML means trans­form­ing com­plex qual­i­ta­tive phe­nom­e­na into quan­tifi­able and com­putable forms. Mul­ti­fac­eted prob­lems are con­vert­ed into a “usable quan­ti­ta­tive ref­er­ence” or “ground truth.” This trans­la­tion is not a mere rep­re­sen­ta­tion of real­i­ty but a refor­mu­la­tion of a prob­lem into math­e­mat­i­cal terms, mak­ing it under­stand­able and process­able by ML algo­rithms. This trans­for­ma­tion involves a sig­nif­i­cant amount of “onto­log­i­cal dis­so­nance,” as it medi­ates and often sim­pli­fies the com­plex­i­ty of real-world phe­nom­e­na into a tax­on­o­my or set of class­es for ML pre­dic­tion. The process of trans­lat­ing is based on assump­tions and stan­dards that may alter the nature of the ML task and intro­duce new social and tech­ni­cal problems. 

So what? I pro­pose we can use the notion of trans­la­tion as a frame for ML engi­neer­ing. Under­stand­ing ML mod­el engi­neer­ing as trans­la­tion is a poten­tial­ly use­ful way to ana­lyze what hap­pens at each step of the process: What gets select­ed for trans­la­tion, how the trans­la­tion is per­formed, and what the result­ing trans­la­tion con­sists of.
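To make those questions a little more tangible, below is a deliberately small, hypothetical sketch of such a translation in code. The domain (nuisance reports), the taxonomy, the features, and the annotate function are invented for illustration and are not taken from Kang (2023); the point is that each of these choices is a site where value-laden translation happens: what gets selected, how it is performed, and what the resulting "ground truth" consists of.

#include <iostream>
#include <string>
#include <vector>

// The taxonomy is itself a translation choice: whatever does not fit
// these classes is forced into OTHER or silently dropped.
enum class Label { NOISE, LITTER, OTHER };

struct Example {
  std::vector<float> features; // e.g. {reports_per_week, hour_of_day}
  Label groundTruth;           // the "usable quantitative reference"
};

// Turning a free-text report into a class label is a value-laden judgment
// call: who decides that a loud argument counts as NOISE, or as nothing?
Label annotate(const std::string& reportText) {
  if (reportText.find("music") != std::string::npos) return Label::NOISE;
  if (reportText.find("trash") != std::string::npos) return Label::LITTER;
  return Label::OTHER;
}

int main() {
  // What gets selected: two numbers stand in for a whole lived situation;
  // context that resists quantification simply disappears here.
  Example e{{4.0f, 23.0f}, annotate("loud music at night")};
  std::cout << "label=" << static_cast<int>(e.groundTruth)
            << " features=" << e.features[0] << "," << e.features[1] << "\n";
  return 0;
}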

So, if we seek to make participatory AI engage more with the technical particularities of ML, we could begin by identifying translations that have happened or might happen in our projects. We could then ask to what extent these acts of translation are value-laden. For those that are, we could think about how to communicate these translations to a lay audience. A particular challenge I expect we will face is determining the meaningful level of abstraction for citizen participation during AI development. We should also ask what the appropriate 'vehicle' for citizen participation will be. And we should seek to move beyond small-scale, one-off, often unrepresentative forms of direct participation.

Bibliography

  • Birhane, A., Isaac, W., Prab­hakaran, V., Diaz, M., Elish, M. C., Gabriel, I., & Mohamed, S. (2022). Pow­er to the Peo­ple? Oppor­tu­ni­ties and Chal­lenges for Par­tic­i­pa­to­ry AI. Equi­ty and Access in Algo­rithms, Mech­a­nisms, and Opti­miza­tion, 1–8. https://doi.org/10/grnj99
  • Brat­teteig, T., & Verne, G. (2018). Does AI make PD obso­lete?: Explor­ing chal­lenges from arti­fi­cial intel­li­gence to par­tic­i­pa­to­ry design. Pro­ceed­ings of the 15th Par­tic­i­pa­to­ry Design Con­fer­ence: Short Papers, Sit­u­at­ed Actions, Work­shops and Tuto­r­i­al — Vol­ume 2, 1–5. https://doi.org/10/ghsn84
  • Del­ga­do, F., Yang, S., Madaio, M., & Yang, Q. (2023). The Par­tic­i­pa­to­ry Turn in AI Design: The­o­ret­i­cal Foun­da­tions and the Cur­rent State of Prac­tice. Pro­ceed­ings of the 3rd ACM Con­fer­ence on Equi­ty and Access in Algo­rithms, Mech­a­nisms, and Opti­miza­tion, 1–23. https://doi.org/10/gs8kvm
  • Ehsan, U., & Riedl, M. O. (2020). Human-Cen­tered Explain­able AI: Towards a Reflec­tive Sociotech­ni­cal Approach. In C. Stephani­dis, M. Kuro­su, H. Degen, & L. Rein­er­man-Jones (Eds.), HCI Inter­na­tion­al 2020—Late Break­ing Papers: Mul­ti­modal­i­ty and Intel­li­gence (pp. 449–466). Springer Inter­na­tion­al Pub­lish­ing. https://doi.org/10/gskmgf
  • Fef­fer, M., Skir­pan, M., Lip­ton, Z., & Hei­dari, H. (2023). From Pref­er­ence Elic­i­ta­tion to Par­tic­i­pa­to­ry ML: A Crit­i­cal Sur­vey & Guide­lines for Future Research. Pro­ceed­ings of the 2023 AAAI/ACM Con­fer­ence on AI, Ethics, and Soci­ety, 38–48. https://doi.org/10/gs8kvx
  • Gerdes, A. (2022). A par­tic­i­pa­to­ry data-cen­tric approach to AI Ethics by Design. Applied Arti­fi­cial Intel­li­gence, 36(1), 2009222. https://doi.org/10/gs8kt4
  • Groves, L., Pep­pin, A., Strait, A., & Bren­nan, J. (2023). Going pub­lic: The role of pub­lic par­tic­i­pa­tion approach­es in com­mer­cial AI labs. Pro­ceed­ings of the 2023 ACM Con­fer­ence on Fair­ness, Account­abil­i­ty, and Trans­paren­cy, 1162–1173. https://doi.org/10/gs8kvs
  • Kang, E. B. (2023). Ground truth trac­ings (GTT): On the epis­temic lim­its of machine learn­ing. Big Data & Soci­ety, 10(1), 1–12. https://doi.org/10/gtfgvx
  • Peter, F. (2020). The Grounds of Polit­i­cal Legit­i­ma­cy. Jour­nal of the Amer­i­can Philo­soph­i­cal Asso­ci­a­tion, 6(3), 372–390. https://doi.org/10/grqfhn
  • Robert­son, S., Nguyen, T., Hu, C., Albis­ton, C., Nikzad, A., & Sale­hi, N. (2023). Expres­sive­ness, Cost, and Col­lec­tivism: How the Design of Pref­er­ence Lan­guages Shapes Par­tic­i­pa­tion in Algo­rith­mic Deci­sion-Mak­ing. Pro­ceed­ings of the 2023 CHI Con­fer­ence on Human Fac­tors in Com­put­ing Sys­tems, 1–16. https://doi.org/10/gr6q2t
  • Rubel, A., Cas­tro, C., & Pham, A. K. (2021). Algo­rithms and auton­o­my: The ethics of auto­mat­ed deci­sion sys­tems. Cam­bridge Uni­ver­si­ty Press.
  • Sloane, M., Moss, E., Awom­o­lo, O., & For­lano, L. (2020). Par­tic­i­pa­tion is not a Design Fix for Machine Learn­ing. arXiv:2007.02423 [Cs]. http://arxiv.org/abs/2007.02423
  • Zytko, D., J. Wis­niews­ki, P., Guha, S., P. S. Baumer, E., & Lee, M. K. (2022). Par­tic­i­pa­to­ry Design of AI Sys­tems: Oppor­tu­ni­ties and Chal­lenges Across Diverse Users, Rela­tion­ships, and Appli­ca­tion Domains. Extend­ed Abstracts of the 2022 CHI Con­fer­ence on Human Fac­tors in Com­put­ing Sys­tems, 1–4. https://doi.org/10/gs8kv6

Participatory AI literature review

I've been thinking a lot about civic participation in machine learning systems development. In particular, I am interested in involving non-experts in the potentially value-laden translation work, from specification to model, that engineers do when they build their models. Below is a summary of a selection of literature I found on the topic, which may serve as a jumping-off point for future research.

Abstract

The lit­er­a­ture on par­tic­i­pa­to­ry arti­fi­cial intel­li­gence (AI) reveals a com­plex land­scape marked by chal­lenges and evolv­ing method­olo­gies. Fef­fer et al. (2023) cri­tique the reduc­tion of par­tic­i­pa­tion to com­pu­ta­tion­al mech­a­nisms that only approx­i­mate nar­row moral val­ues. They also note that engage­ments with stake­hold­ers are often super­fi­cial and unrep­re­sen­ta­tive. Groves et al. (2023) iden­ti­fy sig­nif­i­cant bar­ri­ers in com­mer­cial AI labs, includ­ing high costs, frag­ment­ed approach­es, exploita­tion con­cerns, lack of trans­paren­cy, and con­tex­tu­al com­plex­i­ties. These bar­ri­ers lead to a piece­meal approach to par­tic­i­pa­tion with min­i­mal impact on deci­sion-mak­ing in AI labs. Del­ga­do et al. (2023) observe that par­tic­i­pa­to­ry AI involves stake­hold­ers most­ly in a con­sul­ta­tive role with­out inte­grat­ing them as active deci­sion-mak­ers through­out the AI design lifecycle.

Gerdes (2022) proposes a data-centric approach to AI ethics and underscores the need for interdisciplinary bridge builders to reconcile different stakeholder perspectives. Robertson et al. (2023) explore participatory algorithm design, emphasizing the need for preference languages that balance expressiveness, cost, and collectivism. Sloane et al. (2020) caution against "participation washing" and the potential for exploitative community involvement. Bratteteig & Verne (2018) highlight AI's challenges to traditional participatory design (PD) methods, including unpredictable technological changes and a lack of user-oriented evaluation. Birhane et al. (2022) call for a clearer understanding of meaningful participation, advocating for a shift towards vibrant, continuous engagement that enhances community knowledge and empowerment. The literature suggests a pressing need for more effective, inclusive, and empowering participatory approaches in AI development.

Bibliography

  1. Birhane, A., Isaac, W., Prab­hakaran, V., Diaz, M., Elish, M. C., Gabriel, I., & Mohamed, S. (2022). Pow­er to the Peo­ple? Oppor­tu­ni­ties and Chal­lenges for Par­tic­i­pa­to­ry AI. Equi­ty and Access in Algo­rithms, Mech­a­nisms, and Opti­miza­tion, 1–8. https://doi.org/10/grnj99
  2. Brat­teteig, T., & Verne, G. (2018). Does AI make PD obso­lete?: Explor­ing chal­lenges from arti­fi­cial intel­li­gence to par­tic­i­pa­to­ry design. Pro­ceed­ings of the 15th Par­tic­i­pa­to­ry Design Con­fer­ence: Short Papers, Sit­u­at­ed Actions, Work­shops and Tuto­r­i­al — Vol­ume 2, 1–5. https://doi.org/10/ghsn84
  3. Del­ga­do, F., Yang, S., Madaio, M., & Yang, Q. (2023). The Par­tic­i­pa­to­ry Turn in AI Design: The­o­ret­i­cal Foun­da­tions and the Cur­rent State of Prac­tice. Pro­ceed­ings of the 3rd ACM Con­fer­ence on Equi­ty and Access in Algo­rithms, Mech­a­nisms, and Opti­miza­tion, 1–23. https://doi.org/10/gs8kvm
  4. Ehsan, U., & Riedl, M. O. (2020). Human-Cen­tered Explain­able AI: Towards a Reflec­tive Sociotech­ni­cal Approach. In C. Stephani­dis, M. Kuro­su, H. Degen, & L. Rein­er­man-Jones (Eds.), HCI Inter­na­tion­al 2020—Late Break­ing Papers: Mul­ti­modal­i­ty and Intel­li­gence (pp. 449–466). Springer Inter­na­tion­al Pub­lish­ing. https://doi.org/10/gskmgf
  5. Fef­fer, M., Skir­pan, M., Lip­ton, Z., & Hei­dari, H. (2023). From Pref­er­ence Elic­i­ta­tion to Par­tic­i­pa­to­ry ML: A Crit­i­cal Sur­vey & Guide­lines for Future Research. Pro­ceed­ings of the 2023 AAAI/ACM Con­fer­ence on AI, Ethics, and Soci­ety, 38–48. https://doi.org/10/gs8kvx
  6. Gerdes, A. (2022). A par­tic­i­pa­to­ry data-cen­tric approach to AI Ethics by Design. Applied Arti­fi­cial Intel­li­gence, 36(1), 2009222. https://doi.org/10/gs8kt4
  7. Groves, L., Pep­pin, A., Strait, A., & Bren­nan, J. (2023). Going pub­lic: The role of pub­lic par­tic­i­pa­tion approach­es in com­mer­cial AI labs. Pro­ceed­ings of the 2023 ACM Con­fer­ence on Fair­ness, Account­abil­i­ty, and Trans­paren­cy, 1162–1173. https://doi.org/10/gs8kvs
  8. Robert­son, S., Nguyen, T., Hu, C., Albis­ton, C., Nikzad, A., & Sale­hi, N. (2023). Expres­sive­ness, Cost, and Col­lec­tivism: How the Design of Pref­er­ence Lan­guages Shapes Par­tic­i­pa­tion in Algo­rith­mic Deci­sion-Mak­ing. Pro­ceed­ings of the 2023 CHI Con­fer­ence on Human Fac­tors in Com­put­ing Sys­tems, 1–16. https://doi.org/10/gr6q2t
  9. Sloane, M., Moss, E., Awom­o­lo, O., & For­lano, L. (2020). Par­tic­i­pa­tion is not a Design Fix for Machine Learn­ing. arXiv:2007.02423 [Cs]. http://arxiv.org/abs/2007.02423
  10. Zytko, D., J. Wis­niews­ki, P., Guha, S., P. S. Baumer, E., & Lee, M. K. (2022). Par­tic­i­pa­to­ry Design of AI Sys­tems: Oppor­tu­ni­ties and Chal­lenges Across Diverse Users, Rela­tion­ships, and Appli­ca­tion Domains. Extend­ed Abstracts of the 2022 CHI Con­fer­ence on Human Fac­tors in Com­put­ing Sys­tems, 1–4. https://doi.org/10/gs8kv6

‘Unboxing’ at Behavior Design Amsterdam #16

Below is a write-up of the talk I gave at the Behav­ior Design Ams­ter­dam #16 meet­up on Thurs­day, Feb­ru­ary 15, 2018.

‘Pan­do­ra’ by John William Water­house (1896)

I’d like to talk about the future of our design prac­tice and what I think we should focus our atten­tion on. It is all relat­ed to this idea of com­plex­i­ty and open­ing up black box­es. We’re going to take the scenic route, though. So bear with me.

Software Design

Two years ago I spent about half a year in Singapore.

While there I worked as prod­uct strate­gist and design­er at a start­up called ARTO, an art rec­om­men­da­tion ser­vice. It shows you a ran­dom sam­ple of art­works, you tell it which ones you like, and it will then start rec­om­mend­ing pieces it thinks you like. In case you were won­der­ing: yes, swip­ing left and right was involved.

We had this inter­est­ing prob­lem of ingest­ing art from many dif­fer­ent sources (most­ly online gal­leries) with meta­da­ta of wild­ly vary­ing lev­els of qual­i­ty. So, using meta­da­ta to fig­ure out which art to show was a bit of a non-starter. It should come as no sur­prise then, that we start­ed look­ing into machine learning—image pro­cess­ing in particular.

And so I found myself work­ing with my engi­neer­ing col­leagues on an art rec­om­men­da­tion stream which was dri­ven at least in part by machine learn­ing. And I quick­ly realised we had a prob­lem. In terms of how we worked togeth­er on this part of the prod­uct, it felt like we had tak­en a bunch of steps back in time. Back to a way of col­lab­o­rat­ing that was less inte­grat­ed and less responsive.

That’s because we have all these nice tools and tech­niques for design­ing tra­di­tion­al soft­ware prod­ucts. But soft­ware is deter­min­is­tic. Machine learn­ing is fun­da­men­tal­ly dif­fer­ent in nature: it is probabilistic.

It was hard for me to take the lead in the design of this part of the prod­uct for two rea­sons. First of all, it was chal­leng­ing to get a first-hand feel of the machine learn­ing fea­ture before it was implemented.

And sec­ond of all, it was hard for me to com­mu­ni­cate or visu­alise the intend­ed behav­iour of the machine learn­ing fea­ture to the rest of the team.

So when I came back to the Nether­lands I decid­ed to dig into this prob­lem of design for machine learn­ing. Turns out I opened up quite the can of worms for myself. But that’s okay.

There are two rea­sons I care about this:

The first is that I think we need more design-led inno­va­tion in the machine learn­ing space. At the moment it is engi­neer­ing-dom­i­nat­ed, which doesn’t nec­es­sar­i­ly lead to use­ful out­comes. But if you want to take the lead in the design of machine learn­ing appli­ca­tions, you need a firm han­dle on the nature of the technology.

The sec­ond rea­son why I think we need to edu­cate our­selves as design­ers on the nature of machine learn­ing is that we need to take respon­si­bil­i­ty for the impact the tech­nol­o­gy has on the lives of peo­ple. There is a lot of talk about ethics in the design indus­try at the moment. Which I con­sid­er a pos­i­tive sign. But I also see a reluc­tance to real­ly grap­ple with what ethics is and what the rela­tion­ship between tech­nol­o­gy and soci­ety is. We seem to want easy answers, which is under­stand­able because we are all very busy peo­ple. But hav­ing spent some time dig­ging into this stuff myself I am here to tell you: There are no easy answers. That isn’t a bug, it’s a fea­ture. And we should embrace it.

Machine Learning

At the end of 2016 I attend­ed ThingsCon here in Ams­ter­dam and I was intro­duced by Ianus Keller to TU Delft PhD researcher Péter Kun. It turns out we were both inter­est­ed in machine learn­ing. So with encour­age­ment from Ianus we decid­ed to put togeth­er a work­shop that would enable indus­tri­al design mas­ter stu­dents to tan­gle with it in a hands-on manner.

About a year later now, this has grown into a thing we call Prototyping the Useless Butler. During the workshop, you use machine learning algorithms to train a model that takes inputs from a network-connected Arduino's sensors and drives that same Arduino's actuators. In effect, you can create interactive behaviour without writing a single line of code. And you get a first-hand feel for how common applications of machine learning work: things like regression, classification and dynamic time warping.

The thing that makes this workshop tick is an open source software application called Wekinator, created by Rebecca Fiebrink. It was originally aimed at performing artists so that they could build interactive instruments without writing code. But it takes inputs from anything and sends outputs to anything. So we appropriated it towards our own ends.

You can find every­thing relat­ed to Use­less But­ler on this GitHub repo.

The think­ing behind this work­shop is that for us design­ers to be able to think cre­ative­ly about appli­ca­tions of machine learn­ing, we need a gran­u­lar under­stand­ing of the nature of the tech­nol­o­gy. The thing with design­ers is, we can’t real­ly learn about such things from books. A lot of design knowl­edge is tac­it, it emerges from our phys­i­cal engage­ment with the world. This is why things like sketch­ing and pro­to­typ­ing are such essen­tial parts of our way of work­ing. And so with use­less but­ler we aim to cre­ate an envi­ron­ment in which you as a design­er can gain tac­it knowl­edge about the work­ings of machine learning.

Sim­ply put, for a lot of us, machine learn­ing is a black box. With Use­less But­ler, we open the black box a bit and let you peer inside. This should improve the odds of design-led inno­va­tion hap­pen­ing in the machine learn­ing space. And it should also help with ethics. But it’s def­i­nite­ly not enough. Knowl­edge about the tech­nol­o­gy isn’t the only issue here. There are more black box­es to open.

Values

Which brings me back to that other black box: ethics. Like I already mentioned, there is a lot of talk in the tech industry about how we should "be more ethical". But things are often reduced to this notion that designers should do no harm. As if ethics is a problem to be fixed instead of a thing to be practiced.

So I start­ed to talk about this to peo­ple I know in acad­e­mia and more than once this thing called Val­ue Sen­si­tive Design was men­tioned. It should be no sur­prise to any­one that schol­ars have been chew­ing on this stuff for quite a while. One of the ear­li­est ref­er­ences I came across, an essay by Batya Fried­man in Inter­ac­tions is from 1996! This is a les­son to all of us I think. Pay more atten­tion to what the aca­d­e­mics are talk­ing about.

So, at the end of last year I dove into this top­ic. Our host Iskan­der Smit, Rob Mai­jers and myself coor­di­nate a grass­roots com­mu­ni­ty for tech work­ers called Tech Sol­i­dar­i­ty NL. We want to build tech­nol­o­gy that serves the needs of the many, not the few. Val­ue Sen­si­tive Design seemed like a good thing to dig into and so we did.

I’m not going to dive into the details here. There’s a report on the Tech Sol­i­dar­i­ty NL web­site if you’re inter­est­ed. But I will high­light a few things that val­ue sen­si­tive design asks us to con­sid­er that I think help us unpack what it means to prac­tice eth­i­cal design.

First of all, val­ues. Here’s how it is com­mon­ly defined in the literature:

“A value refers to what a person or group of people consider important in life.”

I like it because it’s com­mon sense, right? But it also makes clear that there can nev­er be one mono­lith­ic def­i­n­i­tion of what ‘good’ is in all cas­es. As we design­ers like to say: “it depends” and when it comes to val­ues things are no different.

“Person or group” implies there can be various stakeholders. Value sensitive design distinguishes between direct and indirect stakeholders. The former have direct contact with the technology, the latter don't but are affected by it nonetheless. Value sensitive design means taking both into account. So this blows up the conventional notion of a single user to design for.

Var­i­ous stake­hold­er groups can have com­pet­ing val­ues and so to design for them means to arrive at some sort of trade-off between val­ues. This is a cru­cial point. There is no such thing as a per­fect or objec­tive­ly best solu­tion to eth­i­cal conun­drums. Not in the design of tech­nol­o­gy and not any­where else.

Val­ue sen­si­tive design encour­ages you to map stake­hold­ers and their val­ues. These will be dif­fer­ent for every design project. Anoth­er approach is to use lists like the one pic­tured here as an ana­lyt­i­cal tool to think about how a design impacts var­i­ous values.

Fur­ther­more, dur­ing your design process you might not only think about the short-term impact of a tech­nol­o­gy, but also think about how it will affect things in the long run.

And sim­i­lar­ly, you might think about the effects of a tech­nol­o­gy not only when a few peo­ple are using it, but also when it becomes wild­ly suc­cess­ful and every­body uses it.

There are tools out there that can help you think through these things. But so far much of the work in this area is hap­pen­ing on the aca­d­e­m­ic side. I think there is an oppor­tu­ni­ty for us to cre­ate tools and case stud­ies that will help us edu­cate our­selves on this stuff.

There’s a lot more to say on this but I’m going to stop here. The point is, as with the nature of the tech­nolo­gies we work with, it helps to dig deep­er into the nature of the rela­tion­ship between tech­nol­o­gy and soci­ety. Yes, it com­pli­cates things. But that is exact­ly the point.

Priv­i­leg­ing sim­ple and scal­able solu­tions over those adapt­ed to local needs is social­ly, eco­nom­i­cal­ly and eco­log­i­cal­ly unsus­tain­able. So I hope you will join me in embrac­ing complexity.

Prototyping the Useless Butler: Machine Learning for IoT Designers

ThingsCon Ams­ter­dam 2017, pho­to by nunocruzstreet.com

At ThingsCon Ams­ter­dam 2017, Péter and I ran a sec­ond iter­a­tion of our machine learn­ing work­shop. We improved on our first attempt at TU Delft in a num­ber of ways.

  • We prepared example code for communicating with Wekinator from a wifi-connected Arduino MKR1000 over OSC. (A rough sketch of what that code can look like appears right after this list.)
  • We cre­at­ed a pre­de­fined bread­board setup.
  • We devel­oped three exer­cis­es, one for each type of Wek­ina­tor out­put: regres­sion, clas­si­fi­ca­tion and dynam­ic time warping.
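To give a sense of what that example code involves, here is a rough sketch of the sending side; it is an illustration rather than the actual code from the Useless Butler repo. It assumes the WiFi101 and CNMAT OSC libraries, and Wekinator's default input port (6448) and message address (/wek/inputs); the network credentials, IP address and analog pins are placeholders to adjust for your own setup.

#include <WiFi101.h>
#include <WiFiUdp.h>
#include <OSCMessage.h>

char ssid[] = "workshop-network";        // placeholder wifi credentials
char pass[] = "workshop-password";
IPAddress wekinatorIp(192, 168, 1, 10);  // the laptop running Wekinator
const unsigned int wekinatorPort = 6448; // Wekinator's default input port

WiFiUDP Udp;

void setup() {
  WiFi.begin(ssid, pass);
  while (WiFi.status() != WL_CONNECTED) {
    delay(500); // wait for the wifi connection
  }
  Udp.begin(9000); // local port for outgoing UDP packets
}

void loop() {
  // Scale the 10-bit analog readings to a 0..1 range.
  float ldr1 = analogRead(A1) / 1023.0;
  float ldr2 = analogRead(A2) / 1023.0;

  OSCMessage msg("/wek/inputs"); // Wekinator's default input address
  msg.add(ldr1);
  msg.add(ldr2);
  Udp.beginPacket(wekinatorIp, wekinatorPort);
  msg.send(Udp);   // serialise the OSC message into the UDP packet
  Udp.endPacket();
  msg.empty();     // free the message's internal buffers
  delay(50);       // roughly twenty input frames per second
}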

In contrast to the first version, we had two hours to run through the whole thing instead of a day. So we had to cut some corners, and we doubled down on walking participants through a number of exercises so that they would come out of it with some readily applicable skills.

We dubbed the work­shop ‘pro­to­typ­ing the use­less but­ler’, with thanks to Philip van Allen for the sug­ges­tion to frame the exer­cis­es around build­ing some­thing non-pro­duc­tive so that the focus was shift­ed to play and exploration.

All of the code, the cir­cuit dia­gram and slides are over on GitHub. But I’ll sum­marise things here.

  1. We spent a very short amount of time intro­duc­ing machine learn­ing. We used Google’s Teach­able Machine as an exam­ple and con­trast­ed reg­u­lar pro­gram­ming with using machine learn­ing algo­rithms to train mod­els. The point was to pro­vide folks with just enough con­cep­tu­al scaf­fold­ing so that the rest of the work­shop would make sense.
  2. We then introduced our 'toolchain', which consists of Wekinator, the Arduino MKR1000 module and the OSC protocol. The aim of this toolchain is to allow designers who work in the IoT space to get a feel for the material properties of machine learning through hands-on tinkering. We tried to create a toolchain with as few moving parts as possible, because each additional component would introduce another point of failure which might require debugging. The toolchain enables designers to use machine learning to rapidly prototype interactive behaviour with minimal or no programming. It can also be used to prototype products that expose interactive machine learning features to end users. (For a speculative example of one such product, see Bjørn Karmann's Objectifier.)
  3. Par­tic­i­pants were then asked to set up all the required parts on their own work­sta­tion. A list can be found on the Use­less But­ler GitHub page.
  4. We then proceeded to build the circuit. We provided all the components and showed a Fritzing diagram to help people along. The basic idea of this circuit, the eponymous useless butler, was to have a sufficiently rich set of inputs and outputs to play with, one that would suit all three types of Wekinator output. So we settled on a pair of photoresistors (LDRs) as inputs and an RGB LED as output.
  5. With the prerequisites installed and the circuit built we were ready to walk through the examples. For regression we mapped the continuous stream of readings from the two LDRs to three outputs, one each for the red, green and blue of the LED. For classification we put the state of both LDRs into one of four categories, each switching the RGB LED to a specific colour (cyan, magenta, yellow or white). And finally, for dynamic time warping, we asked Wekinator to recognise one of three gestures and switch the RGB LED to one of three states (red, green or off). A rough sketch of the receiving end of the regression exercise follows right after this list.
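To make the regression exercise more concrete, here is an equally rough sketch of the receiving side, again an illustration rather than the repo's actual code. It assumes Wekinator is configured to send its outputs (three floats, by default in the 0..1 range) to port 12000 at /wek/outputs; the pin numbers and credentials are placeholders, and you should pick PWM-capable pins on your board.

#include <WiFi101.h>
#include <WiFiUdp.h>
#include <OSCMessage.h>

WiFiUDP Udp;
const int redPin = 2, greenPin = 3, bluePin = 4; // placeholder PWM-capable pins

void handleOutputs(OSCMessage &msg) {
  // One float per Wekinator output, mapped to an 8-bit PWM duty cycle.
  analogWrite(redPin,   (int)(constrain(msg.getFloat(0), 0.0, 1.0) * 255));
  analogWrite(greenPin, (int)(constrain(msg.getFloat(1), 0.0, 1.0) * 255));
  analogWrite(bluePin,  (int)(constrain(msg.getFloat(2), 0.0, 1.0) * 255));
}

void setup() {
  WiFi.begin("workshop-network", "workshop-password"); // placeholder credentials
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
  }
  Udp.begin(12000); // the port Wekinator is configured to send its outputs to
}

void loop() {
  OSCMessage msg;
  int size = Udp.parsePacket();
  if (size > 0) {
    while (size--) {
      msg.fill(Udp.read()); // read the packet into the OSC message byte by byte
    }
    if (!msg.hasError()) {
      msg.dispatch("/wek/outputs", handleOutputs); // Wekinator's default output address
    }
  }
}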

When we reflect­ed on the work­shop after­wards, we agreed we now have a proven con­cept. Par­tic­i­pants were able to get the tool­chain up and run­ning and could play around with iter­a­tive­ly train­ing and eval­u­at­ing their mod­el until it behaved as intended. 

However, there is still quite a bit of room for improvement. On a practical note, quite a bit of time was taken up by building the circuit, which isn't the point of the workshop. One way of dealing with this is to bring pre-built circuits to the workshop. Doing so would enable us to get to the machine learning quicker and would open up time and space to also engage with the participants about the point of it all.

We’re keen on bring­ing this work­shop to more set­tings in future. If we do, I’m sure we’ll find the oppor­tu­ni­ty to improve on things once more and I will report back here.

Many thanks to Iskan­der and the rest of the ThingsCon team for invit­ing us to the conference.

ThingsCon Ams­ter­dam 2017, pho­to by nunocruzstreet.com

Design and machine learning – an annotated reading list

Ear­li­er this year I coached Design for Inter­ac­tion mas­ter stu­dents at Delft Uni­ver­si­ty of Tech­nol­o­gy in the course Research Method­ol­o­gy. The stu­dents organ­ised three sem­i­nars for which I pro­vid­ed the claims and assigned read­ing. In the sem­i­nars they argued about my claims using the Toul­min Mod­el of Argu­men­ta­tion. The read­ings served as sources for back­ing and evidence.

The claims and read­ings were all relat­ed to my nascent research project about machine learn­ing. We delved into both design­ing for machine learn­ing, and using machine learn­ing as a design tool.

Below are the read­ings I assigned, with some notes on each, which should help you decide if you want to dive into them yourself.

Hebron, Patrick. 2016. Machine Learn­ing for Design­ers. Sebastopol: O’Reilly.

The only non-academic piece in this list. This served the purpose of getting all students on the same page with regards to what machine learning is, its applications in interaction design, and common challenges encountered. I still can't think of any other single resource that is as good a starting point for the subject as this one.

Fiebrink, Rebecca. 2016. "Machine Learning as Meta-Instrument: Human-Machine Partnerships Shaping Expressive Instrumental Creation." In Musical Instruments in the 21st Century, 14:137–51. Singapore: Springer Singapore. doi:10.1007/978-981-10-2951-6_10.

Fiebrink’s Wek­ina­tor is ground­break­ing, fun and inspir­ing so I had to include some of her writ­ing in this list. This is most­ly of inter­est for those look­ing into the use of machine learn­ing for design and oth­er cre­ative and artis­tic endeav­ours. An impor­tant idea explored here is that tools that make use of (inter­ac­tive, super­vised) machine learn­ing can be thought of as instru­ments. Using such a tool is like play­ing or per­form­ing, explor­ing a pos­si­bil­i­ty space, engag­ing in a dia­logue with the tool. For a tool to feel like an instru­ment requires a tight action-feed­back loop.

Dove, Gra­ham, Kim Hal­skov, Jodi For­l­izzi, and John Zim­mer­man. 2017. UX Design Inno­va­tion: Chal­lenges for Work­ing with Machine Learn­ing as a Design Mate­r­i­al. The 2017 CHI Con­fer­ence. New York, New York, USA: ACM. doi:10.1145/3025453.3025739.

A real­ly good sur­vey of how design­ers cur­rent­ly deal with machine learn­ing. Key take­aways include that in most cas­es, the appli­ca­tion of machine learn­ing is still engi­neer­ing-led as opposed to design-led, which ham­pers the cre­ation of non-obvi­ous machine learn­ing appli­ca­tions. It also makes it hard for design­ers to con­sid­er eth­i­cal impli­ca­tions of design choic­es. A key rea­son for this is that at the moment, pro­to­typ­ing with machine learn­ing is pro­hib­i­tive­ly cumbersome.

Fiebrink, Rebecca, Perry R Cook, and Dan Trueman. 2011. "Human Model Evaluation in Interactive Supervised Learning." In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '11), 147. New York, New York, USA: ACM Press. doi:10.1145/1978942.1978965.

The sec­ond Fiebrink piece in this list, which is more of a deep dive into how peo­ple use Wek­ina­tor. As with the chap­ter list­ed above this is required read­ing for those work­ing on design tools which make use of inter­ac­tive machine learn­ing. An impor­tant find­ing here is that users of intel­li­gent design tools might have very dif­fer­ent cri­te­ria for eval­u­at­ing the ‘cor­rect­ness’ of a trained mod­el than engi­neers do. Such cri­te­ria are like­ly sub­jec­tive and eval­u­a­tion requires first-hand use of the mod­el in real time. 

Bostrom, Nick, and Eliez­er Yud­kowsky. 2014. “The Ethics of Arti­fi­cial Intel­li­gence.” In The Cam­bridge Hand­book of Arti­fi­cial Intel­li­gence, edit­ed by Kei­th Frank­ish and William M Ram­sey, 316–34. Cam­bridge: Cam­bridge Uni­ver­si­ty Press. doi:10.1017/CBO9781139046855.020.

Bostrom is known for his somewhat crazy but thought-provoking book on superintelligence, and although a large part of this chapter is about the ethics of general artificial intelligence (which at the very least is still some way off), the first section discusses the ethics of current "narrow" artificial intelligence. It makes for a good checklist of things designers should keep in mind when they create new applications of machine learning. Key insight: when a machine learning system takes on work with social dimensions, tasks previously performed by humans, the system inherits its social requirements.

Yang, Qian, John Zim­mer­man, Aaron Ste­in­feld, and Antho­ny Toma­sic. 2016. Plan­ning Adap­tive Mobile Expe­ri­ences When Wire­fram­ing. The 2016 ACM Con­fer­ence. New York, New York, USA: ACM. doi:10.1145/2901790.2901858.

Finally, a feet-in-the-mud exploration of what it actually means to design for machine learning with the tools most commonly used by designers today: drawings and diagrams of various sorts. In this case the focus is on using machine learning to make an interface adaptive. It includes an interesting discussion of how to balance the use of implicit and explicit user inputs for adaptation, and how to deal with inference errors. Once again the limitations of current sketching and prototyping tools are mentioned, and related to the need for designers to develop tacit knowledge about machine learning. Such tacit knowledge will only be gained when designers can work with machine learning in a hands-on manner.

Supplemental material

Floyd, Christiane. 1984. "A Systematic Look at Prototyping." In Approaches to Prototyping, 1–18. Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-642-69796-8_1.

I pro­vid­ed this to stu­dents so that they get some addi­tion­al ground­ing in the var­i­ous kinds of pro­to­typ­ing that are out there. It helps to pre­vent reduc­tive notions of pro­to­typ­ing, and it makes for a nice com­ple­ment to Buxton’s work on sketch­ing.

Ble­vis, E, Y Lim, and E Stolter­man. 2006. “Regard­ing Soft­ware as a Mate­r­i­al of Design.”

Some of the papers refer to machine learn­ing as a “design mate­r­i­al” and this paper helps to under­stand what that idea means. Soft­ware is a mate­r­i­al with­out qual­i­ties (it is extreme­ly mal­leable, it can sim­u­late near­ly any­thing). Yet, it helps to con­sid­er it as a phys­i­cal mate­r­i­al in the metaphor­i­cal sense because we can then apply ways of design think­ing and doing to soft­ware programming.

Status update

This is not exact­ly a now page, but I thought I would write up what I am doing at the moment since last report­ing on my sta­tus in my end-of-year report.

The major­i­ty of my work­days are spent doing free­lance design con­sult­ing. My pri­ma­ry gig has been through Eend at the Dutch Vic­tim Sup­port Foun­da­tion, where until very recent­ly I was part of a team build­ing online ser­vices. I helped out with prod­uct strat­e­gy, set­ting up a lean UX design process, and get­ting an inte­grat­ed agile design and devel­op­ment team up and run­ning. The first ser­vices are now ship­ping so it is time for me to move on, after 10 months of very grat­i­fy­ing work. I real­ly enjoy work­ing in the pub­lic sec­tor and I hope to be doing more of it in future.

So yes, this means I am avail­able and you can hire me to do strat­e­gy and design for soft­ware prod­ucts and ser­vices. Just send me an email.

Shortly before the Dutch national elections of this year, Iskander and I gathered a group of fellow tech workers under the banner of "Tech Solidarity NL" to discuss the concerning lurch to the right in national politics and what our field can do about it. This has developed into a small but active community who gather monthly to educate ourselves and develop plans for collective action. I am getting a huge boost out of this. Figuring out how to be a leftist in this day and age is not easy. The only way to do it is to practice, and for that, reflection with peers is invaluable. Building and facilitating a group like this is hugely educational too. I have learned a lot about how a community is bootstrapped and nurtured.

If you are in the Nether­lands, your pol­i­tics are left of cen­ter, and you work in tech­nol­o­gy, con­sid­er your­self invit­ed to join.

And finally, the last major thing on my plate is a continuing effort to secure a PhD position for myself. I am getting great support from people at Delft University of Technology, in particular Gerd Kortuem. I am focusing on internet-of-things products that have features driven by machine learning. My ultimate aim is to develop prototyping tools for design and development teams that will help them create more innovative and more ethical solutions. The first step will be to conduct field research inside companies who are creating such products right now. So I am reaching out to people to see if I can secure a reasonable number of potential collaborators, which will go a long way in proving the feasibility of my whole plan.

If you know of any com­pa­nies that devel­op con­sumer-fac­ing prod­ucts that have a con­nect­ed hard­ware com­po­nent and make use of machine learn­ing to dri­ve fea­tures, do let me know.

That’s about it. Free­lance UX con­sult­ing, left­ist tech-work­er organ­is­ing and design-for-machine-learn­ing research. Quite hap­py with that mix, really.

‘Machine Learning for Designers’ workshop

On Wednes­day Péter Kun, Hol­ly Rob­bins and myself taught a one-day work­shop on machine learn­ing at Delft Uni­ver­si­ty of Tech­nol­o­gy. We had about thir­ty master’s stu­dents from the indus­tri­al design engi­neer­ing fac­ul­ty. The aim was to get them acquaint­ed with the tech­nol­o­gy through hands-on tin­ker­ing with the Wek­ina­tor as cen­tral teach­ing tool.

Pho­to cred­its: Hol­ly Robbins

Background

The rea­son­ing behind this work­shop is twofold. 

On the one hand, I expect designers will find themselves working on projects involving machine learning more and more often. The technology has certain properties that differ from traditional software. Most importantly, machine learning is probabilistic instead of deterministic. It is important that designers understand this because otherwise they are likely to make bad decisions about its application.

The sec­ond rea­son is that I have a strong sense machine learn­ing can play a role in the aug­men­ta­tion of the design process itself. So-called intel­li­gent design tools could make design­ers more effi­cient and effec­tive. They could also enable the cre­ation of designs that would oth­er­wise be impos­si­ble or very hard to achieve.

The work­shop explored both ideas.

Pho­to cred­its: Hol­ly Robbins

Format

The struc­ture was rough­ly as follows: 

In the morn­ing we start­ed out pro­vid­ing a very broad intro­duc­tion to the tech­nol­o­gy. We talked about the very basic premise of (super­vised) learn­ing. Name­ly, pro­vid­ing exam­ples of inputs and desired out­puts and train­ing a mod­el based on those exam­ples. To make these con­cepts tan­gi­ble we then intro­duced the Wek­ina­tor and walked the stu­dents through get­ting it up and run­ning using basic exam­ples from the web­site. The final step was to invite them to explore alter­na­tive inputs and out­puts (such as game con­trollers and Arduino boards).

In the after­noon we pro­vid­ed a design brief, ask­ing the stu­dents to pro­to­type a data-enabled object with the set of tools they had acquired in the morn­ing. We assist­ed with tech­ni­cal hur­dles where nec­es­sary (of which there were more than a few) and closed out the day with demos and a group dis­cus­sion reflect­ing on their expe­ri­ences with the technology.

Pho­to cred­its: Hol­ly Robbins

Results

As I tweet­ed on the way home that evening, the results were… interesting. 

Not all groups man­aged to put some­thing togeth­er in the admit­ted­ly short amount of time they were pro­vid­ed with. They were most often stymied by get­ting an Arduino to talk to the Wek­ina­tor. Max was often picked as a go-between because the Wek­ina­tor receives OSC mes­sages over UDP, where­as the quick­est way to get an Arduino to talk to a com­put­er is over ser­i­al. But Max in my expe­ri­ence is a fick­le beast and would more than once crap out on us.

The groups that did build some­thing main­ly assem­bled pro­to­types from the exam­ples on hand. Which is fine, but since we were main­ly work­ing with the exam­ples from the Wek­ina­tor web­site they tend­ed towards the inter­ac­tive instru­ment side of things. We were hop­ing for explo­rations of IoT prod­uct con­cepts. For that more hand-rolling was required and this was only achiev­able for the stu­dents on the high­er end of the tech­ni­cal exper­tise spec­trum (and the more tena­cious ones).

The discussion yielded some interesting insights into mental models of the technology and how they are affected by hands-on experience. A comment I heard more than once was: "Why is this considered learning at all?" The Wekinator was not perceived to be learning anything. When challenged on this by reiterating the underlying principles, it became clear that the black box nature of the Wekinator hampers appreciation of some of the very real achievements of the technology. It seems (for our students at least) machine learning is stuck in a grey area between too-high expectations and too-low recognition of its capabilities.

Next steps

These results, and oth­ers, point towards some obvi­ous improve­ments which can be made to the work­shop for­mat, and to teach­ing design stu­dents about machine learn­ing more broadly. 

  1. We can improve the toolset so that some of the heavy lift­ing involved with get­ting the var­i­ous parts to talk to each oth­er is made eas­i­er and more reliable.
  2. We can build exam­ples that are geared towards the prac­tice of design­ing IoT prod­ucts and are ready for adap­ta­tion and hacking.
  3. And final­ly, and prob­a­bly most chal­leng­ing­ly, we can make the work­ings of machine learn­ing more trans­par­ent so that it becomes eas­i­er to devel­op a feel for its capa­bil­i­ties and shortcomings.

We do intend to improve and teach the work­shop again. If you’re inter­est­ed in host­ing one (either in an edu­ca­tion­al or pro­fes­sion­al con­text) let me know. And stay tuned for updates on this and oth­er efforts to get design­ers to work in a hands-on man­ner with machine learning.

Spe­cial thanks to the bril­liant Ianus Keller for con­nect­ing me to Péter and for allow­ing us to pilot this crazy idea at IDE Acad­e­my.

References

Sources used dur­ing prepa­ra­tion and run­ning of the workshop:

  • The Wek­ina­tor – the UI is infu­ri­at­ing­ly poor but when it comes to get­ting start­ed with machine learn­ing this tool is unmatched.
  • Arduino – I have become par­tic­u­lar­ly fond of the MKR1000 board. Add a lithi­um-poly­mer bat­tery and you have every­thing you need to pro­to­type IoT products.
  • OSC for Arduino – CNMAT's implementation of the open sound control (OSC) encoding. Key puzzle piece for getting the above two tools talking to each other.
  • Machine Learn­ing for Design­ers – my pre­ferred intro­duc­tion to the tech­nol­o­gy from a design­er­ly perspective.
  • A Visu­al Intro­duc­tion to Machine Learn­ing – a very acces­si­ble visu­al expla­na­tion of the basic under­pin­nings of com­put­ers apply­ing sta­tis­ti­cal learning.
  • Remote Con­trol Theremin – an exam­ple project I pre­pared for the work­shop demo­ing how to have the Wek­ina­tor talk to an Arduino MKR1000 with OSC over UDP.

High-skill robots, low-skill workers

Some notes on what I think I under­stand about tech­nol­o­gy and inequality.

Let's start with an obvious big question: is technology destroying jobs faster than they can be replaced? In the long term, the evidence isn't strong. Humans always appear to invent new things to do. There is no reason this time around should be any different.

But in the short term tech­nol­o­gy has con­tributed to an evap­o­ra­tion of mid-skilled jobs. Parts of these jobs are auto­mat­ed entire­ly, parts can be done by few­er peo­ple because of high­er pro­duc­tiv­i­ty gained from tech.

While pro­duc­tiv­i­ty con­tin­ues to grow, jobs are lag­ging behind. The year 2000 appears to have been a turn­ing point. “Some­thing” hap­pened around that time. But no-one knows exact­ly what. 

My hunch is that we’ve seen an emer­gence of a new class of pseu­do-monop­o­lies. Oli­gop­o­lies. And this is com­pound­ed by a ‘win­ner takes all’ dynam­ic that tech­nol­o­gy seems to produce. 

Oth­ers have point­ed to glob­al­i­sa­tion but although this might be a con­tribut­ing fac­tor, the evi­dence does not sup­port the idea that it is the major cause.

So what are we left with?

His­tor­i­cal­ly, look­ing at pre­vi­ous tech­no­log­i­cal upsets, it appears edu­ca­tion makes a big dif­fer­ence. Peo­ple neg­a­tive­ly affect­ed by tech­no­log­i­cal progress should have access to good edu­ca­tion so that they have options. In the US the access to high qual­i­ty edu­ca­tion is not equal­ly divided.

Appar­ent­ly fam­i­ly income is asso­ci­at­ed with edu­ca­tion­al achieve­ment. So if your fam­i­ly is rich, you are more like­ly to become a high skilled indi­vid­ual. And high skilled indi­vid­u­als are priv­i­leged by the tech economy.

And if Piketty is right, we are approaching a reality in which money made from wealth rises faster than wages. So there is a feedback loop in place which only exacerbates the situation.

One more bullet: if you think trickle-down economics, i.e. increasing the size of the pie, will help, you might be mistaken. It appears social mobility is helped more by decreasing inequality in the distribution of income growth.

So some pre­lim­i­nary con­clu­sions: a pro­gres­sive tax on wealth won’t solve the issue. The edu­ca­tion sys­tem will require reform, too. 

I think this is the cen­tral irony of the whole sit­u­a­tion: we are work­ing hard to teach machines how to learn. But we are neglect­ing to improve how peo­ple learn.

Move 37

Design­ers make choic­es. They should be able to pro­vide ratio­nales for those choic­es. (Although some­times they can’t.) Being able to explain the think­ing that went into a design move to your­self, your team­mates and clients is part of being a pro­fes­sion­al.

Move 37. This was the move Alpha­Go made which took every­one by sur­prise because it appeared so wrong at first.

The inter­est­ing thing is that in hind­sight it appeared Alpha­Go had good rea­sons for this move. Based on a cal­cu­la­tion of odds, basically.

If asked at the time, would Alpha­Go have been able to pro­vide this rationale?

It’s a thing that pops up in a lot of the read­ing I am doing around AI. This idea of trans­paren­cy. In some fields you don’t just want an AI to pro­vide you with a deci­sion, but also with the argu­ments sup­port­ing that deci­sion. Obvi­ous exam­ples would include a sys­tem that helps diag­nose dis­ease. You want it to pro­vide more than just the diag­no­sis. Because if it turns out to be wrong, you want to be able to say why at the time you thought it was right. This is a social, cul­tur­al and also legal requirement.

It’s inter­est­ing.

Although lives don’t depend on it, the same might apply to intel­li­gent design tools. If I am work­ing with a sys­tem and it is offer­ing me design direc­tions or solu­tions, I want to know why it is sug­gest­ing these things as well. Because my rea­son for pick­ing one over the oth­er depends not just on the sur­face lev­el prop­er­ties of the design but also the under­ly­ing rea­sons. It might be impor­tant because I need to be able to tell stake­hold­ers about it.

An added side effect of this is that a designer working with such a system would be exposed to machine reasoning about design choices. This could inform their own future thinking too.

Trans­par­ent AI might help peo­ple improve them­selves. A black box can’t teach you much about the craft it’s per­form­ing. Look­ing at out­comes can be inspi­ra­tional or help­ful, but the process­es that lead up to them can be equal­ly infor­ma­tive. If not more so.

Imag­ine work­ing with an intel­li­gent design tool and get­ting the equiv­a­lent of an Alpha­Go move 37 moment. Huge­ly inspi­ra­tional. Game changer.

This idea gets me much more excit­ed than automat­ing design tasks does.

Adapting intelligent tools for creativity

I read Alper’s book on con­ver­sa­tion­al user inter­faces over the week­end and was struck by this paragraph:

“The holy grail of a conversational system would be one that's aware of itself — one that knows its own model and internal structure and allows you to change all of that by talking to it. Imagine being able to tell Siri to tone it down a bit with the jokes and that it would then actually do that.”

His point stuck with me because I think this is of par­tic­u­lar impor­tance to cre­ative tools. These need to be flex­i­ble so that a vari­ety of peo­ple can use them in dif­fer­ent cir­cum­stances. This adapt­abil­i­ty is what lends a tool depth.

The depth I am think­ing of in cre­ative tools is sim­i­lar to the one in games, which appears to be derived from a kind of semi-ordered­ness. In short, you’re look­ing for a sweet spot between too sim­ple and too complex.

And of course, you need good defaults.

Back to adap­ta­tion. This can hap­pen in at least two ways on the inter­face lev­el: modal or mod­e­less. A sim­ple exam­ple of the for­mer would be to go into a pref­er­ences win­dow to change the behav­iour of your draw­ing pack­age. Sim­i­lar­ly, mod­e­less adap­ta­tion hap­pens when you rearrange some pan­els to bet­ter suit the task at hand.

Returning to Siri, the equivalent of modeless adaptation would be to tell her to tone it down when her sense of humor irks you.

For the modal solu­tion, imag­ine a humor slid­er in a set­tings screen some­where. This would be a ter­ri­ble solu­tion because it offers a poor map­ping of a con­trol to a per­son­al­i­ty trait. Can you pin­point on a scale of 1 to 10 your pre­ferred amount of humor in your hypo­thet­i­cal per­son­al assis­tant? And any­way, doesn’t it depend on a lot of sit­u­a­tion­al things such as your mood, the par­tic­u­lar task you’re try­ing to com­plete and so on? In short, this requires some­thing more sit­u­at­ed and adaptive. 

So just being able to tell Siri to tone it down would be the equivalent of rearranging your Photoshop palettes. And in a next interaction Siri might carefully try some humor again to gauge your response. And if you encourage her, she might be more humorous again.

Enough about fun­ny Siri for now because it’s a bit of a sil­ly example.

Funny Siri, although she's a bit of a silly example, does illustrate another problem I am trying to wrap my head around: how does an intelligent tool for creativity communicate its internal state? Because it is probabilistic, it can't be easily mapped to a graphic information display. And so our old way of manipulating state, and more specifically of adapting a tool to our needs, becomes very different too.

It seems to be best for an intel­li­gent sys­tem to be open to sug­ges­tions from users about how to behave. Adapt­ing an intel­li­gent cre­ative tool is less like rear­rang­ing your work­space and more like coor­di­nat­ing with a coworker. 

My ide­al is for this to be done in the same mode (and so using the same con­trols) as when doing the work itself. I expect this to allow for more flu­id inter­ac­tions, going back and forth between doing the work at hand, and meta-com­mu­ni­ca­tion about how the sys­tem sup­ports the work. I think if we look at how peo­ple col­lab­o­rate this hap­pens a lot, com­mu­ni­ca­tion and meta-com­mu­ni­ca­tion going on con­tin­u­ous­ly in the same channels.

We don’t need a self-aware arti­fi­cial intel­li­gence to do this. We need to apply what com­put­er sci­en­tists call super­vised learn­ing. The basic idea is to pro­vide a sys­tem with exam­ple inputs and desired out­puts, and let it infer the nec­es­sary rules from them. If the results are unsat­is­fac­to­ry, you sim­ply con­tin­ue train­ing it until it per­forms well enough. 
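To show the mechanics in runnable form, here is a toy sketch of that idea: a one-dimensional nearest-neighbour lookup, which is far simpler than anything Wekinator actually uses, but it captures the pattern of recording example input/output pairs and adding more examples until the behaviour is satisfactory.

#include <cmath>
#include <iostream>
#include <vector>

struct Example { float input; float output; };

// Answer a new input with the output of the nearest recorded example.
float predict(const std::vector<Example>& examples, float input) {
  const Example* best = &examples[0];
  for (const auto& e : examples) {
    if (std::fabs(e.input - input) < std::fabs(best->input - input)) {
      best = &e;
    }
  }
  return best->output;
}

int main() {
  // Demonstrations provided by the user: "for this input, produce that output".
  std::vector<Example> examples = {{0.1f, 0.0f}, {0.5f, 0.5f}, {0.9f, 1.0f}};
  std::cout << predict(examples, 0.75f) << "\n"; // prints 1

  // Unhappy with the behaviour around 0.75? Add another demonstration.
  examples.push_back({0.7f, 0.6f});
  std::cout << predict(examples, 0.75f) << "\n"; // now prints 0.6
  return 0;
}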

A super fun exam­ple of this approach is the Wek­ina­tor, a piece of machine learn­ing soft­ware for cre­at­ing musi­cal instru­ments. Below is a video in which Wekinator’s cre­ator Rebec­ca Fiebrink per­forms sev­er­al demos.

Here we have an intelligent system learning from examples. A person manipulating data instead of code to get to a particular desired behaviour. But what Wekinator lacks, and what I expect will be required for this type of thing to really catch on, is for the training to happen in the same mode or medium as the performance. The technology seems to be getting there, but there are many interaction design problems remaining to be solved.