Participatory AI and ML engineering

In the first half of this year, I've presented several versions of a brief talk on participatory AI. I figured I would post an amalgam of these to the blog for future reference. (Previously, on the blog, I posted a brief lit review on the same topic; this talk builds on that.)

So, to start, the main point of this talk is that many participatory approaches to AI don't engage deeply with the specifics of the technology. One such specific is the translation work engineers do to make a problem "learnable" by a machine (Kang, 2023). From this perspective, the main questions to ask become: How does translation happen in our specific projects? Should citizens be involved in this translation work? And if so, how can this be achieved?

Before we dig into the state of participatory AI, let's begin by clarifying why we might want to enable participation in the first place. A common motivation is a lack of democratic control over AI systems. (This is particularly concerning when AI systems are used for government policy execution. These are the systems I mostly look at in my own research.) And so the response is to bring the people into the development process, and to let them co-decide matters.

In these cases, participation can be understood as an enabler of democratic agency, i.e., a way for subjects to legitimate the use of AI systems (cf. Peter, 2020 in Rubel et al., 2021). Peter distinguishes two pathways: a normative one and a democratic one. Participation can be seen as an example of the democratic pathway to legitimation. A crucial detail Peter mentions here, which is often overlooked in participatory AI literature, is that normative constraints must limit the democratic pathway to avoid arbitrariness.

So, what is the state of participatory AI research and practice? I will look at each in turn next.

As mentioned, I previously posted on the state of participatory AI research, so I won't repeat that in full here. (For the record, I reviewed Birhane et al. (2022), Bratteteig & Verne (2018), Delgado et al. (2023), Ehsan & Riedl (2020), Feffer et al. (2023), Gerdes (2022), Groves et al. (2023), Robertson et al. (2023), Sloane et al. (2020), and Zytko et al. (2022).) Elements that jump out include:

  • Superficial and unrepresentative involvement.
  • Piecemeal approaches that have minimal impact on decision-making.
  • Participants with a consultative role rather than that of active decision-makers.
  • A lack of bridge-builders between stakeholder perspectives.
  • Participation washing and exploitative community involvement.
  • Struggles with the dynamic nature of technology over time.
  • Discrepancies between the time scales for users to evaluate design ideas versus the pace at which systems are developed.
  • A demand for participation to enhance community knowledge and to actually empower communities.

Taking a step back, if I were to evaluate the state of the scientific literature on participatory AI, it strikes me that many of these issues are not new to AI. They have been present in participatory design more broadly for quite some time, and many are not necessarily specific to AI. The ones that do seem specific to AI include the issues related to AI system dynamism, the time scales of participation versus development, and knowledge gaps between various actors in participatory processes (and, relatedly, the lack of bridge-builders).

So, what about practice? Let's look at two reports that I feel are a good representation of the broader field: Framework for Meaningful Stakeholder Involvement by ECNL & SocietyInside, and Democratizing AI: Principles for Meaningful Public Participation by Data & Society.

Framework for Meaningful Stakeholder Involvement is aimed at businesses, organizations, and institutions that use AI. It focuses on human rights, ethical assessment, and compliance. It aims to be a tool for planning, delivering, and evaluating stakeholder engagement effectively, emphasizing three core elements: Shared Purpose, Trustworthy Process, and Visible Impact.

Democratizing AI frames public participation in AI development as a way to add legitimacy and accountability and to help prevent harmful impacts. It outlines risks associated with AI, including biased outcomes, opaque decision-making processes, and designers lacking real-world impact awareness. Causes for ineffective participation include unidirectional communication, socioeconomic barriers, superficial engagement, and ineffective third-party involvement. The report uses environmental law as a reference point and offers eight guidelines for meaningful public participation in AI.

Taking stock of these reports, we can say that the building blocks for the overall process are available to those seriously looking. The challenges facing participatory AI are, on the one hand, economic and political. On the other hand, they are related to the specifics of the technology at hand. For the remainder of this piece, let's dig into the latter a bit more.

Let's focus on the translation work done by engineers during model development.

For this, I build on work by Kang (2023), which focuses on the qualitative analysis of how phenomena are translated into ML-compatible forms, paying specific attention to the ontological translations that occur in making a problem learnable. Translation in ML means transforming complex qualitative phenomena into quantifiable and computable forms. Multifaceted problems are converted into a "usable quantitative reference" or "ground truth." This translation is not a mere representation of reality but a reformulation of a problem into mathematical terms, making it understandable and processable by ML algorithms. This transformation involves a significant amount of "ontological dissonance," as it mediates and often simplifies the complexity of real-world phenomena into a taxonomy or set of classes for ML prediction. The process of translating is based on assumptions and standards that may alter the nature of the ML task and introduce new social and technical problems.
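
To make this concrete, here is a minimal Python sketch of what such a translation can look like. The labels, keyword rule, and data are all hypothetical, not drawn from Kang's cases; the point is only that each step encodes value-laden assumptions.

```python
# Hypothetical sketch of "translation" in ML engineering: the
# multifaceted phenomenon "road quality" is collapsed into a binary
# taxonomy so that it becomes learnable.

# Step 1: select a taxonomy. Nuance ("cracked but passable") is lost.
TAXONOMY = {"damaged": 1, "acceptable": 0}

def to_ground_truth(report: str) -> int:
    """Translate a qualitative inspection report into a 'usable
    quantitative reference.' The keyword rule below is an assumption
    about what counts as damage, not a fact about roads."""
    return TAXONOMY["damaged"] if "pothole" in report.lower() else TAXONOMY["acceptable"]

reports = [
    "Deep pothole near the curb",
    "Surface cracked but passable",  # forced into "acceptable"
    "Minor pothole forming",
]
labels = [to_ground_truth(r) for r in reports]
print(labels)  # [1, 0, 1] -- the ontological dissonance is now invisible
```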

So what? I propose we can use the notion of translation as a frame for ML engineering. Understanding ML model engineering as translation is a potentially useful way to analyze what happens at each step of the process: what gets selected for translation, how the translation is performed, and what the resulting translation consists of.

So, if we seek to make participatory AI engage more with the technical particularities of ML, we could begin by identifying translations that have happened or might happen in our projects. We could then ask to what extent these acts of translation are value-laden. For those that are, we could think about how to communicate these translations to a lay audience. A particular challenge I expect we will face is determining the meaningful level of abstraction for citizen participation during AI development. We should also ask what the appropriate 'vehicle' for citizen participation will be. And we should seek to move beyond small-scale, one-off, often unrepresentative forms of direct participation.

Bibliography

  • Birhane, A., Isaac, W., Prabhakaran, V., Diaz, M., Elish, M. C., Gabriel, I., & Mohamed, S. (2022). Power to the People? Opportunities and Challenges for Participatory AI. Equity and Access in Algorithms, Mechanisms, and Optimization, 1–8. https://doi.org/10/grnj99
  • Bratteteig, T., & Verne, G. (2018). Does AI make PD obsolete?: Exploring challenges from artificial intelligence to participatory design. Proceedings of the 15th Participatory Design Conference: Short Papers, Situated Actions, Workshops and Tutorial — Volume 2, 1–5. https://doi.org/10/ghsn84
  • Delgado, F., Yang, S., Madaio, M., & Yang, Q. (2023). The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice. Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, 1–23. https://doi.org/10/gs8kvm
  • Ehsan, U., & Riedl, M. O. (2020). Human-Centered Explainable AI: Towards a Reflective Sociotechnical Approach. In C. Stephanidis, M. Kurosu, H. Degen, & L. Reinerman-Jones (Eds.), HCI International 2020—Late Breaking Papers: Multimodality and Intelligence (pp. 449–466). Springer International Publishing. https://doi.org/10/gskmgf
  • Feffer, M., Skirpan, M., Lipton, Z., & Heidari, H. (2023). From Preference Elicitation to Participatory ML: A Critical Survey & Guidelines for Future Research. Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 38–48. https://doi.org/10/gs8kvx
  • Gerdes, A. (2022). A participatory data-centric approach to AI Ethics by Design. Applied Artificial Intelligence, 36(1), 2009222. https://doi.org/10/gs8kt4
  • Groves, L., Peppin, A., Strait, A., & Brennan, J. (2023). Going public: The role of public participation approaches in commercial AI labs. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 1162–1173. https://doi.org/10/gs8kvs
  • Kang, E. B. (2023). Ground truth tracings (GTT): On the epistemic limits of machine learning. Big Data & Society, 10(1), 1–12. https://doi.org/10/gtfgvx
  • Peter, F. (2020). The Grounds of Political Legitimacy. Journal of the American Philosophical Association, 6(3), 372–390. https://doi.org/10/grqfhn
  • Robertson, S., Nguyen, T., Hu, C., Albiston, C., Nikzad, A., & Salehi, N. (2023). Expressiveness, Cost, and Collectivism: How the Design of Preference Languages Shapes Participation in Algorithmic Decision-Making. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1–16. https://doi.org/10/gr6q2t
  • Rubel, A., Castro, C., & Pham, A. K. (2021). Algorithms and autonomy: The ethics of automated decision systems. Cambridge University Press.
  • Sloane, M., Moss, E., Awomolo, O., & Forlano, L. (2020). Participation is not a Design Fix for Machine Learning. arXiv:2007.02423 [Cs]. http://arxiv.org/abs/2007.02423
  • Zytko, D., J. Wisniewski, P., Guha, S., P. S. Baumer, E., & Lee, M. K. (2022). Participatory Design of AI Systems: Opportunities and Challenges Across Diverse Users, Relationships, and Application Domains. Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems, 1–4. https://doi.org/10/gs8kv6

Democratizing AI Through Continuous Adaptability: The Role of DevOps

Below are the abstract and slides for my contribution to the TILTing Perspectives 2024 panel "The mutual shaping of democratic practices & AI," moderated by Merel Noorman.

Slides

Abstract

Contestability

This presentation delves into democratizing artificial intelligence (AI) systems through contestability. Contestability refers to the ability of AI systems to remain open and responsive to disputes throughout their lifecycle. It approaches AI systems as arenas where groups compete for power over designs and outcomes.

Autonomy, democratic agency, legitimation

We identify contestability as a critical system quality for respecting people's autonomy. This includes their democratic agency: their ability to legitimate policies, including policies enacted by AI systems.

For a decision to be legitimate, it must be democratically willed or rely on "normative authority." The democratic pathway should be constrained by normative bounds to avoid arbitrariness. The appeal to authority should meet the "access constraint," which ensures citizens can form beliefs about policies with a sufficient degree of agency (Peter, 2020 in Rubel et al., 2021).

Contestability is the quality that ensures mechanisms are in place for subjects to exercise their democratic agency. In the case of an appeal to normative authority, contestability mechanisms are how subjects and their representatives gain access to the information that will enable them to evaluate its justifiability. In this way, contestability satisfies the access constraint. In the case of democratic will, contestability-by-design practices are how system development is democratized. The autonomy account of legitimation adds the normative constraints that should bind this democratic pathway.

Himmelreich (2022) similarly argues that only a "thick" conception of democracy will address some of the current shortcomings of AI development. This is a pathway that not only allows for participation but also includes deliberation over justifications.

The agonistic arena

Elsewhere, we have proposed the Agonistic Arena as a metaphor for thinking about the democratization of AI systems (Alfrink et al., 2024). Contestable AI embodies the generative metaphor of the Arena. This metaphor characterizes public AI as a space where interlocutors embrace conflict as productive. Seen through the lens of the Arena, public AI problems stem from a lack of opportunities for adversarial interaction between stakeholders.

This metaphorical framing suggests prescriptions for making the norms and procedures that shape the following more contentious and open to dispute:

  1. AI system design decisions on a global level, and
  2. human-AI system output decisions on a local level (i.e., individual decision outcomes), establishing new dialogical feedback loops between stakeholders that ensure continuous monitoring.

The Arena metaphor encourages a design ethos of revisability and reversibility so that AI systems embody the agonistic ideal of contingency.

Post-deployment malleability, feedback-ladenness

Unlike physical systems, AI technologies exhibit a unique malleability post-deployment.

For example, LLM chatbots optimize their performance based on a variety of feedback sources, including interactions with users, as well as feedback collected through crowd-sourced data work.

Because of this open-endedness, democratic control and oversight in the operations phase of the system's lifecycle become a particular concern.

This is a concern because while AI systems are dynamic and feedback-laden (Gilbert et al., 2023), many of the existing oversight and control measures are static, one-off exercises that struggle to track systems as they evolve over time.

DevOps

The field of DevOps is pivotal in this context. DevOps focuses on instrumenting systems for enhanced monitoring and control in service of continuous improvement. Typically, metrics for DevOps, and for its machine-learning-specific offshoot MLOps, emphasize technical performance and business objectives.

However, there is scope to expand these to include matters of public concern. The matters-of-concern perspective shifts the focus to issues such as fairness or discrimination, viewing them as challenges that cannot be resolved through universal methods with absolute certainty. Rather, it highlights how standards are locally negotiated within specific institutional contexts, emphasizing that such standards are never guaranteed (Lampland & Star, 2009; Geiger et al., 2023).

MLOps Metrics

In the context of machine learning systems, technical metrics focus on model accuracy. For example, a financial services company might use the area under the receiver operating characteristic curve (AUC-ROC) to continuously monitor and maintain the performance of its fraud detection model in production.
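
As an illustration, here is a minimal sketch of such monitoring, assuming scikit-learn and a hypothetical alerting threshold; an actual production setup would differ.

```python
# Sketch of continuous AUC-ROC monitoring on a window of labeled
# production traffic (toy data, hypothetical threshold).
from sklearn.metrics import roc_auc_score

AUC_ALERT_THRESHOLD = 0.85  # hypothetical service-level target

def check_model_health(y_true, y_scores) -> bool:
    """Flag when AUC-ROC on recent labeled traffic drops below target."""
    auc = roc_auc_score(y_true, y_scores)
    if auc < AUC_ALERT_THRESHOLD:
        print(f"ALERT: AUC-ROC {auc:.3f} below {AUC_ALERT_THRESHOLD}")
        return False
    return True

# Recent fraud labels vs. model scores (toy data)
check_model_health([0, 0, 1, 0, 1, 1], [0.1, 0.4, 0.35, 0.2, 0.8, 0.9])
```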

Business metrics focus on cost-benefit analyses. For example, a bank might use a cost-benefit matrix to balance the potential revenue from approving a loan against the risk of default, ensuring that the overall profitability of its loan portfolio is optimized.
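
A minimal sketch of such a cost-benefit matrix, with all figures hypothetical:

```python
# Hypothetical cost-benefit matrix for loan decisions. The business
# metric is expected value per decision, not classification accuracy.
COST_BENEFIT = {
    ("approve", "repays"): 300,      # interest earned
    ("approve", "defaults"): -5000,  # principal lost
    ("reject", "repays"): -300,      # opportunity cost
    ("reject", "defaults"): 0,
}

def expected_value(p_default: float) -> dict:
    """Expected value of each action, given the model's estimated
    probability of default."""
    return {
        action: (1 - p_default) * COST_BENEFIT[(action, "repays")]
        + p_default * COST_BENEFIT[(action, "defaults")]
        for action in ("approve", "reject")
    }

print(expected_value(0.04))  # {'approve': 88.0, 'reject': -288.0}
```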

Drift

These metrics can be monitored over time to detect "drift" between a model and the world. Training sets are static. Reality is dynamic; it changes over time. Drift occurs when the nature of new input data diverges from the data a model was trained on. A change in performance metrics may be used to alert system operators, who can then investigate and decide on a course of action, e.g., retraining the model on updated data. This, in effect, creates a feedback loop between the system in use and its ongoing development.
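
One common way to operationalize drift detection (among many) is a two-sample statistical test comparing a feature's training distribution against recent production inputs. A minimal sketch, assuming SciPy and a hypothetical alerting threshold:

```python
# Input-drift check using a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=1000)    # static training set
production_feature = rng.normal(loc=0.4, scale=1.0, size=1000)  # the world has shifted

statistic, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.01:  # hypothetical alerting threshold
    print(f"Drift suspected (KS statistic {statistic:.3f}); consider retraining.")
```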

An expansion of these practices in the interest of contestability would require:

  1. setting different metrics,
  2. exposing these metrics to additional audiences, and
  3. establishing feedback loops with the processes that govern models and the systems they are embedded in.

Example 1: Camera Cars

Let's say a city government uses a camera-equipped vehicle and a computer vision model to detect potholes in public roads. In addition to accuracy and a favorable cost-benefit ratio, citizens, and road users in particular, may care about the time between a pothole's detection and its repair. Or they may care about the distribution of potholes across the city. Furthermore, when road maintenance appears to be degrading, this should be taken up with department leadership, the responsible alderperson, and council members.
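
A minimal sketch of what two such public-interest metrics could look like, with hypothetical district names and data:

```python
# Two public-interest metrics for the pothole example: median time
# between detection and repair, and the distribution of detections
# across districts (all data hypothetical).
from collections import Counter
from statistics import median

potholes = [  # (district, days between detection and repair)
    ("Noord", 4), ("Zuidoost", 21), ("Centrum", 2),
    ("Noord", 6), ("Zuidoost", 18), ("Centrum", 3),
]

print("median days to repair:", median(days for _, days in potholes))
print("detections per district:", dict(Counter(d for d, _ in potholes)))
# Published alongside accuracy and cost-benefit figures, these give
# road users and council members something concrete to contest.
```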

Example 2: EV Charging

Or, let's say the same city government uses an algorithmic system to optimize public electric vehicle (EV) charging stations for green energy use by adapting charging speeds to expected sun and wind. EV drivers may want to know how much energy has been shifted to greener time windows, and how this develops over time. Without such visibility into a system's actual goal achievement, citizens' ability to legitimate its use suffers. As I have already mentioned, democratic agency, when enacted via the appeal to authority, depends on access to the "normative facts" that underpin policies. And finally, professed system functionality must be demonstrated as well (Raji et al., 2022).
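
A minimal sketch of such a goal-achievement metric, with hypothetical session data and a hypothetical definition of "green" hours:

```python
# Share of charging energy delivered inside "green" time windows
# (toy session data; a real system would use actual grid carbon data).
sessions = [  # (hour of day, kWh delivered)
    (13, 7.5), (14, 6.0), (2, 4.0), (23, 3.5), (12, 8.0),
]
GREEN_HOURS = set(range(11, 16))  # assume the midday solar peak counts as green

green_kwh = sum(kwh for hour, kwh in sessions if hour in GREEN_HOURS)
total_kwh = sum(kwh for _, kwh in sessions)
print(f"energy shifted to green windows: {green_kwh / total_kwh:.0%}")
```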

DevOps as sociotechnical leverage point for democratizing AI

These brief examples show that the DevOps approach is a potential sociotechnical leverage point. It offers pathways for democratizing AI system design, development, and operations.

DevOps can be adapted to further contestability. It creates new channels between human and machine actors. One of DevOps's essential activities is monitoring (Smith, 2020), which presupposes fallibility, a necessary precondition for contestability. Finally, it requires and provides infrastructure for technical flexibility, so that recovery from error is low-cost and continuous improvement becomes practically feasible.

The mutual shaping of democratic practices & AI

Zooming out further, let's reflect on this panel's overall theme, picking out three elements: legitimation, representation of marginalized groups, and dealing with conflict and contestation after implementation and during use.

Contestability is a lever for demanding justifications from operators, which is a necessary input for legitimation by subjects (Henin & Le Métayer, 2022). Contestability frames different actors' stances as adversarial positions on a political field rather than "equally valid" perspectives (Scott, 2023). And finally, relations, monitoring, and revisability are all ways to give voice to and enable responsiveness to contestations (Genus & Stirling, 2018).

And again, all of these things can be furthered in the post-deployment phase by adapting the DevOps lens.

Bibliography

  • Alfrink, K., Keller, I., Kortuem, G., & Doorn, N. (2022). Contestable AI by Design: Towards a Framework. Minds and Machines, 33(4), 613–639. https://doi.org/10/gqnjcs
  • Alfrink, K., Keller, I., Yurrita Semperena, M., Bulygin, D., Kortuem, G., & Doorn, N. (2024). Envisioning Contestability Loops: Evaluating the Agonistic Arena as a Generative Metaphor for Public AI. She Ji: The Journal of Design, Economics, and Innovation, 10(1), 53–93. https://doi.org/10/gtzwft
  • Geiger, R. S., Tandon, U., Gakhokidze, A., Song, L., & Irani, L. (2023). Making Algorithms Public: Reimagining Auditing From Matters of Fact to Matters of Concern. International Journal of Communication, 18(0), Article 0.
  • Genus, A., & Stirling, A. (2018). Collingridge and the dilemma of control: Towards responsible and accountable innovation. Research Policy, 47(1), 61–69. https://doi.org/10/gcs7sn
  • Gilbert, T. K., Lambert, N., Dean, S., Zick, T., Snoswell, A., & Mehta, S. (2023). Reward Reports for Reinforcement Learning. Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 84–130. https://doi.org/10/gs9cnh
  • Henin, C., & Le Métayer, D. (2022). Beyond explainability: Justifiability and contestability of algorithmic decision systems. AI & SOCIETY, 37(4), 1397–1410. https://doi.org/10/gmg8pf
  • Himmelreich, J. (2022). Against "Democratizing AI." AI & SOCIETY. https://doi.org/10/gr95d5
  • Lampland, M., & Star, S. L. (Eds.). (2009). Standards and Their Stories: How Quantifying, Classifying, and Formalizing Practices Shape Everyday Life (1st edition). Cornell University Press.
  • Peter, F. (2020). The Grounds of Political Legitimacy. Journal of the American Philosophical Association, 6(3), 372–390. https://doi.org/10/grqfhn
  • Raji, I. D., Kumar, I. E., Horowitz, A., & Selbst, A. (2022). The Fallacy of AI Functionality. 2022 ACM Conference on Fairness, Accountability, and Transparency, 959–972. https://doi.org/10/gqfvf5
  • Rubel, A., Castro, C., & Pham, A. K. (2021). Algorithms and autonomy: The ethics of automated decision systems. Cambridge University Press.
  • Scott, D. (2023). Diversifying the Deliberative Turn: Toward an Agonistic RRI. Science, Technology, & Human Values, 48(2), 295–318. https://doi.org/10/gpk2pr
  • Smith, J. D. (2020). Operations anti-patterns, DevOps solutions. Manning Publications.
  • Treveil, M. (2020). Introducing MLOps: How to scale machine learning in the enterprise (First edition). O'Reilly.

AI pedagogy through a design lens

At a TU Delft spring symposium on AI education, Hosana and I ran a short workshop titled "AI pedagogy through a design lens." In it, we identified some of the challenges facing AI teaching, particularly outside of computer science, and explored how design pedagogy, particularly the practices of studios and making, may help to address them. The AI & Society master elective I've been developing and teaching over the past five years served as a case study. The session was punctuated by brief brainstorming using an adapted version of the SQUID gamestorming technique. Below are the slides we used.

"Geen transparantie zonder tegenspraak" (no transparency without contestation): a plea at the premiere of the transparent charging station documentary

I delivered the short plea below during the online premiere of the documentary about the transparent charging station (transparante laadpaal) on Thursday, March 18, 2021.

I was recently in touch with an international "thought leader" in the field of "tech ethics." He told me he is very grateful that the transparent charging station exists, because it is such a good example of how design can contribute to fair technology.

That is, of course, wonderful to hear. And it fits a broader trend in the industry toward making algorithms transparent and explainable. By now, legislation even mandates explainability in some cases.

In the documentary, you hear several people (myself included) explain why it is important for urban algorithms to be transparent. Thijs articulates two reasons very nicely: on the one hand, the collective interest in enabling democratic control over the development of urban algorithms; on the other, the individual interest in being able to seek redress when a system makes a decision you disagree with (for whatever reason).

And indeed, in both cases (collective control and individual redress), transparency is a precondition. I think that with this project we solved a lot of the design and engineering problems this involves. At the same time, a new question looms on the horizon: if we understand how a smart system works, and we disagree with it, what then? How do we then actually gain influence over the workings of the system?

I think we will have to shift our focus from transparency to what I call tegenspraak, or in proper English, "contestability."

Designing for contestability means thinking about the means people need in order to exercise their right to human intervention. Yes, this means providing information about the how and why of individual decisions. Transparency, in other words. But it also means setting up new channels and processes through which people can submit requests for a decision to be reviewed. We will have to think about how we assess such requests, and about how we make sure the smart system in question "learns" from the signals we pick up from society in this way.

You could say that designing for transparency is one-way traffic. Information flows from the developing party to the end user. Designing for contestability is about creating a dialogue between developers and citizens.

I say citizens, because it is not only classic end users who are affected by smart systems. All sorts of other groups are affected as well, often indirectly.

That, too, is a new design challenge. How do you design not only for the end user (in the case of the transparent charging station, the EV driver) but also for so-called indirect stakeholders: for example, residents of streets where charging stations are installed who don't drive an EV, or don't even own a car, but have just as much of a stake in how sidewalks and streets are laid out?

This broadening of our field of view means that, when designing for contestability, we can and even must go a step further than enabling redress for individual decisions.

After all, designing for contestation of individual decisions made by an already deployed system is necessarily post hoc and reactive, and limited to a single group of stakeholders.

As Thijs also more or less points out in the documentary, smart urban infrastructure affects the lives of all of us, and you could say that the design and engineering choices made during its development are intrinsically political choices as well.

That is why I think we cannot avoid organizing the very process underlying these systems in such a way that there is room for contestation. In my ideal world, the development of the next generation of smart charging stations is therefore participatory, pluriform, and inclusive, just as our democracy itself strives to be.

Exactly how we should shape such "contestable" algorithms, how designing for contestability should work, is an open question. But a number of years ago no one knew what a transparent charging station should look like either, and we managed to pull that off too.

Update (2021-03-31, 16:43): A recording of the entire event is now available as well. The plea above starts at around 25:14.

"Contestable Infrastructures" at Beyond Smart Cities Today

I'll be at Beyond Smart Cities Today for the next couple of days (18–19 September). Below is the abstract I submitted, plus a bibliography of some of the stuff that went into my thinking for this and related matters that I won't have the time to get into.

In the actually existing smart city, algorithmic systems are increasingly used for the purposes of automated decision-making, including as part of public infrastructure. Algorithmic systems raise a range of ethical concerns, many of which stem from their opacity. As a result, prescriptions for improving the accountability, trustworthiness and legitimacy of algorithmic systems are often based on a transparency ideal. The thinking goes that if the functioning and ownership of an algorithmic system are made perceivable, people can understand it and are in turn able to supervise it. However, there are limits to this approach. Algorithmic systems are complex and ever-changing socio-technical assemblages. Rendering them visible is not a straightforward design and engineering task. Furthermore, such transparency does not necessarily lead to understanding or, crucially, the ability to act on this understanding. We believe legitimate smart public infrastructure needs to include the possibility for subjects to articulate objections to procedures and outcomes. The resulting "contestable infrastructure" would create spaces that open up the possibility for expressing conflicting views on the smart city. Our project is to explore the design implications of this line of reasoning for the physical assets that citizens encounter in the city. After all, these are the perceivable elements of the larger infrastructural systems that recede from view.

  • Alkhatib, A., & Bernstein, M. (2019). Street-Level Algorithms. 1–13. https://doi.org/10.1145/3290605.3300760
  • Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media and Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645
  • Centivany, A., & Glushko, B. (2016). "Popcorn tastes good": Participatory policymaking and Reddit's "AMAgeddon." Conference on Human Factors in Computing Systems — Proceedings, 1126–1137. https://doi.org/10.1145/2858036.2858516
  • Crawford, K. (2016). Can an Algorithm be Agonistic? Ten Scenes from Life in Calculated Publics. Science Technology and Human Values, 41(1), 77–92. https://doi.org/10.1177/0162243915589635
  • DiSalvo, C. (2010). Design, Democracy and Agonistic Pluralism. Proceedings of the Design Research Society Conference, 366–371.
  • Hildebrandt, M. (2017). Privacy As Protection of the Incomputable Self: Agonistic Machine Learning. SSRN Electronic Journal, 1–33. https://doi.org/10.2139/ssrn.3081776
  • Jackson, S. J., Gillespie, T., & Payette, S. (2014). The Policy Knot: Re-integrating Policy, Practice and Design. CSCW Studies of Social Computing, 588–602. https://doi.org/10.1145/2531602.2531674
  • Jewell, M. (2018). Contesting the decision: Living in (and living with) the smart city. International Review of Law, Computers and Technology. https://doi.org/10.1080/13600869.2018.1457000
  • Lindblom, L. (2019). Consent, Contestability, and Unions. Business Ethics Quarterly. https://doi.org/10.1017/beq.2018.25
  • Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 205395171667967. https://doi.org/10.1177/2053951716679679
  • Van de Poel, I. (2016). An ethical framework for evaluating experimental technology. Science and Engineering Ethics, 22(3), 667–686. https://doi.org/10.1007/s11948-015-9724-3

"Contestable Infrastructures: Designing for Dissent in Smart Public Objects" at We Make the City 2019

Thijs Turèl of AMS Institute and I presented a version of the talk below at the Cities for Digital Rights conference on June 19 in Amsterdam during the We Make the City festival. The talk is an attempt to articulate some of the ideas we both have been developing for some time around contestability in smart public infrastructure. As always with this sort of thing, it is intended as a conversation piece, so I welcome any thoughts you may have.


The basic message of the talk is that when we start to do automated decision-making in public infrastructure using algorithmic systems, we need to design for the inevitable disagreements that may arise. Furthermore, we suggest there is an opportunity to focus on designing for such disagreements in the physical objects that people encounter in urban space as they make use of infrastructure.

We set the scene by showing a number of examples of smart public infrastructure. A cyclist crossing that adapts to weather conditions: if it's raining, cyclists get a green light more frequently. A pedestrian crossing in Tilburg where the elderly can use their mobile phone to get more time to cross. And finally, the case we are involved with ourselves: smart EV charging in the city of Amsterdam, about which more later.

Image credits: Vattenfall, Fietsfan010, De Nieuwe Draai

We identify three trends in smart public infrastructure: (1) where previously algorithms were used to inform policy, now they are employed to perform automated decision-making on an individual case basis, which raises the stakes; (2) distributed ownership of these systems as the result of public-private partnerships and other complex collaboration schemes leads to unclear responsibility; and finally (3) the increasing use of machine learning leads to opaque decision-making.

These trends, and algorithmic systems more generally, raise a number of ethical concerns. They include but are not limited to: the use of inductive correlations (for example in the case of machine learning) leads to unjustified results; lack of access to and comprehension of a system's inner workings produces opacity, which in turn leads to a lack of trust in the systems themselves and the organisations that use them; bias is introduced by a number of factors, including development team prejudices, technical flaws, bad data and unforeseen interactions with other systems; and finally the use of profiling, nudging and personalisation leads to diminished human agency. (We highly recommend the article by Mittelstadt et al. for a comprehensive overview of ethical concerns raised by algorithms.)

So for us, the question that emerges from all this is: how do we organise the supervision of smart public infrastructure in a democratic and lawful way?

There are a number of existing approaches to this question. These include legal and regulatory ones (e.g. the right to explanation in the GDPR); auditing (e.g. KPMG's "AI in Control" method, BKZ's transparantielab); procurement (e.g. open source clauses); insourcing (e.g. GOV.UK); and design and engineering (e.g. our own work on the transparent charging station).

We feel there are two important limitations to these existing approaches. The first is a focus on professionals and the second is a focus on prediction. We'll discuss each in turn.

Image credits: Cities Today

First of all, many solutions target a professional class, be it accountants, civil servants, supervisory boards, technologists, designers and so on. But we feel there is a role for the citizen as well, because the supervision of these systems is simply too important to be left to a privileged few. This role would include identifying wrongdoing and suggesting alternatives.

There is a tension here, which is that, from the perspective of the public sector, you should only ask citizens for their opinion when you have the intention and the resources to actually act on their suggestions. It can also be a challenge to identify legitimate concerns in the flood of feedback that can sometimes occur. From our point of view, though, such concerns should not be used as an excuse to not engage the public. If citizen participation is considered necessary, the focus should be on freeing up resources and setting up structures that make it feasible and effective.

The second limitation is prediction. This is best illustrated with the Collingridge dilemma: in the early phases of a new technology, when a technology and its social embedding are still malleable, there is uncertainty about the social effects of that technology. In later phases, social effects may be clear, but then often the technology has become so well entrenched in society that it is hard to overcome negative social effects. (This summary is taken from an excellent Van de Poel article on the ethics of experimental technology.)

Many solutions disregard the Collingridge dilemma and try to predict and prevent adverse effects of new systems at design-time. One example of this approach would be value-sensitive design. Our focus instead is on use-time. Considering the fact that smart public infrastructure tends to be developed on an ongoing basis, the question becomes how to make citizens a partner in this process. And even more specifically, we are interested in how this can be made part of the design of the "touchpoints" people actually encounter in the streets, as well as their backstage processes.

Why do we focus on these physical objects? Because this is where people actually meet the infrastructural systems, large parts of which recede from view. These are the places where they become aware of their presence. They are the proverbial tip of the iceberg.

Image credits: Sagar Dani

The use of automated decision-making in infrastructure reduces people's agency. For this reason, resources for agency need to be designed back into these systems. Frequently, the answer to this problem is premised on a transparency ideal. Transparency may be a prerequisite for agency, but it is not sufficient. It may help you become aware of what is going on, but it will not necessarily help you to act on that knowledge. This is why we propose a shift from transparency to contestability. (We can highly recommend Ananny and Crawford's article for more on why transparency is insufficient.)

To clarify what we mean by contestability, consider the following three examples: When you see the lights on your router blink in the middle of the night when no one in your household is using the internet, you can act on this knowledge by yanking out the device's power cord. You may never use the emergency brake in a train, but its presence does give you a sense of control. And finally, the cash register receipt provides you with a view into both the procedure and the outcome of the supermarket checkout, and it offers a resource with which you can dispute them if something appears to be wrong.

Image credits: Aangiftedoen, source unknown for remainder

None of these examples is a perfect illustration of contestability, but they hint at something more than transparency, or perhaps even something wholly separate from it. We've been investigating what their equivalents would be in the context of smart public infrastructure.

To illustrate this point further, let us come back to the smart EV charging project we mentioned earlier. In Amsterdam, public EV charging stations are becoming "smart," which in this case means they automatically adapt the speed of charging to a number of factors. These include grid capacity and the availability of solar energy. Additional factors can be added in future, one of which under consideration is to give priority to shared cars over privately owned cars. We are involved with an ongoing effort to consider how such charging stations can be redesigned so that people understand what's going on behind the scenes and can act on this understanding. The motivation for this is that if not designed carefully, the opacity of smart EV charging infrastructure may be detrimental to social acceptance of the technology. (A first outcome of these efforts is the Transparent Charging Station designed by The Incredible Machine. A follow-up project is ongoing.)

Image credits: The Incredible Machine, Kars Alfrink

We have identified a number of different ways in which people may object to smart EV charging. They are listed in the table below. These types of objections can lead us to feature requirements for making the system contestable.

Because the list is preliminary, we asked the audience if they could imagine additional objections, if those examples represented new categories, and if they would require additional features for people to be able to act on them. One particularly interesting suggestion that emerged was to give local communities control over the policies enacted by the charge points in their vicinity. That's something to further consider the implications of.

And that's where we left it. So to summarise:

  1. Algorithmic systems are becoming part of public infrastructure.
  2. Smart public infrastructure raises new ethical concerns.
  3. Many solutions to ethical concerns are premised on a transparency ideal, but do not address the issue of diminished agency.
  4. There are different categories of objections people may have to an algorithmic system's workings.
  5. Making a system contestable means creating resources for people to object, opening up a space for the exploration of meaningful alternatives to its current implementation.

ThingsCon 2018 workshop ‘Seeing Like a Bridge’

Workshop in progress with a view of Rotterdam's Willemsbrug across the Maas.

In early December of last year, Alec Shuldiner and I ran a workshop at ThingsCon 2018 in Rotterdam.

Here's the description as it was listed on the conference website:

In this workshop we will take a deep dive into some of the challenges of designing smart public infrastructure.

Smart city ideas are moving from hype into reality. The everyday things that our contemporary world runs on, such as roads, railways and canals, are not immune to this development. Basic, "hard" infrastructure is being augmented with internet-connected sensing, processing and actuating capabilities. We are involved as practitioners and researchers in one such project: the MX3D smart bridge, a pedestrian bridge 3D-printed from stainless steel and equipped with a network of sensors.

The question facing everyone involved with these developments, from citizens to professionals to policy makers, is how to reap the potential benefits of these technologies without degrading the urban fabric. For this to happen, information technology needs to become more like the city: open-ended, flexible and adaptable. And we need methods and tools for the diverse range of stakeholders to come together and collaborate on the design of truly intelligent public infrastructure.

We will explore these questions in this workshop by first walking you through the architecture of the MX3D smart bridge, offering a uniquely concrete and pragmatic view into a cutting-edge smart city project. Subsequently, we will together explore the question: what should a smart pedestrian bridge that is aware of itself and its surroundings be able to tell us? We will conclude by sharing some of the highlights from our conversation, and make note of particularly thorny questions that require further work.

The workshop's structure was quite simple. After a round of introductions, Alec introduced the MX3D bridge to the participants. For a sense of what that introduction talk was like, I recommend viewing this recording of a presentation he delivered at a recent Pakhuis de Zwijger event.

We then ran three rounds of group discussion in the style of a world café. Each discussion was guided by one question. Participants were asked to write, draw and doodle on the large sheets of paper covering each table. At the end of each round, people moved to another table while one person remained to share the preceding round's discussion with the new group.

The discussion questions were inspired by value-sensitive design. I was interested to see if people could come up with alternative uses for a sensor-equipped 3D-printed footbridge if they first considered what, in their opinion, made a city worth living in.

The questions we used were:

  1. What specific things do you like about your town? (Places, things to do, etc. Be specific.)
  2. What values underlie those things? (A value is what a person or group of people consider important in life.)
  3. How would you redesign the bridge to support those values?

At the end of the three discussion rounds, we went around to each table and shared the highlights of what was produced. We then had a bit of a back and forth about the outcomes and the workshop approach, after which we wrapped up.

We did get to some interesting values by starting from personal experience. Participants came from a variety of countries, and that was reflected in the range of examples and related values. The design ideas for the bridge remained somewhat abstract. It turned out to be quite a challenge to make the jump from values to different types of smart bridges. Despite this, we did get nice ideas, such as having the bridge report on the water quality of the canal it crosses, derived from the value of care for the environment.

The response from participants afterwards was positive. People found it thought-provoking, which was definitely the point. People were also eager to learn even more about the bridge project. It remains a thing that captures people's imagination. For that reason alone, it continues to be a very productive case to use for grounding these sorts of discussions.

'Unboxing' at Behavior Design Amsterdam #16

Below is a write-up of the talk I gave at the Behavior Design Amsterdam #16 meetup on Thursday, February 15, 2018.

'Pandora' by John William Waterhouse (1896)

I'd like to talk about the future of our design practice and what I think we should focus our attention on. It is all related to this idea of complexity and opening up black boxes. We're going to take the scenic route, though. So bear with me.

Software Design

Two years ago I spent about half a year in Singapore.

While there I worked as product strategist and designer at a startup called ARTO, an art recommendation service. It shows you a random sample of artworks, you tell it which ones you like, and it will then start recommending pieces it thinks you like. In case you were wondering: yes, swiping left and right was involved.

We had this interesting problem of ingesting art from many different sources (mostly online galleries) with metadata of wildly varying levels of quality. So, using metadata to figure out which art to show was a bit of a non-starter. It should come as no surprise, then, that we started looking into machine learning, image processing in particular.

And so I found myself working with my engineering colleagues on an art recommendation stream which was driven at least in part by machine learning. And I quickly realised we had a problem. In terms of how we worked together on this part of the product, it felt like we had taken a bunch of steps back in time. Back to a way of collaborating that was less integrated and less responsive.

That's because we have all these nice tools and techniques for designing traditional software products. But software is deterministic. Machine learning is fundamentally different in nature: it is probabilistic.

It was hard for me to take the lead in the design of this part of the product for two reasons. First of all, it was challenging to get a first-hand feel of the machine learning feature before it was implemented.

And second of all, it was hard for me to communicate or visualise the intended behaviour of the machine learning feature to the rest of the team.

So when I came back to the Netherlands, I decided to dig into this problem of design for machine learning. Turns out I opened up quite the can of worms for myself. But that's okay.

There are two reasons I care about this:

The first is that I think we need more design-led innovation in the machine learning space. At the moment it is engineering-dominated, which doesn't necessarily lead to useful outcomes. But if you want to take the lead in the design of machine learning applications, you need a firm handle on the nature of the technology.

The second reason why I think we need to educate ourselves as designers on the nature of machine learning is that we need to take responsibility for the impact the technology has on the lives of people. There is a lot of talk about ethics in the design industry at the moment, which I consider a positive sign. But I also see a reluctance to really grapple with what ethics is and what the relationship between technology and society is. We seem to want easy answers, which is understandable because we are all very busy people. But having spent some time digging into this stuff myself, I am here to tell you: there are no easy answers. That isn't a bug, it's a feature. And we should embrace it.

Machine Learning

At the end of 2016 I attended ThingsCon here in Amsterdam, and I was introduced by Ianus Keller to TU Delft PhD researcher Péter Kun. It turned out we were both interested in machine learning. So with encouragement from Ianus, we decided to put together a workshop that would enable industrial design master students to tangle with it in a hands-on manner.

About a year later now, this has grown into a thing we call Prototyping the Useless Butler. During the workshop, you use machine learning algorithms to train a model that takes inputs from a network-connected Arduino's sensors and drives that same Arduino's actuators. In effect, you can create interactive behaviour without writing a single line of code. And you get a first-hand feel for how common applications of machine learning work. Things like regression, classification and dynamic time warping.

The thing that makes this workshop tick is an open source software application called Wekinator, created by Rebecca Fiebrink. It was originally aimed at performing artists so that they could build interactive instruments without writing code. But it takes inputs from anything and sends outputs to anything, so we appropriated it towards our own ends.

You can find everything related to Useless Butler on this GitHub repo.
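
For a sense of how the pieces talk to each other: Wekinator communicates over OSC, so a few lines of Python are enough to feed it a value. This is a minimal sketch assuming Wekinator's default input address and port and the python-osc package; in the workshop itself the inputs come from an Arduino rather than Python.

```python
# Send one sensor reading to Wekinator over OSC (assumes the default
# /wek/inputs address on port 6448; adjust to your own setup).
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 6448)

def send_sensor_reading(value: float) -> None:
    """Forward a single reading as a one-dimensional Wekinator input."""
    client.send_message("/wek/inputs", [value])

send_sensor_reading(0.42)  # Wekinator maps this to trained actuator outputs
```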

The thinking behind this workshop is that for us designers to be able to think creatively about applications of machine learning, we need a granular understanding of the nature of the technology. The thing with designers is, we can't really learn about such things from books. A lot of design knowledge is tacit; it emerges from our physical engagement with the world. This is why things like sketching and prototyping are such essential parts of our way of working. And so with Useless Butler, we aim to create an environment in which you as a designer can gain tacit knowledge about the workings of machine learning.

Simply put, for a lot of us, machine learning is a black box. With Useless Butler, we open the black box a bit and let you peer inside. This should improve the odds of design-led innovation happening in the machine learning space. And it should also help with ethics. But it's definitely not enough. Knowledge about the technology isn't the only issue here. There are more black boxes to open.

Values

Which brings me back to that other black box: ethics. Like I already mentioned, there is a lot of talk in the tech industry about how we should "be more ethical." But things are often reduced to this notion that designers should do no harm. As if ethics is a problem to be fixed instead of a thing to be practiced.

So I started to talk about this to people I know in academia, and more than once this thing called value sensitive design was mentioned. It should be no surprise to anyone that scholars have been chewing on this stuff for quite a while. One of the earliest references I came across, an essay by Batya Friedman in Interactions, is from 1996! This is a lesson to all of us, I think: pay more attention to what the academics are talking about.

So, at the end of last year I dove into this topic. Our host Iskander Smit, Rob Maijers and I coordinate a grassroots community for tech workers called Tech Solidarity NL. We want to build technology that serves the needs of the many, not the few. Value sensitive design seemed like a good thing to dig into, and so we did.

I'm not going to dive into the details here. There's a report on the Tech Solidarity NL website if you're interested. But I will highlight a few things that value sensitive design asks us to consider that I think help us unpack what it means to practice ethical design.

First of all, values. Here's how a value is commonly defined in the literature:

"A value refers to what a person or group of people consider important in life."

I like it because it's common sense, right? But it also makes clear that there can never be one monolithic definition of what 'good' is in all cases. As we designers like to say: "it depends," and when it comes to values, things are no different.

"Person or group" implies there can be various stakeholders. Value sensitive design distinguishes between direct and indirect stakeholders. The former have direct contact with the technology; the latter don't, but are affected by it nonetheless. Value sensitive design means taking both into account. So this blows up the conventional notion of a single user to design for.

Various stakeholder groups can have competing values, and so to design for them means to arrive at some sort of trade-off between values. This is a crucial point. There is no such thing as a perfect or objectively best solution to ethical conundrums. Not in the design of technology, and not anywhere else.

Value sensitive design encourages you to map stakeholders and their values. These will be different for every design project. Another approach is to use lists like the one pictured here as an analytical tool to think about how a design impacts various values.

Furthermore, during your design process you might not only think about the short-term impact of a technology, but also about how it will affect things in the long run.

And similarly, you might think about the effects of a technology not only when a few people are using it, but also when it becomes wildly successful and everybody uses it.

There are tools out there that can help you think through these things. But so far much of the work in this area is happening on the academic side. I think there is an opportunity for us to create tools and case studies that will help us educate ourselves on this stuff.

There's a lot more to say on this but I'm going to stop here. The point is, as with the nature of the technologies we work with, it helps to dig deeper into the relationship between technology and society. Yes, it complicates things. But that is exactly the point.

Privileging simple and scalable solutions over those adapted to local needs is socially, economically and ecologically unsustainable. So I hope you will join me in embracing complexity.

‘Playful Design for Workplace Change Management’ at PLAYTrack conference 2017 in Aarhus

Laser defender collab at FUSE

At the end of last year I was invited to speak at the PLAYTrack conference in Aarhus about the workplace change management games made by Hubbub. It turned out to be a great opportunity to reconnect with the play research community.

I was very much impressed by the program assembled by the organisers. People came from a wide range of disciplines and, crucially, there was ample time to discuss and reflect on the materials presented. As I tweeted afterwards, this is a thing that most conference organisers get wrong.

I was particularly inspired by the work of Benjamin Mardell and Mara Krechevsky at Harvard's Project Zero. Making Learning Visible looks like a great resource for anyone who teaches. Then there was Reed Stevens from Northwestern University, whose project FUSE is one of the most solid examples of playful learning for STEAM I've seen thus far. I was also fascinated by Ciara Laverty's work at PEDAL on observing parent-child play. Miguel Sicart delivered another great provocation on the dark side of playful design. And finally I was delighted to hear about and experience for myself some of Amos Blanton's work at the LEGO Foundation. I should also call out Ben Fincham's many provocative contributions from the audience.

The abstract for my talk is below, which covers most of what I talked about. I tried to give people a good sense of:

  • what the games consisted of,
  • what we were aiming to achieve,
  • how both the fiction and the player activities supported these goals,
  • how we made learning outcomes visible to our players and clients,
  • and finally how we went about designing and developing these games.

Both projects have solid write-ups over at the Hubbub website, so I'll just point to those here: Code 4 and Ripple Effect.

In the final section of the talk I spent a bit of time reflecting on how I would approach projects like this today. After all, it has been seven years since we made Code 4, and four years since Ripple Effect. That's ages ago, and my perspective has definitely changed since we made these.

Participatory design

First of all, I would get even more serious about co-designing with players at every step. I would recruit representatives of players and invest them with real influence. In the projects we did, the primary vehicle for player influence was through playtesting. But this is necessarily limited. I also won't pretend this is at all easy to do in a commercial context.

But these games are ultimately about improving worker productivity. So how do we make it so that workers share in the real-world profits yielded by a successful culture change?

I know participatory design exists, but in my experience it is not a common approach in the industry. Why is that?

Value sensitive design

On a related note, I would get more serious about which values are supported by the system, whose interests they serve and where they come from. Early field research and workshops with the audience do surface some values, but values from customer representatives tend to dominate. Again, the commercial context we work in is a potential challenge.

I know of value sensitive design, but as with participatory design, it has yet to catch on in a big way in the industry. So again, why is that?

Disintermediation

One thing I continue to be interested in is to reduce the complexity of a game system's physical affordances (which includes its code), and to push even more of the substance of the game into those social allowances that make up the non-material aspects of the game. This allows for spontaneous renegotiation of the game by the players. This is disintermediation as a strategy. David Kanaga's take on games as toys remains hugely inspirational in this regard, as does Bernard De Koven's book The Well Played Game.

Gamefulness versus playfulness

Code 4 had more focus on satisfying the need for autonomy. Ripple Effect had more focus on competence, or in any case, it had less emphasis on autonomy. There was less room for 'play' around the core digital game. It seems to me that mastering a subjective simulation of a subject is not necessarily what a workplace game for culture change should be aiming for. So, less gameful design, more playful design.

Adaptation

Finally, the agency model does not enable us to stick around for the long haul. But workplace games might be better suited to a setup where things aren't thought of as a one-off project but as an ongoing process.

In How Buildings Learn, Stewart Brand talks about how architects should revisit buildings they've designed after they are built, to learn how people are actually using them. He also talks about how good buildings are buildings that their inhabitants can adapt to their needs. What does that look like in the context of a game for workplace culture change?


Playful Design for Workplace Change Management

Code 4 (2011, commissioned by the Tax Administration of the Netherlands) and Ripple Effect (2013, commissioned by Royal Dutch Shell) are both games for workplace change management designed and developed by Hubbub, a boutique playful design agency which operated from Utrecht, The Netherlands and Berlin, Germany between 2009 and 2015. These games are examples of how a goal-oriented serious game can be used to encourage playful appropriation of workplace infrastructure and social norms, resulting in an open-ended and creative exploration of new and innovative ways of working.

Serious game projects are usually commissioned to solve problems. Solving the problem of cultural change in a straightforward manner means viewing games as a way to persuade workers of a desired future state. They typically take videogame form, simulating the desired new way of working as determined by management. To play the game well, players need to master its system, and by extension (it is assumed) learning happens.

These games can be enjoyable experiences and an improvement on previous forms of workplace learning, but in our view they decrease the possibility space of potential workplace cultural change. They diminish worker agency, and they waste the creative and innovative potential of involving workers in the invention of an improved workplace culture.

We instead choose to view workplace games as an opportunity to increase the space of possibility. We resist the temptation to bake the desired new way of working into the game's physical and digital affordances. Instead, we leave how to play well up to the players. Since these games are team-based and collaborative, players need to negotiate their way of working around the game among themselves. In addition, because the games are distributed in time (running over a number of weeks) and are playable at player discretion during the workday, players are given license to appropriate workplace infrastructure and subvert social norms towards in-game ends.

We tried to make learning tangible in various ways. Because the games are, at their core, web applications to which players log on with individual accounts, we were able to collect data on player behaviour. To guarantee privacy, employers did not have direct access to game databases and only received anonymised reports. We took responsibility for player learning by facilitating coaching sessions in which players could safely reflect on their game experiences. Rounding out these efforts, we conducted surveys to gain insight into the player experience from a more qualitative and subjective perspective.

These games offer a model for a reasonably democratic and ethical way of doing game-based workplace change management. However, we would like to see efforts that further democratise their design and development, involving workers at every step. We also worry about how games can be used to create the illusion of worker influence while at the same time software is deployed throughout the workplace to limit their agency.

Our examples may be inspiring, but because of these developments we feel we can't continue this type of work without seriously reconsidering our current processes, technology stacks and business practices, and ultimately whether we should be making games at all.

Prototyping the Useless Butler: Machine Learning for IoT Designers

ThingsCon Amsterdam 2017, photo by nunocruzstreet.com

At ThingsCon Amsterdam 2017, Péter and I ran a second iteration of our machine learning workshop. We improved on our first attempt at TU Delft in a number of ways.

  • We prepared example code for communicating with Wekinator from a wifi-connected Arduino MKR1000 over OSC. (A minimal sketch of what that communication can look like follows this list.)
  • We created a predefined breadboard setup.
  • We developed three exercises, one for each type of Wekinator output: regression, classification and dynamic time warping.
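
To give a flavour of what this communication looks like, here is a minimal sketch of the regression exercise. It is an illustration of the idea, not the workshop's actual code (that lives on GitHub, see below), and it assumes Wekinator's documented defaults (inputs received on port 6448 at /wek/inputs, outputs sent to port 12000 at /wek/outputs), CNMAT's OSC library for Arduino and the WiFi101 library for the MKR1000. The network credentials, host IP and pin numbers are placeholders.

```cpp
// Minimal sketch: two LDRs in, Wekinator regression outputs out to an RGB LED.
// Assumes CNMAT's OSC library, WiFi101 and Wekinator's default ports/addresses.
#include <WiFi101.h>
#include <WiFiUdp.h>
#include <OSCMessage.h>

const char ssid[] = "your-network";    // placeholder
const char pass[] = "your-password";   // placeholder

WiFiUDP Udp;
const IPAddress host(192, 168, 1, 10); // machine running Wekinator (placeholder)
const unsigned int wekInPort = 6448;   // Wekinator listens for /wek/inputs here
const unsigned int wekOutPort = 12000; // Wekinator sends /wek/outputs here; point
                                       // its output host at the Arduino's IP

const int redPin = 2, greenPin = 3, bluePin = 4; // example PWM pins

void setup() {
  pinMode(redPin, OUTPUT);
  pinMode(greenPin, OUTPUT);
  pinMode(bluePin, OUTPUT);
  while (WiFi.begin(ssid, pass) != WL_CONNECTED) {
    delay(500); // keep trying to join the network
  }
  Udp.begin(wekOutPort); // listen where Wekinator sends its outputs
}

// Regression: three continuous outputs (0..1) drive red, green and blue.
void onWekOutputs(OSCMessage &msg) {
  analogWrite(redPin,   (int)(msg.getFloat(0) * 255));
  analogWrite(greenPin, (int)(msg.getFloat(1) * 255));
  analogWrite(bluePin,  (int)(msg.getFloat(2) * 255));
}

void loop() {
  // Send the two LDR readings, normalised to 0..1, as Wekinator inputs.
  OSCMessage in("/wek/inputs");
  in.add(analogRead(A0) / 1023.0f);
  in.add(analogRead(A1) / 1023.0f);
  Udp.beginPacket(host, wekInPort);
  in.send(Udp);
  Udp.endPacket();
  in.empty();

  // Read any incoming packet and dispatch it to the output handler.
  OSCMessage out;
  int size = Udp.parsePacket();
  if (size > 0) {
    while (size--) out.fill(Udp.read());
    if (!out.hasError()) out.dispatch("/wek/outputs", onWekOutputs);
  }
  delay(50);
}
```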

In contrast to the first version, we had two hours to run through the whole thing instead of a day… So we had to cut some corners and doubled down on walking participants through a number of exercises, so that they would come out of it with some readily applicable skills.

We dubbed the workshop ‘prototyping the useless butler’, with thanks to Philip van Allen for the suggestion to frame the exercises around building something non-productive so that the focus was shifted to play and exploration.

All of the code, the circuit diagram and slides are over on GitHub. But I'll summarise things here.

  1. We spent a very short amount of time introducing machine learning. We used Google's Teachable Machine as an example and contrasted regular programming with using machine learning algorithms to train models. The point was to provide folks with just enough conceptual scaffolding so that the rest of the workshop would make sense.
  2. We then introduced our ‘toolchain’, which consists of Wekinator, the Arduino MKR1000 module and the OSC protocol. The aim of this toolchain is to allow designers who work in the IoT space to get a feel for the material properties of machine learning through hands-on tinkering. We tried to create a toolchain with as few moving parts as possible, because each additional component would introduce another point of failure which might require debugging. The toolchain enables designers to use machine learning to rapidly prototype interactive behaviour with minimal or no programming. It can also be used to prototype products that expose interactive machine learning features to end users. (For a speculative example of one such product, see Bjørn Karmann's Objectifier.)
  3. Participants were then asked to set up all the required parts on their own workstation. A list can be found on the Useless Butler GitHub page.
  4. We then proceeded to build the circuit. We provided all the components and showed a Fritzing diagram to help people along. The basic idea of this circuit, the eponymous useless butler, was to have a sufficiently rich set of inputs and outputs with which to play, one that would suit all three types of Wekinator output. So we settled on a pair of photoresistors or LDRs as inputs and an RGB LED as output.
  5. With the prerequisites installed and the circuit built, we were ready to walk through the examples. For regression we mapped the continuous stream of readings from the two LDRs to three outputs, one each for the red, green and blue of the LED. For classification we put the state of both LDRs into one of four categories, each switching the RGB LED to a specific colour (cyan, magenta, yellow or white). And finally, for dynamic time warping, we asked Wekinator to recognise one of three gestures and switch the RGB LED to one of three states (red, green or off). (A sketch of how the classification handler might look follows this list.)
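
For the classification exercise, only the output handler changes shape: Wekinator sends a single value identifying the winning class, which we can map to one of the four colours. A hedged sketch, assuming class labels arrive as floats 1 through 4 and reusing the pins from the sketch above; setColor is a small hypothetical helper, not something from the workshop repo:

```cpp
// Classification: one output carrying the class number (1..4) selects a colour.
void onWekOutputs(OSCMessage &msg) {
  int c = (int)msg.getFloat(0); // class labels arrive as floats
  if      (c == 1) setColor(0,   255, 255); // cyan
  else if (c == 2) setColor(255, 0,   255); // magenta
  else if (c == 3) setColor(255, 255, 0);   // yellow
  else if (c == 4) setColor(255, 255, 255); // white
}

// Hypothetical helper to set all three LED channels at once.
void setColor(int r, int g, int b) {
  analogWrite(redPin, r);
  analogWrite(greenPin, g);
  analogWrite(bluePin, b);
}
```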

When we reflected on the workshop afterwards, we agreed we now have a proven concept. Participants were able to get the toolchain up and running and could play around with iteratively training and evaluating their model until it behaved as intended.

However, there is still quite a bit of room for improvement. On a practical note, quite a bit of time was taken up by building the circuit, which isn't the point of the workshop. One way of dealing with this is to bring pre-built circuits to the workshop. Doing so would enable us to get to the machine learning quicker and would open up time and space to also engage with the participants about the point of it all.

We're keen on bringing this workshop to more settings in future. If we do, I'm sure we'll find the opportunity to improve on things once more, and I will report back here.

Many thanks to Iskander and the rest of the ThingsCon team for inviting us to the conference.

ThingsCon Amsterdam 2017, photo by nunocruzstreet.com