PhD update – January 2019

Thought I’d post a quick update on my PhD. Almost five months have passed since my previous post. I’ve been developing my plan further; you’ll find an updated description below. I’ve also put together my very first conference paper, co-authored with my supervisor Gerd Kortuem. It’s a case study of the MX3D smart bridge for Designing Interactive Systems 2019. We’ll see if it gets accepted. But in any case, writing something has been hugely educational. And once I finally figured out what the hell I was doing, it was sort of fun as well. Still kind of a trip to be paid to do this kind of work. Looking ahead, I am setting goals for this year and the nearer term as well. It’s all very rough still, but it will likely involve research through design as a method and maybe object-oriented ontology as a theory. All of which will serve to operationalise and evaluate the usefulness of the “contestability” concept in the context of smart city infrastructure. To be continued—and I welcome all your thoughts!


Designing Smart City Infrastructure for Contestability

The use of information technology in cities increasingly subjects citizens to automated data collection, algorithmic decision making and remote control of physical space. Citizens tend to find these systems and their outcomes hard to understand and predict [1]. Moreover, the opacity of smart urban systems precludes full citizenship and obstructs people’s ‘right to the city’ [2].

A commonly proposed solution is to improve citizens’ understanding of systems by making them more open and transparent [3]. For example, the GDPR prescribes people’s right to an explanation of automated decisions they have been subjected to. For another example, the city of Amsterdam offers a publicly accessible register of urban sensors, and is committed to opening up all the data it collects.

However, it is not clear that openness and transparency in and of themselves will yield the desired improvements in the understanding and governing of smart city infrastructures [4]. We would like to suggest that for a system to be perceived as accountable, people must be able to contest its workings—from the data it collects, to the decisions it makes, all the way through to how those decisions are acted on in the world.

The leading research question for this PhD therefore is how to design smart city infrastructure—urban systems augmented with internet-connected sensing, processing and actuating capabilities—for contestability [5]: the extent to which a system supports the ability of those subjected to it to oppose its workings as wrong or mistaken.

References

  1. Burrell, Jenna. “How the machine ‘thinks’: Understanding opacity in machine learning algorithms.” Big Data & Society 3.1 (2016): 2053951715622512.
  2. Kitchin, Rob, Paolo Cardullo, and Cesare Di Feliciantonio. “Citizenship, Justice and the Right to the Smart City.” (2018).
  3. Abdul, Ashraf, et al. “Trends and trajectories for explainable, accountable and intelligible systems: An HCI research agenda.” Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 2018.
  4. Ananny, Mike, and Kate Crawford. “Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability.” New Media & Society 20.3 (2018): 973–989.
  5. Hirsch, Tad, et al. “Designing contestability: Interaction design, machine learning, and mental health.” Proceedings of the 2017 Conference on Designing Interactive Systems. ACM, 2017.

Books I’ve read in 2018

Goodreads tells me I’ve read 48 books in 2018. I set myself the goal of 36, so it looks like I beat it handily. But included in that count are quite a few roleplaying game books and comics. If I discard those, I’m left with 28 titles. Still a decent amount, but nothing particularly remarkable. Below are a few lists and some notes to go with them.

Most of the non-fiction sits somewhere at the intersection of design, technology and Left politics. A lot of this reading was driven by my desire to develop some kind of mental framework for the work we were doing with Tech Solidarity NL. More recently—since I started my PhD—I’ve mostly been reading textbooks on research methodology. Hidden from this list are the academic papers I’ve started consuming as part of this new job. I should figure out a way of sharing some of that here or elsewhere as well.

I took a break from technology and indulged in a deep dive into the history of the Thirty Years’ War, with a massive non-fiction treatment as well as a classic picaresque set in the same time period. While reading these I was transitioning into my new role as a father of twin boys. Somewhat related was a brief history of the Netherlands, which I’ve started recommending to foreigners who are struggling to understand our idiosyncratic little nation and want to go beyond superficialities.

Then there’s the fiction, which in the beginning of the year consisted of highbrow weird and historical novels but then ventured into classic fantasy and (utopian) sci-fi territory. Again, mostly because of a justifiable desire for some escapism in the sleep-deprived evenings and nights.

Having mentioned the arrival of our boys a few times, it should come as no surprise that I also read a couple of parenting books. These were more than enough for me; to be honest, I think parenting is a thing best learned through practice. Especially if you’re raising two babies at once.

So that’s it. I’ve set myself the modest goal of 24 books for this year because I’m quite sure most of my reading will be papers and such. Here’s to a year of what I expect will be many more late-night and early-morning reading sessions of escapist weird fiction.

Previous years: 2017, 2016, 2015, 2011, 2009.

Starting a PhD

Today is the first official work day of my new doctoral researcher position at Delft University of Technology. After more than two years of laying the groundwork, I’m starting out on a new challenge.

I remember sitting outside a Jewel coffee bar in Singapore1 and going over the various options for whatever would be next after shutting down Hubbub. I knew I wanted to delve into the impact of machine learning and data science on interaction design. And largely through a process of elimination I felt the best place for me to do so would be inside of academia.

Back in the Netherlands, with help from Ianus Keller, I started making inroads at TU Delft, my first choice for this kind of work. I had visited it on and off over the years, coaching students and doing guest lectures. I’d felt at home right away.

There were quite a few twists and turns along the way, but now here we are. Starting this month, I am a doctoral candidate at Delft University of Technology’s faculty of Industrial Design Engineering.

My research is provisionally titled ‘Intelligibility and Transparency of Smart Public Infrastructures: A Design Oriented Approach’. Its main object of study is the MX3D smart bridge. My supervisors are Gerd Kortuem and Neelke Doorn. And it’s all part of the NWO-funded project ‘BRIdging Data in the built Environment (BRIDE)’.

Below is a first rough abstract of the research. But in the months to come this is likely to change substantially as I start hammering out a proper research plan. I plan to post the occasional update on my work here, so if you’re interested your best bet is probably to do the old RSS thing. There’s social media too, of course. And I might set up a newsletter at some point. We’ll see.

If any of this resonates, do get in touch. I’d love to start a conversation with as many people as possible about this stuff.

Intelligibility and Transparency of Smart Public Infrastructures: A Design Oriented Approach

This PhD will explore how designers, technologists, and citizens can utilize rapid urban manufacturing and IoT technologies for designing urban space that expresses its intelligence from the intersection of people, places, activities and technology, not merely from the presence of cutting-edge technology. The key question is how smart public infrastructure, i.e. data-driven and algorithm-rich public infrastructure, can be understood by laypeople.

The design-oriented research will utilize a ‘research through design’ approach to develop a digital experience around the bridge and the surrounding urban space. During this extended design and making process, the PhD student will conduct empirical research to investigate design choices and their implications for (1) new forms of participatory data-informed design processes, (2) the technology-mediated experience of urban space, (3) the emerging relationship between residents and “their” bridge, and (4) new forms of data-informed, citizen-led governance of public space.

  1. My Foursquare history and 750 Words archive tell me this was on Saturday, January 16, 2016.

An Introduction to Value Sensitive Design

Phnom Bakheng

At a recent Tech Solidarity NL meetup we dove into Value Sensitive Design. This approach had been on my radar for a while, so when we concluded that it would be useful for our community to talk about how to practice ethical design and development of technology, I figured we should check it out.

Value Sensitive Design has been around for ages. The earliest article I came across is by Batya Friedman in a 1996 edition of Interactions magazine. Ironically, or tragically, I must say I have only heard about the approach from academics and design theory nerds. In industry at large, Value Sensitive Design appears to be—to me at least—basically unknown. (A recent exception would be this interesting marriage of design sprints with Value Sensitive Design by Cennydd Bowles.)

For the meetup, I read a handful of papers and cobbled together a deck which attempts to summarise this ‘framework’—the term favoured by its main proponents. I went through it and then we had a spirited discussion of how its ideas apply to our daily practice. A report of all of that can be found over at the Tech Solidarity NL website.

Below, I have attempted to pull together the most salient points from what is a rather dense twenty-plus-slide deck. I hope it is of some use to those professional designers and developers who are looking for better ways of building technology that serves the interests of the many, not the few.

What follows is mostly adapted from the chapter “Value Sensitive Design and Information Systems” in Human–computer interaction in management information systems: Foundations. All quotes are from there unless otherwise noted.

Background

The departure point is the observation that “there is a need for an overarching theoretical and methodological framework with which to handle the value dimensions of design work.” In other words, something that accounts for what we already know about how to deal with values in design work in terms of theory and concepts, as well as methods and techniques.

This is of course not a new concern. For example, famed cyberneticist Norbert Wiener argued that technology could help make us better human beings, and create a more just society. But for it to do so, he argued, we have to take control of the technology.

We have to reject the “worshiping [of] the new gadgets which are our own creation as if they were our masters.” (Wiener 1953)

We can find many more similar arguments throughout the history of information technology. Recently, such concerns have flared up in industry as well as society at large. (Not always for the right reasons in my opinion, but that is something we will set aside for now.)

To address these concerns, Value Sensitive Design was developed. It is “a theoretically grounded approach to the design of technology that accounts for human values in a principled and comprehensive manner throughout the design process.” It has been applied successfully for over 20 years.

Defining Values

But what is a value? In the literature it is defined as “what a person or group of people consider important in life.” I like this definition because it is easy to grasp but also underlines the slippery nature of values. Some things to keep in mind when talking about values:

  • In a narrow sense, the word “value” refers simply to the economic worth of an object. This is not the meaning employed by Value Sensitive Design.
  • Values should not be conflated with facts (the “fact/value distinction”), especially insofar as facts do not logically entail value.
  • “Is” does not imply “ought” (the naturalistic fallacy).
  • Values cannot be motivated only by an empirical account of the external world, but depend substantively on the interests and desires of human beings within a cultural milieu. (So contrary to what some right-wingers like to say: “Facts do care about your feelings.”)

Investigations

Let’s dig into the way this all works. “Value Sensitive Design is an iterative methodology that integrates conceptual, empirical, and technical investigations.” So it distinguishes between three types of activities (“investigations”) and it prescribes cycling through these activities multiple times. Below are listed questions and notes that are relevant to each type of investigation. But in brief, this is how I understand them:

  1. Defining the specific values at play in a project;
  2. Observing, measuring, and documenting people’s behaviour and the context of use;
  3. Analysing the ways in which a particular technology supports or hinders particular values.
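As a rough mental model of that loop—my own toy illustration, not anything the authors prescribe, with all names and example values invented—the cycling of the three investigations might be sketched like this:

```python
# Toy sketch of Value Sensitive Design's iterative cycle. Everything here
# (function names, example values, observations) is my own invention for
# illustration; the framework prescribes no such code or format.

def conceptual(project):
    # Investigation 1: define the specific values at play in the project.
    project["values"] = ["privacy", "trust"]

def empirical(project):
    # Investigation 2: observe and document behaviour and context of use.
    project["observations"] = ["people share the device", "use is public"]

def technical(project):
    # Investigation 3: analyse how the technology supports or hinders
    # the values identified conceptually.
    project["analysis"] = [f"does the design support {v}?" for v in project["values"]]

def value_sensitive_design(project, iterations=3):
    # The methodology cycles through all three investigations repeatedly,
    # letting each round refine the others.
    for _ in range(iterations):
        conceptual(project)
        empirical(project)
        technical(project)
    return project

result = value_sensitive_design({})
```

The point of the sketch is only the shape: none of the three activities is a one-off phase; each pass can revise what the previous passes produced.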

Conceptual Investigations

  • Who are the direct and indirect stakeholders affected by the design at hand?
  • How are both classes of stakeholders affected?
  • What values are implicated?
  • How should we engage in trade-offs among competing values in the design, implementation, and use of information systems (e.g., autonomy vs. security, or anonymity vs. trust)?
  • Should moral values (e.g., a right to privacy) have greater weight than, or even trump, non-moral values (e.g., aesthetic preferences)?

Empirical Investigations

  • How do stakeholders apprehend individual values in the interactive context?
  • How do they prioritise competing values in design trade-offs?
  • How do they prioritise individual values and usability considerations?
  • Are there differences between espoused practice (what people say) and actual practice (what people do)?

And, specifically focusing on organisations:

  • What are organisations’ motivations, methods of training and dissemination, reward structures, and economic incentives?

Technical Investigations

Not a list of questions here, but some notes:

Value Sensitive Design takes the position that technologies in general, and information and computer technologies in particular, have properties that make them more or less suitable for certain activities. A given technology more readily supports certain values while rendering other activities and values more difficult to realise.

Technical investigations involve the proactive design of systems to support values identified in the conceptual investigation.

Technical investigations focus on the technology itself. Empirical investigations focus on the individuals, groups, or larger social systems that configure, use, or are otherwise affected by the technology.

Significance

Below is a list of things that make Value Sensitive Design different from other approaches, particularly ones that preceded it such as Computer-Supported Cooperative Work and Participatory Design.

  1. Value Sensitive Design seeks to be proactive.
  2. Value Sensitive Design enlarges the arena in which values arise to include not only the workplace.
  3. Value Sensitive Design contributes a unique methodology that employs conceptual, empirical, and technical investigations, applied iteratively and integratively.
  4. Value Sensitive Design enlarges the scope of human values beyond those of cooperation (CSCW) and participation and democracy (Participatory Design) to include all values, especially those with moral import.
  5. Value Sensitive Design distinguishes between usability and human values with ethical import.
  6. Value Sensitive Design identifies and takes seriously two classes of stakeholders: direct and indirect.
  7. Value Sensitive Design is an interactional theory.
  8. Value Sensitive Design builds from the psychological proposition that certain values are universally held, although how such values play out in a particular culture at a particular point in time can vary considerably.

[ad 4] “By moral, we refer to issues that pertain to fairness, justice, human welfare and virtue, […] Value Sensitive Design also accounts for conventions (e.g., standardisation of protocols) and personal values”

[ad 5] “Usability refers to characteristics of a system that make it work in a functional sense, […] not all highly usable systems support ethical values”

[ad 6] “Often, indirect stakeholders are ignored in the design process.”

[ad 7] “values are viewed neither as inscribed into technology (an endogenous theory), nor as simply transmitted by social forces (an exogenous theory). […] the interactional position holds that while the features or properties that people design into technologies more readily support certain values and hinder others, the technology’s actual use depends on the goals of the people interacting with it. […] through human interaction, technology itself changes over time.”

[ad 8] “the more concretely (act-based) one conceptualises a value, the more one will be led to recognising cultural variation; conversely, the more abstractly one conceptualises a value, the more one will be led to recognising universals”

How-To

Value Sensitive Design doesn’t prescribe a particular process, which is fine by me, because I believe strongly in tailoring your process to the particular project at hand. Part of being a thoughtful designer is designing a project’s process as well. However, some guidance is offered for how to proceed in most cases. Here’s a list, plus some notes.

  1. Start with a value, technology, or context of use.
  2. Identify direct and indirect stakeholders.
  3. Identify benefits and harms for each stakeholder group.
  4. Map benefits and harms onto corresponding values.
  5. Conduct a conceptual investigation of key values.
  6. Identify potential value conflicts.
  7. Integrate value considerations into one’s organisational structure.

[ad 1] “We suggest starting with the aspect that is most central to your work and interests.”

[ad 2] “direct stakeholders are those individuals who interact directly with the technology or with the technology’s output. Indirect stakeholders are those individuals who are also impacted by the system, though they never interact directly with it. […] Within each of these two overarching categories of stakeholders, there may be several subgroups. […] A single individual may be a member of more than one stakeholder group or subgroup. […] An organisational power structure is often orthogonal to the distinction between direct and indirect stakeholders.”

[ad 3] “one rule of thumb in the conceptual investigation is to give priority to indirect stakeholders who are strongly affected, or to large groups that are somewhat affected […] Attend to issues of technical, cognitive, and physical competency. […] personas have a tendency to lead to stereotypes because they require a list of “socially coherent” attributes to be associated with the “imagined individual.” […] we have deviated from the typical use of personas that maps a single persona onto a single user group, to allow for a single persona to map onto multiple stakeholder groups”

[ad 4] “In some cases, the corresponding values will be obvious, but not always.”

[ad 5] “the philosophical ontological literature can help provide criteria for what a value is, and thereby how to assess it empirically.”

[ad 6] “value conflicts should usually not be conceived of as “either/or” situations, but as constraints on the design space.”

[ad 7] “In the real world, of course, human values (especially those with ethical import) may collide with economic objectives, power, and other factors. However, even in such situations, Value Sensitive Design should be able to make positive contributions, by showing alternate designs that better support enduring human values.”
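To make steps 2 through 4 a bit more concrete, here is a minimal sketch of how one might record such a stakeholder analysis as plain data. This is my own hypothetical illustration—the stakeholder groups, benefits, harms and value mappings are all invented, and the framework itself prescribes no particular format:

```python
# Hypothetical bookkeeping for steps 2-4 of the how-to: stakeholders,
# their benefits and harms, and the values those map onto. All of the
# names and mappings below are invented for illustration only.

stakeholders = {
    # Direct stakeholders interact with the system or its output.
    "pedestrians on the bridge": {"kind": "direct"},
    # Indirect stakeholders are affected without ever using it.
    "residents living nearby": {"kind": "indirect"},
}

# Step 3: identify benefits and harms for each stakeholder group.
benefits_and_harms = {
    "pedestrians on the bridge": {
        "benefits": ["safer crossing"],
        "harms": ["movement is recorded"],
    },
    "residents living nearby": {
        "benefits": ["better maintained public space"],
        "harms": ["no say in what is sensed"],
    },
}

# Step 4: map each benefit or harm onto a corresponding value.
value_map = {
    "safer crossing": "human welfare",
    "movement is recorded": "privacy",
    "better maintained public space": "human welfare",
    "no say in what is sensed": "autonomy",
}

def implicated_values(group):
    """List the values implicated for one stakeholder group."""
    entry = benefits_and_harms[group]
    return sorted({value_map[item] for item in entry["benefits"] + entry["harms"]})
```

Even this crude table already surfaces the kind of thing the method is after: the indirect stakeholders ("residents living nearby") turn out to have values at stake—autonomy—that never show up for the direct users.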

Considering Values

Human values with ethical import often implicated in system design

This table is a useful heuristic tool for values that might be considered. The authors note that it is not intended as a complete list of human values that might be implicated. Another, more elaborate tool of a similar sort are the Envisioning Cards.

For the ethics nerds, it may be interesting to note that most of the values in this table hinge on the deontological and consequentialist moral orientations. In addition, the authors have chosen several other values related to system design.

Interviewing Stakeholders

When doing the empirical investigations you’ll probably rely on stakeholder interviews quite heavily. Stakeholder interviews shouldn’t be a new thing to any design professional worth their salt. But the authors do offer some practical pointers to keep in mind.

First of all, keep the interview somewhat open-ended. This means conducting a semi-structured interview. This will allow you to ask the things you want to know, but also creates the opportunity for new and unexpected insights to emerge.

Laddering—repeatedly asking the question “Why?”—can get you quite far.

The most important thing, before interviewing stakeholders, is to have a good understanding of the subject at hand. Demarcate it using criteria that can be explained to outsiders. Use descriptions of issues or tasks for participants to engage in, so that the subject of the investigation becomes more concrete.

Technical Investigations

Two things I find interesting here. First of all, we are encouraged to map the relationship between design trade-offs, value conflicts and stakeholder groups. The goal of this exercise is to be able to see how stakeholder groups are affected in different ways.

The second useful suggestion for technical investigations is to build flexibility into a product or service’s technical infrastructure. The reason for this is that over time, new values and value conflicts can emerge. As designers we are not always around anymore once a system is deployed, so it is good practice to enable the stakeholders to adapt our design to their evolving needs. (I was very much reminded of the approach advocated by Stewart Brand in How Buildings Learn.)
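A trivial way to picture the first of these mappings—one design trade-off, each of its resolutions, and how each resolution lands differently on different stakeholder groups. Again, this is my own hypothetical illustration; the trade-off, groups and effects are invented:

```python
# Hypothetical mapping of a single design trade-off onto stakeholder
# groups, showing how each resolution affects each group differently.
# The trade-off, group names and effects are invented for illustration.

tradeoff = "retain raw sensor data vs. aggregate immediately"

effects = {
    "retain raw data": {
        "city engineers": "richer diagnostics (supports accuracy)",
        "pedestrians": "identifiable traces (hinders privacy)",
    },
    "aggregate immediately": {
        "city engineers": "coarser diagnostics (hinders accuracy)",
        "pedestrians": "no individual traces (supports privacy)",
    },
}

def affected_groups(option):
    """Which stakeholder groups does a given resolution touch?"""
    return sorted(effects[option])
```

Laid out like this, the point of the exercise is easy to see: neither resolution is neutral, and the group that bears the cost of one option is not the group that reaps its benefit.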

Conclusion

When discussing matters of ethics in design with peers I often notice a reluctance to widen the scope of our practice to include these issues. Frequently, folks argue that since it is impossible to foresee all the potential consequences of design choices, we can’t possibly be held accountable for all the terrible things that can happen as a result of a new technology being introduced into society.

I think that’s a misunderstanding of what ethical design is about. We may not always be directly responsible for the consequences of our designs (both good and bad). But we are responsible for what we choose to make part of our concerns as we practice design. This should include the values considered important by the people impacted by our designs.

In the 1996 article mentioned at the start of this post, Friedman concludes as follows:

“As with the traditional criteria of reliability, efficiency, and correctness, we do not require perfection in value-sensitive design, but a commitment. And progress.” (Friedman 1996)

I think that is an apt place to end here as well.

References

  • Friedman, Batya. “Value-sensitive design.” Interactions 3.6 (1996): 16–23.
  • Friedman, Batya, Peter Kahn, and Alan Borning. “Value sensitive design: Theory and methods.” University of Washington technical report (2002): 02–12.
  • Le Dantec, Christopher A., Erika Shehan Poole, and Susan P. Wyche. “Values as lived experience: evolving value sensitive design in support of value discovery.” Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2009.
  • Borning, Alan, and Michael Muller. “Next steps for value sensitive design.” Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2012.
  • Friedman, B., P. Kahn, and A. Borning. “Value sensitive design and information systems.” Human–computer interaction in management information systems: Foundations (2006): 348–372.

Books I’ve read in 2017

Returning to what is something of an annual tradition, these are the books I’ve read in 2017. I set myself the goal of getting to 36 and managed 38 in the end. They’re listed below with some commentary on particularly memorable or otherwise noteworthy reads. To make things a bit more user friendly I’ve gone with four broad buckets, although as you’ll see, within each the picks range across genres and subjects.

Fiction

I always have one piece of fiction or narrative non-fiction going. I have a long-standing ‘project’ of reading cult classics. I can’t settle on a top pick for the first category, so it’s going to have to be a tie between Lowry’s alcohol-drenched tale of lost love in pre-WWII Mexico, and Salter’s unmatched lyrical prose treatment of a young couple’s liaisons as imagined by a lecherous recluse in post-WWII France.

When I feel like something lighter, I tend to seek out sci-fi written from before I was born. (Contemporary sci-fi more often than not disappoints me with its lack of imagination, or worse, nostalgia for futures past. I’m looking at you, Cline.) My top pick here would be the Strugatsky brothers, who blew me away with their weird tale of a world forever changed by the inexplicable visit of something truly alien.

I’ve also continued to seek out works by women, although I’ve been less strict with myself in this department than in previous years. Here I’m ashamed to admit it took me this long to finally read anything by Woolf, because Mrs Dalloway is every bit as good as they say it is. I recommend seeking out the annotated Penguin edition for additional insights into the many things she references.

I’ve also sometimes picked up a newer book because it popped up on my radar and I was just really excited about reading it. Most notably Dolan’s retelling of the Iliad in all its glorious, sad and gory detail, updated for today’s sensibilities.

Literary non-fiction

Each time I read a narrative treatment of history or current affairs, I feel like I should be doing more of it. All of these are recommended, but Kapuściński towers over all with his heart-wrenching first-person account of the Iranian revolution.

Non-fiction

A few books on design and technology here, although most of my ‘professional’ reading was confined to academic papers this year. I find those to be a more effective way of getting a handle on a particular subject. Books published on my métier are notoriously fluffy. I’ll point out Löwgren for a tough but rewarding read on how to do interaction design in a non-dogmatic but reflective way.

I got into leftist politics quite heavily this year and tried to educate myself a bit on contemporary anti-capitalist thinking. Fisher’s book is a most interesting and also amusing diagnosis of the current political and economic world system through a cultural lens. It’s a shame he’s no longer with us; I wonder what he would have made of recent events.

Game books

I decided to work my way through a bunch of roleplaying game books, all ‘powered by the apocalypse’ – a family of games which I have been aware of for quite a while but haven’t had the opportunity to play myself. I like reading these because I find them oddly inspirational for professional purposes. But I will point to the original Apocalypse World as the one must-read, as Baker remains one of the designers I am absolutely in awe of for the ways in which he manages to combine system and fiction in truly inventive ways.

  • The Perilous Wilds, Jason Lutes
  • Urban Shadows: Political Urban Fantasy Powered by the Apocalypse, Andrew Medeiros
  • Dungeon World, Sage LaTorra
  • Apocalypse World, D. Vincent Baker

Poetry

I don’t usually read poetry, for reasons similar to why I basically stopped reading comics earlier: I can’t seem to find a good way of discovering worthwhile things to read. The collection below was a gift, and a delightful one.

As always, I welcome suggestions for what to read next. I’m shooting for 36 again this year and plan to proceed roughly as I’ve been doing lately—just meander from book to book with a bias towards works that are non-anglo, at least as old as I am, and preferably weird or inventive.

Previous years: 2016, 2015, 2011, 2009.

Design and machine learning – an annotated reading list

Earlier this year I coached Design for Interaction master students at Delft University of Technology in the course Research Methodology. The students organised three seminars for which I provided the claims and assigned reading. In the seminars they argued about my claims using the Toulmin Model of Argumentation. The readings served as sources for backing and evidence.

The claims and readings were all related to my nascent research project about machine learning. We delved into both designing for machine learning, and using machine learning as a design tool.

Below are the readings I assigned, with some notes on each, which should help you decide if you want to dive into them yourself.

Hebron, Patrick. 2016. Machine Learn­ing for Design­ers. Sebastopol: O’Reilly.

The only non-aca­d­e­m­ic piece in this list. This served the pur­pose of get­ting all stu­dents on the same page with regards to what machine learn­ing is, its appli­ca­tions of machine learn­ing in inter­ac­tion design, and com­mon chal­lenges encoun­tered. I still can’t think of any oth­er sin­gle resource that is as good a start­ing point for the sub­ject as this one.

Fiebrink, Rebec­ca. 2016. “Machine Learn­ing as Meta-Instru­ment: Human-Machine Part­ner­ships Shap­ing Expres­sive Instru­men­tal Cre­ation.” In Musi­cal Instru­ments in the 21st Cen­tu­ry, 14:137–51. Sin­ga­pore: Springer Sin­ga­pore. doi:10.1007/978–981–10–2951–6_10.

Fiebrink’s Wek­ina­tor is ground­break­ing, fun and inspir­ing so I had to include some of her writ­ing in this list. This is most­ly of inter­est for those look­ing into the use of machine learn­ing for design and oth­er cre­ative and artis­tic endeav­ours. An impor­tant idea explored here is that tools that make use of (inter­ac­tive, super­vised) machine learn­ing can be thought of as instru­ments. Using such a tool is like play­ing or per­form­ing, explor­ing a pos­si­bil­i­ty space, engag­ing in a dia­logue with the tool. For a tool to feel like an instru­ment requires a tight action-feed­back loop.
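The ‘instrument’ idea can be made concrete with a small sketch. This is not Wekinator’s code, just my own minimal illustration of interactive supervised learning: the user adds input-output examples one at a time, the mapping is usable immediately, and the action-feedback loop stays tight. The class name, the hand-rolled k-nearest-neighbours model and the pitch-mapping example are all illustrative assumptions.

```python
import math

class InteractiveMapper:
    """Minimal sketch of interactive supervised learning: every
    demonstration is usable the moment it is added, so the user can
    'play' the model like an instrument, exploring a possibility space."""

    def __init__(self, k=2):
        self.k = k
        self.examples = []  # list of (input_vector, output_value) pairs

    def add_example(self, x, y):
        # Record a demonstration; no separate training step is needed,
        # which keeps the action-feedback loop tight.
        self.examples.append((x, y))

    def run(self, x):
        # Map a live input (e.g. a gesture feature vector) to an output
        # (e.g. a synthesis parameter) by averaging the outputs of the
        # k nearest stored examples.
        nearest = sorted(self.examples, key=lambda e: math.dist(e[0], x))
        chosen = nearest[: self.k]
        return sum(y for _, y in chosen) / len(chosen)

mapper = InteractiveMapper()
# The 'performer' demonstrates a few input-output pairs...
mapper.add_example([0.0, 0.0], 220.0)  # e.g. low gesture, low pitch
mapper.add_example([1.0, 1.0], 880.0)  # high gesture, high pitch
# ...then explores the space in between.
print(mapper.run([0.5, 0.5]))  # prints 550.0
```

The point of the sketch is the workflow, not the model: because retraining is effectively free, evaluation happens through first-hand, real-time use rather than through held-out test sets.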

Dove, Graham, Kim Halskov, Jodi Forlizzi, and John Zimmerman. 2017. “UX Design Innovation: Challenges for Working with Machine Learning as a Design Material.” In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. New York: ACM. doi:10.1145/3025453.3025739.

A real­ly good sur­vey of how design­ers cur­rent­ly deal with machine learn­ing. Key take­aways include that in most cas­es, the appli­ca­tion of machine learn­ing is still engi­neer­ing-led as opposed to design-led, which ham­pers the cre­ation of non-obvi­ous machine learn­ing appli­ca­tions. It also makes it hard for design­ers to con­sid­er eth­i­cal impli­ca­tions of design choic­es. A key rea­son for this is that at the moment, pro­to­typ­ing with machine learn­ing is pro­hib­i­tive­ly cumbersome.

Fiebrink, Rebecca, Perry R Cook, and Dan Trueman. 2011. “Human Model Evaluation in Interactive Supervised Learning.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 147. New York: ACM Press. doi:10.1145/1978942.1978965.

The sec­ond Fiebrink piece in this list, which is more of a deep dive into how peo­ple use Wek­ina­tor. As with the chap­ter list­ed above this is required read­ing for those work­ing on design tools which make use of inter­ac­tive machine learn­ing. An impor­tant find­ing here is that users of intel­li­gent design tools might have very dif­fer­ent cri­te­ria for eval­u­at­ing the ‘cor­rect­ness’ of a trained mod­el than engi­neers do. Such cri­te­ria are like­ly sub­jec­tive and eval­u­a­tion requires first-hand use of the mod­el in real time. 

Bostrom, Nick, and Eliez­er Yud­kowsky. 2014. “The Ethics of Arti­fi­cial Intel­li­gence.” In The Cam­bridge Hand­book of Arti­fi­cial Intel­li­gence, edit­ed by Kei­th Frank­ish and William M Ram­sey, 316–34. Cam­bridge: Cam­bridge Uni­ver­si­ty Press. doi:10.1017/CBO9781139046855.020.

Bostrom is known for his somewhat crazy but thought-provoking book on superintelligence, and although a large part of this chapter is about the ethics of general artificial intelligence (which at the very least is still some way off), the first section discusses the ethics of current “narrow” artificial intelligence. It makes for a good checklist of things designers should keep in mind when they create new applications of machine learning. Key insight: when a machine learning system takes on work with social dimensions—tasks previously performed by humans—the system inherits its social requirements.

Yang, Qian, John Zimmerman, Aaron Steinfeld, and Anthony Tomasic. 2016. “Planning Adaptive Mobile Experiences When Wireframing.” In Proceedings of the 2016 ACM Conference on Designing Interactive Systems. New York: ACM. doi:10.1145/2901790.2901858.

Finally, a feet-in-the-mud exploration of what it actually means to design for machine learning with the tools most commonly used by designers today: drawings and diagrams of various sorts. In this case the focus is on using machine learning to make an interface adaptive. It includes an interesting discussion of how to balance the use of implicit and explicit user inputs for adaptation, and how to deal with inference errors. Once again the limitations of current sketching and prototyping tools are mentioned, and related to the need for designers to develop tacit knowledge about machine learning. Such tacit knowledge will only be gained when designers can work with machine learning in a hands-on manner.
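One way to picture the balance between implicit and explicit inputs, and the handling of inference errors: only adapt silently when the model is confident enough, and fall back to asking the user explicitly otherwise. This is my own sketch, not taken from the paper; the threshold value, function names and the ‘shortcut’ scenario are all assumptions.

```python
# Confidence threshold above which the interface adapts without asking.
# The value 0.8 is an arbitrary illustrative choice.
CONFIDENCE_THRESHOLD = 0.8

def choose_adaptation(predicted_shortcut, confidence, ask_user):
    """Decide between implicit adaptation and explicit user input.

    predicted_shortcut: the adaptation the model inferred from implicit
      signals (location, time of day, past behaviour, ...).
    confidence: the model's confidence in that inference, in [0, 1].
    ask_user: callback that poses an explicit question and returns the
      user's answer.
    """
    if confidence >= CONFIDENCE_THRESHOLD:
        # High confidence: adapt silently; implicit input drives the UI.
        return predicted_shortcut
    # Low confidence: a wrong adaptation would cost more than the
    # interruption, so fall back to an explicit question.
    return ask_user(predicted_shortcut)

# Example: the model guesses the user wants a 'check-in' shortcut.
accept = lambda suggestion: suggestion      # stand-in for a real prompt
decline = lambda suggestion: "no-change"    # user rejects the suggestion
print(choose_adaptation("check-in", 0.92, accept))   # adapts silently
print(choose_adaptation("check-in", 0.40, decline))  # asks the user instead
```

The design choice being illustrated is that inference errors are managed by routing low-confidence predictions through explicit interaction, rather than by trying to make the model infallible.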

Supplemental material

Floyd, Chris­tiane. 1984. “A Sys­tem­at­ic Look at Pro­to­typ­ing.” In Approach­es to Pro­to­typ­ing, 1–18. Berlin, Hei­del­berg: Springer Berlin Hei­del­berg. doi:10.1007/978–3–642–69796–8_1.

I pro­vid­ed this to stu­dents so that they get some addi­tion­al ground­ing in the var­i­ous kinds of pro­to­typ­ing that are out there. It helps to pre­vent reduc­tive notions of pro­to­typ­ing, and it makes for a nice com­ple­ment to Buxton’s work on sketch­ing.

Ble­vis, E, Y Lim, and E Stolter­man. 2006. “Regard­ing Soft­ware as a Mate­r­i­al of Design.”

Some of the papers refer to machine learn­ing as a “design mate­r­i­al” and this paper helps to under­stand what that idea means. Soft­ware is a mate­r­i­al with­out qual­i­ties (it is extreme­ly mal­leable, it can sim­u­late near­ly any­thing). Yet, it helps to con­sid­er it as a phys­i­cal mate­r­i­al in the metaphor­i­cal sense because we can then apply ways of design think­ing and doing to soft­ware programming.

Status update

This is not exact­ly a now page, but I thought I would write up what I am doing at the moment since last report­ing on my sta­tus in my end-of-year report.

The major­i­ty of my work­days are spent doing free­lance design con­sult­ing. My pri­ma­ry gig has been through Eend at the Dutch Vic­tim Sup­port Foun­da­tion, where until very recent­ly I was part of a team build­ing online ser­vices. I helped out with prod­uct strat­e­gy, set­ting up a lean UX design process, and get­ting an inte­grat­ed agile design and devel­op­ment team up and run­ning. The first ser­vices are now ship­ping so it is time for me to move on, after 10 months of very grat­i­fy­ing work. I real­ly enjoy work­ing in the pub­lic sec­tor and I hope to be doing more of it in future.

So yes, this means I am avail­able and you can hire me to do strat­e­gy and design for soft­ware prod­ucts and ser­vices. Just send me an email.

Shortly before the Dutch national elections of this year, Iskander and I gathered a group of fellow tech workers under the banner of “Tech Solidarity NL” to discuss the concerning lurch to the right in national politics and what our field can do about it. This has developed into a small but active community who gather monthly to educate ourselves and develop plans for collective action. I am getting a huge boost out of this. Figuring out how to be a leftist in this day and age is not easy. The only way to do it is to practice, and for that, reflection with peers is invaluable. Building and facilitating a group like this is hugely educational too. I have learned a lot about how a community is bootstrapped and nurtured.

If you are in the Nether­lands, your pol­i­tics are left of cen­ter, and you work in tech­nol­o­gy, con­sid­er your­self invit­ed to join.

And finally, the last major thing on my plate is a continuing effort to secure a PhD position for myself. I am getting great support from people at Delft University of Technology, in particular Gerd Kortuem. I am focusing on internet of things products that have features driven by machine learning. My ultimate aim is to develop prototyping tools for design and development teams that will help them create more innovative and more ethical solutions. The first step for this will be to conduct field research inside companies that are creating such products right now. So I am reaching out to people to see if I can secure a reasonable number of potential collaborators for this, which will go a long way in proving the feasibility of my whole plan.

If you know of any com­pa­nies that devel­op con­sumer-fac­ing prod­ucts that have a con­nect­ed hard­ware com­po­nent and make use of machine learn­ing to dri­ve fea­tures, do let me know.

That’s about it. Free­lance UX con­sult­ing, left­ist tech-work­er organ­is­ing and design-for-machine-learn­ing research. Quite hap­py with that mix, really.

Curiosity is our product

A few weeks ago I facil­i­tat­ed a dis­cus­sion on ‘advo­ca­cy in a post-truth era’ at the Euro­pean Dig­i­tal Rights Initiative’s annu­al gen­er­al assem­bly. And last night I was part of a dis­cus­sion on fake news at a behav­iour design meet­up in Ams­ter­dam. This was a good occa­sion to pull togeth­er some of my notes and fig­ure out what I think is true about the ‘fake news’ phenomenon.

There is plen­ty of good writ­ing out there explor­ing the his­to­ry and cur­rent state of post-truth polit­i­cal culture. 

Kellyanne Conway’s “alter­na­tive facts” and Michael Gove’s “I think peo­ple have had enough of experts” are just two exam­ples of the right’s appro­pri­a­tion of what I would call epis­te­mo­log­i­cal rel­a­tivism. Post-mod­ernism was fun while it worked to advance our left­ist agen­da. But now that the tables are turned we’re not enjoy­ing it quite as much any­more, are we?

Part of the fact-free politics playbook goes back at least as far as big tobacco’s efforts to discredit the anti-smoking lobby. “Doubt is our product” still applies to modern-day reactionary movements such as climate change deniers and anti-vaxxers.

The dou­ble wham­my of news indus­try com­mer­cial­i­sa­tion and inter­net plat­form con­sol­i­da­tion has cre­at­ed fer­tile ground for coor­di­nat­ed efforts by var­i­ous groups to turn the sow­ing of doubt all the way up to eleven.

There is Russia’s “fire­hose of false­hood” which sends a high vol­ume of mes­sages across a wide range of chan­nels with total dis­re­gard for truth or even con­sis­ten­cy in a rapid, con­tin­u­ous and repet­i­tive fash­ion. They seem to be hav­ing fun desta­bil­is­ing west­ern democ­ra­cies — includ­ing the Nether­lands — with­out any appar­ent end-goal in mind.

And then there is the out­rage mar­ket­ing lever­aged by trolls both minor and major. Piss­ing off main­stream media builds an audi­ence on the fringes and in the under­ground. Jour­nal­ists are held hostage by fig­ures such as Milo because they depend on sto­ries that trig­ger strong emo­tions for dis­tri­b­u­tion, eye­balls, clicks and ulti­mate­ly revenue. 

So, giv­en all of this, what is to be done? First some bad news. Facts, the weapon of choice for lib­er­als, don’t appear to work. This is empir­i­cal­ly evi­dent from recent events, but it also appears to be borne out by psychology. 

Facts are often more com­pli­cat­ed than the untruths they are sup­posed to counter. It is also eas­i­er to remem­ber a sim­ple lie than a com­pli­cat­ed truth. Com­pli­cat­ing mat­ters fur­ther, facts tend to be bor­ing. Final­ly, and most inter­est­ing­ly, there is some­thing called the ‘back­fire effect’: we become more entrenched in our views when con­front­ed with con­tra­dict­ing facts, because they are threat­en­ing to our group identities.

More bad news. Giv­en the speed at which false­hoods spread through our net­works, fact-check­ing is use­less. Fact-check­ing is after-the-fact-check­ing. Worse, when media fact-check false­hoods on their front pages they are sim­ply pro­vid­ing even more air­time to them. From a strate­gic per­spec­tive, when you debunk, you allow your­self to be cap­tured by your opponent’s frame, and you’re also on the defen­sive. In Boy­di­an terms you are caught in their OODA loop, when you should be work­ing to take back the ini­tia­tive, and you should be offer­ing an alter­na­tive narrative. 

I am not hope­ful main­stream media will save us from these dynam­ics giv­en the real­i­ties of the busi­ness mod­els they oper­ate inside of. Jour­nal­ists inside of these organ­i­sa­tions are typ­i­cal­ly over­worked, just hold­ing on for dear life and churn­ing out sto­ries at a rapid clip. In short, there is no time to ori­ent and manoeu­vre. For bad-faith actors, they are sit­ting ducks.

What about literacy? If only people knew about churnalism, the attention economy, and filter bubbles, ‘they’ would become immune to the lies peddled by reactionaries and return to the liberal fold. Personally, I find these claims highly unconvincing, not to mention condescending.

My current working theory is that we, all of us, buy into the stories that activate one or more of our group identities, regardless of whether they are fact-based or outright lies. This is called ‘motivated reasoning’. Since this is a fact of psychology, we are all susceptible to it, including liberals who are supposedly defenders of fact-based reasoning.

Seriously though, what about literacy? I’m sorry, no. There is evidence that scientific literacy actually increases polarisation. Motivated reasoning trumps whatever factual knowledge you may have. The same research shows, however, that curiosity in turn trumps motivated reasoning. The way I understand the distinction between literacy and curiosity is that the former is about knowledge while the latter is about attitude. Motivated reasoning isn’t counteracted by knowing stuff, but by wanting to know stuff.

This is a mixed bag. Offer­ing facts is com­par­a­tive­ly easy. Spark­ing curios­i­ty requires sto­ry­telling which in turn requires imag­i­na­tion. If we’re pre­sent­ed with a fact we are not invit­ed to ask ques­tions. How­ev­er, if we are pre­sent­ed with ques­tions and those ques­tions are wrapped up in sto­ries that cre­ate emo­tion­al stakes, some of the views we hold might be destabilised.

In oth­er words, if doubt is the prod­uct ped­dled by our oppo­nents, then we should start traf­fick­ing in curiosity.


Design × AI coffee meetup

If you work in the field of design or arti­fi­cial intel­li­gence and are inter­est­ed in explor­ing the oppor­tu­ni­ties at their inter­sec­tion, con­sid­er your­self invit­ed to an infor­mal cof­fee meet­up on Feb­ru­ary 15, 10am at Brix in Amsterdam.

Erik van der Pluijm and I have for a while now been carrying on a conversation about AI and design, and we felt it was time to expand the circle a bit. We are very curious who else out there shares our excitement.

Ques­tions we are mulling over include: How does the design process change when cre­at­ing intel­li­gent prod­ucts? And: How can teams col­lab­o­rate with intel­li­gent design tools to solve prob­lems in new and inter­est­ing ways?

Any­way, lots to chew on.

No need to sign up or any­thing, just show up and we’ll see what happens.

High-skill robots, low-skill workers

Some notes on what I think I under­stand about tech­nol­o­gy and inequality.

Let’s start with an obvious big question: is technology destroying jobs faster than they can be replaced? In the long term, the evidence isn’t strong. Humans always appear to invent new things to do. There is no reason this time around should be any different.

But in the short term, technology has contributed to an evaporation of mid-skilled jobs. Parts of these jobs have been automated entirely; other parts can be done by fewer people because of the higher productivity gained from tech.

While productivity continues to grow, jobs are lagging behind. The year 2000 appears to have been a turning point. “Something” happened around that time, but no one knows exactly what.

My hunch is that we’ve seen an emer­gence of a new class of pseu­do-monop­o­lies. Oli­gop­o­lies. And this is com­pound­ed by a ‘win­ner takes all’ dynam­ic that tech­nol­o­gy seems to produce. 

Oth­ers have point­ed to glob­al­i­sa­tion but although this might be a con­tribut­ing fac­tor, the evi­dence does not sup­port the idea that it is the major cause.

So what are we left with?

Historically, looking at previous technological upsets, it appears education makes a big difference. People negatively affected by technological progress should have access to good education so that they have options. In the US, access to high-quality education is unequally distributed.

Apparently, family income is associated with educational achievement. So if your family is rich, you are more likely to become a high-skilled individual. And high-skilled individuals are privileged by the tech economy.

And if Piketty is right, we are approaching a reality in which money made from wealth grows faster than wages. So there is a feedback loop in place which only exacerbates the situation.

One more bullet: if you think trickle-down economics (increasing the size of the pie) will help, you might be mistaken. It appears social mobility is helped more by decreasing inequality in the distribution of income growth.

So some pre­lim­i­nary con­clu­sions: a pro­gres­sive tax on wealth won’t solve the issue. The edu­ca­tion sys­tem will require reform, too. 

I think this is the cen­tral irony of the whole sit­u­a­tion: we are work­ing hard to teach machines how to learn. But we are neglect­ing to improve how peo­ple learn.