High-skill robots, low-skill workers

Some notes on what I think I understand about technology and inequality.

Let’s start with an obvious big question: is technology destroying jobs faster than new ones can be created? In the long term the evidence isn’t strong. Humans always appear to invent new things to do. There is no reason this time around should be any different.

But in the short term technology has contributed to an evaporation of mid-skilled jobs. Parts of these jobs are automated entirely; parts can be done by fewer people because of higher productivity gained from tech.

While productivity continues to grow, jobs are lagging behind. The year 2000 appears to have been a turning point. “Something” happened around that time, but no one knows exactly what.

My hunch is that we’ve seen the emergence of a new class of pseudo-monopolies. Oligopolies. And this is compounded by a ‘winner takes all’ dynamic that technology seems to produce.

Others have pointed to globalisation, but although this might be a contributing factor, the evidence does not support the idea that it is the major cause.

So what are we left with?

Historically, looking at previous technological upsets, it appears education makes a big difference. People negatively affected by technological progress should have access to good education so that they have options. In the US, access to high-quality education is not equally divided.

Apparently family income is associated with educational achievement. So if your family is rich, you are more likely to become a high-skilled individual. And high-skilled individuals are privileged by the tech economy.

And if Piketty is right, we are approaching a reality in which money made from wealth grows faster than wages. So there is a feedback loop in place which only exacerbates the situation.

One more bullet: if you believe trickle-down economics, the idea that increasing the size of the pie will help everyone, you might be mistaken. It appears social mobility is helped more by decreasing inequality in the distribution of income growth.

So some preliminary conclusions: a progressive tax on wealth alone won’t solve the issue. The education system will require reform, too.

I think this is the central irony of the whole situation: we are working hard to teach machines how to learn. But we are neglecting to improve how people learn.

Waiting for the smart city

Nowadays when we talk about the smart city, we don’t necessarily talk about smartness or cities.

I feel like when the term is used, it often obscures more than it reveals.

Here are a few reasons why.

To begin with, the term suggests something that is yet to arrive. Some kind of tech-enabled utopia. But in fact, present-day cities are already smart to a greater or lesser degree, depending on where and how you look.

This is important because too often we postpone action as we wait for the smart city to arrive. We don’t have to wait. We can act to improve things right now.

Furthermore, ‘smart city’ suggests something monolithic that can be designed as a whole. But a smart city, like any city, is a huge mess of interconnected things. It resists top-down design.

History is littered with failed attempts at authoritarian high-modernist city design. Just stop it.

Smartness should not be an end but a means.

I read ‘smart’ as shorthand for ‘technologically augmented’. A smart city is a city eaten by software. All cities are being eaten (or have been eaten) by software to a greater or lesser extent. Uber and Airbnb are obvious examples. Smaller, more subtle ones abound.

The question is: smart to what end? Efficiency? Legibility? Controllability? Anti-fragility? Playability? Liveability? Sustainability? The answer depends on your outlook.

These are ways in which the smart city label obscures. It obscures agency. It obscures networks. It obscures intent.

I’m not saying don’t ever use it. But in many cases you can get by without it. You can talk about specific parts that make up the whole of a city, specific technologies and specific aims.


Postscript 1

We can do the same exercise with the ‘city’ part of the meme.

The same process that is making cities smart (software eating the world) is also making everything else smart. Smart towns. Smart countrysides. The ends are different. The networks are different. The processes play out in different ways.

It’s okay to think about cities, but don’t think they have a monopoly on ‘disruption’.

Postscript 2

Some of this was inspired by clever things I heard Sebastian Quack say at Playful Design for Smart Cities and Usman Haque at ThingsCon Amsterdam.

Artificial intelligence as partner

Some notes on artificial intelligence, technology as partner and related user interface design challenges. Mostly notes to self, not sure I am adding much to the debate. Just summarising what I think is important to think about more. Warning: dense with links.

Matt Jones writes about how artificial intelligence does not have to be a slave, but can also be a partner:

I’m personally much more interested in machine intelligence as human augmentation rather than the oft-hyped AI assistant as a separate embodiment.

I would add a third possibility, which is AI as master. A common fear we humans have, and one that I think is only growing as things like AlphaGo and new Boston Dynamics robots keep happening.

I have had a tweet pinned to my timeline for a while now, which is a quote from Play Matters:

“technology is not a servant or a master but a source of expression, a way of being”

So this idea actually does not just apply to AI but to tech in general. Of course, as tech gets smarter and more independent from humans, the idea of a ‘third way’ only grows in importance.

More tweeting. A while back, shortly after AlphaGo’s victory, James tweeted:

On the one hand, we must insist, as Kasparov did, on Advanced Go, and then Advanced Everything Else https://en.wikipedia.org/wiki/Advanced_Chess

Advanced Chess is a clear example of humans and AI partnering. And it is also an example of technology as a source of expression and a way of being.

Also, in a WIRED article on AlphaGo, someone who had played the AI repeatedly says his game has improved tremendously.

So that is the promise: artificially intelligent systems which work together with humans for mutual benefit.

Now of course these AIs don’t just arrive in the world fully formed. They are created by humans with particular goals in mind. So there is a design component there. We can design them to be partners, but we can also design them to be masters or slaves.

As an aside: maybe AIs that make use of deep learning are particularly well suited to this partner model? I do not know enough about it to say for sure. But I was struck by this piece on why Google ditched Boston Dynamics. There apparently is a significant difference between holistic and reductionist approaches, deep learning being holistic. I imagine reductionist AI might be more dependent on humans. But this is just wild speculation. I don’t know if there is anything there.

James’s insistence on “advanced everything else” is a world view. A politics. To allow ourselves to become increasingly entangled with these systems, to not be afraid of them. Because if we are afraid, we either want to subjugate them or they will subjugate us. It is also about not obscuring the systems we are part of. This is a sentiment also expressed by James in the same series of tweets I quoted from earlier:

These emergences are also the best model we have ever built for describing the true state of the world as it always already exists.

And there is overlap here with ideas expressed by Kevin in ‘Design as Participation’:

[W]e are no longer just using computers. We are using computers to use the world. The obscured and complex code and engineering now engages with people, resources, civics, communities and ecosystems. Should designers continue to privilege users above all others in the system? What would it mean to design for participants instead? For all the participants?

AI partners might help us to better see the systems the world is made up of and engage with them more deeply. This hope is expressed by Matt Webb, too:

with the re-emergence of artificial intelligence (only this time with a buddy-style user interface that actually works), this question of “doing something for me” vs “allowing me to do even more” is going to get even more pronounced. Both are effective, but the first sucks… or at least, it sucks according to my own personal politics, because I regard individual alienation from society and complex systems as one of the huge threats in the 21st century.

I am reminded of the mixed-initiative systems being researched in the area of procedural content generation for games. I wrote about these a while back on the Hubbub blog. Such systems are partners of designers. They give something like superpowers. Now imagine such powers applied to other problems. Quite exciting.

Actually, in the aforementioned article I distinguish between tools for making things and tools for inspecting possibility spaces. In the first case, designers manipulate more abstract representations of the intended outcome and the system generates the actual output. In the second case, the system visualises the range of possible outcomes given a particular configuration of the abstract representation. These two are best paired.
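To make the pairing concrete, here is a minimal sketch of the two kinds of tools. Everything in it is hypothetical (the names, the toy level format and the obstacle rule are mine, not from the original article); it only illustrates the division of labour: a generator expands an abstract spec into one concrete output, and an inspector surveys the range of outputs that same spec can yield.

```python
import random

def generate_level(length, obstacles, seed):
    """Tool for making: expand an abstract spec (length, obstacle count)
    into one concrete level, here a string of floor '.' and obstacle '#'."""
    rng = random.Random(seed)  # seeded, so each outcome is reproducible
    level = ["."] * length
    for pos in rng.sample(range(length), obstacles):
        level[pos] = "#"
    return "".join(level)

def inspect_possibility_space(length, obstacles, seeds=range(5)):
    """Tool for inspecting: show a sample of the range of levels
    a single abstract spec can produce."""
    return [generate_level(length, obstacles, s) for s in seeds]

# Pairing the two: the designer tweaks the spec, then surveys the outcomes.
for level in inspect_possibility_space(length=12, obstacles=3):
    print(level)
```

The point of the sketch is that the designer never places an obstacle by hand; they adjust the abstract representation and let the inspector show them what that choice implies.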

From a design perspective, a lot remains to be figured out. If I look at those mixed-initiative tools, I am struck by how poorly they communicate what the AI is doing and what its capabilities are. There is a huge user interface design challenge there.

For stuff focused on getting information, a conversational UI seems to be the current local optimum for working with an AI. But for tools for creativity, to use the two-way split proposed by Victor, different UIs will be required.

What shape will they take? What visual language do we need to express the particular properties of artificial intelligence? What approaches can we take in addition to personifying AI as bots or characters? I don’t know, and I can hardly think of any good examples that point towards promising approaches. Lots to be done.

My plans for 2016

Long story short: my plan is to make plans.

Hubbub has gone into hibernation. After more than six years of leading a boutique playful design agency, I am returning to freelance life. At least for the short term.

I will use the flexibility afforded by this freeing up of time to take stock of where I have come from and where I am headed. ‘Orientation is the Schwerpunkt,’ as Boyd says. I have definitely cycled back through my meta-OODA loop and am firmly back in the second O.

To make things more interesting, I have exchanged the Netherlands for Singapore. I will be here until August. It is going to be fun to explore the things this city has to offer. I am curious what the technology and design scene is like when seen up close. So I hope to do some work locally.

I will take on short commitments. Let’s say no longer than two to three months. Anything goes really, but I am particularly interested in work related to creativity and learning. I am also keen on getting back into teaching.

So if you are in Singapore, work in technology or design, and want to have a cup of coffee, drop me a line.

Happy 2016!

Nobody does thoroughly argued presentations quite like Sebastian. This is good stuff on ethics and design.

I decided to share some thoughts it sparked via Twitter and ended up ranting a bit:

I recently talked about ethics to a bunch of “behavior designers” and found myself concluding that any designed system that does not allow for user appropriation is fundamentally unethical, because, as you rightly point out, what the good life is is a personal matter. Imposing it is an inherently violent act. A lot of design is a form of technologically mediated violence. Getting people to do your bidding, however well intended. Which, given my own vocation and work in the past, is a kind of troubling thought to arrive at… Help?

Sebastian makes his best point on slides 113–114. Ethical design isn’t about doing the least harm, but about doing the most good. And, to come back to my Twitter rant, for me the ultimate good is for others to be free. Hence non-prescriptive design.

(via Designing the Good Life: Ethics and User Experience Design)

Three cool projects out of the Art, Media and Technology faculty

So a week ago I visited a project market at the Art, Media and Technology faculty in Hilversum, which is part of the Utrecht School of Arts and offers BA and MA courses in Interaction Design, Game Design & Development and many others.

The range of projects on show was broad and wonderfully presented. It proves the school is still able to integrate arts and crafts with commercially and societally relevant thinking. All projects (over 40 in total) were by master of arts students and commissioned by real-world clients. I’d like to point out three projects I particularly enjoyed:

Koe

A tangible interface that models a cow’s insides and allows veterinary students to train at a much earlier stage than they do now. The cow model has realistic organs made of silicone (echoes of Realdoll here) and is hooked up to a large display showing a 3D visualization of the student’s actions inside the cow. Crazy, slightly gross, but very well done.

Haas

A narrative, literary game called ‘Haas’ (Dutch for hare) that allows the player to intuitively draw the level around the main character. The game’s engine reminded me a bit of Chris Crawford’s work in that it tracks all kinds of dramatic possibilities in the game and evaluates which is the most appropriate at any time based on available characters, props, etc. Cute and pretty.

Entertaible

A game developed for Philips’ Entertaible, which is a large flat-panel multi-touch display that can track game pieces’ location, shape and orientation, and has RFID capabilities as well. The game developed has the players explore a haunted mansion (stunningly visualized by the students in a style that is reminiscent of Pixar) and play a number of inventive mini-games. Very professionally done.

For a taste of the project market you can check out this photo album (from which the photos in this post are taken) as well as this video clip by Dutch newspaper AD.

Full disclosure: I currently teach a course in game design for mobile devices and earlier studied interaction and game design between 1998 and 2002 at the same school.