Engineering Ethics Reading List

I recently followed an excellent three-day course on engineering ethics. It was offered by the TU Delft graduate school and taught by Behnam Taebi with guest lectures from several of our faculty.

I found it particularly helpful to get some suggestions for further reading that represent some of the foundational ideas in the field. I figured it would be useful to others as well to have a pointer to them.

So here they are. I’ve quickly gutted these for their meaning. The one by Van de Poel I did read entirely and can highly recommend for anyone who’s doing design of emerging technologies and wants to escape from the informed consent conundrum.

I intend to dig into the Doorn one, not just because she’s one of my promoters but also because resilience is a concept that is closely related to my own interests. I’ll also get into the Floridi one in detail; the concepts of information quality and a care ethics perspective on the problem of information abundance and attention scarcity struck me as immediately applicable to interaction design.

  1. Stilgoe, Jack, Richard Owen, and Phil Macnaghten. “Developing a framework for responsible innovation.” Research Policy 42.9 (2013): 1568–1580.
  2. Van den Hoven, Jeroen. “Value sensitive design and responsible innovation.” Responsible Innovation (2013): 75–83.
  3. Hansson, Sven Ove. “Ethical criteria of risk acceptance.” Erkenntnis 59.3 (2003): 291–309.
  4. Van de Poel, Ibo. “An ethical framework for evaluating experimental technology.” Science and Engineering Ethics 22.3 (2016): 667–686.
  5. Hansson, Sven Ove. “Philosophical problems in cost–benefit analysis.” Economics & Philosophy 23.2 (2007): 163–183.
  6. Floridi, Luciano. “Big Data and information quality.” The Philosophy of Information Quality. Springer, Cham, 2014. 303–315.
  7. Doorn, Neelke, Paolo Gardoni, and Colleen Murphy. “A multidisciplinary definition and evaluation of resilience: The role of social justice in defining resilience.” Sustainable and Resilient Infrastructure (2018): 1–12.

We also got a draft of the intro chapter to a book on engineering and ethics that Behnam is writing. That looks very promising as well, but I can’t share it yet for obvious reasons.

‘Unboxing’ at Behavior Design Amsterdam #16

Below is a write-up of the talk I gave at the Behavior Design Amsterdam #16 meetup on Thursday, February 15, 2018.

‘Pandora’ by John William Waterhouse (1896)

I’d like to talk about the future of our design practice and what I think we should focus our attention on. It is all related to this idea of complexity and opening up black boxes. We’re going to take the scenic route, though. So bear with me.

Software Design

Two years ago I spent about half a year in Singapore.

While there I worked as product strategist and designer at a startup called ARTO, an art recommendation service. It shows you a random sample of artworks, you tell it which ones you like, and it will then start recommending pieces it thinks you like. In case you were wondering: yes, swiping left and right was involved.

We had this interesting problem of ingesting art from many different sources (mostly online galleries) with metadata of wildly varying levels of quality. So, using metadata to figure out which art to show was a bit of a non-starter. It should come as no surprise, then, that we started looking into machine learning, image processing in particular.

And so I found myself working with my engineering colleagues on an art recommendation stream which was driven at least in part by machine learning. And I quickly realised we had a problem. In terms of how we worked together on this part of the product, it felt like we had taken a bunch of steps back in time. Back to a way of collaborating that was less integrated and less responsive.

That’s because we have all these nice tools and techniques for designing traditional software products. But software is deterministic. Machine learning is fundamentally different in nature: it is probabilistic.
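To make that difference concrete, here is a minimal sketch (my own illustration, not something we built at ARTO; the features and data are invented) contrasting a rule-based match with a learned model that outputs a likelihood rather than a fixed answer:

```python
# Toy contrast between deterministic rules and a probabilistic model.
# Features, data and thresholds are invented for illustration.
from sklearn.linear_model import LogisticRegression

def rule_based_match(features):
    # Deterministic: the same input always yields the same yes/no.
    return features[0] < 0.5 and features[1] > 0.5

# Probabilistic: a model fit on toy like/dislike data returns a degree
# of confidence, which shifts whenever the training data shifts.
X = [[0.1, 0.9], [0.4, 0.8], [0.8, 0.2], [0.9, 0.1]]  # toy image features
y = [1, 1, 0, 0]                                      # 1 = user liked it
model = LogisticRegression().fit(X, y)

print(rule_based_match([0.5, 0.5]))       # always the same answer: False
print(model.predict_proba([[0.5, 0.5]]))  # e.g. [[0.49 0.51]]: a likelihood, not a verdict
```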

It was hard for me to take the lead in the design of this part of the product for two reasons. First of all, it was challenging to get a first-hand feel of the machine learning feature before it was implemented.

And second of all, it was hard for me to communicate or visualise the intended behaviour of the machine learning feature to the rest of the team.

So when I came back to the Netherlands I decided to dig into this problem of design for machine learning. Turns out I opened up quite the can of worms for myself. But that’s okay.

There are two reasons I care about this:

The first is that I think we need more design-led innovation in the machine learning space. At the moment it is engineering-dominated, which doesn’t necessarily lead to useful outcomes. But if you want to take the lead in the design of machine learning applications, you need a firm handle on the nature of the technology.

The second reason why I think we need to educate ourselves as designers on the nature of machine learning is that we need to take responsibility for the impact the technology has on the lives of people. There is a lot of talk about ethics in the design industry at the moment. Which I consider a positive sign. But I also see a reluctance to really grapple with what ethics is and what the relationship between technology and society is. We seem to want easy answers, which is understandable because we are all very busy people. But having spent some time digging into this stuff myself I am here to tell you: there are no easy answers. That isn’t a bug, it’s a feature. And we should embrace it.

Machine Learning

At the end of 2016 I attended ThingsCon here in Amsterdam and I was introduced by Ianus Keller to TU Delft PhD researcher Péter Kun. It turns out we were both interested in machine learning. So with encouragement from Ianus we decided to put together a workshop that would enable industrial design master students to tangle with it in a hands-on manner.

About a year later, this has grown into a thing we call Prototyping the Useless Butler. During the workshop, you use machine learning algorithms to train a model that takes inputs from a network-connected Arduino’s sensors and drives that same Arduino’s actuators. In effect, you can create interactive behaviour without writing a single line of code. And you get a first-hand feel for how common applications of machine learning work: things like regression, classification and dynamic time warping.

The thing that makes this workshop tick is an open source software application called Wekinator, created by Rebecca Fiebrink. It was originally aimed at performing artists so that they could build interactive instruments without writing code. But it takes inputs from anything and sends outputs to anything. So we appropriated it towards our own ends.
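For a feel of the plumbing: Wekinator communicates over OSC. By default it listens for input vectors at /wek/inputs on port 6448 and sends trained-model outputs to /wek/outputs on port 12000. A minimal sketch using the python-osc package, with hard-coded stand-in values where the Arduino’s sensor readings would go:

```python
# Minimal sketch of talking to Wekinator over OSC, assuming its default
# addresses and ports. The input values are hypothetical stand-ins for
# what a network-connected Arduino would actually report.
from pythonosc.udp_client import SimpleUDPClient
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def handle_output(address, *values):
    # The trained model's outputs arrive here; in the workshop these
    # would drive the Arduino's actuators.
    print(address, values)

dispatcher = Dispatcher()
dispatcher.map("/wek/outputs", handle_output)

client = SimpleUDPClient("127.0.0.1", 6448)       # Wekinator's input port
client.send_message("/wek/inputs", [0.42, 0.17])  # one frame of 'sensor' data

server = BlockingOSCUDPServer(("127.0.0.1", 12000), dispatcher)
server.handle_request()  # block until one output message arrives
```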

You can find everything related to Useless Butler in this GitHub repo.

The thinking behind this workshop is that for us designers to be able to think creatively about applications of machine learning, we need a granular understanding of the nature of the technology. The thing with designers is, we can’t really learn about such things from books. A lot of design knowledge is tacit; it emerges from our physical engagement with the world. This is why things like sketching and prototyping are such essential parts of our way of working. And so with Useless Butler we aim to create an environment in which you as a designer can gain tacit knowledge about the workings of machine learning.

Simply put, for a lot of us, machine learning is a black box. With Useless Butler, we open the black box a bit and let you peer inside. This should improve the odds of design-led innovation happening in the machine learning space. And it should also help with ethics. But it’s definitely not enough. Knowledge about the technology isn’t the only issue here. There are more black boxes to open.

Values

Which brings me back to that other black box: ethics. Like I already mentioned, there is a lot of talk in the tech industry about how we should “be more ethical”. But things are often reduced to this notion that designers should do no harm. As if ethics is a problem to be fixed instead of a thing to be practiced.

So I started to talk about this to people I know in academia and more than once this thing called Value Sensitive Design was mentioned. It should be no surprise to anyone that scholars have been chewing on this stuff for quite a while. One of the earliest references I came across, an essay by Batya Friedman in Interactions, is from 1996! This is a lesson to all of us, I think: pay more attention to what the academics are talking about.

So, at the end of last year I dove into this topic. Our host Iskander Smit, Rob Maijers and myself coordinate a grassroots community for tech workers called Tech Solidarity NL. We want to build technology that serves the needs of the many, not the few. Value Sensitive Design seemed like a good thing to dig into and so we did.

I’m not going to dive into the details here. There’s a report on the Tech Solidarity NL website if you’re interested. But I will highlight a few things that Value Sensitive Design asks us to consider that I think help us unpack what it means to practice ethical design.

First of all, values. Here’s how a value is commonly defined in the literature:

“A value refers to what a person or group of people consider important in life.”

I like it because it’s common sense, right? But it also makes clear that there can never be one monolithic definition of what ‘good’ is in all cases. As we designers like to say: “it depends”, and when it comes to values things are no different.

“Person or group” implies there can be various stakeholders. Value Sensitive Design distinguishes between direct and indirect stakeholders. The former have direct contact with the technology; the latter don’t, but are affected by it nonetheless. Value Sensitive Design means taking both into account. So this blows up the conventional notion of a single user to design for.

Various stakeholder groups can have competing values, and so to design for them means to arrive at some sort of trade-off between values. This is a crucial point. There is no such thing as a perfect or objectively best solution to ethical conundrums. Not in the design of technology and not anywhere else.

Value Sensitive Design encourages you to map stakeholders and their values. These will be different for every design project. Another approach is to use lists like the one pictured here as an analytical tool to think about how a design impacts various values.
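As a toy illustration of that mapping step (my own sketch; the stakeholders, values and conflict pairs are invented, not a canonical VSD list), you could capture direct and indirect stakeholders in a simple structure and surface where their values collide, so the trade-offs are made explicitly rather than by accident:

```python
# Toy stakeholder/value map for something like an art recommender.
# All names, values and conflict pairs are invented for illustration.
stakeholders = {
    "collector":      {"contact": "direct",   "values": {"autonomy", "privacy"}},
    "artist":         {"contact": "indirect", "values": {"attribution", "fair exposure"}},
    "platform owner": {"contact": "direct",   "values": {"engagement", "revenue"}},
}

# Candidate value conflicts, asserted by hand after discussion.
conflicts = [("engagement", "autonomy"), ("revenue", "fair exposure")]

for a, b in conflicts:
    holders_a = [s for s, d in stakeholders.items() if a in d["values"]]
    holders_b = [s for s, d in stakeholders.items() if b in d["values"]]
    print(f"trade-off: {a} ({', '.join(holders_a)}) vs {b} ({', '.join(holders_b)})")
```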

Furthermore, during your design process you might not only think about the short-term impact of a technology, but also about how it will affect things in the long run.

And similarly, you might think about the effects of a technology not only when a few people are using it, but also when it becomes wildly successful and everybody uses it.

There are tools out there that can help you think through these things. But so far much of the work in this area is happening on the academic side. I think there is an opportunity for us to create tools and case studies that will help us educate ourselves on this stuff.

There’s a lot more to say on this but I’m going to stop here. The point is, as with the nature of the technologies we work with, it helps to dig deeper into the nature of the relationship between technology and society. Yes, it complicates things. But that is exactly the point.

Privileging simple and scalable solutions over those adapted to local needs is socially, economically and ecologically unsustainable. So I hope you will join me in embracing complexity.

High-skill robots, low-skill workers

Some notes on what I think I understand about technology and inequality.

Let’s start with an obvious big question: is technology destroying jobs faster than they can be replaced? In the long term the evidence isn’t strong. Humans always appear to invent new things to do. There is no reason this time around should be any different.

But in the short term technology has contributed to an evaporation of mid-skilled jobs. Parts of these jobs are automated entirely; parts can be done by fewer people because of the higher productivity gained from tech.

While productivity continues to grow, jobs are lagging behind. The year 2000 appears to have been a turning point. “Something” happened around that time. But no one knows exactly what.

My hunch is that we’ve seen the emergence of a new class of pseudo-monopolies. Oligopolies. And this is compounded by a ‘winner takes all’ dynamic that technology seems to produce.

Others have pointed to globalisation, but although this might be a contributing factor, the evidence does not support the idea that it is the major cause.

So what are we left with?

Historically, looking at previous technological upsets, it appears education makes a big difference. People negatively affected by technological progress should have access to good education so that they have options. In the US, access to high-quality education is not equally distributed.

Apparently family income is associated with educational achievement. So if your family is rich, you are more likely to become a high-skilled individual. And high-skilled individuals are privileged by the tech economy.

And if Piketty is right, we are approaching a reality in which money made from wealth rises faster than wages. So there is a feedback loop in place which only exacerbates the situation.

One more bullet: if you think trickle-down economics (increasing the size of the pie) will help, you might be mistaken. It appears social mobility is helped more by decreasing inequality in the distribution of income growth.

So some preliminary conclusions: a progressive tax on wealth won’t solve the issue on its own. The education system will require reform, too.

I think this is the central irony of the whole situation: we are working hard to teach machines how to learn. But we are neglecting to improve how people learn.

Waiting for the smart city

Nowadays when we talk about the smart city we don’t necessarily talk about smartness or cities.

I feel like when the term is used it often obscures more than it reveals.

Here are a few reasons why.

To begin with, the term suggests something that is yet to arrive. Some kind of tech-enabled utopia. But actually, current-day cities are already smart to a greater or lesser degree depending on where and how you look.

This is important because too often we postpone action as we wait for the smart city to arrive. We don’t have to wait. We can act to improve things right now.

Furthermore, ‘smart city’ suggests something monolithic that can be designed as a whole. But a smart city, like any city, is a huge mess of interconnected things. It resists top-down design.

History is littered with failed attempts at authoritarian high-modernist city design. Just stop it.

Smartness should not be an end but a means.

I read ‘smart’ as shorthand for ‘technologically augmented’. A smart city is a city eaten by software. All cities are being eaten (or have been eaten) by software to a greater or lesser extent. Uber and Airbnb are obvious examples. Smaller, more subtle ones abound.

The question is: smart to what end? Efficiency? Legibility? Controllability? Anti-fragility? Playability? Liveability? Sustainability? The answer depends on your outlook.

These are ways in which the smart city label obscures. It obscures agency. It obscures networks. It obscures intent.

I’m not saying don’t ever use it. But in many cases you can get by without it. You can talk about specific parts that make up the whole of a city, specific technologies and specific aims.


Postscript 1

We can do the same exercise with the ‘city’ part of the meme.

The same process that is making cities smart (software eating the world) is also making everything else smart. Smart towns. Smart countrysides. The ends are different. The networks are different. The processes play out in different ways.

It’s okay to think about cities, but don’t think they have a monopoly on ‘disruption’.

Postscript 2

Some of this was inspired by clever things I heard Sebastian Quack say at Playful Design for Smart Cities and Usman Haque at ThingsCon Amsterdam.

Artificial intelligence as partner

Some notes on artificial intelligence, technology as partner and related user interface design challenges. Mostly notes to self, not sure I am adding much to the debate. Just summarising what I think is important to think about more. Warning: dense with links.

Matt Jones writes about how artificial intelligence does not have to be a slave, but can also be a partner.

I’m personally much more interested in machine intelligence as human augmentation rather than the oft-hyped AI assistant as a separate embodiment.

I would add a third possibility, which is AI as master. A common fear we humans have, and one that I think is only growing as things like AlphaGo and new Boston Dynamics robots keep happening.

I have had a tweet pinned to my timeline for a while now, which is a quote from Play Matters.

“technology is not a servant or a master but a source of expression, a way of being”

So this idea actually does not just apply to AI but to tech in general. Of course, as tech gets smarter and more independent from humans, the idea of a ‘third way’ only grows in importance.

More tweeting. A while back, shortly after AlphaGo’s victory, James tweeted:

On the one hand, we must insist, as Kasparov did, on Advanced Go, and then Advanced Everything Else https://en.wikipedia.org/wiki/Advanced_Chess

Advanced Chess is a clear example of humans and AI partnering. And it is also an example of technology as a source of expression and a way of being.

Also, in a WIRED article on AlphaGo, someone who had played the AI repeatedly says his game has improved tremendously.

So that is the promise: artificially intelligent systems which work together with humans for mutual benefit.

Now of course these AIs don’t just arrive into the world fully formed. They are created by humans with particular goals in mind. So there is a design component there. We can design them to be partners but we can also design them to be masters or slaves.

As an aside: maybe AIs that make use of deep learning are particularly well suited to this partner model? I do not know enough about it to say for sure. But I was struck by this piece on why Google ditched Boston Dynamics. There apparently is a significant difference between holistic and reductionist approaches, deep learning being holistic. I imagine reductionist AI might be more dependent on humans. But this is just wild speculation. I don’t know if there is anything there.

This insistence of James on “advanced everything else” is a world view. A politics. To allow ourselves to be increasingly entangled with these systems, to not be afraid of them. Because if we are afraid, we either want to subjugate them or they will subjugate us. It is also about not obscuring the systems we are part of. This is a sentiment also expressed by James in the same series of tweets I quoted from earlier:

These emergences are also the best model we have ever built for describing the true state of the world as it always already exists.

And there is overlap here with ideas expressed by Kevin in ‘Design as Participation’:

[W]e are no longer just using computers. We are using computers to use the world. The obscured and complex code and engineering now engages with people, resources, civics, communities and ecosystems. Should designers continue to privilege users above all others in the system? What would it mean to design for participants instead? For all the participants?

AI partners might help us to better see the systems the world is made up of and engage with them more deeply. This hope is expressed by Matt Webb, too:

with the re-emergence of artificial intelligence (only this time with a buddy-style user interface that actually works), this question of “doing something for me” vs “allowing me to do even more” is going to get even more pronounced. Both are effective, but the first sucks… or at least, it sucks according to my own personal politics, because I regard individual alienation from society and complex systems as one of the huge threats in the 21st century.

I am reminded of the mixed-initiative systems being researched in the area of procedural content generation for games. I wrote about these a while back on the Hubbub blog. Such systems are partners of designers. They give something like super powers. Now imagine such powers applied to other problems. Quite exciting.

Actually, in the aforementioned article I distinguish between tools for making things and tools for inspecting possibility spaces. In the first case designers manipulate more abstract representations of the intended outcome and the system generates the actual output. In the second case the system visualises the range of possible outcomes given a particular configuration of the abstract representation. These two are best paired.
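A deliberately tiny sketch of that pairing (my own toy example, not from the Hubbub post; the parameters and output format are invented): one function generates a concrete artifact from an abstract representation, another sweeps the parameter space so the range of outcomes can be inspected at a glance.

```python
# Toy version of the two tool types: a generator that turns an abstract
# representation (two parameters) into a concrete outcome, and an
# inspector that enumerates the possibility space. Everything here is
# invented for illustration.
import itertools

def generate(density: float, symmetry: float) -> str:
    # Stand-in for a real content generator (a game level, a pattern, ...).
    cells = int(density * 10)
    row = "#" * cells + "." * (10 - cells)
    return row + (row[::-1] if symmetry > 0.5 else row)

def inspect_space(steps: int = 3) -> None:
    # Sweep a coarse grid over the parameters so a designer can see the
    # range of possible outcomes, not just one output at a time.
    axis = [i / (steps - 1) for i in range(steps)]
    for density, symmetry in itertools.product(axis, axis):
        print(f"density={density:.1f} symmetry={symmetry:.1f} -> {generate(density, symmetry)}")

inspect_space()
```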

From a design perspective, a lot remains to be figured out. If I look at those mixed-initiative tools I am struck by how poorly they communicate what the AI is doing and what its capabilities are. There is a huge user interface design challenge there.

For stuff focused on getting information, a conversational UI seems to be the current local optimum for working with an AI. But for tools for creativity, to use the two-way split proposed by Victor, different UIs will be required.

What shape will they take? What visual language do we need to express the particular properties of artificial intelligence? What approaches can we take in addition to personifying AI as bots or characters? I don’t know, and I can hardly think of any good examples that point towards promising approaches. Lots to be done.

My plans for 2016

Long story short: my plan is to make plans.

Hubbub has gone into hibernation. After more than six years of leading a boutique playful design agency I am returning to freelance life. At least for the short term.

I will use the flexibility afforded by this freeing up of time to take stock of where I have come from and where I am headed. ‘Orientation is the Schwerpunkt,’ as Boyd says. I have definitely cycled back through my meta-OODA loop and am firmly back in the second O.

To make things more interesting I have exchanged the Netherlands for Singapore. I will be here until August. It is going to be fun to explore the things this city has to offer. I am curious what the technology and design scene is like when seen up close. So I hope to do some work locally.

I will take on short commitments. Let’s say no longer than two to three months. Anything goes really, but I am particularly interested in work related to creativity and learning. I am also keen on getting back into teaching.

So if you are in Singapore, work in technology or design, and want to have a cup of coffee, drop me a line.

Happy 2016!

Nobody does thoroughly argued presentations quite like Sebastian. This is good stuff on ethics and design.

I decided to share some thoughts it sparked via Twitter and ended up ranting a bit:

I recently talked about ethics to a bunch of “behavior designers” and found myself concluding that any designed system that does not allow for user appropriation is fundamentally unethical, because, as you rightly point out, what the good life is, is a personal matter. Imposing it is an inherently violent act. A lot of design is a form of technologically mediated violence. Getting people to do your bidding, however well intended. Which, given my own vocation and work in the past, is a kind of troubling thought to arrive at… Help?

Sebastian makes his best point on slides 113–114. Ethical design isn’t about doing the least harm, but about doing the most good. And, to come back to my Twitter rant, for me the ultimate good is for others to be free. Hence non-prescriptive design.

(via Designing the Good Life: Ethics and User Experience Design)

Three cool projects out of the Art, Media and Technology faculty

So a week ago I visited a project market at the Art, Media and Technology faculty in Hilversum, which is part of the Utrecht School of Arts and offers BA and MA courses in Interaction Design, Game Design & Development and many others.

The range of projects on show was broad and wonderfully presented. It proves the school is still able to integrate arts and crafts with commercially and societally relevant thinking. All projects (over 40 in total) were by master of arts students and commissioned by real-world clients. I’d like to point out three projects I particularly enjoyed:

Koe

A tangible interface that models a cow’s insides and allows veterinary students to train at a much earlier stage than they do now. The cow model has realistic organs made of silicone (echoes of Realdoll here) and is hooked up to a large display showing a 3D visualization of the student’s actions inside the cow. Crazy, slightly gross, but very well done.

Haas

A narrative, literary game called ‘Haas’ (Dutch for hare) that allows the player to intuitively draw the level around the main character. The game’s engine reminded me a bit of Chris Crawford’s work in that it tracks all kinds of dramatic possibilities in the game and evaluates which is the most appropriate at any time based on available characters, props, etc. Cute and pretty.

Entertaible

A game developed for Philips’ Entertaible, which is a large flat-panel multi-touch display that can track game pieces’ location, shape and orientation and has RFID capabilities as well. The game has the players explore a haunted mansion (stunningly visualized by the students in a style that is reminiscent of Pixar) and play a number of inventive mini-games. Very professionally done.

For a taste of the project market you can check out this photo album (from which the photos in this post are taken) as well as this video clip by Dutch newspaper AD.

Full disclosure: I currently teach a course in game design for mobile devices and earlier studied interaction and game design between 1998 and 2002 at the same school.