Machine Learning for Designers’ workshop

On Wednesday Péter Kun, Holly Robbins and I taught a one-day workshop on machine learning at Delft University of Technology. We had about thirty master's students from the industrial design engineering faculty. The aim was to get them acquainted with the technology through hands-on tinkering, with the Wekinator as the central teaching tool.

Photo credits: Holly Robbins

Background

The reasoning behind this workshop is twofold.

On the one hand, I expect designers will find themselves working on projects involving machine learning more and more often. The technology has certain properties that differ from traditional software. Most importantly, machine learning is probabilistic instead of deterministic. It is important that designers understand this, because otherwise they are likely to make bad decisions about its application.

The second reason is that I have a strong sense machine learning can play a role in the augmentation of the design process itself. So-called intelligent design tools could make designers more efficient and effective. They could also enable the creation of designs that would otherwise be impossible or very hard to achieve.

The workshop explored both ideas.

Photo credits: Holly Robbins

Format

The structure was roughly as follows:

In the morning we started out by providing a very broad introduction to the technology. We talked about the basic premise of (supervised) learning: providing examples of inputs and desired outputs, and training a model based on those examples. To make these concepts tangible we then introduced the Wekinator and walked the students through getting it up and running using basic examples from the website. The final step was to invite them to explore alternative inputs and outputs (such as game controllers and Arduino boards).
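To make that premise concrete in code: a minimal sketch of the example-based idea (not the Wekinator's actual algorithm) is a 1-nearest-neighbour model, which "trains" by storing the example pairs and, for a new input, returns the output of the closest stored example. The mapping from mouse position to synth frequency below is hypothetical.

```python
import math

def train(examples):
    """'Training' a 1-nearest-neighbour model is just storing the examples."""
    return list(examples)

def predict(model, x):
    """Return the desired output of the stored example whose input is closest to x."""
    nearest_input, nearest_output = min(model, key=lambda ex: math.dist(ex[0], x))
    return nearest_output

# Hypothetical mapping: mouse (x, y) position -> synth frequency in Hz.
examples = [((0.0, 0.0), 220.0), ((1.0, 0.0), 440.0), ((0.0, 1.0), 880.0)]
model = train(examples)
print(predict(model, (0.9, 0.1)))  # nearest example is (1.0, 0.0) -> 440.0
```

Crude as it is, this captures the shift the students had to make: the behaviour of the system lives in the examples, not in hand-written rules.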

In the afternoon we provided a design brief, asking the students to prototype a data-enabled object with the set of tools they had acquired in the morning. We assisted with technical hurdles where necessary (of which there were more than a few) and closed out the day with demos and a group discussion reflecting on their experiences with the technology.

Photo credits: Holly Robbins

Results

As I tweeted on the way home that evening, the results were… interesting.

Not all groups managed to put something together in the admittedly short amount of time they were given. They were most often stymied by getting an Arduino to talk to the Wekinator. Max was often picked as a go-between, because the Wekinator receives OSC messages over UDP, whereas the quickest way to get an Arduino to talk to a computer is over serial. But Max, in my experience, is a fickle beast and would more than once crap out on us.
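For reference, the OSC-over-UDP half of that pipeline is simple enough to sketch without Max. Below is a minimal, hand-rolled encoder for an OSC message carrying float arguments, sent to the Wekinator's default input port (6448) and input address (/wek/inputs); the feature value is a placeholder.

```python
import socket
import struct

def osc_pad(data: bytes) -> bytes:
    """OSC strings are null-terminated and padded to a multiple of 4 bytes."""
    data += b"\x00"
    while len(data) % 4:
        data += b"\x00"
    return data

def osc_message(address: str, *values: float) -> bytes:
    """Build a minimal OSC message carrying float32 arguments."""
    packet = osc_pad(address.encode("ascii"))
    packet += osc_pad(("," + "f" * len(values)).encode("ascii"))  # type tag string
    for value in values:
        packet += struct.pack(">f", value)  # big-endian float32
    return packet

# The Wekinator listens for input features on UDP port 6448 at /wek/inputs.
packet = osc_message("/wek/inputs", 0.42)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 6448))
```

The serial-to-UDP bridging is then a matter of reading values off the Arduino's serial port and passing them through `osc_message`, which is roughly the job Max was doing for us.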

The groups that did build something mainly assembled prototypes from the examples on hand. Which is fine, but since we were mainly working with the examples from the Wekinator website, they tended towards the interactive instrument side of things. We were hoping for explorations of IoT product concepts. For that, more hand-rolling was required, and this was only achievable for the students on the higher end of the technical expertise spectrum (and the more tenacious ones).

The discussion yielded some interesting insights into mental models of the technology and how they are affected by hands-on experience. A comment I heard more than once was: why is this considered learning at all? The Wekinator was not perceived to be learning anything. When challenged on this by reiterating the underlying principles, it became clear the black-box nature of the Wekinator hampers appreciation of some of the very real achievements of the technology. It seems (for our students at least) machine learning is stuck in a grey area between too-high expectations and too-low recognition of its capabilities.

Next steps

These results, and others, point towards some obvious improvements that can be made to the workshop format, and to teaching design students about machine learning more broadly.

  1. We can improve the toolset so that some of the heavy lifting involved with getting the various parts to talk to each other is made easier and more reliable.
  2. We can build examples that are geared towards the practice of designing IoT products and are ready for adaptation and hacking.
  3. And finally, and probably most challengingly, we can make the workings of machine learning more transparent so that it becomes easier to develop a feel for its capabilities and shortcomings.

We do intend to improve and teach the workshop again. If you're interested in hosting one (either in an educational or professional context) let me know. And stay tuned for updates on this and other efforts to get designers to work in a hands-on manner with machine learning.

Special thanks to the brilliant Ianus Keller for connecting me to Péter and for allowing us to pilot this crazy idea at IDE Academy.

References

Sources used during preparation and running of the workshop:

  • The Wekinator – the UI is infuriatingly poor, but when it comes to getting started with machine learning this tool is unmatched.
  • Arduino – I have become particularly fond of the MKR1000 board. Add a lithium-polymer battery and you have everything you need to prototype IoT products.
  • OSC for Arduino – CNMAT's implementation of the Open Sound Control (OSC) encoding. A key puzzle piece for getting the above two tools talking to each other.
  • Machine Learning for Designers – my preferred introduction to the technology from a designerly perspective.
  • A Visual Introduction to Machine Learning – a very accessible visual explanation of the basic underpinnings of computers applying statistical learning.
  • Remote Control Theremin – an example project I prepared for the workshop demoing how to have the Wekinator talk to an Arduino MKR1000 with OSC over UDP.

Design × AI coffee meetup

If you work in the field of design or artificial intelligence and are interested in exploring the opportunities at their intersection, consider yourself invited to an informal coffee meetup on February 15, 10am at Brix in Amsterdam.

Erik van der Pluijm and I have been carrying on a conversation about AI and design for a while now, and we felt it was time to expand the circle a bit. We are very curious who else out there shares our excitement.

Questions we are mulling over include: how does the design process change when creating intelligent products? And: how can teams collaborate with intelligent design tools to solve problems in new and interesting ways?

Anyway, lots to chew on.

No need to sign up or anything, just show up and we'll see what happens.

High-skill robots, low-skill workers

Some notes on what I think I understand about technology and inequality.

Let's start with an obvious big question: is technology destroying jobs faster than they can be replaced? In the long term the evidence isn't strong. Humans always appear to invent new things to do. There is no reason this time around should be any different.

But in the short term technology has contributed to an evaporation of mid-skilled jobs. Parts of these jobs are automated entirely; parts can be done by fewer people because of higher productivity gained from tech.

While productivity continues to grow, jobs are lagging behind. The year 2000 appears to have been a turning point. "Something" happened around that time. But no one knows exactly what.

My hunch is that we've seen the emergence of a new class of pseudo-monopolies. Oligopolies. And this is compounded by a 'winner takes all' dynamic that technology seems to produce.

Others have pointed to globalisation, but although this might be a contributing factor, the evidence does not support the idea that it is the major cause.

So what are we left with?

Historically, looking at previous technological upsets, it appears education makes a big difference. People negatively affected by technological progress should have access to good education so that they have options. In the US, access to high-quality education is not equally divided.

Apparently family income is associated with educational achievement. So if your family is rich, you are more likely to become a high-skilled individual. And high-skilled individuals are privileged by the tech economy.

And if Piketty is right, we are approaching a reality in which money made from wealth rises faster than wages. So there is a feedback loop in place which only exacerbates the situation.

One more bullet: if you think trickle-down economics (increasing the size of the pie) will help, you might be mistaken. It appears social mobility is helped more by decreasing inequality in the distribution of income growth.

So some preliminary conclusions: a progressive tax on wealth won't solve the issue. The education system will require reform, too.

I think this is the central irony of the whole situation: we are working hard to teach machines how to learn, but we are neglecting to improve how people learn.

Move 37

Designers make choices. They should be able to provide rationales for those choices. (Although sometimes they can't.) Being able to explain the thinking that went into a design move to yourself, your teammates and clients is part of being a professional.

Move 37. This was the move AlphaGo made which took everyone by surprise because it appeared so wrong at first.

The interesting thing is that in hindsight it appeared AlphaGo had good reasons for this move. Based on a calculation of odds, basically.

If asked at the time, would AlphaGo have been able to provide this rationale?

It's a thing that pops up in a lot of the reading I am doing around AI: this idea of transparency. In some fields you don't just want an AI to provide you with a decision, but also with the arguments supporting that decision. An obvious example would be a system that helps diagnose disease. You want it to provide more than just the diagnosis, because if it turns out to be wrong, you want to be able to say why at the time you thought it was right. This is a social, cultural and also legal requirement.

It's interesting.

Although lives don't depend on it, the same might apply to intelligent design tools. If I am working with a system and it is offering me design directions or solutions, I want to know why it is suggesting these things as well. My reason for picking one over the other depends not just on the surface-level properties of the design but also on the underlying reasons. It might be important because I need to be able to tell stakeholders about it.

An added side effect of this is that a designer working with such a system is exposed to machine reasoning about design choices. This could inform their own future thinking too.

Transparent AI might help people improve themselves. A black box can't teach you much about the craft it's performing. Looking at outcomes can be inspirational or helpful, but the processes that lead up to them can be equally informative, if not more so.

Imagine working with an intelligent design tool and getting the equivalent of an AlphaGo move 37 moment. Hugely inspirational. A game changer.

This idea gets me much more excited than automating design tasks does.

Books I’ve read in 2016

I've read 32 books, which is four short of my goal and also four fewer than the previous year. It's still not a bad score, though, and quality-wise the list below contains many gems.

I resolved to read mostly books by women and minority authors. This led to quite a few surprising experiences, which I am certainly grateful for. I think I'll continue to push myself to seek out such books in the year to come.

There are only a few comics in the list. I sort of fell off the comics bandwagon this year, mainly because I just can't seem to find a good place to discover things to read.

Anyway, here's the list, with links to my reviews on Goodreads. A * denotes a particular favourite.

Favourite music albums of 2016

I guess this year finally marked the end of my album listening behaviour. Spotify's Discover and Daily Mix features were the one-two punch that knocked it out. In addition I somehow stopped scrobbling to Last.fm in March. It's switched back on now but the damage is done.

So the data I do have is incomplete. I did still deliberately put on a number of albums this year. But I won't post them in order of listens like I did last year. This is subjective, unsorted and hand-picked. I will even sneak in a few albums that were published towards the end of 2015.

My sources included Pitchfork's list of best new albums, which used to be how I discovered new music and still wields some influence. I cross-referenced with Spotify's top songs of 2016.

So first Spotify tells me what to listen to, and then it gives me a list of things I actually listened to. This is getting weird…

Anyway, here they are. A * marks a particular favourite.

  • A Tribe Called Quest – We Got It From Here… *
  • Solange – A Seat At the Table
  • Hamilton Leithauser + Rostam – I Had A Dream That You Were Mine
  • The Avalanches – Wildflower *
  • Blood Orange – Freetown Sound
  • Whitney – Light Upon the Lake
  • Car Seat Headrest – Teens Of Denial *
  • Chance The Rapper – Coloring Book *
  • ANOHNI – HOPELESSNESS
  • Moodymann – DJ-Kicks *
  • Grimes – Art Angels *
  • Floating Points – Elaenia
  • The Range – Potential *
  • Sepalcure – Folding Time
  • Jamila Woods – HEAVN

Here's a playlist which includes a couple more albums if you want to have a listen.

A year of two crashes

A year ago today I was in Bali.

We spent the better part of December 2015 there. It wasn't really a holiday, but we weren't really working either. I was wrapping up a few final Hubbub things back then. But for the most part life was quiet. Very quiet. We would get up really early. We would buy some vegetables and things from a lady who would drive into town every morning with a load from the market.

I'd swim, exercise, meditate, have breakfast and do some work. Writing and reading mostly. By the end of the morning we would cook lunch. The major meal of the day. In the afternoon we wouldn't do much of anything because of the heat. December is rainy season in Bali and it gets incredibly hot and humid. Towards dusk we would often take a walk. We would have an early light dinner and entertain ourselves with the antics of tokay geckos. We would turn in early.

Now I am writing this back in our home in Utrecht. In many ways my life has returned to the way it was before that month in Bali. But in other ways it has changed. I used to run a small agency and would be in the studio almost every day. Now I am freelancing and I split my time between working on site at clients, working from home and meeting up with people in town. I enjoy the variety.

I used to be in the business of designing games and playthings for learning and other purposes. Now I am back to my old vocation of interaction design, and in theory I can work on anything.

Towards the end of Hubbub's run I felt boxed in. Now I feel like I can pursue whatever interests me.

Right now, under the banner of Eend, I am helping the Dutch victim support foundation develop new digital services. I spend about three days a week working on site as part of a cross-disciplinary agile team made up of a mix of internal and external people. It's good, important work and I can contribute a lot.

The time that remains I divide between the usual freelancer things like admin, networking and so on, and developing a plan for a PhD.

I've been blogging on and off about intelligent design tools this year, and that is no coincidence. I am considering going into research fulltime to work in that space. It is still early days, but I am having fun reading up on the subject, writing, making plans, and talking to people in academia about it.

In between this 'new normal' and those quiet days in Bali was a year of two crashes. I basically started from scratch twice this year, in many ways, and I feel like it has helped me get reoriented.

Crash one.

In January we moved to Singapore. We would end up spending seven months there. In that time I joined a startup called ARTO. I helped build a team and develop a design and development process, and acted as product manager and product designer. We launched a first version of the product in that period, and we pushed out a couple of new features as well. The last thing I did was find a replacement for myself.

In between working on ARTO I taught a two-part engagement design workshop with Michael and helped Edo and his team build ArtHit. I got into running and ate my way through the abundance of amazing food Singapore has to offer.

Of all the things I enjoyed about Singapore, its cosmopolitanism has to be the absolute highlight. I worked with people from Myanmar, Malaysia, Vietnam and India. I made friends with people from many more places. Discovering the things we have in common and the things that set us apart was a continuous source of enjoyment.

And like that, just when we were getting settled, had gotten into a routine of sorts and started to feel at home, it was time to go back to the Netherlands. (But not before spending a couple of weeks exploring Vietnam and Cambodia. More great food and gorgeous sights.)

Crash two.

It is weird to have culture shock in a town you've spent most of your life in, but that is what it felt like for about the first month back in Utrecht. September felt very similar to January. I had no work and was networking like a madman, just playing the numbers game, hoping I would bump into something. And of course, as it always does eventually, things worked out.

I consider myself blessed to be able to take these risks and more or less trust things will turn out okay. I know that if they don't, there are always people around me who will support me if worst comes to worst.

2017 looks to be a year of more stability, although one can never be sure. World events, as well as occurrences in my personal circles this year, have shown me once again there are no guarantees in life.

But I plan to build on what I've started these past few months and see where it takes me. It is time to shift from orienting to deciding and acting. And for the foreseeable future I plan to keep the current 'system' running.

So no more crashes for the time being. Although I am sure there will come a time when the need for one arises again.

Waiting for the smart city

Nowadays when we talk about the smart city we don't necessarily talk about smartness or cities.

I feel like when the term is used it often obscures more than it reveals.

Here are a few reasons why.

To begin with, the term suggests something that is yet to arrive. Some kind of tech-enabled utopia. But actually, current-day cities are already smart to a greater or lesser degree, depending on where and how you look.

This is important because too often we postpone action as we wait for the smart city to arrive. We don't have to wait. We can act to improve things right now.

Furthermore, 'smart city' suggests something monolithic that can be designed as a whole. But a smart city, like any city, is a huge mess of interconnected things. It resists top-down design.

History is littered with failed attempts at authoritarian high-modernist city design. Just stop it.

Smartness should not be an end but a means.

I read 'smart' as shorthand for 'technologically augmented'. A smart city is a city eaten by software. All cities are being eaten (or have been eaten) by software to a greater or lesser extent. Uber and Airbnb are obvious examples. Smaller, more subtle ones abound.

The question is: smart to what end? Efficiency? Legibility? Controllability? Anti-fragility? Playability? Liveability? Sustainability? The answer depends on your outlook.

These are ways in which the smart city label obscures. It obscures agency. It obscures networks. It obscures intent.

I'm not saying don't ever use it. But in many cases you can get by without it. You can talk about specific parts that make up the whole of a city, specific technologies and specific aims.


Postscript 1

We can do the same exercise with the 'city' part of the meme.

The same process that is making cities smart (software eating the world) is also making everything else smart. Smart towns. Smart countrysides. The ends are different. The networks are different. The processes play out in different ways.

It's okay to think about cities, but don't think they have a monopoly on 'disruption'.

Postscript 2

Some of this was inspired by clever things I heard Sebastian Quack say at Playful Design for Smart Cities and Usman Haque at ThingsCon Amsterdam.

Playful Design for Smart Cities

Earlier this week I escaped the miserable weather and food of the Netherlands to spend a couple of days in Barcelona, where I attended the 'Playful Design for Smart Cities' workshop at RMIT Europe.

I helped Jussi Holopainen run a workshop in which participants from industry, government and academia together defined projects aimed at further exploring this idea of playful design within the context of smart cities, without falling into the trap of solutionism.

Before the workshop I presented a summary of my chapter in The Gameful World, along with some of my current thinking on it. There were also great talks by Judith Ackermann, Florian 'Floyd' Müller, and Gilly Karjevsky and Sebastian Quack.

Below are the slides for my talk and links to all the articles, books and examples I explicitly and implicitly referenced throughout.

Adapting intelligent tools for creativity

I read Alper's book on conversational user interfaces over the weekend and was struck by this paragraph:

"The holy grail of a conversational system would be one that's aware of itself — one that knows its own model and internal structure and allows you to change all of that by talking to it. Imagine being able to tell Siri to tone it down a bit with the jokes and that it would then actually do that."

His point stuck with me because I think this is of particular importance to creative tools. These need to be flexible so that a variety of people can use them in different circumstances. This adaptability is what lends a tool depth.

The depth I am thinking of in creative tools is similar to the one in games, which appears to be derived from a kind of semi-orderedness. In short, you're looking for a sweet spot between too simple and too complex.

And of course, you need good defaults.

Back to adaptation. This can happen in at least two ways on the interface level: modal or modeless. A simple example of the former would be going into a preferences window to change the behaviour of your drawing package. By contrast, modeless adaptation happens when you rearrange some panels to better suit the task at hand.

Returning to Siri, the equivalent of modeless adaptation would be to tell her to tone it down when her sense of humor irks you.

For the modal solution, imagine a humor slider in a settings screen somewhere. This would be a terrible solution because it offers a poor mapping of a control to a personality trait. Can you pinpoint on a scale of 1 to 10 your preferred amount of humor in your hypothetical personal assistant? And anyway, doesn't it depend on a lot of situational things, such as your mood, the particular task you're trying to complete and so on? In short, this requires something more situated and adaptive.

So just being able to tell Siri to tone it down would be the equivalent of rearranging your Photoshop palettes. And in a next interaction Siri might carefully try some humor again to gauge your response. And if you encourage her, she might be more humorous again.
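That feedback loop can be sketched in a few lines. To be clear, this is a toy: the class, thresholds and numbers below are all made up for illustration and have nothing to do with how Siri actually works.

```python
class HumorSetting:
    """Hypothetical sketch of one adaptable personality trait."""

    def __init__(self, level: float = 0.5):
        self.level = level  # 0.0 = deadpan, 1.0 = constant jokes

    def tone_it_down(self) -> None:
        """Modeless adaptation: the user objects in conversation."""
        self.level = max(0.0, self.level - 0.2)

    def encourage(self) -> None:
        """A tentative quip lands well, so the trait drifts back up."""
        self.level = min(1.0, self.level + 0.1)

    def should_joke(self) -> bool:
        """Only probe with humor again once the level has recovered a bit."""
        return self.level > 0.3

assistant = HumorSetting()
assistant.tone_it_down()        # "Siri, tone it down a bit with the jokes"
print(assistant.should_joke())  # False: she holds back for now
assistant.encourage()           # the user laughs at a later tentative quip
print(assistant.should_joke())  # True: she carefully tries humor again
```

The interesting design question is not the arithmetic, of course, but that the adjustment happens through the conversation itself rather than through a slider in a settings screen.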

Funny Siri is a bit of a silly example, but she does illustrate another problem I am trying to wrap my head around: how does an intelligent tool for creativity communicate its internal state? Because it is probabilistic, it can't be easily mapped to a graphic information display. And so our old way of manipulating state, and more specifically adapting a tool to our needs, becomes very different too.

It seems best for an intelligent system to be open to suggestions from users about how to behave. Adapting an intelligent creative tool is less like rearranging your workspace and more like coordinating with a coworker.

My ideal is for this to be done in the same mode (and so using the same controls) as when doing the work itself. I expect this to allow for more fluid interactions, going back and forth between doing the work at hand and meta-communication about how the system supports the work. If we look at how people collaborate, this happens a lot: communication and meta-communication going on continuously in the same channels.

We don't need a self-aware artificial intelligence to do this. We need to apply what computer scientists call supervised learning. The basic idea is to provide a system with example inputs and desired outputs, and let it infer the necessary rules from them. If the results are unsatisfactory, you simply continue training it until it performs well enough.
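As a sketch of that loop, here is a deliberately tiny model: a linear map trained with online gradient updates. The target mapping (output = 2a + b) is made up for illustration; the point is only the shape of the process: show examples, update, repeat until the fit is good enough.

```python
def train_step(weights, x, target, lr=0.1):
    """One online update: nudge the weights toward the desired output."""
    prediction = sum(w * xi for w, xi in zip(weights, x))
    error = target - prediction
    return [w + lr * error * xi for w, xi in zip(weights, x)]

# Each example pairs an input vector with a desired output (here: y = 2a + b).
examples = [((1.0, 0.0), 2.0), ((0.0, 1.0), 1.0), ((1.0, 1.0), 3.0)]

weights = [0.0, 0.0]
# "Continue training until it performs well enough": loop over the examples.
for _ in range(200):
    for x, target in examples:
        weights = train_step(weights, x, target)

print([round(w, 2) for w in weights])  # converges to roughly [2.0, 1.0]
```

The rules (the weights) are never written by hand; they are inferred from the example pairs, and more training simply means more passes through them.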

A super fun example of this approach is the Wekinator, a piece of machine learning software for creating musical instruments. Below is a video in which Wekinator's creator Rebecca Fiebrink performs several demos.

Here we have an intelligent system learning from examples. A person manipulating data instead of code to get to a particular desired behaviour. But what the Wekinator lacks, and what I expect will be required for this type of thing to really catch on, is for the training to happen in the same mode or medium as the performance. The technology seems to be getting there, but there are many interaction design problems remaining to be solved.