Prototyping the Useless Butler: Machine Learning for IoT Designers

ThingsCon Amsterdam 2017, photo by nunocruzstreet.com

At ThingsCon Amsterdam 2017, Péter and I ran a second iteration of our machine learning workshop. We improved on our first attempt at TU Delft in a number of ways.

  • We prepared example code for communicating with Wekinator from a wifi-connected Arduino MKR1000 over OSC.
  • We created a predefined breadboard setup.
  • We developed three exercises, one for each type of Wekinator output: regression, classification and dynamic time warping.
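To give a feel for what that example code does, here is a minimal sketch (in Python rather than Arduino C++, so it runs anywhere) of the same idea: pack sensor readings into an OSC message and send it over UDP. It assumes Wekinator's default input address `/wek/inputs` and port 6448; the LDR readings are made up, and the actual workshop code, which uses CNMAT's OSC library on the Arduino, is on GitHub.

```python
import socket
import struct

def osc_pad(s: bytes) -> bytes:
    """Null-terminate an OSC string and pad it to a 4-byte boundary."""
    return s + b"\x00" * (4 - len(s) % 4)

def osc_message(address: str, *values: float) -> bytes:
    """Encode an OSC message whose arguments are all float32."""
    type_tags = "," + "f" * len(values)
    msg = osc_pad(address.encode()) + osc_pad(type_tags.encode())
    for v in values:
        msg += struct.pack(">f", v)  # OSC numbers are big-endian
    return msg

# Send two (made-up) LDR readings to Wekinator's default input port.
packet = osc_message("/wek/inputs", 512.0, 278.0)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 6448))
```

The entire wire format is just null-padded strings and big-endian floats, which is why such a small microcontroller can speak it comfortably.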

In contrast to the first version, we had two hours to run through the whole thing instead of a day. So we had to cut some corners, and we doubled down on walking participants through a number of exercises so that they would come out of it with some readily applicable skills.

We dubbed the workshop ‘prototyping the useless butler’, with thanks to Philip van Allen for the suggestion to frame the exercises around building something non-productive so that the focus was shifted to play and exploration.

All of the code, the circuit diagram and slides are over on GitHub. But I’ll summarise things here.

  1. We spent a very short amount of time introducing machine learning. We used Google’s Teachable Machine as an example and contrasted regular programming with using machine learning algorithms to train models. The point was to provide folks with just enough conceptual scaffolding so that the rest of the workshop would make sense.
  2. We then introduced our ‘toolchain’, which consists of Wekinator, the Arduino MKR1000 module and the OSC protocol. The aim of this toolchain is to allow designers who work in the IoT space to get a feel for the material properties of machine learning through hands-on tinkering. We tried to create a toolchain with as few moving parts as possible, because each additional component would introduce another point of failure which might require debugging. This toolchain enables designers either to use machine learning to rapidly prototype interactive behaviour with minimal or no programming, or to prototype products that expose interactive machine learning features to end users. (For a speculative example of one such product, see Bjørn Karmann’s Objectifier.)
  3. Participants were then asked to set up all the required parts on their own workstation. A list can be found on the Useless Butler GitHub page.
  4. We then proceeded to build the circuit. We provided all the components and showed a Fritzing diagram to help people along. The basic idea of this circuit, the eponymous useless butler, was to have a sufficiently rich set of inputs and outputs with which to play, one that would suit all three types of Wekinator output. So we settled on a pair of photoresistors (LDRs) as inputs and an RGB LED as output.
  5. With the prerequisites installed and the circuit built, we were ready to walk through the examples. For regression we mapped the continuous stream of readings from the two LDRs to three outputs, one each for the red, green and blue of the LED. For classification we put the state of both LDRs into one of four categories, each switching the RGB LED to a specific colour (cyan, magenta, yellow or white). And finally, for dynamic time warping, we asked Wekinator to recognise one of three gestures and switch the RGB LED to one of three states (red, green or off).
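For a feel of the receiving side of these exercises: Wekinator sends its results back as an OSC message too (by default to `/wek/outputs` on port 12000, if I recall the defaults correctly). The following illustrative Python sketch, not the workshop's Arduino code, shows one way to pull the float arguments out of such a packet.

```python
import struct

def parse_osc_floats(packet: bytes) -> list[float]:
    """Extract the float32 arguments from a single OSC message."""
    # Skip the null-padded address string (e.g. "/wek/outputs").
    i = 0
    while packet[i] != 0:
        i += 1
    i = (i // 4 + 1) * 4                  # jump past address padding
    # Read the type-tag string, e.g. ",fff" for three floats.
    tags_start = i
    while packet[i] != 0:
        i += 1
    tags = packet[tags_start + 1:i].decode()   # drop the leading ','
    i = (i // 4 + 1) * 4                  # jump past type-tag padding
    values = []
    for tag in tags:
        if tag == "f":
            values.append(struct.unpack(">f", packet[i:i + 4])[0])
            i += 4
    return values
```

On the Arduino the equivalent parsing is handled by CNMAT's OSC library; for the regression exercise, the three floats would then be scaled to 0–255 and written to the LED pins with analogWrite.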

When we reflected on the workshop afterwards, we agreed we now have a proven concept. Participants were able to get the toolchain up and running and could play around with iteratively training and evaluating their model until it behaved as intended.

However, there is still quite a bit of room for improvement. On a practical note, quite a bit of time was taken up by the building of the circuit, which isn’t the point of the workshop. One way of dealing with this is to bring pre-built circuits to the workshop. Doing so would enable us to get to the machine learning quicker and would open up time and space to also engage with the participants about the point of it all.

We’re keen on bringing this workshop to more settings in future. If we do, I’m sure we’ll find the opportunity to improve on things once more, and I will report back here.

Many thanks to Iskander and the rest of the ThingsCon team for inviting us to the conference.

ThingsCon Amsterdam 2017, photo by nunocruzstreet.com

‘Machine Learning for Designers’ workshop

On Wednesday Péter Kun, Holly Robbins and I taught a one-day workshop on machine learning at Delft University of Technology. We had about thirty master’s students from the industrial design engineering faculty. The aim was to get them acquainted with the technology through hands-on tinkering, with the Wekinator as the central teaching tool.

Photo credits: Holly Robbins

Background

The reasoning behind this workshop is twofold.

On the one hand, I expect designers will find themselves working on projects involving machine learning more and more often. The technology has certain properties that differ from traditional software. Most importantly, machine learning is probabilistic instead of deterministic. It is important that designers understand this, because otherwise they are likely to make bad decisions about its application.

The second reason is that I have a strong sense machine learning can play a role in the augmentation of the design process itself. So-called intelligent design tools could make designers more efficient and effective. They could also enable the creation of designs that would otherwise be impossible or very hard to achieve.

The workshop explored both ideas.

Photo credits: Holly Robbins

Format

The structure was roughly as follows:

In the morning we started out by providing a very broad introduction to the technology. We talked about the very basic premise of (supervised) learning: namely, providing examples of inputs and desired outputs, and training a model based on those examples. To make these concepts tangible we then introduced the Wekinator and walked the students through getting it up and running using basic examples from the website. The final step was to invite them to explore alternative inputs and outputs (such as game controllers and Arduino boards).

In the afternoon we provided a design brief, asking the students to prototype a data-enabled object with the set of tools they had acquired in the morning. We assisted with technical hurdles where necessary (of which there were more than a few) and closed out the day with demos and a group discussion reflecting on their experiences with the technology.

Photo credits: Holly Robbins

Results

As I tweeted on the way home that evening, the results were… interesting.

Not all groups managed to put something together in the admittedly short amount of time they were provided with. They were most often stymied by getting an Arduino to talk to the Wekinator. Max was often picked as a go-between, because the Wekinator receives OSC messages over UDP, whereas the quickest way to get an Arduino to talk to a computer is over serial. But Max in my experience is a fickle beast and would more than once crap out on us.
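For the curious, the go-between itself is not much code. Here is a rough Python sketch of the kind of bridge that could stand in for Max: it takes comma-separated sensor readings (the sort of lines Serial.println produces) and forwards them to Wekinator as OSC over UDP. The serial-port reading itself (e.g. via pyserial) is left out to keep the sketch self-contained, and the address and port are Wekinator's defaults.

```python
import socket
import struct

WEKINATOR = ("127.0.0.1", 6448)   # Wekinator's default input port

def osc_pad(s: bytes) -> bytes:
    """Null-terminate an OSC string and pad it to a 4-byte boundary."""
    return s + b"\x00" * (4 - len(s) % 4)

def line_to_osc(line: str) -> bytes:
    """Turn a serial line like '512,278' into a /wek/inputs OSC message."""
    values = [float(v) for v in line.strip().split(",")]
    msg = osc_pad(b"/wek/inputs") + osc_pad(("," + "f" * len(values)).encode())
    for v in values:
        msg += struct.pack(">f", v)   # OSC floats are big-endian
    return msg

def bridge(lines) -> None:
    """Forward each line of sensor readings to Wekinator over UDP."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for line in lines:
        sock.sendto(line_to_osc(line), WEKINATOR)
```

In practice you would hand `bridge()` an open pyserial connection instead of a list of strings; the point is that the serial-to-UDP translation is a dozen lines, not a patcher environment.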

The groups that did build something mainly assembled prototypes from the examples on hand. Which is fine, but since we were mainly working with the examples from the Wekinator website, these tended towards the interactive instrument side of things. We were hoping for explorations of IoT product concepts. For that, more hand-rolling was required, and this was only achievable for the students on the higher end of the technical expertise spectrum (and the more tenacious ones).

The discussion yielded some interesting insights into mental models of the technology and how they are affected by hands-on experience. A comment I heard more than once was: ‘Why is this considered learning at all?’ The Wekinator was not perceived to be learning anything. When challenged on this by reiterating the underlying principles, it became clear the black-box nature of the Wekinator hampers appreciation of some of the very real achievements of the technology. It seems (for our students at least) machine learning is stuck in a grey area between too-high expectations and too-low recognition of its capabilities.

Next steps

These results, and others, point towards some obvious improvements which can be made to the workshop format, and to teaching design students about machine learning more broadly.

  1. We can improve the toolset so that some of the heavy lifting involved with getting the various parts to talk to each other is made easier and more reliable.
  2. We can build examples that are geared towards the practice of designing IoT products and are ready for adaptation and hacking.
  3. And finally, and probably most challengingly, we can make the workings of machine learning more transparent, so that it becomes easier to develop a feel for its capabilities and shortcomings.

We do intend to improve and teach the workshop again. If you’re interested in hosting one (either in an educational or professional context), let me know. And stay tuned for updates on this and other efforts to get designers to work in a hands-on manner with machine learning.

Special thanks to the brilliant Ianus Keller for connecting me to Péter and for allowing us to pilot this crazy idea at IDE Academy.

References

Sources used during preparation and running of the workshop:

  • The Wekinator – the UI is infuriatingly poor, but when it comes to getting started with machine learning this tool is unmatched.
  • Arduino – I have become particularly fond of the MKR1000 board. Add a lithium-polymer battery and you have everything you need to prototype IoT products.
  • OSC for Arduino – CNMAT’s implementation of the Open Sound Control (OSC) encoding. A key puzzle piece for getting the above two tools talking to each other.
  • Machine Learning for Designers – my preferred introduction to the technology from a designerly perspective.
  • A Visual Introduction to Machine Learning – a very accessible visual explanation of the basic underpinnings of computers applying statistical learning.
  • Remote Control Theremin – an example project I prepared for the workshop demoing how to have the Wekinator talk to an Arduino MKR1000 with OSC over UDP.

Adapting intelligent tools for creativity

I read Alper’s book on conversational user interfaces over the weekend and was struck by this paragraph:

“The holy grail of a conversational system would be one that’s aware of itself — one that knows its own model and internal structure and allows you to change all of that by talking to it. Imagine being able to tell Siri to tone it down a bit with the jokes and that it would then actually do that.”

His point stuck with me, because I think this is of particular importance to creative tools. These need to be flexible so that a variety of people can use them in different circumstances. This adaptability is what lends a tool depth.

The depth I am thinking of in creative tools is similar to the one in games, which appears to be derived from a kind of semi-orderedness. In short, you’re looking for a sweet spot between too simple and too complex.

And of course, you need good defaults.

Back to adaptation. This can happen in at least two ways on the interface level: modal or modeless. A simple example of the former would be to go into a preferences window to change the behaviour of your drawing package. Similarly, modeless adaptation happens when you rearrange some panels to better suit the task at hand.

Returning to Siri, the equivalent of modeless adaptation would be to tell her to tone it down when her sense of humour irks you.

For the modal solution, imagine a humour slider in a settings screen somewhere. This would be a terrible solution, because it offers a poor mapping of a control to a personality trait. Can you pinpoint, on a scale of 1 to 10, your preferred amount of humour in your hypothetical personal assistant? And anyway, doesn’t it depend on a lot of situational things, such as your mood, the particular task you’re trying to complete, and so on? In short, this requires something more situated and adaptive.

So just being able to tell Siri to tone it down would be the equivalent of rearranging your Photoshop palettes. And in a next interaction Siri might carefully try some humour again to gauge your response. And if you encourage her, she might be more humorous again.

Enough about funny Siri, though, because she’s a bit of a silly example.

Still, she does illustrate another problem I am trying to wrap my head around. How does an intelligent tool for creativity communicate its internal state? Because it is probabilistic, that state can’t easily be mapped to a graphic information display. And so our old way of manipulating state, and more specifically of adapting a tool to our needs, becomes very different too.

It seems best for an intelligent system to be open to suggestions from users about how to behave. Adapting an intelligent creative tool is less like rearranging your workspace and more like coordinating with a coworker.

My ideal is for this to be done in the same mode (and so using the same controls) as the work itself. I expect this to allow for more fluid interactions, going back and forth between doing the work at hand and meta-communication about how the system supports that work. If we look at how people collaborate, we see this happen a lot: communication and meta-communication go on continuously in the same channels.

We don’t need a self-aware artificial intelligence to do this. We need to apply what computer scientists call supervised learning. The basic idea is to provide a system with example inputs and desired outputs, and let it infer the necessary rules from them. If the results are unsatisfactory, you simply continue training it until it performs well enough.
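To make that premise concrete, here is a toy ‘learner’ in a few lines of Python: a nearest-neighbour model (a simple cousin of the algorithms Wekinator offers) that memorises example input/output pairs and answers any query with the output of the closest stored example. The lighting-condition labels are made up for illustration.

```python
import math

class NearestNeighbour:
    """A minimal 'learner': memorise examples, answer with the closest one."""

    def __init__(self):
        self.examples = []          # list of (inputs, output) pairs

    def train(self, inputs, output):
        """Add one example of an input and its desired output."""
        self.examples.append((inputs, output))

    def run(self, inputs):
        """Return the output of the stored example nearest to the query."""
        def distance(example):
            stored_inputs, _ = example
            return math.dist(stored_inputs, inputs)
        return min(self.examples, key=distance)[1]

# Teach it two lighting conditions by example, then query it.
model = NearestNeighbour()
model.train([900, 880], "bright")   # both light sensors lit
model.train([120, 140], "dark")     # both light sensors covered
print(model.run([850, 900]))        # -> "bright"
```

No rules were written down; unsatisfactory answers are fixed not by editing code but by calling `train()` with more examples, which is exactly the interaction loop Wekinator offers.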

A super fun example of this approach is the Wekinator, a piece of machine learning software for creating musical instruments. Below is a video in which Wekinator’s creator Rebecca Fiebrink performs several demos.

Here we have an intelligent system learning from examples: a person manipulating data instead of code to get to a particular desired behaviour. But what Wekinator lacks, and what I expect will be required for this type of thing to really catch on, is for the training to happen in the same mode or medium as the performance. The technology seems to be getting there, but there are many interaction design problems remaining to be solved.