Prototyping the Useless Butler: Machine Learning for IoT Designers

ThingsCon Amsterdam 2017, photo by nunocruzstreet.com

At ThingsCon Amsterdam 2017, Péter and I ran a second iteration of our machine learning workshop. We improved on our first attempt at TU Delft in a number of ways.

  • We prepared example code for communicating with Wekinator from a wifi-connected Arduino MKR1000 over OSC (a minimal version of the sending sketch follows this list).
  • We created a predefined breadboard setup.
  • We developed three exercises, one for each type of Wekinator output: regression, classification and dynamic time warping.
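
To give a sense of what that example code involves, here is a minimal sketch in the same spirit (not the exact code from the repository): it assumes CNMAT’s OSC for Arduino and the WiFi101 library, and the network credentials, Wekinator machine address and analog pins are placeholders to adjust for your own setup.

```cpp
// Minimal sketch: read two LDRs on an Arduino MKR1000 and send them to
// Wekinator as one OSC message over UDP. Credentials, the Wekinator IP
// and the analog pins are placeholders.
#include <WiFi101.h>
#include <WiFiUdp.h>
#include <OSCMessage.h> // CNMAT's OSC for Arduino

const char ssid[] = "your-network";      // placeholder
const char pass[] = "your-password";     // placeholder
IPAddress wekinatorIp(192, 168, 1, 10);  // machine running Wekinator (placeholder)
const unsigned int wekinatorPort = 6448; // Wekinator's default input port

WiFiUDP Udp;

void setup() {
  WiFi.begin(ssid, pass);
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
  }
  Udp.begin(9000); // arbitrary local port
}

void loop() {
  // Package both light readings as floats on Wekinator's default input address.
  OSCMessage msg("/wek/inputs");
  msg.add((float)analogRead(A1));
  msg.add((float)analogRead(A2));

  Udp.beginPacket(wekinatorIp, wekinatorPort);
  msg.send(Udp);
  Udp.endPacket();
  msg.empty();

  delay(50); // roughly twenty input messages per second
}
```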

In contrast to the first version, we had two hours to run through the whole thing, instead of a day… So we had to cut some corners, and doubled down on walking participants through a number of exercises so that they would come out of it with some readily applicable skills.

We dubbed the workshop ‘prototyping the useless butler’, with thanks to Philip van Allen for the suggestion to frame the exercises around building something non-productive so that the focus was shifted to play and exploration.

All of the code, the circuit diagram and slides are over on GitHub. But I’ll summarise things here.

  1. We spent a very short amount of time introducing machine learning. We used Google’s Teachable Machine as an example and contrasted regular programming with using machine learning algorithms to train models. The point was to provide folks with just enough conceptual scaffolding so that the rest of the workshop would make sense.
  2. We then introduced our ‘toolchain’, which consists of Wekinator, the Arduino MKR1000 module and the OSC protocol. The aim of this toolchain is to allow designers who work in the IoT space to get a feel for the material properties of machine learning through hands-on tinkering. We tried to create a toolchain with as few moving parts as possible, because each additional component would introduce another point of failure which might require debugging. This toolchain enables designers to use machine learning to rapidly prototype interactive behaviour with minimal or no programming. It can also be used to prototype products that expose interactive machine learning features to end users. (For a speculative example of one such product, see Bjørn Karmann’s Objectifier.)
  3. Participants were then asked to set up all the required parts on their own workstation. A list can be found on the Useless Butler GitHub page.
  4. We then proceeded to build the circuit. We provided all the components and showed a Fritzing diagram to help people along. The basic idea of this circuit, the eponymous useless butler, was to have a sufficiently rich set of inputs and outputs with which to play, one that would suit all three types of Wekinator output. So we settled on a pair of photoresistors or LDRs as inputs and an RGB LED as output.
  5. With the prerequisites installed and the circuit built we were ready to walk through the examples. For regression we mapped the continuous stream of readings from the two LDRs to three outputs, one each for the red, green and blue of the LED. For classification we put the state of both LDRs into one of four categories, each switching the RGB LED to a specific color (cyan, magenta, yellow or white). And finally, for dynamic time warping, we asked Wekinator to recognise one of three gestures and switch the RGB LED to one of three states (red, green or off). (A minimal receive-side sketch for the classification case follows this list.)
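
And here is a flavour of the other direction: a minimal sketch of the receiving side for the classification exercise, again in the same spirit as (but not identical to) the repository examples. It assumes CNMAT’s OSC library, Wekinator’s default output address and port, and placeholder LED pins.

```cpp
// Minimal sketch: listen for Wekinator's classifier output over UDP and
// switch an RGB LED to cyan, magenta, yellow or white. Credentials and
// pin numbers are placeholders.
#include <WiFi101.h>
#include <WiFiUdp.h>
#include <OSCMessage.h>

const char ssid[] = "your-network";  // placeholder
const char pass[] = "your-password"; // placeholder
const int redPin = 2, greenPin = 3, bluePin = 4; // placeholder PWM pins

WiFiUDP Udp;

void setColor(int r, int g, int b) {
  analogWrite(redPin, r);
  analogWrite(greenPin, g);
  analogWrite(bluePin, b);
}

// Called for every "/wek/outputs" message; Wekinator sends the chosen class as a float.
void onOutputs(OSCMessage &msg) {
  int category = (int)msg.getFloat(0);
  if (category == 1) setColor(0, 255, 255);        // cyan
  else if (category == 2) setColor(255, 0, 255);   // magenta
  else if (category == 3) setColor(255, 255, 0);   // yellow
  else if (category == 4) setColor(255, 255, 255); // white
}

void setup() {
  WiFi.begin(ssid, pass);
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
  }
  Udp.begin(12000); // Wekinator's default output port
}

void loop() {
  // Read a UDP packet, parse it as OSC and dispatch it to the handler above.
  OSCMessage msg;
  int size = Udp.parsePacket();
  if (size > 0) {
    while (size--) {
      msg.fill(Udp.read());
    }
    if (!msg.hasError()) {
      msg.dispatch("/wek/outputs", onOutputs);
    }
  }
}
```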

When we reflected on the workshop afterwards, we agreed we now have a proven concept. Participants were able to get the toolchain up and running and could play around with iteratively training and evaluating their model until it behaved as intended.

However, there is still quite a bit of room for improvement. On a practical note, a lot of time was taken up by building the circuit, which isn’t the point of the workshop. One way of dealing with this is to bring pre-built circuits to a workshop. Doing so would enable us to get to the machine learning quicker and would open up time and space to also engage with the participants about the point of it all.

We’re keen on bringing this workshop to more settings in future. If we do, I’m sure we’ll find the opportunity to improve on things once more and I will report back here.

Many thanks to Iskander and the rest of the ThingsCon team for inviting us to the conference.

ThingsCon Amsterdam 2017, photo by nunocruzstreet.com

Machine Learning for Designers’ workshop

On Wednesday Péter Kun, Holly Robbins and myself taught a one-day workshop on machine learning at Delft University of Technology. We had about thirty master’s students from the industrial design engineering faculty. The aim was to get them acquainted with the technology through hands-on tinkering, with the Wekinator as the central teaching tool.

Photo credits: Holly Robbins

Background

The reasoning behind this workshop is twofold.

On the one hand I expect designers will find themselves working on projects involving machine learning more and more often. The technology has certain properties that differ from traditional software. Most importantly, machine learning is probabilistic instead of deterministic. It is important that designers understand this because otherwise they are likely to make bad decisions about its application.

The second reason is that I have a strong sense machine learning can play a role in the augmentation of the design process itself. So-called intelligent design tools could make designers more efficient and effective. They could also enable the creation of designs that would otherwise be impossible or very hard to achieve.

The workshop explored both ideas.

Photo credits: Holly Robbins

Format

The structure was roughly as follows:

In the morning we started out providing a very broad introduction to the technology. We talked about the very basic premise of (supervised) machine learning: providing examples of inputs and desired outputs, and training a model based on those examples. To make these concepts tangible we then introduced the Wekinator and walked the students through getting it up and running using basic examples from the website. The final step was to invite them to explore alternative inputs and outputs (such as game controllers and Arduino boards).

In the afternoon we provided a design brief, asking the students to prototype a data-enabled object with the set of tools they had acquired in the morning. We assisted with technical hurdles where necessary (of which there were more than a few) and closed out the day with demos and a group discussion reflecting on their experiences with the technology.

Photo credits: Holly Robbins

Results

As I tweeted on the way home that evening, the results were… interesting.

Not all groups managed to put something together in the admittedly short amount of time they were provided with. They were most often stymied by getting an Arduino to talk to the Wekinator. Max was often picked as a go-between because the Wekinator receives OSC messages over UDP, whereas the quickest way to get an Arduino to talk to a computer is over serial. But Max in my experience is a fickle beast and would more than once crap out on us.

The groups that did build something mainly assembled prototypes from the examples on hand. Which is fine, but since we were mainly working with the examples from the Wekinator website they tended towards the interactive instrument side of things. We were hoping for explorations of IoT product concepts. For that more hand-rolling was required, and this was only achievable for the students on the higher end of the technical expertise spectrum (and the more tenacious ones).

The discussion yielded some interesting insights into mental models of the technology and how they are affected by hands-on experience. A comment I heard more than once was: why is this considered learning at all? The Wekinator was not perceived to be learning anything. When challenged on this by reiterating the underlying principles it became clear the black box nature of the Wekinator hampers appreciation of some of the very real achievements of the technology. It seems (for our students at least) machine learning is stuck in a grey area between too-high expectations and too-low recognition of its capabilities.

Next steps

These results, and others, point towards some obvious improvements which can be made to the workshop format, and to teaching design students about machine learning more broadly.

  1. We can improve the toolset so that some of the heavy lifting involved with getting the various parts to talk to each other is made easier and more reliable.
  2. We can build examples that are geared towards the practice of designing IoT products and are ready for adaptation and hacking.
  3. And finally, and probably most challengingly, we can make the workings of machine learning more transparent so that it becomes easier to develop a feel for its capabilities and shortcomings.

We do intend to improve and teach the workshop again. If you’re interested in hosting one (either in an educational or professional context) let me know. And stay tuned for updates on this and other efforts to get designers to work in a hands-on manner with machine learning.

Special thanks to the brilliant Ianus Keller for connecting me to Péter and for allowing us to pilot this crazy idea at IDE Academy.

References

Sources used during preparation and running of the workshop:

  • The Wekinator – the UI is infuriatingly poor but when it comes to getting started with machine learning this tool is unmatched.
  • Arduino – I have become particularly fond of the MKR1000 board. Add a lithium-polymer battery and you have everything you need to prototype IoT products.
  • OSC for Arduino – CNMAT’s implementation of the open sound control (OSC) encoding. Key puzzle piece for getting the above two tools talking to each other.
  • Machine Learning for Designers – my preferred introduction to the technology from a designerly perspective.
  • A Visual Introduction to Machine Learning – a very accessible visual explanation of the basic underpinnings of computers applying statistical learning.
  • Remote Control Theremin – an example project I prepared for the workshop demoing how to have the Wekinator talk to an Arduino MKR1000 with OSC over UDP.

Engagement design worksheets

Engagement design workshop at General Assembly Singapore

In June/July of this year I helped Michael Fillié teach two classes about engagement design at General Assembly Singapore. The first was theoretical and the second practical. For the practical class we created a couple of worksheets which participants used in groups to gradually build a design concept for a new product or product improvement aimed at long-term engagement. Below are the worksheets along with some notes on how to use them. I’m hoping they may be useful in your own practice.

A practical note: Each of these worksheets is designed to be printed on A1 paper. (Click on the images to get the PDFs.) We worked on them using post-it notes so that it is easy to add, change or remove things as you go.

Problem statement and persona

01-problem-statement-and-persona

We started with understanding the problem and the user. This worksheet is an adaptation of the persona sheet by Strategyzer. To use it you begin at the top, fleshing out the problem in the form of stating the engagement challenge and the business goals. Then, you select a user segment which is relevant to the problem.

The middle section of the sheet is used to describe them in the form of a persona. Start with putting a face on them. Give the persona a name and add some demographic details relevant for the user’s behaviour. Then, move on to exploring what their environment looks and sounds like and what they are thinking and feeling. Finally, try to describe what issues the user is having that are addressed by the product and what the user stands to gain from using the product.

The third section of this sheet is used to wrap up the first exercise by doing a quick gap analysis of what the business would like to see in terms of user behaviour and what the user is currently doing. This will help pin down the engagement design concept fleshed out in the next exercises.

Engagement loop

02-engagement-loop

Exercise two builds on the understanding of the problem and the user and offers a structured way of thinking through a possible solution. For this we use the engagement loop model developed by Sebastian Deterding. There are different places we can start here but one that often works well is to start imagining the Big Hairy Audacious Goal the user is looking to achieve. This is the challenge. It is a thing (usually a skill) the user can improve at. Note this challenge down in the middle. Then, working around the challenge, describe a measurable goal the user can achieve on their way to mastering the challenge. Describe the action the user can take with the product towards that goal, and the feedback the product will give them to let them know their action has succeeded and how much closer it has gotten them to the goal. Finally and crucially, try to describe what kind of motivation the user is driven by and make sure the goals, actions and feedback make sense in that light. If not, adjust things until it all clicks.

Storyboard

03-storyboard

The final exercise is devoted to visualising and telling a story about the engagement loop we developed in the abstract in the previous block. It is a typical storyboard, but we have constrained it to a set of story beats you must hit to build a satisfying narrative. We go from introducing the user and their challenge, to how the product communicates the goal and action, to what a user does with it and how they get feedback on that, to (fast-forward) how they feel when they ultimately master the challenge. It makes the design concept relatable to outsiders and can serve as a jumping-off point for further design and development.

Use, adapt and share

Together, these three exercises and worksheets are a great way to think through an engagement design problem. We used them for teaching but I can also imagine teams using them to explore a solution to a problem they might be having with an existing product, or as a way to kickstart the development of a new product.

We’ve built on other people’s work for these so it only makes sense to share them again for others to use and build on. If you do use them I would love to hear about your experiences.

Week 146

Crazy, crazy week I am glad to have survived. But wait, it’s not done yet. Tomorrow (Saturday) I’ll be running a workshop in Leidsche Rijn with local young folk, for Cultuur19. The aim is to design a little social game that’ll function as a viral marketing tactic for our upcoming urban games design workshop in the same district. This is a Hubbub mission, and I am glad to have the support of Karel who — besides cooking up crazy plans at FourceLabs — is an occasional agent of Hubbub.

This was my last week working on site with Layar because I’m heading to Copenhagen on Sunday. I’ll be staying there for a few weeks, working there — for Layar still, possibly for Social Square — lecturing at CIID and apart from that just taking it a little slower. My apartment is around the corner from the Laundromat Café in Nørrebro so that should be no problem.

I was at Waag Society’s beautiful Theatrum Anatomicum last Wednesday to co-host a workshop on games and architecture as part of the Best Scene in Town project initiated by 7scenes. I presented three bold predictions for the future of games in the city. Look for a write-up of that one at the Hubbub blog soon. The teams came up with interesting concepts for games in Amsterdam and I enjoyed working with all of them.

Going back to the start of this week, I turned 30 on Monday. A watershed moment of some sort, I guess. Somewhat appropriately, we announced This happened – Utrecht #6 that day too. Check out the program; I am really pleased with our speakers.

Now let’s just hope that volcano doesn’t mess with my flight on Sunday and the next note will be coming to you from lovely CPH.

This pervasive games workshop I ran at this conference

A few things I got people to do at this year’s NLGD Festival of Games:

Paper sword fight

Fight each other with paper swords…

Hunting for a frisbee with lunch-boxes on their heads

…and run around with lunch-boxes on their heads.1

This was all part of a workshop I ran, titled ‘Playful Tinkering’. The mysterious Mink ette — who amongst many things is a designer at Six to Start — and I got people to rapidly prototype pervasive games that were then played at the conference venue the day after. The best game won a magnificent trophy shaped like a spring rider.

Some exercises we did during the workshop:

  • Play a name game Mink ette had made up shortly before the workshop, in no time at all. This is good for several things: physical warm-up, breaking the ice, and demonstrating the kinds of games the session is about.
  • Walk around the room and write down imaginary game titles as well as names of games you used to play as a child. Good for emptying heads and warming up mentally.
  • Walk around again, and pick a post-it that intrigues you. Then guess what the game is about, and have others fill in the blanks where needed. Then play the game. This is mostly just for fun. (Nothing wrong with that.)
  • Analyse the games, breaking them up into their basic parts. Change one of those parts and play the game again. See what effect the change has. This is to get a sense of what games design is about, and how changing a rule impacts the player experience.

Participants brainstorming game ideas

People then formed groups and worked on an original game. We pushed them to rapidly generate a first ruleset that could be playtested with the other groups. After this they did another design sprint, and playtested again outside the room, “in the wild”. All of this in less than four hours. Whew!

The games that were made:

  • A game that involved hunting for people that matched the descriptions on post-its that were hidden around the venue. You first needed to find a post-it, then find the person that matched the description on it and finally take a photo of them for points. This game was so quick to play it already ran at the party, hours after the workshop finished.
  • ‘Crowd Control’ — compete with other players to get the largest percentage of a group of people to do what you are doing (like nodding your head). This game won the trophy, in part because of the ferocious player recruitment style the runners employed during the playtest.
  • A sailing game, where you tried to maneuver an imaginary boat from one end of a space to the other. Your movement was constrained by the “wind”, which was a function of the number of people on either side of your boat. It featured an ingenious measuring mechanic which used an improvised rope made from a torn-up conference tote bag.
  • The lunchbox thing was improvised during the lunch before the playtest. A student also brought in a game he was working on for his graduation to playtest.

We set up the playtest itself as follows:

The room was open to anyone passing by. Each game got its own station where the makers could recruit players, explain the rules, keep score, etc. Mink ette and I handed each player a red, blue and yellow tiddlywink. They could use these to vote on their favorite game in three separate categories, by handing the runners a tiddlywink. People could play more than once, and vote as often as they liked. We also kept track of how many players each game got. We handed out prizes to winners in the different categories (a lucky dip box loaded with piñata fillers). The most played game got the grand prize — the spring rider trophy I created with help from my sister and fabricated at the local fablab.2

The spring rider trophy and tiddlywinks all set for the playtest

Spring rider trophy and tiddlywinks ready for some playtesting action

It was a pleasure to have the elusive Mink ette over for the ride. I loved the way she explained what pervasive games were all about — being able to play anytime, anywhere with anything. I was also impressed with the way she managed to get people to do strange things without thinking twice.

We had a very dedicated group of participants, most of whom stuck around for the whole session and returned again for the playtest the next day. I’m very grateful for their enthusiasm. The whole experience was very rewarding; I’m keen on doing this more often at events and applying what I learnt to the workshops I run as part of my own games design practice.

Happy, happy winners!

Happy winners of the spring rider trophy flanked by Mink ette and yours truly

  1. Mayhem initiated by Evert and Marinka.
  2. I still need to write up the process of the trophy’s creation.

On sketching

Catching up with this slightly neglected blog (it’s been 6 weeks since the last proper post), I’d like to start by telling you about a small thing I helped out with last week. Peter Boersma1 asked me to help out with one of his UX Cocktail Hours. He was inspired by a recent IxDA Studio event where, instead of just chatting and drinking, designers actually made stuff. (Gasp!) Peter wanted to do a workshop where attendees collaborated on sketching a solution to a given design problem.

Part of my contribution to the evening was a short presentation on the theory and practice of sketching. On the theory side, I referenced Bill Buxton’s list of qualities that define what a sketch is2, and emphasized that this means a sketch can be done in any material, not necessarily pencil and paper. Furthermore I discussed why sketching works, using part of an article on embodied interaction3. The main point there, as far as I am concerned, is that when sketching, we as designers have the benefit of ‘backtalk’ from our materials, which can provide us with new insights. I wrapped up the presentation with a case study of a project I did a while back with the Amsterdam-based agency Info.nl4 for a social web start-up aimed at independent professionals. In the project I went quite far in using sketches to not only develop the design, but also collaboratively construct it with the client, technologists and others.

The whole thing was recorded; you can find a video of the talk at Vimeo (thanks to Iskander and Alper). I also uploaded the slides to SlideShare (sans notes).

The second, and most interesting, part of the evening was the workshop itself. This was set up as follows: Peter and I had prepared a fictional case, concerning peer-to-peer energy. We used the Dutch company Qurrent as an example, and asked the participants to conceptualise a way to encourage use of Qurrent’s product range. The aim was to have people be more energy efficient, and share surplus energy they had generated with the Qurrent community. The participants split up in teams of around ten people each, and went to work. We gave them around one hour to design a solution, using only pen and paper. Afterwards, they presented the outcome of their work to each other. For each team, we asked one participant to critique the work by mentioning one thing he or she liked, and one thing that could be improved. The team was then given a chance to reply. We also asked each team to briefly reflect on their working process. At the end of the evening everyone was given a chance to vote for their favourite design. The winner received a prize.5

Wrapping up, I think what I liked most about the workshop was seeing the many different ways the teams approached the problem (many of the participants did not know each other beforehand). Group dynamics varied hugely. I think it was valuable to have each team share their experiences on this front with each other. One thing that I think we could improve was the case itself; next time I would like to provide participants with a more focused, more richly detailed briefing for them to sink their teeth into. That might result in an assignment that is more about structure and behaviour (or even interface) and less about concepts and values. It would be good to see how sketching functions in such a context.

  1. the Netherlands’ tallest IA and one of several famous Peters who work in UX
  2. taken from his wonderful book Sketching User Experiences
  3. titled How Bodies Matter (PDF) by Klemmer and Takayama
  4. who were also the hosts of this event
  5. I think it’s interesting to note that the winner had a remarkable concept, but in my opinion was not the best example of the power of sketching. Apparently the audience valued product over process.

The theory and practice of urban game design

A few weeks ago NLGD asked me to help out with an urban games ‘seminar’ that they had commissioned in collaboration with the Dutch Game Garden. A group of around 50 students from two game design courses at the Utrecht School of the Arts1 were asked to design a game for the upcoming Festival of Games in Utrecht. The workshop lasted a week. My involvement consisted of a short lecture, followed by several design exercises designed to help the students get started on Monday. On Friday, I was part of the jury that determined which game would be played at the festival.

Lecture

In the lecture I briefly introduced some thinkers in urbanism that I find of interest to urban game designers. I talked about Jane Jacobs’ view of the city as a living organism that is grown from the bottom up. I also mentioned Kevin Lynch’s work around wayfinding and the elements that make up people’s mental maps of cities. I touched upon the need to have a good grasp of social interaction patterns2. Finally, I advised the students to be frugal when it comes to the inclusion of technology in their game designs. A good question to always ask yourself is: can I have as much fun without this gadget?

I wrapped up the lecture by looking at 5 games, some well-known, others less so: Big Urban Game, ConQwest, Pac-Manhattan, The Soho Project and The Comfort of Strangers. There are many more good examples, of course, but each of these helped in highlighting a specific aspect of urban games design.

Workshop

Next, I ran a workshop of around 3 hours with the students, consisting of two exercises (plus one they could complete afterwards in their own time). The first one is the most interesting to discuss here. It’s a game-like elicitation technique called VNA3, which derives its name from the card types in the deck it is made up of: verbs, nouns and adjectives.

Students doing a VNA exercise

The way it works is that you take turns drawing a card from the deck and making up a one-sentence idea involving the term. The first person to go draws a verb, the second person a noun and the third an adjective. Each person builds on the idea of his or her precursor. The concept that results from the three-card sequence is written down, and the next person draws a verb card again.4 The exercise resembles cadavre exquis, the biggest difference being that here, the terms are predetermined.

VNA is a great ice-breaker. The students were divided into teams of five and, because a side-goal of the seminar was to encourage collaboration between students from the different courses, they often did not know each other. Thanks to this exercise they became acquainted, but within a creative context. The exercise also privileges volume of ideas over their quality, which is perfect in the early stages of conceptualization. Last but not least, it is a lot of fun; many students asked where they could get the deck of cards.

Jurying

On Friday, I (together with the other jury members) was treated to ten presentations by the students. Each had prepared a video containing footage of prototyping and play-testing sessions, as well as an elevator pitch. A lot of them were quite good, especially considering the fact that many students had not created an urban game before, or hadn’t even played one. But one game really stood out for me. It employed a simple mechanic: making chains of people by holding hands. A chain was started by players, but required the help of passers-by to complete. Watching the videos of chains being completed evoked a strong positive emotional response, not only with myself, but also my fellow jurors. What’s more important though, is that the game clearly engendered happiness in its participants, including the people who joined in as it was being played.

An urban game being played

In one video sequence, we see a near-completed chain of people in a mall, shouting requests at people to join in. A lone man has been observing the spectacle from a distance for some time. Suddenly, he steps forward, and joins hands with the others. The chain is completed. A huge cheer emerges from the group, hands are raised in the air and applause follows, the man joining in. Then he walks off towards the camera, grinning, two thumbs up. I could not help but grin back.5

Happy urban game participant

  1. Game Design and Development and Design for Virtual Theatre and Games
  2. pointing to this resource, that was discussed at length on the IGDA ARG SIG
  3. developed by Annakaisa Kultima
  4. An interesting aside is that the deck was originally designed to be used for the creation of casual mobile games. The words were chosen accordingly. Despite this, or perhaps because of this, they are quite suitable to the design of urban games.
  5. To clarify, this was not the game that got selected for the Festival of Games. There were some issues with the game as a whole. It was short-listed though. Another excellent game, involving mechanics inspired by photo safari, was the winner.

A day of playing around with multi-touch and RoomWare

Last Saturday I attended a RoomWare workshop. The people of CanTouch were there too, and brought one of their prototype multi-touch tables. The aim for the day was to come up with applications of RoomWare (open source software that can sense presence of people in spaces) and multi-touch. I attended primarily because it was a good opportunity to spend a day messing around with a table.

Attendance was multifaceted, so while programmers were putting together a proof-of-concept, designers (such as Alexander Zeh, James Burke and I) came up with concepts for new interactions. The proof-of-concept was up and running at the end of the day: the table could sense who was in the room and display his or her Flickr photos, which you could then move around, scale, rotate, etc. in the typical multi-touch fashion.

The concepts designers came up with mainly focused on pulling in Last.fm data (again using RoomWare’s sensing capabilities) and displaying it for group-based exploration. Here’s a storyboard I quickly whipped up of one such application:

RoomWare + CanTouch + Last.fm

The storyboard shows how you can add yourself from a list of people present in the room. Your top artists flock around you. When more people are added, lines are drawn between you. The thickness of the line represents how similar your tastes are, according to Last.fm’s taste-o-meter. Also, shared top artists flock in such a way as to be closest to all related people. Finally, artists can be acted on to listen to music.

When I was sketching this, it became apparent that orientation of elements should follow very different rules from regular screens. I chose to sketch things so that they all point outwards, with the middle of the table as the orientation point.
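
That rule is simple to express in code. A toy sketch of it (not something we built that day), assuming screen coordinates and a table centre at (cx, cy):

```cpp
#include <cmath>

// Toy example: angle (in radians) of an element at (x, y) as seen from the
// table centre at (cx, cy). Rotating the element to align with this angle
// (offset by whatever your drawing convention needs) makes it point outwards.
double outwardAngle(double x, double y, double cx, double cy) {
  return std::atan2(y - cy, x - cx);
}
```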

By spending a day immersed in multi-touch stuff, some interesting design challenges became apparent:

  • With tabletop surfaces, stuff is closer or further away physically. Proximity of elements can be unintentionally interpreted as saying something about aspects such as importance, relevance, etc. Designers need to be even more aware of placement than before, plus conventions from vertically oriented screens no longer apply. Top-of-screen becomes furthest away and therefore least prominent instead of most important.
  • With group-based interactions, it becomes tricky to determine who to address and where to address him or her. Sometimes the system should address the group as a whole. When 5 people are standing around a table, text-based interfaces become problematic since what is legible from one end of the table is unintelligible from the other. New conventions need to be developed for this as well. Alexander and I philosophized about placing text along circles and animating them so that they circulate around the table, for instance.
  • Besides these, many other interface challenges present themselves. One crucial piece of information for solving many of these is knowing where people are located around the table. This issue can be approached from different angles. By incorporating sensors in the table, detection may be automated and interfaces could be made to adapt automatically. This is the techno-centric angle. I am not convinced this is the way to go, because it diminishes people’s control over the experience. I would prefer to make the interface itself adjustable in natural ways, so that people can mold the representation to suit their context. With situated technologies like this, auto-magical adaptation is an “AI-hard” problem, and the price of failure is a severely degraded user experience from which people cannot recover because the system won’t let them.

All in all the workshop was a wonderful day of tinkering with like-minded individuals from radically different backgrounds. As a designer, I think this is one of the best ways to be involved with open source projects. On a day like this, technologists can be exposed to new interaction concepts while they are hacking away. At the same time designers get that rare opportunity to play around with technology as it is shaped. Quick-and-dirty sketches like the ones Alexander and I came up with are definitely the way to communicate ideas. The goal is to suggest, not to describe, after all. Technologists should feel free to elaborate and build on what designers come up with and vice versa. I am curious to see which parts of what we came up with will find their way into future RoomWare projects.