Design and machine learning – an annotated reading list

Earlier this year I coached Design for Interaction master students at Delft University of Technology in the course Research Methodology. The students organised three seminars for which I provided the claims and assigned reading. In the seminars they argued about my claims using the Toulmin Model of Argumentation. The readings served as sources for backing and evidence.

The claims and readings were all related to my nascent research project about machine learning. We delved into both designing for machine learning and using machine learning as a design tool.

Below are the readings I assigned, with some notes on each, which should help you decide if you want to dive into them yourself.

Hebron, Patrick. 2016. Machine Learning for Designers. Sebastopol: O’Reilly.

The only non-academic piece in this list. It served the purpose of getting all students on the same page with regards to what machine learning is, its applications in interaction design, and common challenges encountered. I still can’t think of any other single resource that is as good a starting point for the subject as this one.

Fiebrink, Rebecca. 2016. “Machine Learning as Meta-Instrument: Human-Machine Partnerships Shaping Expressive Instrumental Creation.” In Musical Instruments in the 21st Century, 14:137–51. Singapore: Springer Singapore. doi:10.1007/978-981-10-2951-6_10.

Fiebrink’s Wek­ina­tor is ground­break­ing, fun and inspir­ing so I had to include some of her writ­ing in this list. This is most­ly of inter­est for those look­ing into the use of machine learn­ing for design and oth­er cre­ative and artis­tic endeav­ours. An impor­tant idea explored here is that tools that make use of (inter­ac­tive, super­vised) machine learn­ing can be thought of as instru­ments. Using such a tool is like play­ing or per­form­ing, explor­ing a pos­si­bil­i­ty space, engag­ing in a dia­logue with the tool. For a tool to feel like an instru­ment requires a tight action-feed­back loop.

Dove, Graham, Kim Halskov, Jodi Forlizzi, and John Zimmerman. 2017. “UX Design Innovation: Challenges for Working with Machine Learning as a Design Material.” In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. New York: ACM. doi:10.1145/3025453.3025739.

A really good survey of how designers currently deal with machine learning. Key takeaways: in most cases, the application of machine learning is still engineering-led rather than design-led, which hampers the creation of non-obvious machine learning applications. It also makes it hard for designers to consider the ethical implications of design choices. A key reason for this is that, at the moment, prototyping with machine learning is prohibitively cumbersome.

Fiebrink, Rebecca, Perry R. Cook, and Dan Trueman. 2011. “Human Model Evaluation in Interactive Supervised Learning.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 147. New York: ACM Press. doi:10.1145/1978942.1978965.

The second Fiebrink piece in this list, this one more of a deep dive into how people use Wekinator. As with the chapter listed above, this is required reading for those working on design tools that make use of interactive machine learning. An important finding: users of intelligent design tools may have very different criteria for evaluating the ‘correctness’ of a trained model than engineers do. Such criteria are likely subjective, and evaluation requires first-hand use of the model in real time.

Bostrom, Nick, and Eliezer Yudkowsky. 2014. “The Ethics of Artificial Intelligence.” In The Cambridge Handbook of Artificial Intelligence, edited by Keith Frankish and William M. Ramsey, 316–34. Cambridge: Cambridge University Press. doi:10.1017/CBO9781139046855.020.

Bostrom is known for his somewhat crazy but thought-provoking book on superintelligence. Although a large part of this chapter is about the ethics of general artificial intelligence (which, at the very least, is still some way off), the first section discusses the ethics of current “narrow” artificial intelligence. It makes for a good checklist of things designers should keep in mind when they create new applications of machine learning. Key insight: when a machine learning system takes on work with social dimensions (tasks previously performed by humans), the system inherits its social requirements.

Yang, Qian, John Zimmerman, Aaron Steinfeld, and Anthony Tomasic. 2016. “Planning Adaptive Mobile Experiences When Wireframing.” In Proceedings of the 2016 ACM Conference on Designing Interactive Systems. New York: ACM. doi:10.1145/2901790.2901858.

Finally, a feet-in-the-mud exploration of what it actually means to design for machine learning with the tools most commonly used by designers today: drawings and diagrams of various sorts. In this case the focus is on using machine learning to make an interface adaptive. It includes an interesting discussion of how to balance the use of implicit and explicit user inputs for adaptation, and how to deal with inference errors. Once again the limitations of current sketching and prototyping tools are mentioned, and related to the need for designers to develop tacit knowledge about machine learning. Such tacit knowledge will only be gained when designers can work with machine learning in a hands-on manner.

Supplemental material

Floyd, Christiane. 1984. “A Systematic Look at Prototyping.” In Approaches to Prototyping, 1–18. Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-642-69796-8_1.

I provided this to students so that they would get some additional grounding in the various kinds of prototyping that are out there. It helps to prevent reductive notions of prototyping, and it makes for a nice complement to Buxton’s work on sketching.

Blevis, E., Y. Lim, and E. Stolterman. 2006. “Regarding Software as a Material of Design.”

Some of the papers refer to machine learning as a “design material”, and this paper helps to understand what that idea means. Software is a material without qualities (it is extremely malleable; it can simulate nearly anything). Yet it helps to consider it as a physical material in the metaphorical sense, because we can then apply ways of design thinking and doing to software programming.

Generating UI design variations


I am still thinking about AI and design. How is the design process of AI products different? How is the user experience of AI products different? Can design tools be improved with AI?

When it comes to improving design tools with AI, my starting point is game design and development. What follows is a quick sketch of one idea, just to get it out of my system.

‘Mixed-initiative’ tools for procedural generation (such as Tanagra) allow designers to create high-level structures which a machine uses to produce full-fledged game content (such as levels). This happens in real time: there is a continuous back-and-forth between designer and machine.

Software user interfaces, on mobile in particular, are increasingly assembled from ready-made components according to more or less well-described rules taken from design languages such as Material Design. These design languages are currently described primarily for human consumption. But it should be a small step to make a design language machine-readable.

So I see an opportunity here: a designer might assemble a UI like they do now, while a machine does several things. For example, it can test for adherence to design language rules, suggest corrections, or even auto-correct as the designer works.
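To make the idea concrete, here is a minimal sketch of what a machine-checkable design-language rule could look like. The component format and the rule are invented for illustration (the minimum touch-target size is loosely based on a common accessibility guideline); a real design language would encode many more rules.

```python
# Hypothetical sketch: one design-language rule encoded so a machine
# can check a UI mockup against it. All names are invented.

MIN_TOUCH_TARGET = 48  # dp; a commonly cited minimum for touch targets

def check_touch_targets(mockup):
    """Flag interactive components smaller than the minimum touch target."""
    issues = []
    for comp in mockup:
        too_small = comp["width"] < MIN_TOUCH_TARGET or comp["height"] < MIN_TOUCH_TARGET
        if comp["interactive"] and too_small:
            issues.append(f"{comp['name']}: touch target below {MIN_TOUCH_TARGET}dp")
    return issues

mockup = [
    {"name": "submit-button", "interactive": True, "width": 40, "height": 40},
    {"name": "hero-image", "interactive": False, "width": 360, "height": 200},
]

print(check_touch_targets(mockup))
```

A tool built on rules like this could run them continuously as the designer works, which is all ‘auto-correct’ really needs: a rule checker plus a way to apply the suggested fix.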

More interestingly, a machine might take one UI mockup and provide the designer with several more possible variations. To do this it could use different layouts, or alternative components that serve the same or a similar purpose to the ones used.
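The variation-generating side can be sketched just as simply: given a mockup and a table of interchangeable components, enumerate every combination. The component vocabulary below is made up for illustration.

```python
from itertools import product

# Hypothetical sketch: generate UI variations by swapping components
# for alternatives that serve the same purpose. Names are invented.

ALTERNATIVES = {
    "navigation": ["tab-bar", "hamburger-menu", "bottom-sheet"],
    "item-list": ["card-grid", "plain-list"],
}

def variations(mockup):
    """Yield every mockup obtainable by swapping components for alternatives."""
    roles = [role for role, _ in mockup]
    slots = [ALTERNATIVES.get(role, [comp]) for role, comp in mockup]
    for combo in product(*slots):
        yield dict(zip(roles, combo))

base = [("navigation", "tab-bar"), ("item-list", "card-grid")]
for v in variations(base):
    print(v)
```

Even this toy version produces six alternatives from a two-component mockup; a real tool would rank or cluster them so the designer isn’t flooded.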

In high-pressure work environments where time is scarce, corners are often cut in the divergence phase of design. Machines could augment designers so that generating many design alternatives becomes less laborious, both mentally and physically. Ideally, machines would surprise and even inspire us. And the final say would still be ours.

Artificial intelligence as partner

Some notes on artificial intelligence, technology as partner, and related user interface design challenges. Mostly notes to self; I am not sure I am adding much to the debate, just summarising what I think is important to think about more. Warning: dense with links.

Matt Jones writes about how artificial intelligence does not have to be a slave, but can also be a partner.

I’m personally much more interested in machine intelligence as human augmentation rather than the oft-hyped AI assistant as a separate embodiment.

I would add a third possibility: AI as master. A common fear we humans have, and one I think is only growing as things like AlphaGo and new Boston Dynamics robots keep happening.

I have had a tweet pinned to my timeline for a while now, which is a quote from Play Matters:

“technology is not a servant or a master but a source of expression, a way of being”

So this idea actually does not just apply to AI but to tech in general. Of course, as tech gets smarter and more independent from humans, the idea of a ‘third way’ only grows in importance.

More tweeting. A while back, shortly after AlphaGo’s victory, James tweeted:

On the one hand, we must insist, as Kasparov did, on Advanced Go, and then Advanced Everything Else https://en.wikipedia.org/wiki/Advanced_Chess

Advanced Chess is a clear example of humans and AI partnering. And it is also an example of technology as a source of expression and a way of being.

Also, in a WIRED article on AlphaGo, someone who had played the AI repeatedly says his game has improved tremendously.

So that is the promise: artificially intelligent systems which work together with humans for mutual benefit.

Now of course these AIs don’t just arrive into the world fully formed. They are created by humans with particular goals in mind. So there is a design component there. We can design them to be partners, but we can also design them to be masters or slaves.

As an aside: maybe AIs that make use of deep learning are particularly well suited to this partner model? I do not know enough about it to say for sure, but I was struck by this piece on why Google ditched Boston Dynamics. There apparently is a significant difference between holistic and reductionist approaches, deep learning being holistic. I imagine reductionist AI might be more dependent on humans. But this is just wild speculation; I don’t know if there is anything there.

This insistence of James on “advanced everything else” is a world view. A politics. To allow ourselves to be increasingly entangled with these systems, to not be afraid of them. Because if we are afraid, we either want to subjugate them or they will subjugate us. It is also about not obscuring the systems we are part of. This is a sentiment also expressed by James in the same series of tweets I quoted from earlier:

These emergences are also the best model we have ever built for describing the true state of the world as it always already exists.

And there is overlap here with ideas expressed by Kevin in ‘Design as Participation’:

[W]e are no longer just using computers. We are using computers to use the world. The obscured and complex code and engineering now engages with people, resources, civics, communities and ecosystems. Should designers continue to privilege users above all others in the system? What would it mean to design for participants instead? For all the participants?

AI partners might help us to better see the systems the world is made up of, and to engage with them more deeply. This hope is expressed by Matt Webb, too:

with the re-emergence of artificial intelligence (only this time with a buddy-style user interface that actually works), this question of “doing something for me” vs “allowing me to do even more” is going to get even more pronounced. Both are effective, but the first sucks… or at least, it sucks according to my own personal politics, because I regard individual alienation from society and complex systems as one of the huge threats in the 21st century.

I am reminded of the mixed-initiative systems being researched in the area of procedural content generation for games. I wrote about these a while back on the Hubbub blog. Such systems are partners of designers. They give something like super powers. Now imagine such powers applied to other problems. Quite exciting.

Actually, in the aforementioned article I distinguish between tools for making things and tools for inspecting possibility spaces. In the first case, designers manipulate more abstract representations of the intended outcome and the system generates the actual output. In the second case, the system visualises the range of possible outcomes given a particular configuration of the abstract representation. The two are best paired.

From a design perspective, a lot remains to be figured out. If I look at those mixed-initiative tools, I am struck by how poorly they communicate what the AI is doing and what its capabilities are. There is a huge user interface design challenge there.

For stuff focused on getting information, a conversational UI seems to be the current local optimum for working with an AI. But for tools for creativity, to use the two-way split proposed by Victor, different UIs will be required.

What shape will they take? What visual language do we need to express the particular properties of artificial intelligence? What approaches can we take in addition to personifying AI as bots or characters? I don’t know, and I can hardly think of any good examples that point towards promising approaches. Lots to be done.

Prototyping in the browser

When you are designing a web site or web app, I think you should prototype in the browser. Why? You might as well ask why prototype at all. Answer: to enable continuous testing and refinement of your design. Since you are designing for the web, it makes sense to do this testing and refinement with an artefact composed of the web’s material.

There are many ways to do prototyping. A common way is to make wireframes and then make them ‘clickable’. But when I am designing a web site or a web app and I get to the point where it is time to do wireframes, I often prefer to go straight to the browser.

Before this step I have sketched out all the screens on paper, of course. I have done multiple sketches of each page. I’ve had them critiqued by team members, and I have reworked them.

Drawing pictures of web pages

But then I open my drawing program (Sketch, in my case) and my heart sinks. Not because Sketch sucks. Sketch is great. But it somehow feels wrong to draw pictures of web pages on my screen. I find it cumbersome. My drawing program does not behave like a browser. That is to say, instead of defining a bunch of rules for elements and having the browser figure out how to render them on a page together, I need to follow those rules myself, in my head, as I put each element in its place.

And don’t get me started on how wireframes are supposed to be without visual design. That is nonsense. If you are using contrast, repetition, alignment and proximity, you are doing layout. That is visual design. I can’t stand wireframes with a bad visual hierarchy.

If I persevere, and I end up with a set of wireframes in my drawing program, they are static. I can’t use them. I then need to export them to some other, often clunky, program to make the pictures clickable. This always results in a poor resemblance of the actual experience. (I use Marvel. It’s okay, but it is hardly a joy to use. For mobile apps I still use it; for web sites I prefer not to.)

Prototyping in the browser

When I prototype in the browser, I don’t have to deal with these issues. I am doing layout in a way that is native to the medium. And once I have some pages set up, they are immediately usable. So I can hand the prototype to someone, a team member or a test participant, and let them play with it.

That is why, for web sites and web apps, I skip wireframes altogether and prototype in the browser. I do not know how common this is in the industry nowadays, so I thought I would share my approach here. It may be of use to some.

It used to be the case that it was quite a bit of hassle to get up and running with a browser prototype, so naturally opening a drawing package seemed more attractive. Not so anymore. Tools have come a long way. Case in point: my setup nowadays involves zero screwing around on the command line.

CodeKit

The core of it is a paid-for Mac app called CodeKit, a so-called task manager. It allows you to install a front-end development framework I like, Zurb Foundation, with a couple of clicks, and it has a built-in web server so you can play with your prototype on any device on your local network. As you make changes to the code of your prototype, it gets automatically updated on all your devices. No more manual refreshing. This saves a huge amount of time.

I know you can do most of what CodeKit does with stuff like Grunt, but that involves tedious configuration and working the command line. That is fine when you’re a developer, but not when you are a designer. I want to be up and running as fast as possible. CodeKit allows me to do that, and it has some other features built in that are ideal for prototyping, which I will talk about more below. Long story short: CodeKit has saved me a huge amount of time and is well worth the money.

Okay, so on with the show. Yes, this whole prototyping-in-the-browser thing involves ‘coding’. But honestly, if you can’t write some HTML and CSS, you really shouldn’t be doing design for the web in the first place. I don’t care if you consider yourself a UX designer and somehow above all this lowly technical stuff. You are not. Nobody is saying you should become a frontend developer, but you need to have an acquaintance with the materials your product is made of. Follow a few courses on Codecademy or something. There really isn’t an excuse anymore these days for not knowing this stuff. If you want to level up, learn Sass.

Zurb Foundation

I like Zurb Foundation because it offers a coherent and comprehensive library of elements which covers almost all the common patterns found in web sites and apps. It offers a grid and some default typography styles as well. None of it looks flashy, which is how I like it when I am prototyping. A prototype at this stage does not require personality yet, just a clear visual hierarchy. Working with Foundation is almost like playing with LEGO. You just click together the stuff you need. It’s painless and looks and works great.

I hardly do any styling, but the few changes I do want to make I can easily add to Foundation’s app.scss using Sass. I usually have a few styles in there for tweaking some margins on particular elements, for example a footer. But I try to focus on the structure and behaviour of my pages, and for that I am mostly writing HTML.

GitHub

Testing locally I already mentioned; for that, CodeKit has you covered. But of course you also want to be able to share your prototype with others. For this I like to use GitHub and their Pages feature. Once again, using their desktop client, this involves zero command line work. You just add the folder with your CodeKit project as a new repository and sync it with GitHub. Then you add a branch named ‘gh-pages’ and do ‘update from master’. Presto, your prototype is now on the web for anyone with the URL to see and use. Perfect if you’re working in a distributed team.

Don’t be intimidated by using GitHub. Their on-boarding is pretty impressive nowadays; you’ll be up and running in no time. Using version control, even if it is just you working on the prototype, adds some much-needed structure and control over changes. And when you are collaborating on your prototype with team members, it is indispensable.

In most cases, though, I am the only one building the prototype, so I just work on the master branch, and once in a while I update the gh-pages branch from master, sync it, and I am done. If you use Slack, you can add a GitHub bot to a channel and have your team members receive an automatic update every time you change the prototype.

The Kit Language

If your project is of any size beyond the very small, you will likely have repeating elements in your design: headers, footers, recurring widgets and so on. CodeKit has recently added support for something called the Kit Language, which adds imports and variables to regular HTML. It is absolutely great for prototyping. For each repeating element you create a ‘partial’ and import it wherever you need it. Variables are great for changing the contents of such repeating elements. CodeKit compiles it all into plain static HTML for you, so your prototype runs anywhere.
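For a flavour of what this looks like, here is a rough sketch of a Kit file. The file names and variable are invented, and you should check the CodeKit documentation for the exact syntax, but the general shape is special HTML comments for imports and variables:

```html
<!-- page.kit — a sketch of a Kit file; names are invented -->
<!-- $pageTitle = Prototype home -->
<!-- @import "partials/header.kit" -->
<main>
  <h1><!-- $pageTitle --></h1>
  <p>Page content goes here.</p>
</main>
<!-- @import "partials/footer.kit" -->
```

Because the special syntax lives inside comments, the file stays valid HTML, and CodeKit flattens it into plain static pages on compile.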

The Kit Language really was the missing piece of the puzzle for me. With it in place, I am very comfortable recommending this way of working to anyone.

So that’s my setup: CodeKit, Zurb Foundation and GitHub. Together they make for a very pleasant and productive way to do prototyping in the browser. I don’t imagine myself going back to drawing pictures of web pages anytime soon.

Writing for conversational user interfaces

Last year at Hubbub we worked on two projects featuring a conversational user interface. I thought I would share a few notes on how we did the writing for them, because for conversational user interfaces a large part of the design is in the writing.

At the moment, there aren’t really that many tools well suited for doing this. Twine comes to mind, but it is really more focused on publishing than on authoring. So while we were working on these projects, we just grabbed whatever we were familiar with and felt would get the job done.

I actually think there is an opportunity here. If this conversational UI thing takes off, designers would benefit a lot from better tools to sketch and prototype them. After all, this is the only way to figure out if a conversational user interface is suitable for a particular project. In the words of Bill Buxton:

“Everything is best for something and worst for something else.”

Okay, so below are my notes. The two projects are KOKORO (a codename) and Free Birds. We have yet to publish extensively on both, so a quick description is in order.

KOKORO is a digital coach for teenagers to help them manage and improve their mental health. It is currently a prototype mobile web app, not publicly available. (The engine we built to drive it is available on GitHub, though.)

Free Birds (Vrije Vogels in Dutch) is a game about civil liberties for families visiting a war and resistance museum in the Netherlands. It is a location-based iOS app, currently available in the Dutch App Store and playable at Airborne Museum Hartenstein in Oosterbeek.


For KOKORO we used Gingko to write the conversation branches. This is good enough for a prototype, but it becomes unwieldy at scale. And anyway, you don’t want to be limited to a tree structure: you want to at least be able to loop back to a parent branch, something that isn’t supported by Gingko. And maybe you don’t want to use the branching pattern at all.

Free Birds’s story has a very linear structure. So in this case we just wrote our conversations in Quip, with some basic rules for formatting, not unlike a screenplay.

In Free Birds, player choices ‘colour’ the events that come immediately after, but the path stays the same.

This approach was inspired by the Walking Dead games. Those are super clever at giving players a sense of agency without the need for sprawling story trees. I remember seeing the creators present this strategy at PRACTICE, and something clicked for me. The important point: choices don’t have to branch out in different directions to feel meaningful.

KOKORO’s choices did have to lead to different paths, so we had to build a tree structure. But we also kept track of things a user says. This allows the app to “learn” about the user. Subsequent segments of the conversation are adapted based on this learning, which allows for more flexibility and scales better. A section of a conversation has various states between which we switch depending on what a user has said in the past.
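The pattern described above can be sketched in a few lines. This is not the actual KOKORO engine, just a minimal illustration of the idea: remember what the user has said, then pick the variant (‘state’) of a conversation section that matches that memory. All keys and copy are invented.

```python
# Hypothetical sketch: adapt a conversation section to what the user
# has said before. Keys and lines are invented for illustration.

memory = {}

def record(key, value):
    """Remember something the user said."""
    memory[key] = value

def next_section(variants):
    """Return the first variant whose condition matches the memory."""
    for condition, line in variants:
        if all(memory.get(k) == v for k, v in condition.items()):
            return line
    return variants[-1][1]  # fall through to the default

checkin_variants = [
    ({"mood": "low"}, "Yesterday sounded rough. Want to revisit it?"),
    ({}, "How are you feeling today?"),  # empty condition = default
]

record("mood", "low")
print(next_section(checkin_variants))
```

The nice property of this pattern is that sections stay self-contained: you add a variant with a condition, rather than growing a new branch of a tree.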

We did something similar in Free Birds, but to a far more limited degree: really just to once again colour certain pieces of dialogue. This is already enough to give a player a sense of agency.


As you can see, it’s all far from rocket surgery, but you can get surprisingly good results just by sticking to these simple patterns. If I were to investigate more advanced strategies, I would look into NLP for input and procedural generation for output. Who knows, maybe I will get to work on a project involving those things some time in the future.

Hardware interfaces for tuning the feel of microinteractions

In Digital Ground, Malcolm McCullough talks about how tuning is a central part of interaction design practice: how part of the challenge of any project is to get to a point where you can start tweaking the variables that determine the behaviour of your interface for the best feel.

“Feel” is a word I borrow from game design. There is a book on it by Steve Swink. It is a funny term: we are trying to simulate sensations that are derived from the physical realm. We are trying to make things that are purely visual behave in such a way that they evoke these sensations. There are many games that heavily depend on getting feel right. Basically all games that are built on a physics simulation of some kind require good feel for a good player experience to emerge.

Physics simulations have been finding their way into non-game software products for some time now, and they are becoming an increasingly large part of what makes a product, er, feel great. They are often at the foundation of signature moments that set a product apart from the pack. These signature moments are also known as microinteractions. To get them just right, being able to tune well is very important.

The behaviour of microinteractions based on physics simulations is determined by variables. For example, the feel of a spring is determined by the mass of the weight attached to the spring, the spring’s stiffness, and the friction that resists the motion of the weight. These variables interact in ways that are hard to model in your head, so you need to make repeated changes to each variable and try the simulation to get it just right. This is time-consuming and cumbersome, and it resists the easy exploration of alternatives essential to a good design process.
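To see why these variables are hard to model in your head, here is a minimal damped-spring simulation, the kind of thing that sits underneath many springy microinteractions. The integration scheme and parameter values are just one reasonable choice for illustration; tuning ‘feel’ means re-running this loop with different values and judging the motion.

```python
# Minimal damped spring: the three variables from the text are mass,
# stiffness, and friction (damping). Semi-implicit Euler integration.

def simulate_spring(mass, stiffness, friction, x0=1.0, dt=1/60, steps=120):
    """Simulate a damped spring released from x0; return the positions."""
    x, v = x0, 0.0
    positions = []
    for _ in range(steps):
        force = -stiffness * x - friction * v  # Hooke's law + damping
        v += (force / mass) * dt
        x += v * dt
        positions.append(x)
    return positions

# A stiffer, more damped spring settles much faster than a loose one:
loose = simulate_spring(mass=1.0, stiffness=50.0, friction=2.0)
tight = simulate_spring(mass=1.0, stiffness=200.0, friction=20.0)
print(abs(loose[-1]), abs(tight[-1]))
```

Even in this toy, the interplay is non-obvious: raising stiffness alone makes the spring oscillate faster but not settle sooner, which is exactly why trial-and-error tuning is unavoidable.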

In The Setup, game designer Bennett Foddy talks about a way to improve on this workflow. Many of his games (if not all of them) are playable physics simulations with punishingly hard controls. He suggests using a hardware interface (a MIDI controller) to tune the variables that determine the feel of his game while it runs. In this way, the loop between changing a variable and seeing its effect in game is dramatically shortened, and many different combinations of values can be explored easily. Once a satisfactory set of values for the variables has been found, they can be written back to the software for future use.
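The glue such a setup needs is small: MIDI knobs emit control-change values from 0 to 127, which have to be scaled into each tunable parameter’s range. Here is a sketch of that mapping layer; the CC numbers, parameter names, and ranges are invented, and a real setup would read the incoming messages with a MIDI library rather than hard-coded values.

```python
# Hypothetical mapping from MIDI control-change (CC) values to tunable
# parameters. CC numbers, names, and ranges are invented.

PARAMS = {
    # cc number: (parameter name, min, max)
    20: ("stiffness", 10.0, 400.0),
    21: ("friction", 0.0, 40.0),
    22: ("mass", 0.1, 5.0),
}

def apply_cc(settings, cc, value):
    """Scale a 0-127 CC value into the parameter's range and store it."""
    name, lo, hi = PARAMS[cc]
    settings[name] = lo + (hi - lo) * (value / 127)
    return settings

settings = {}
apply_cc(settings, 20, 64)  # knob at its midpoint
apply_cc(settings, 21, 0)   # knob turned fully down
print(settings)
```

Once the knobs feel right, the resulting settings dictionary is exactly what gets ‘written back to the software for future use’.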

I do believe such a setup is still non-trivial to make work with today’s tools. A quick check verifies that Framer does not have OSC support, for example. There is an opportunity here for prototyping environments such as Framer and others to support it. The approach is not limited to motion-based microinteractions, but can be extended to the tuning of variables that control other aspects of an app’s behaviour.

For example, when we were making Standing, we would have benefited hugely from hardware controls to tweak the sensitivity of its motion-sensing functions as we were using the app. We were forced to do it by repeatedly changing numbers in the code and building the app again and again. It was quite a pain to get right. To this day I have the feeling we could have made it better if only we had had the tools to do it.

Judging from snafus such as the poor feel of the latest Twitter desktop client, there is a real need for better tools for tuning microinteractions. Just as pen tablets have become indispensable for those designing the form of user interfaces on screens, I think we might soon find a small set of hardware knobs on the desks of those designers working on the behaviour of user interfaces.

Sources for my Creative Mornings Utrecht talk on education, games, and play

I was standing on the shoulders of giants for this one. Here’s a (probably incomplete) list of sources I referenced throughout the talk.

All of these are highly recommended.

Update: the slides are now up on Speaker Deck.

This happened – Utrecht #8, coming up

I have to say, number seven is still fresh in my mind. Even so, we’ve announced number eight. You’ll find the lineup below. I hope to see you in four weeks, on November 22 at the HKU Akademietheater.

Theseus

Rainer Kohlberger is an independent visual artist based in Berlin. The concept and installation design for the THESEUS Innovation Center Internet of Things was done in collaboration with Thomas Schrott and is the basis for the visual identity of the technology platform. The installation connects and visually creates hierarchy between knowledge, products and services with a combination of physical polygon objects and virtually projected information layers. This atmospheric piece transfers knowledge and guidance to the visitor but also leaves room for interpretation.

De Klessebessers

Helma van Rijn is an Industrial Design Engineering PhD candidate at the TU Delft ID-StudioLab, specialized in ‘difficult to reach’ user groups. De Klessebessers is an activity for people with dementia to actively recall memories together. The design won first prize in the design competition Vergeethenniet and was on show during the Dutch Design Week 2007. De Klessebessers is currently in use at De Landrijt in Eindhoven.

Wip 'n' Kip

FourceLabs talk about Wip ‘n’ Kip, a playful installation for Stekker Fest, an annual electronic music festival based in Utrecht. Players of Wip ‘n’ Kip use adult-sized spring riders to control a chicken on a large screen. They race each other to the finish while at the same time trying to stay ahead of a horde of pursuing monsters. Wip ‘n’ Kip is a strange but effective mashup of video game, carnival ride and performance. It is part of the PLAY Pilots project, commissioned by the city and province of Utrecht, which explores the applications of play in the cultural industry.

Smarthistory

Lotte Meijer talks about Smarthistory, an online art history resource. It aims to complement, or even replace, traditional textbooks by using different media to discuss hundreds of Western art pieces from antiquity to the present day. A number of navigation systems support different browsing styles. Artworks are contextualized using maps and timelines. The site engages its community through a number of social media. Smarthistory won a Webby Award in 2009 in the education category. Lotte has gone on to work as an independent designer on many interesting and innovative projects in the art world.

Ronald Rietveld is the fourth speaker at This happened – Utrecht #7

Vacant NL

I’m happy to say we have our fourth speaker confirmed for next Monday’s This happened. Here’s the blurb:

Landscape architect Ronald Rietveld talks about Vacant NL. The installation challenges the Dutch government to use the enormous potential of inspiring, unused buildings from the 17th, 18th, 19th, 20th and 21st centuries for creative entrepreneurship and innovation. The Dutch government wants to be among the world’s top five knowledge economies by the end of 2020. Vacant NL takes this political ambition seriously and leverages vacancy to stimulate innovation within the creative knowledge economy. Vacant NL is the Dutch submission for the Venice Architecture Biennale 2010. It is made by Rietveld Landscape, which Ronald Rietveld founded after winning the Prix de Rome in Architecture 2006. In 2003 he graduated with honors from the Amsterdam Academy of Architecture.

At first sight this might seem the odd one out: an architectural exhibition at an interaction design event. But both the subject of the installation and the design of the experience deal with interaction in many ways, so I am sure it will provide attendees with valuable insights.

Playful street tiles, artful games and radioscapes at the next This happened – Utrecht

After a bit of a long summer break Alexander, Ianus and I are back with another edition of This happened – Utrecht. Read about the program of the seventh edition below. We’ll add a fourth speaker to the roster soon. The event is scheduled for Monday 4 October at Theater Kikker in Utrecht. Doors open at 7:30PM. Registration opens next week on Monday 20 September at 12:00PM.

The Patchingzone

Anne Nigten is director of The Patchingzone, a transdisciplinary laboratory for innovation where master’s and PhD students, post-docs and professionals from different backgrounds create meaningful content. Earlier, Anne Nigten was manager of V2_lab and completed a PhD on a method for creative research and development. Go-for-IT! is a city game created together with citizens of South Rotterdam and launched in December 2009. On four playgrounds in the area, street tiles were equipped with LEDs. Locals could play games with their feet, similar to console game dance mats.

Ibb and Obb

Richard Boeser is an independent designer based in Rotterdam. His studio Sparpweed is currently working on the game Ibb and Obb, scheduled to launch for Playstation Network and PC in August 2011. Ibb and Obb is a cooperative game for two players who together must find a way through a world where gravity is flipped across the horizon. Players move between both sides of the world through portals. They can surf on gravity, soulhop enemies and collect diamonds. The game is partly financed by the Game Fund, an arrangement that seeks to stimulate the development of artistic games in the Netherlands.

Radioscape

Edwin van der Heide studied sonology at the Royal Conservatory in The Hague. He now works as an artist in the field of sound, space and interaction. Radioscape transforms urban space into an acoustic labyrinth. Based on the fundamental principles of radio, each participant is equipped with a receiver, headphones and an antenna. Fifteen transmitters each broadcast their own composition. Inspired by short wave sounds, the compositions overlap to form a metacomposition. As participants change position, their interpretation of the sound changes as well.

A big thank you to our sponsors, Microsoft and Fier, for making this one happen.