Hardware interfaces for tuning the feel of microinteractions

In Digital Ground, Malcolm McCullough talks about how tuning is a central part of interaction design practice: part of the challenge of any project is getting to the point where you can start tweaking the variables that determine the behaviour of your interface for the best feel.

“Feel” is a word I borrow from game design. There is a book on it by Steve Swink. It is a funny term. We are trying to simulate sensations that are derived from the physical realm. We are trying to make things that are purely visual behave in such a way that they evoke these sensations. Many games depend heavily on getting feel right. Basically all games built on a physics simulation of some kind require good feel for a good player experience to emerge.

Physics simulations have been finding their way into non-game software products for some time now, and they are becoming an ever bigger part of what makes a product, er, feel great. They are often at the foundation of signature moments that set a product apart from the pack. These signature moments are also known as microinteractions. To get them just right, being able to tune well is very important.

The behaviour of microinteractions based on physics simulations is determined by variables. For example, the feel of a spring is determined by the mass of the weight attached to it, the spring’s stiffness, and the friction that resists the motion of the weight. These variables interact in ways that are hard to model in your head, so you need to make repeated changes to each variable and run the simulation to get it just right. This is time-consuming, cumbersome, and resists the easy exploration of alternatives essential to a good design process.
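
To make the interplay concrete, here is a minimal sketch of such a damped spring in Python. It is not taken from any particular product; the integration scheme (semi-implicit Euler) and the default values are my own illustrative assumptions.

```python
# A minimal damped spring: a weight released at x0 is pulled back toward
# rest position 0. Mass, stiffness and damping together produce the "feel".
def simulate_spring(mass=1.0, stiffness=120.0, damping=8.0,
                    x0=1.0, dt=1 / 60, steps=120):
    """Return the positions of the weight over time."""
    x, v = x0, 0.0
    positions = []
    for _ in range(steps):
        spring_force = -stiffness * x   # Hooke's law pulls toward rest
        friction = -damping * v         # friction resists the motion
        v += (spring_force + friction) / mass * dt  # semi-implicit Euler:
        x += v * dt                                 # velocity first, then position
        positions.append(x)
    return positions

# Lower damping -> more overshoot and bounce; higher stiffness -> snappier.
print(simulate_spring(damping=2.0)[:5])
```

None of the three values is interesting on its own; only trying combinations reveals whether the motion feels sluggish, bouncy or just right, which is exactly why the edit-and-rerun loop hurts.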

In The Setup, game designer Bennett Foddy talks about a way to improve on this workflow. Many of his games (if not all of them) are playable physics simulations with punishingly hard controls. He suggests using a hardware interface (a MIDI controller) to tune the variables that determine the feel of his game while it runs. In this way the loop between changing a variable and seeing its effect in-game is dramatically shortened, and many different combinations of values can be explored easily. Once a satisfactory set of values has been found, they can be written back to the software for future use.
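
As a sketch of what that workflow could look like in code (my reconstruction, not Foddy’s actual setup): a loop maps knobs on a MIDI controller onto the simulation variables while the program runs, here using the Python mido library. The CC numbers and value ranges are made-up assumptions.

```python
import mido  # pip install mido python-rtmidi

# Live parameters that the running simulation reads from.
params = {"mass": 1.0, "stiffness": 120.0, "damping": 8.0}

# Hypothetical knob assignments: MIDI CC number -> (variable, min, max).
knobs = {
    1: ("mass", 0.1, 10.0),
    2: ("stiffness", 10.0, 500.0),
    3: ("damping", 0.0, 40.0),
}

with mido.open_input() as port:  # default MIDI input port
    for msg in port:
        if msg.type == "control_change" and msg.control in knobs:
            name, lo, hi = knobs[msg.control]
            # Scale the 0-127 knob position into the variable's range.
            params[name] = lo + (hi - lo) * msg.value / 127
            print(params)
            # Once the feel is right, write the values back for future use,
            # e.g. json.dump(params, open("tuned.json", "w"))
```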

I do believe such a setup is still non-trivial to make work with today’s tools. A quick check verifies that Framer does not have OSC support, for example. There is an opportunity here for prototyping environments such as Framer and others to support it. The approach is not limited to motion-based microinteractions but can be extended to the tuning of variables that control other aspects of an app’s behaviour.
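
For completeness, the receiving end of such OSC support might look something like the following, sketched with the python-osc package. The /tune address, port and message layout are all assumptions on my part.

```python
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer  # pip install python-osc

params = {"stiffness": 120.0}

def on_tune(address, name, value):
    # A hardware controller (or another tool) sends e.g.: /tune "stiffness" 250.0
    params[name] = float(value)
    print(params)

dispatcher = Dispatcher()
dispatcher.map("/tune", on_tune)
BlockingOSCUDPServer(("127.0.0.1", 9000), dispatcher).serve_forever()
```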

For example, when we were making Standing, we would have benefited hugely from hardware controls to tweak the sensitivity of its motion-sensing functions as we were using the app. Instead, we were forced to do it by repeatedly changing numbers in the code and rebuilding the app again and again. It was quite a pain to get right. To this day I have the feeling we could have made it better if only we had had the tools to do it.

Judging from snafus such as the poor feel of the latest Twitter desktop client, there is a real need for better tools for tuning microinteractions. Just as pen tablets have become indispensable for those designing the form of user interfaces on screens, I think we might soon find a small set of hardware knobs on the desks of those designers working on the behaviour of user interfaces.

Storyboarding multi-touch interactions

I think it was around half a year ago that I wrote “UX designers should get into everyware”. Back then I did not expect to be part of a ubicomp project anytime soon. But here I am now, writing about work I did in the area of multi-touch interfaces.

Background

The people at InUse (Sweden’s premier interaction design consultancy firm) asked me to assist them with visualising potential uses of multi-touch technology in the context of a gated community. That’s right—an actual real-world physical real-estate development project. How cool is that?

InUse storyboard 1

This residential community is aimed at well-to-do seniors. As with most gated communities, it offers them convenience, security and prestige. You might shudder at the thought of living in one of these places (I know I have my reservations) but there’s not much use in judging people for wanting to do so. Planned amenities include sports facilities, fine dining, onsite medical care, a cinema and on and on…

Social capital

One of the known issues with these ‘communities’ is that there’s not much evidence of social capital being higher there than in any regular neighbourhood. In fact, some have argued that the global trend of gated communities is detrimental to the build-up of social capital in their surroundings: they throw up physical barriers that prevent the free interaction of people. These are some of the things I tried to address: to see if we could support the emergence of community inside the residency using social tools, while at the same time counteracting physical barriers to the outside world with “virtual inroads” that allow for free interaction between residents and people in the periphery.

Being in the world

Another concern I tried to address is the different ways multi-touch interfaces can play a role in people’s lives. Recently Matt Jones addressed this in a post on the iPhone and Nokia’s upcoming multi-touch phones. In a community like the one I was designing for, the worst thing I could do would be to make every instance of multi-touch technology an attention-grabbing presence demanding full immersion from its user. In many cases ‘my’ users would be better served with them behaving in an unobtrusive way, allowing almost unconscious use. In other words: I tried to balance being in the world with being in the screen—applying each paradigm based on how appropriate it was given the user’s context. (After all, sometimes people want or even need to be immersed.)

Process

InUse had already prepared several personas representative of the future residents of the community. We went through those together and examined each for scenarios that would make good candidates for storyboarding. We wanted to come up with a range of scenarios that not only showed how these personas could be supported with multi-touch interfaces, but also illustrated the different spaces the interactions could take place in (private, semiprivate and public) and the scales at which the technology can operate (from small key-like tokens to full wall-screens).

InUse storyboard 2

I drafted each scenario as a textual outline and sketched the potential storyboards at thumbnail size. We went over those in a second workshop and refined them—making adjustments to better cover the concerns outlined above as well as improving clarity. We wanted to end up with a set of storyboards that could be used in a presentation for the client (the real-estate development firm), so we needed to balance user goals with business objectives. To that end we thought about and included examples of API-like integration of the platform with service providers in the periphery of the community. We also tried to create self-service experiences that would feel like being waited on by a personal butler.

Outcome

I ended up drawing three scenarios of around nine panels each, digitising and cleaning them up on my Mac. Each scenario introduces a persona, the physical context of the interaction and the motivation that drives the persona to engage with the technology. The interactions visualised are a mix of gestures and engagements with multi-touch screens of different sizes. Usually the persona is supported in some way by a social dimension—fostering serendipity and the emergence of real relationships.

InUse storyboard 3

All in all I have to say I am pretty pleased with the result of this short but sweet engagement. Collaboration with the people of InUse was smooth (as was expected, since we are very much the same kind of animal) and there will be follow-up workshops with the client. It remains to be seen how much of this multi-touch stuff will find its way into the final gated community. That, as always, will depend on what makes business sense.

In any case it was a great opportunity for me to immerse myself fully in the interrelated topics of multi-touch, gesture, urbanism and sociality. And finally, it gave me the perfect excuse to sit down and do lots and lots of drawings.

Interface design — fifth and final IA Summit 2007 theme

(Here’s the fifth and final post on the 2007 IA Summit. You can find the first one, which introduces the series and describes the first theme ‘tangible’, here; the second one on ‘social’ here; the third one on ‘web of data’ here; and the fourth one on ‘strategy’ here.)

It might have been the past RIA hype (which according to Jared Spool has nothing to do with web 2.0), but for whatever reason, IAs are moving into interface territory. They are broadening their scope to look at how their architectures are presented to users and made usable. The interesting part for me is seeing how a discipline that has come from taxonomies, thesauri and other abstract information structures approaches the design of user-facing shells for those structures. Are their designs dramatically different from those created by interface designers coming from a more visual domain concerned with surface? I would say: at least a little…

I particularly enjoyed Stephen Anderson’s presentation on adaptive interfaces. He gave many examples of interfaces that would change according to user behaviour, becoming more elaborate and explanatory or very minimal and succinct. His main point was to start with a generic interface that would be usable by the majority of users, and then come up with ways to adapt it to different specific behaviours. The way in which those adaptations were determined and documented as rules reminded me a lot of game design.
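
To give an idea of the game-design flavour of those rules, here is a hypothetical sketch of such an adaptation table; the conditions and adaptations are invented by me, not taken from Anderson’s talk.

```python
# Start from a generic interface, then apply adaptations whose conditions
# match the observed behaviour of a user. All rules here are illustrative.
RULES = [
    (lambda u: u["sessions"] < 3,
     "show inline help and verbose labels"),
    (lambda u: u["sessions"] >= 20 and u["errors"] == 0,
     "collapse help, expose keyboard shortcuts"),
    (lambda u: u["errors"] > 5,
     "re-expand the explanatory text"),
]

def adapt(usage):
    """Return the adaptations matching this user's behaviour."""
    return [action for condition, action in RULES if condition(usage)]

print(adapt({"sessions": 25, "errors": 0}))
# -> ['collapse help, expose keyboard shortcuts']
```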

Margaret Hanley gave a solid talk on the “unsexy side of IA”, namely the design of administration interfaces. This typically involves coming up with a lot of screens with many form fields and controls. The interfaces she created allowed people to edit data that would normally not be accessible through a CMS but needed editing nonetheless (product details for a web shop, for instance). Users are accustomed to thinking in terms of editing pages, not editing data, so the trickiest bit is finding ways to communicate how changes made to the data will propagate through a site and show up in different places. There were some interesting ideas from the audience on this, but no definite solution was found.

Harmonious interfaces, martial arts and flow states

Screenshot of the game flOw

There have been a few posts from the UX community in the recent past on flow states (most notably at 37signals’ Signal vs. Noise). This got me thinking about my own experiences of flow and what they tell me about how flow states could be induced with interfaces.

A common example of flow states is when playing a game (the player forgets she is pushing buttons on a game pad and is only mindful of the action at hand). I’ve experienced flow while painting but also when doing work on a PC (even when creating wireframes in Visio!). However, the most interesting flow experiences were while practising martial arts.

The interesting bit is that the flow happens when performing techniques in partner exercises or even fighting matches. These are all situations where the ‘system’ consists of two people, not one person and a medium mediated by an interface (if you’re willing to call a paint brush an interface, that is).

To reach a state of flow in martial arts you need to stop thinking about performing the technique while performing it, and instead be mindful of the effect on your partner, visualising your own movements accordingly. When flow happens, I’m actually able to ‘see’ a technique as one single image before starting it, and while performing it I’m only aware of the whole system, not just myself.

Now here’s the beef. When you try to translate this to interface design, it’s clear that there’s no easy way to induce flow. The obvious approach, to create a ‘disappearing’ interface that is unobtrusive, minimal, etc., is not enough (it could even be harmful). Instead I’d like to suggest you need to make your game, software or site behave more like a martial arts fighter: it needs to push or give way according to the actions of its partner. You really need to approach the whole thing as an interconnected system where forces flow back and forth. Flow will happen in the user when he or she can work in a harmonious way. Usually this requires a huge amount of mental-model adaptation on the user’s part… When will we create appliances that can infer the intentions of the user and change their stance accordingly? I’m not talking about AI here; what I would like to see is stuff more along the lines of flOw.