A day of playing around with multi-touch and RoomWare

Last Saturday I attended a RoomWare workshop. The people of CanTouch were there too, and brought one of their prototype multi-touch tables. The aim for the day was to come up with applications of RoomWare (open source software that can sense presence of people in spaces) and multi-touch. I attended primarily because it was a good opportunity to spend a day messing around with a table.

Attendance was multifaceted, so while programmers were putting together a proof-of-concept, designers (such as Alexander Zeh, James Burke and I) came up with concepts for new interactions. The proof-of-concept was up and running at the end of the day: the table could sense who was in the room and display his or her Flickr photos, which you could then move around, scale, rotate, etc. in the typical multi-touch fashion.

The concepts designers came up with mainly focused on pulling in Last.fm data (again using RoomWare’s sensing capabilities) and displaying it for group-based exploration. Here’s a storyboard I quickly whipped up of one such application:

RoomWare + CanTouch + Last.fm

The storyboard shows how you can add yourself from a list of people present in the room. Your top artists flock around you. When more people are added, lines are drawn between you. The thickness of the line represents how similar your tastes are, according to Last.fm’s taste-o-meter. Also, shared top artists flock in such a way as to be closest to all related people. Finally, artists can be acted on to listen to music.
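To make those layout rules concrete, here is a minimal sketch (in Python, purely illustrative) of the logic behind the storyboard: pairwise similarity scores, such as the ones Last.fm’s taste-o-meter returns, map onto the thickness of the connecting lines, and shared top artists settle near the centroid of everyone who listens to them. All names, positions and scores below are made up.

```python
# A minimal, illustrative sketch of the storyboard's layout logic.
# Assumption: pairwise similarity scores (0.0-1.0, e.g. as returned by
# Last.fm's taste-o-meter) and each person's top artists are already known.
from itertools import combinations

# People present at the table, with their (x, y) position on the surface
# in normalised coordinates.
people = {
    "alice": (0.2, 0.3),
    "bob":   (0.8, 0.4),
    "carol": (0.5, 0.9),
}

# Hypothetical taste-o-meter scores per pair: 0.0 = nothing shared, 1.0 = identical taste.
similarity = {
    ("alice", "bob"):   0.72,
    ("alice", "carol"): 0.35,
    ("bob", "carol"):   0.58,
}

# Top artists per person; artists that several people share should flock towards all of them.
top_artists = {
    "alice": {"Radiohead", "Björk"},
    "bob":   {"Radiohead", "Aphex Twin"},
    "carol": {"Björk", "Madonna"},
}

MIN_WIDTH, MAX_WIDTH = 1.0, 8.0  # stroke width range in pixels

def line_width(score):
    """Map a similarity score onto the thickness of the connecting line."""
    return MIN_WIDTH + score * (MAX_WIDTH - MIN_WIDTH)

def centroid(points):
    """Average position of the people who share an artist."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# A line between every pair of people present, thicker for more similar tastes.
for a, b in combinations(people, 2):
    score = similarity.get((a, b), similarity.get((b, a), 0.0))
    print(f"line {a}-{b}: width {line_width(score):.1f}px")

# Shared artists drift towards the centroid of everyone who listens to them.
all_artists = {artist for artists in top_artists.values() for artist in artists}
for artist in sorted(all_artists):
    fans = [p for p, artists in top_artists.items() if artist in artists]
    if len(fans) > 1:
        x, y = centroid([people[p] for p in fans])
        print(f"{artist} placed near ({x:.2f}, {y:.2f})")
```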

When I was sketching this, it became apparent that orientation of elements should follow very different rules from regular screens. I chose to sketch things so that they all point outwards, with the middle of the table as the orientation point.
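A quick way to express that rule: rotate each element so it faces away from the middle of the table, towards whoever is standing nearest to it. A rough sketch, assuming normalised table coordinates:

```python
# A rough sketch of the "point outwards" rule, assuming normalised table
# coordinates with (0.5, 0.5) as the middle of the surface.
import math

TABLE_CENTER = (0.5, 0.5)

def outward_rotation(x, y, center=TABLE_CENTER):
    """Rotation in degrees that makes an element at (x, y) face away from
    the middle of the table, towards the nearest edge (and the person
    standing there)."""
    dx, dy = x - center[0], y - center[1]
    return math.degrees(math.atan2(dy, dx))

# An element near the lower edge is rotated to read for the person on that side.
print(outward_rotation(0.5, 0.9))  # 90.0
```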

Spending a day immersed in multi-touch stuff made some interesting design challenges apparent:

  • With tabletop surfaces, stuff is physically closer or further away. Proximity of elements can be unintentionally interpreted as saying something about aspects such as importance, relevance, etc. Designers need to be even more aware of placement than before, plus conventions from vertically oriented screens no longer apply. Top-of-screen becomes furthest away and therefore least prominent instead of most important.
  • With group-based interactions, it becomes tricky to determine who to address and where to address him or her. Sometimes the system should address the group as a whole. When 5 people are standing around a table, text-based interfaces become problematic, since what is legible from one end of the table is unintelligible from the other. New conventions need to be developed for this as well. Alexander and I philosophized about placing text along circles and animating them so that they circulate around the table, for instance (see the sketch after this list).
  • Besides these, many other interface challenges present themselves. One crucial piece of information for solving many of these is knowing where people are located around the table. This issue can be approached from different angles. By incorporating sensors in the table, detection may be automated and interfaces could be made to adapt automatically. This is the techno-centric angle. I am not convinced this is the way to go, because it diminishes people’s control over the experience. I would prefer to make the interface itself adjustable in natural ways, so that people can mold the representation to suit their context. With situated technologies like this, auto-magical adaptation is an “AI-hard” problem, and the price of failure is a severely degraded user experience from which people cannot recover because the system won’t let them.
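As a rough illustration of the circulating-text idea mentioned in the second point: glyphs can be laid out along a circle and the whole ring shifted by a phase value, so that animating the phase over time makes the text travel around the table and gives each side a legible view in turn. Everything in this sketch is hypothetical and not taken from RoomWare or CanTouch code.

```python
# A hypothetical sketch of the circulating-text idea: characters laid out
# along a circle, with a phase offset that can be animated over time.
import math

def place_text_on_circle(text, cx, cy, radius, phase=0.0):
    """Return (char, x, y, rotation_degrees) for each character, spread
    evenly around a circle centred on (cx, cy). Advancing `phase` every
    animation frame makes the whole ring circulate."""
    placements = []
    step = 2 * math.pi / len(text)
    for i, char in enumerate(text):
        angle = phase + i * step
        x = cx + radius * math.cos(angle)
        y = cy + radius * math.sin(angle)
        rotation = math.degrees(angle) + 90  # baseline follows the circle
        placements.append((char, x, y, rotation))
    return placements

# One frame of the animation; advance `phase` a little on every tick.
for char, x, y, rot in place_text_on_circle("WHO IS LISTENING? ", 0.5, 0.5, 0.3, phase=0.1)[:3]:
    print(f"{char!r} at ({x:.2f}, {y:.2f}), rotated {rot:.0f} degrees")
```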

All in all the workshop was a wonderful day of tinkering with like-minded individuals from radically different backgrounds. As a designer, I think this is one of the best ways to be involved with open source projects. On a day like this, technologists can be exposed to new interaction concepts while they are hacking away. At the same time designers get that rare opportunity to play around with technology as it is shaped. Quick-and-dirty sketches like the ones Alexander and I came up with are definitely the way to communicate ideas. The goal is to suggest, not to describe, after all. Technologists should feel free to elaborate and build on what designers come up with and vice versa. I am curious to see which parts of what we came up with will find their way into future RoomWare projects.

Storyboarding multi-touch interactions

I think it was around half a year ago that I wrote “UX designers should get into everyware”. Back then I did not expect to be part of a ubicomp project anytime soon. But here I am now, writing about work I did in the area of multi-touch interfaces.

Background

The people at InUse (Sweden’s premier interaction design consultancy firm) asked me to assist them with visualising potential uses of multi-touch technology in the context of a gated community. That’s right: an actual real-world physical real-estate development project. How cool is that?

InUse storyboard 1

This residential community is aimed at well-to-do seniors. As with most gated communities, it offers them convenience, security and prestige. You might shudder at the thought of living in one of these places (I know I have my reservations) but there’s not much use in judging people for wanting to do so. Planned amenities include sports facilities, fine dining, onsite medical care, a cinema and on and on…

Social capital

One of the known issues with these ‘communities’ is that there’s not much evidence of social capital being higher there than in any regular neighbourhood. In fact some have argued that the global trend of gated communities is detrimental to the build-up of social capital in their surroundings. They throw up physical barriers that prevent free interaction of people. These are some of the things I tried to address: to see if we could support the emergence of community inside the residency using social tools, while at the same time counteracting physical barriers to the outside world with “virtual inroads” that allow for free interaction between residents and people in the periphery.

Being in the world

Another concern I tried to address is the different ways multi-touch interfaces can play a role in the lives of people. Recently Matt Jones addressed this in a post on the iPhone and Nokia’s upcoming multi-touch phones. In a community like the one I was designing for, the worst thing I could do is make every instance of multi-touch technology an attention-grabbing presence demanding full immersion from its user. In many cases ‘my’ users would be better served with them behaving in an unobtrusive way, allowing almost unconscious use. In other words: I tried to balance being in the world with being in the screen, applying each paradigm based on how appropriate it was given the user’s context. (After all, sometimes people want or even need to be immersed.)

Process

InUse had already prepared several personas representative of the future residents of the community. We went through those together and examined each for scenarios that would make good candidates for storyboarding. We wanted to come up with a range of scenarios that not only showed how these personas could be supported with multi-touch interfaces, but also illustrated the different spaces the interactions could take place in (private, semiprivate and public) and the scales at which the technology can operate (from small key-like tokens to full wall-screens).

InUse storyboard 2

I drafted each scenario as a textual outline and sketched the potential storyboards at thumbnail size. We went over those in a second workshop and refined them, making adjustments to better cover the concerns outlined above as well as improving clarity. We wanted to end up with a set of storyboards that could be used in a presentation for the client (the real-estate development firm), so we needed to balance user goals with business objectives. To that end we thought about and included examples of API-like integration of the platform with service providers in the periphery of the community. We also tried to create self-service experiences that would feel like being waited on by a personal butler.

Outcome

I ended up drawing three scenarios of around 9 panels each, digitising and cleaning them up on my Mac. Each scenario introduces a persona, the physical context of the interaction and the persona’s motivation that drives him to engage with the technology. The interactions visualised are a mix of gestures and engagements with multi-touch screens of different sizes. Usually the persona is supported in some way by a social dimension, fostering serendipity and the emergence of real relations.

InUse storyboard 3

All in all I have to say I am pretty pleased with the result of this short but sweet engagement. Collaboration with the people of InUse was smooth (as was expected, since we are very much the same kind of animal) and there will be follow-up workshops with the client. It remains to be seen how much of this multi-touch stuff will find its way into the final gated community. That, as always, will depend on what makes business sense.

In any case it was a great opportunity for me to immerse myself fully in the interrelated topics of multi-touch, gesture, urbanism and sociality. And finally, it gave me the perfect excuse to sit down and do lots and lots of drawings.