A day of playing around with multi-touch and RoomWare

Last Saturday I attended a RoomWare workshop. The people of CanTouch were there too, and brought one of their prototype multi-touch tables. The aim for the day was to come up with applications of RoomWare (open source software that can sense presence of people in spaces) and multi-touch. I attended primarily because it was a good opportunity to spend a day messing around with a table.

Attendance was multifaceted, so while programmers were putting together a proof-of-concept, designers (such as Alexander Zeh, James Burke and I) came up with concepts for new interactions. The proof-of-concept was up and running at the end of the day: the table could sense who was in the room and display his or her Flickr photos, which you could then move around, scale, rotate, etc. in the typical multi-touch fashion.

The concepts designers came up with mainly focused on pulling in Last.fm data (again using RoomWare’s sensing capabilities) and displaying it for group-based exploration. Here’s a storyboard I quickly whipped up of one such application:

RoomWare + CanTouch + Last.fm

The storyboard shows how you can add yourself from a list of people present in the room. Your top artists flock around you. When more people are added, lines are drawn between you. The thickness of a line represents how similar your tastes are, according to Last.fm’s taste-o-meter. Shared top artists also flock in such a way as to be closest to all the people they relate to. Finally, you can act on an artist to listen to their music.
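To make the storyboard's two core rules concrete, here is a minimal sketch of how they might map to code: a similarity score drives line thickness, and a shared artist sits at the centroid of the people who listen to them. The function names and pixel ranges are my own assumptions, not anything from the actual prototype or the Last.fm API.

```python
# Hypothetical sketch of the storyboard's layout rules. All names and
# numbers are assumptions for illustration only.

def line_thickness(similarity, min_px=1.0, max_px=12.0):
    """Map a taste similarity score in [0, 1] to a stroke width."""
    return min_px + (max_px - min_px) * similarity

def shared_artist_position(listener_positions):
    """Place a shared top artist at the centroid of its listeners,
    so it ends up closest to everyone it relates to."""
    xs = [x for x, _ in listener_positions]
    ys = [y for _, y in listener_positions]
    n = len(listener_positions)
    return (sum(xs) / n, sum(ys) / n)

# Two people at opposite ends of the table, taste-o-meter score 0.75:
print(line_thickness(0.75))                      # 9.25
print(shared_artist_position([(0, 0), (4, 2)]))  # (2.0, 1.0)
```

With more than two listeners the centroid rule generalises directly, which is roughly the "flocking" behaviour the storyboard implies.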

When I was sketching this, it became apparent that orientation of elements should follow very different rules from regular screens. I chose to sketch things so that they all point outwards, with the middle of the table as the orientation point.
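That outward-pointing rule is simple to express: rotate each element so it faces away from the table's centre. A minimal sketch, where the coordinate convention and function name are assumptions of mine:

```python
import math

def outward_rotation(x, y, cx=0.0, cy=0.0):
    """Angle in degrees that makes an element at (x, y) face away from
    the table centre (cx, cy); 0 degrees points right, 90 points up."""
    return math.degrees(math.atan2(y - cy, x - cx))

# An element straight above the centre faces up (90 degrees); one to
# the right of the centre faces right (0 degrees).
top = outward_rotation(0, 5)
right = outward_rotation(5, 0)
```

Applied to every element, this makes the table readable from all sides at once, with the middle acting as the shared orientation point.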

By spending a day immersed in multi-touch stuff, some interesting design challenges became apparent:

  • With tabletop surfaces, stuff is physically closer or further away. Proximity of elements can be unintentionally interpreted as saying something about aspects such as importance, relevance, etc. Designers need to be even more aware of placement than before, and conventions from vertically oriented screens no longer apply: top-of-screen becomes furthest away and therefore least prominent, instead of most important.
  • With group-based interactions, it becomes tricky to determine who to address and where to address him or her. Sometimes the system should address the group as a whole. When 5 people are standing around a table, text-based interfaces become problematic since what is legible from one end of the table is unintelligible from the other. New conventions need to be developed for this as well. Alexander and I philosophized about placing text along circles and animating them so that they circulate around the table, for instance.
  • Besides these, many other interface challenges present themselves. One crucial piece of information for solving many of them is knowing where people are located around the table. This issue can be approached from different angles. By incorporating sensors in the table, detection may be automated and interfaces could be made to adapt automatically. This is the techno-centric angle. I am not convinced this is the way to go, because it diminishes people’s control over the experience. I would prefer to make the interface itself adjustable in natural ways, so that people can mold the representation to suit their context. With situated technologies like this, auto-magical adaptation is an “AI-hard” problem, and the price of failure is a severely degraded user experience from which people cannot recover because the system won’t let them.
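The circulating-text idea Alexander and I talked about could work roughly like this: space the characters evenly around a circle, rotate each glyph tangentially so it reads along the arc, and advance the start angle each frame to animate the circulation. A hypothetical sketch, not anything we actually built:

```python
import math

def layout_text_on_circle(text, radius, start_deg=0.0, cx=0.0, cy=0.0):
    """Return (char, x, y, rotation_deg) for each character, spaced
    evenly around a circle centred on (cx, cy)."""
    step = 360.0 / len(text)
    out = []
    for i, ch in enumerate(text):
        deg = start_deg + i * step
        a = math.radians(deg)
        x = cx + radius * math.cos(a)
        y = cy + radius * math.sin(a)
        # Rotate each glyph 90 degrees past its radial angle so it
        # sits tangent to the circle and reads along the arc.
        out.append((ch, x, y, deg + 90.0))
    return out

# Animating the circulation is just advancing start_deg every frame.
frame = layout_text_on_circle("HELLO", radius=10.0, start_deg=0.0)
```

Since the text has no fixed "up", it is legible (at least momentarily) from any seat around the table as it drifts past.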

All in all the workshop was a wonderful day of tinkering with like-minded individuals from radically different backgrounds. As a designer, I think this is one of the best ways to be involved with open source projects. On a day like this, technologists can be exposed to new interaction concepts while they are hacking away. At the same time, designers get that rare opportunity to play around with technology as it is shaped. Quick-and-dirty sketches like the ones Alexander and I came up with are definitely the way to communicate ideas. The goal is to suggest, not to describe, after all. Technologists should feel free to elaborate and build on what designers come up with, and vice versa. I am curious to see which parts of what we came up with will find their way into future RoomWare projects.

Storyboarding multi-touch interactions

I think it was around half a year ago that I wrote “UX designers should get into everyware”. Back then I did not expect to be part of a ubicomp project anytime soon. But here I am now, writing about work I did in the area of multi-touch interfaces.

Background

The people at InUse (Sweden’s premier interaction design consultancy firm) asked me to assist them with visualising potential uses of multi-touch technology in the context of a gated community. That’s right—an actual real-world physical real-estate development project. How cool is that?

InUse storyboard 1

This residential community is aimed at well-to-do seniors. As with most gated communities, it offers them convenience, security and prestige. You might shudder at the thought of living in one of these places (I know I have my reservations) but there’s not much use in judging people wanting to do so. Planned amenities include sports facilities, fine dining, onsite medical care, a cinema and on and on…

Social capital

One of the known issues with these ‘communities’ is that there is not much evidence of social capital being any higher there than in a regular neighbourhood. In fact, some have argued that the global trend towards gated communities is detrimental to the build-up of social capital in their surroundings: they throw up physical barriers that prevent the free interaction of people. These are some of the things I tried to address: to see if we could support the emergence of community inside the residence using social tools, while at the same time counteracting physical barriers to the outside world with “virtual inroads” that allow for free interaction between residents and people in the periphery.

Being in the world

Another concern I tried to address is the different ways multi-touch interfaces can play a role in the lives of people. Recently Matt Jones addressed this in a post on the iPhone and Nokia’s upcoming multi-touch phones. In a community like the one I was designing for, the worst thing I could do is make every instance of multi-touch technology an attention-grabbing presence demanding full immersion from its user. In many cases ‘my’ users would be better served with them behaving in an unobtrusive way, allowing almost unconscious use. In other words: I tried to balance being in the world with being in the screen—applying each paradigm based on how appropriate it was given the user’s context. (After all, sometimes people want or even need to be immersed.)

Process

InUse had already prepared several personas representative of the future residents of the community. We went through those together and examined each for scenarios that would make good candidates for storyboarding. We wanted to come up with a range of scenarios that not only showed how these personas could be supported with multi-touch interfaces, but also illustrated the different spaces the interactions could take place in (private, semiprivate and public) and the scales at which the technology can operate (from small key-like tokens to full wall-screens).

InUse storyboard 2

I drafted each scenario as a textual outline and sketched the potential storyboards at thumbnail size. We went over those in a second workshop and refined them, making adjustments to better cover the concerns outlined above as well as to improve clarity. We wanted to end up with a set of storyboards that could be used in a presentation for the client (the real-estate development firm), so we needed to balance user goals with business objectives. To that end we thought about and included examples of API-like integration of the platform with service providers in the periphery of the community. We also tried to create self-service experiences that would feel like being waited on by a personal butler.

Outcome

I ended up drawing three scenarios of around 9 panels each, digitising and cleaning them up on my Mac. Each scenario introduces a persona, the physical context of the interaction and the persona’s motivation that drives him to engage with the technology. The interactions visualised are a mix of gestures and engagements with multi-touch screens of different sizes. Usually the persona is supported in some way by a social dimension—fostering serendipity and emergence of real relations.

InUse storyboard 3

All in all I have to say I am pretty pleased with the result of this short but sweet engagement. Collaboration with the people of InUse was smooth (as was expected, since we are very much the same kind of animal) and there will be follow-up workshops with the client. It remains to be seen how much of this multi-touch stuff will find its way into the final gated community. That as always will depend on what makes business sense.

In any case it was a great opportunity for me to immerse myself fully in the interrelated topics of multi-touch, gesture, urbanism and sociality. And finally, it gave me the perfect excuse to sit down and do lots and lots of drawings.