Hardware interfaces for tuning the feel of microinteractions

In Digital Ground, Malcolm McCullough talks about how tuning is a central part of interaction design practice: part of the challenge of any project is to get to a point where you can start tweaking the variables that determine the behaviour of your interface in search of the best feel.

“Feel” is a word I borrow from game design; Steve Swink wrote a whole book on it. It is a funny term. We are trying to make things that are purely visual behave in such a way that they evoke sensations derived from the physical realm. Many games depend heavily on getting feel right: basically all games built on a physics simulation of some kind require good feel for a good player experience to emerge.

Physics simulations have been finding their way into non-game software products for some time now, and they are becoming an increasingly large part of what makes a product, er, feel great. They are often at the foundation of the signature moments that set a product apart from the pack. These signature moments are also known as microinteractions. To get them just right, being able to tune well is very important.

The behaviour of microinteractions based on physics simulations is determined by variables. For example, the feel of a spring is determined by the mass of the weight attached to it, the spring’s stiffness and the friction that resists the motion of the weight. These variables interact in ways that are hard to model in your head, so you need to change each variable, run the simulation, and repeat until it feels just right. This is time-consuming, cumbersome and resists the easy exploration of alternatives that is essential to a good design process.
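
To make that concrete, here is a minimal sketch of the classic damped spring model behind many of these microinteractions (in Python; the parameter values are invented and exist only to be tuned):

```python
# Minimal damped spring simulation using semi-implicit Euler integration.
# The three tuning variables: mass, stiffness and friction (damping).

def simulate_spring(target, position, mass=1.0, stiffness=120.0,
                    friction=14.0, dt=1.0 / 60.0, steps=120):
    """Yield successive positions of a weight on a damped spring."""
    velocity = 0.0
    for _ in range(steps):
        displacement = position - target
        # Hooke's law plus a damping force proportional to velocity.
        force = -stiffness * displacement - friction * velocity
        velocity += (force / mass) * dt
        position += velocity * dt
        yield position

# Example: animate a value from 0 towards 100 and print the first frames.
for frame, pos in enumerate(simulate_spring(target=100.0, position=0.0)):
    if frame < 5:
        print(f"frame {frame}: {pos:.2f}")
```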

In The Setup, game designer Bennett Foddy talks about a way to improve on this workflow. Many of his games (if not all of them) are playable physics simulations with punishingly hard controls. He suggests using a hardware interface (a MIDI controller) to tune the variables that determine the feel of a game while it runs. In this way the loop between changing a variable and seeing its effect in the game is dramatically shortened, and many different combinations of values can be explored easily. Once a satisfactory set of values has been found, they can be written back into the software for future use.
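
As an illustration, here is a rough sketch of what the receiving end of such a setup could look like in Python with the mido library (the knob-to-variable mapping and the parameter ranges are invented for the example):

```python
import mido  # pip install mido python-rtmidi

# Hypothetical mapping from MIDI control-change numbers to simulation variables.
TUNABLES = {1: "mass", 2: "stiffness", 3: "friction"}
params = {"mass": 1.0, "stiffness": 120.0, "friction": 14.0}

with mido.open_input() as port:  # opens the default MIDI input device
    for msg in port:
        if msg.type == "control_change" and msg.control in TUNABLES:
            name = TUNABLES[msg.control]
            # Map the 0-127 knob range onto a plausible parameter range.
            params[name] = 0.1 + (msg.value / 127.0) * 200.0
            print(name, params[name])  # the running simulation reads `params`
```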

I do believe such a setup is still non-trivial to make work with today’s tools. A quick check confirms that Framer does not have OSC support, for example. There is an opportunity here for prototyping environments such as Framer and others to support it. The approach is not limited to motion-based microinteractions; it can be extended to the tuning of variables that control other aspects of an app’s behaviour.
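
For tools that did want to speak OSC, the receiving end need not be complicated. A sketch with the python-osc library (the address scheme is invented):

```python
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

params = {"stiffness": 120.0, "friction": 14.0}

def set_param(address, value):
    # Handles messages like "/tune/stiffness 150.0" sent from a controller app.
    params[address.rsplit("/", 1)[-1]] = float(value)
    print(params)

dispatcher = Dispatcher()
for name in params:
    dispatcher.map(f"/tune/{name}", set_param)

BlockingOSCUDPServer(("127.0.0.1", 9000), dispatcher).serve_forever()
```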

For example, when we were making Standing, we would have benefited hugely from hardware controls to tweak the sensitivity of its motion-sensing functions while using the app. Instead we were forced to do it by repeatedly changing numbers in the code and rebuilding the app again and again. It was quite a pain to get right. To this day I have the feeling we could have made it better if only we had had the tools to do it.
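
Even without hardware knobs, part of that pain can be avoided. One lighter-weight approach (a sketch; the file name and parameter are invented) is to read tuning values from a file the app re-checks while running, so no rebuild is needed:

```python
import json
import os

TUNING_FILE = "tuning.json"  # e.g. {"sensitivity": 0.8}
params = {"sensitivity": 0.8}
_last_mtime = 0.0

def reload_if_changed():
    """Re-read tuning values whenever the file is saved; call once per frame."""
    global _last_mtime
    mtime = os.path.getmtime(TUNING_FILE)
    if mtime != _last_mtime:
        _last_mtime = mtime
        with open(TUNING_FILE) as f:
            params.update(json.load(f))
```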

Judging from snafus such as the poor feel of the latest Twitter desktop client, there is a real need for better tools for tuning microinteractions. Just as pen tablets have become indispensable for those designing the form of user interfaces on screens, I think we might soon find a small set of hardware knobs on the desks of those designers working on the behaviour of user interfaces.

Storyboarding multi-touch interactions

I think it was around half a year ago that I wrote “UX designers should get into everyware”. Back then I did not expect to be part of a ubicomp project anytime soon. But here I am now, writing about work I did in the area of multi-touch interfaces.

Background

The people at InUse (Sweden’s premier interaction design consultancy firm) asked me to assist them with visualising potential uses of multi-touch technology in the context of a gated community. That’s right—an actual real-world physical real-estate development project. How cool is that?

InUse storyboard 1

This residential community is aimed at well-to-do seniors. As with most gated communities, it offers them convenience, security and prestige. You might shudder at the thought of living in one of these places (I know I have my reservations) but there’s not much use in judging people wanting to do so. Planned amenities include sports facilities, fine dining, onsite medical care, a cinema and on and on…

Social capital

One of the known issues with these ‘communities’ is that there’s not much evidence of social capital being higher there than in any regular neighbourhood. In fact, some have argued that the global trend towards gated communities is detrimental to the build-up of social capital in their surroundings: they throw up physical barriers that prevent free interaction between people. These are some of the things I tried to address: to see if we could support the emergence of community inside the development using social tools, while at the same time counteracting the physical barriers to the outside world with “virtual inroads” that allow for free interaction between residents and people in the periphery.

Being in the world

Another concern I tried to address is the different ways multi-touch interfaces can play a role in people’s lives. Recently Matt Jones addressed this in a post on the iPhone and Nokia’s upcoming multi-touch phones. In a community like the one I was designing for, the worst thing I could do would be to make every instance of multi-touch technology an attention-grabbing presence demanding full immersion from its user. In many cases ‘my’ users would be better served by technology behaving in an unobtrusive way, allowing almost unconscious use. In other words: I tried to balance being in the world with being in the screen, applying each paradigm based on how appropriate it was given the user’s context. (After all, sometimes people want or even need to be immersed.)

Process

InUse had already prepared several personas representative of the future residents of the community. We went through those together and examined each for scenarios that would make good candidates for storyboarding. We wanted to come up with a range of scenarios that not only showed how these personas could be supported with multi-touch interfaces, but also illustrated the different spaces the interactions could take place in (private, semi-private and public) and the scales at which the technology could operate (from small key-like tokens to full wall-screens).

InUse storyboard 2

I drafted each scenario as a textual outline and sketched the potential storyboards at thumbnail size. We went over those in a second workshop and refined them, making adjustments to better cover the concerns outlined above as well as to improve clarity. We wanted to end up with a set of storyboards that could be used in a presentation for the client (the real-estate development firm), so we needed to balance user goals with business objectives. To that end we thought about and included examples of API-like integration of the platform with service providers in the periphery of the community. We also tried to create self-service experiences that would feel like being waited on by a personal butler.

Outcome

I ended up drawing three scenarios of around nine panels each, digitising and cleaning them up on my Mac. Each scenario introduces a persona, the physical context of the interaction and the motivation that drives the persona to engage with the technology. The interactions visualised are a mix of gestures and engagements with multi-touch screens of different sizes. Usually the persona is supported in some way by a social dimension, fostering serendipity and the emergence of real relationships.

InUse storyboard 3

All in all, I have to say I am pretty pleased with the result of this short but sweet engagement. Collaboration with the people of InUse was smooth (as expected, since we are very much the same kind of animal) and there will be follow-up workshops with the client. It remains to be seen how much of this multi-touch stuff will find its way into the final gated community. That, as always, will depend on what makes business sense.

In any case it was a great opportunity for me to immerse myself fully in the interrelated topics of multi-touch, gesture, urbanism and sociality. And finally, it gave me the perfect excuse to sit down and do lots and lots of drawings.

Interface design — fifth and final IA Summit 2007 theme

(Here’s the fifth and final post on the 2007 IA Summit. You can find the first one that introduces the series and describes the first theme ‘tangible’ here, the second one on ‘social’ here, the third one on ‘web of data’ here and the fourth one on ‘strategy’ here.)

It might have been the past RIA hype (which according to Jared Spool has nothing to do with web 2.0), but for whatever reason IAs are moving into interface territory. They’re broadening their scope to look at how their architectures are presented to users and made usable. The interesting part for me is to see how a discipline that has come from taxonomies, thesauri and other abstract information structures approaches the design of user-facing shells for those structures. Are their designs dramatically different from those created by interface designers coming from a more visual domain concerned with surface? I would say: at least a little…

I particularly enjoyed Stephen Anderson’s presentation on adaptive interfaces. He gave many examples of interfaces that would change according to user behaviour, becoming more elaborate and explanatory or very minimal and succinct. His main point was to start with a generic interface that would be usable by the majority of users, and then come up with ways to adapt it to different specific behaviours. The way in which those adaptations were determined and documented as rules reminded me a lot of game design.
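
To give a flavour of what such behavioural rules might look like in code, here is a toy sketch (the signals, thresholds and interface variants are my own inventions, not from Anderson’s talk):

```python
# Toy rule engine: adapt interface verbosity to observed user behaviour.

def choose_interface(visits: int, errors: int) -> str:
    """Pick an interface variant from simple behavioural signals."""
    rules = [
        (lambda: visits < 3, "elaborate"),   # newcomers get explanations
        (lambda: errors > 5, "elaborate"),   # struggling users get help
        (lambda: visits > 20 and errors == 0, "minimal"),  # experts get terseness
    ]
    for condition, variant in rules:
        if condition():
            return variant
    return "generic"  # the default that works for the majority

print(choose_interface(visits=1, errors=0))   # -> elaborate
print(choose_interface(visits=25, errors=0))  # -> minimal
print(choose_interface(visits=10, errors=2))  # -> generic
```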

Margaret Hanley gave a solid talk on the “unsexy side of IA”: the design of administration interfaces. This typically involves coming up with a lot of screens with many form fields and controls. The interfaces she created allowed people to edit data that would normally not be accessible through a CMS but needed editing nonetheless (product details for a web shop, for instance). Users are accustomed to thinking in terms of editing pages, not editing data, so the trickiest bit was finding ways to communicate how changes made to the data would propagate through a site and show up in different places. There were some interesting ideas from the audience on this, but no definite solution was found.

Harmonious interfaces, martial arts and flow states

Screenshot of the game flOw

There have been a few posts from the UX community recently on flow states (most notably at 37signals’s Signal vs. Noise). This got me thinking about my own experiences of flow and what they tell me about how flow states could be induced with interfaces.

A common example of flow is playing a game: the player forgets she is pushing buttons on a game pad and is only mindful of the action at hand. I’ve experienced flow while painting, but also when doing work on a PC (even when creating wireframes in Visio!). However, the most interesting flow experiences were while practising martial arts.

The interesting bit is that the flow happens when performing techniques in partner exercises or even fighting matches. These are all situations where the ‘system’ consists of two people, not of a person and a medium connected by an interface (if you’re willing to call a paintbrush an interface, that is).

To reach a state of flow in martial arts you need to stop thinking about performing the technique while performing it, and instead be mindful of the effect on your partner, visualising your own movements accordingly. When flow happens, I’m actually able to ‘see’ a technique as one single image before starting it, and while performing it I’m only aware of the whole system, not just myself.

Now here’s the beef. When you try to translate this to interface design, it’s clear that there’s no easy way to induce flow. The obvious approach, creating a ‘disappearing’ interface that is unobtrusive, minimal, etc., is not enough (it could even be harmful). Instead I’d like to suggest you need to make your game, software or site behave more like a martial arts fighter: it needs to push or give way according to the actions of its partner. You really need to approach the whole thing as an interconnected system where forces flow back and forth. Flow will happen in the user when he or she can work in a harmonious way. Usually this requires a huge amount of mental-model adaptation on the user’s part… When will we create appliances that can infer the intentions of the user and change their stance accordingly? I’m not talking about AI here; what I’d like to see is stuff more along the lines of flOw.
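
As a thought experiment, here is a toy sketch of that push-or-give-way idea (the signal, the thresholds and the numbers are entirely invented): the system offers more guidance when the user hesitates, and recedes when the user moves with a confident rhythm:

```python
# Toy sketch of an interface that "pushes or gives way" with its user:
# long pauses are read as hesitation, so the system steps forward with
# guidance; a quick, confident rhythm makes it get out of the way.

import time

class HarmoniousInput:
    def __init__(self):
        self.last_action = time.monotonic()
        self.assistance = 0.5  # 0 = fully recede, 1 = actively guide

    def on_user_action(self) -> float:
        """Call on every user action; returns the new assistance level."""
        now = time.monotonic()
        pause = now - self.last_action
        self.last_action = now
        if pause > 2.0:    # hesitation: push, offer more guidance
            self.assistance = min(1.0, self.assistance + 0.1)
        elif pause < 0.5:  # confident rhythm: give way
            self.assistance = max(0.0, self.assistance - 0.1)
        return self.assistance
```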