Play in social and tangible interactions

Now that the IxDA has posted a video of my presentation at Interaction 09 to Vimeo, I thought it would be a good idea to provide a little background to the talk. Since I had already posted the slides to SlideShare, a full write-up doesn’t seem necessary; instead, I’ll provide a little context by summarizing the talk.

Summary

The idea of the talk was to look at a few qualities of embodied interaction and relate them to games and play, in the hopes of illuminating some design opportunities. Without dwelling on what embodiment really means, suffice it to say that there is a school of thought which holds that our thinking originates in our bodily experience of the world around us, and in our relationships with the people in it. I used the example of an improvised information display I once encountered in the paediatric ward of a local hospital to highlight two qualities of embodied interaction: (1) meaning is socially constructed and (2) cognition is facilitated by tangibility.1

[Slide 12 of the presentation]

With regard to the first aspect — the social construction of meaning — I find it interesting that games distinguish between the official rules of a game and the rules the players arrive at through mutual consent, the latter being how the game is actually played. Using the example of an improvised manège in Habbo, I pointed out that under-specified design tends to encourage the emergence of such interesting uses. As a designer, it comes down to understanding that once people get together to do things involving what you’ve designed, they will layer new meanings on top of what you came up with, and that this is largely out of your control.

[Slide 15 of the presentation]

For the second aspect — cognition being facilitated by tangibility — I talked about how people use the world around them to offload mental computation. For instance, as people get better at playing Tetris, they backtrack more than they did when they first started playing. They are essentially using the game’s space to think with. As an aside, I pointed out that in my experience, sketching plays a similar role in design. As with the social construction of meaning, for epistemic action to be possible, the system in use needs to be adaptable.
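To make the notion of epistemic action slightly more concrete, here is a toy sketch in Python. It is emphatically not Maglio and Kirsh’s actual coding scheme, just a minimal way of operationalizing “backtracking”: any rotation beyond what the final placement required was exploratory, done to think with the space rather than to act on it.

```python
# A toy illustration, not Maglio and Kirsh's actual method: rotations that
# are later undone did not bring the piece closer to its final placement,
# so we can read them as "thinking with the game's space".

def count_backtracking(moves):
    """Count rotations beyond the net rotation the final placement needed.

    moves: sequence of +1 (clockwise) and -1 (counter-clockwise) rotations
    made while a single piece falls.
    """
    net = sum(moves)              # the rotation the final placement used
    return len(moves) - abs(net)  # everything else was exploratory

novice = [1, 1]             # rotate twice, drop: no backtracking
expert = [1, 1, -1, 1, -1]  # try, undo, try again: 4 exploratory rotations

print(count_backtracking(novice))  # 0
print(count_backtracking(expert))  # 4
```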

[Slide 25 of the presentation]

To wrap up, I suggested that, when it comes to the design of embodied interactive stuff, we are struggling with the same issues as game designers. We’re both positioning ourselves (in the words of Eric Zimmerman) as meta-creators of meaning; as designers of spaces in which people discover new things about themselves, the world around them and the people in it.

Sources

I had several people come up to me afterwards, asking for sources, so I’ll list them here.

  • the significance of the social construction of meaning for interaction design is explained in detail by Paul Dourish in his book Where the Action Is
  • the research by Jean Piaget I quoted is from his book The Moral Judgement of the Child (which I first encountered in Rules of Play, see below)
  • the concept of ideal versus real rules is from the wonderful book Rules of Play by Katie Salen and Eric Zimmerman (who in turn have taken it from Kenneth Goldstein’s article Strategies in Counting Out)
  • for a wonderful description of how children socially mediate the rules to a game, have a look at the article Beyond the Rules of the Game by Linda Hughes (collected in the Game Design Reader)
  • the Will Wright quote is from an interview in Tracy Fullerton’s book Game Design Workshop, second edition
  • for a discussion of pragmatic versus epistemic action and how it relates to interaction design, refer to the article How Bodies Matter (PDF) by Scott Klemmer, Björn Hartmann and Leila Takayama (which is rightfully recommended by Dan Saffer in his book, Designing Gestural Interfaces)
  • the Tetris research (which I first found in the previously mentioned article) is described in Epistemic Action Increases With Skill (PDF), an article by Paul Maglio and David Kirsh
  • the “play is free movement…” quote is from Rules of Play
  • the picture of the guy skateboarding is a still from the awesome documentary film Dogtown and Z-Boys
  • for a lot of great thinking on “loose fit” design, be sure to check out the book How Buildings Learn by Stewart Brand
  • the “meta-creators of meaning” quote is from Eric Zimmerman’s foreword to the aforementioned Game Design Workshop, 2nd ed.

Thanks

And that’s it. Interaction 09 was a great event, and I’m happy to have been a part of it. Most of the talks seem to be online now, so why not check them out? My favourites by far were John Thackara and Robert Fabricant. Thanks to the people of the IxDA for all the effort they put into increasing interaction design’s visibility to the world.

  1. For a detailed discussion of the information display, have a look at this blog post.

Reboot 10 slides and video

I am breaking radio silence for a bit to let you know the slides and video for my Reboot 10 presentation are now available online, in case you’re interested. I presented this talk before at The Web and Beyond, but this time I had a lot more time, and I presented in English, so I think it might still be of interest to some people.1 As always, I am very interested in receiving constructive criticism. Just drop me a line in the comments.

Update: It occurred to me that it might be a good idea to briefly summarize what this is about. This is a presentation in two parts. In the first, I theorize about the emergence of games whose goal is to convey an argument, using the real-time city as their platform; it is these games that I call urban procedural rhetorics. In the second part, I give a few examples of what such games might look like, using a series of sketches.

The slides, posted to SlideShare, as usual:

The video, hosted on the Reboot website:

  1. I did post a transcript in English before, in case you prefer reading to listening.

A day of playing around with multi-touch and RoomWare

Last Saturday I attended a RoomWare workshop. The people of CanTouch were there too, and brought one of their prototype multi-touch tables. The aim for the day was to come up with applications combining RoomWare (open source software that can sense the presence of people in spaces) with multi-touch. I attended primarily because it was a good opportunity to spend a day messing around with a table.

Attendance was multifaceted, so while the programmers were putting together a proof of concept, designers (such as Alexander Zeh, James Burke and I) came up with concepts for new interactions. The proof of concept was up and running at the end of the day: the table could sense who was in the room and display his or her Flickr photos, which you could then move around, scale, rotate, etc. in the typical multi-touch fashion.

The concepts designers came up with mainly focused on pulling in Last.fm data (again using RoomWare’s sensing capabilities) and displaying it for group-based exploration. Here’s a storyboard I quickly whipped up of one such application:

RoomWare + CanTouch + Last.fm

The storyboard shows how you can add yourself from a list of people present in the room. Your top artists flock around you. When more people are added, lines are drawn between you and them. The thickness of a line represents how similar your tastes are, according to Last.fm’s taste-o-meter. Shared top artists also flock in such a way as to be closest to all the people they relate to. Finally, you can act on an artist to start listening to their music.

When I was sketching this, it became apparent that orientation of elements should follow very different rules from regular screens. I chose to sketch things so that they all point outwards, with the middle of the table as the orientation point.
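For what it’s worth, the layout rules behind the storyboard can be captured in a few lines of code. Below is a minimal Python sketch with made-up names and similarity scores: shared artists sit at the centroid of everyone who listens to them, connecting lines get thicker with the taste-o-meter score, and every element is rotated to face outward from the middle of the table.

```python
import math

TABLE_RADIUS = 1.0  # normalised radius of the table surface

# Made-up data: where each person sits (as an angle along the table's edge)
# and a 0..1 taste similarity per pair, as Last.fm's taste-o-meter reports it.
people = {"Anna": 0.0, "Ben": 2.1, "Carla": 4.2}
similarity = {("Anna", "Ben"): 0.8, ("Anna", "Carla"): 0.3, ("Ben", "Carla"): 0.5}

def seat_position(name):
    """A person's x, y position on the table's edge."""
    angle = people[name]
    return (TABLE_RADIUS * math.cos(angle), TABLE_RADIUS * math.sin(angle))

def shared_artist_position(listeners):
    """Shared artists flock to the centroid of everyone who listens to them,
    which automatically places them closest to all related people."""
    points = [seat_position(name) for name in listeners]
    x = sum(p[0] for p in points) / len(points)
    y = sum(p[1] for p in points) / len(points)
    return (x, y)

def outward_rotation(x, y):
    """Rotate an element so it points away from the table's centre,
    the orientation rule used in the storyboard."""
    return math.atan2(y, x)

def line_width(a, b, base=1.0, scale=9.0):
    """Connecting lines get thicker as tastes get more similar."""
    score = similarity.get((a, b), similarity.get((b, a), 0.0))
    return base + scale * score

x, y = shared_artist_position(["Anna", "Ben"])
print("shared artist at", (round(x, 2), round(y, 2)),
      "facing", round(math.degrees(outward_rotation(x, y)), 1), "degrees")
print("Anna-Ben line width:", line_width("Anna", "Ben"))
```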

By spending a day immersed in multi-touch stuff, some interesting design challenges became apparent:

  • With tabletop surfaces, stuff is physically closer or further away. Proximity of elements can be unintentionally interpreted as saying something about importance, relevance and the like. Designers need to be even more aware of placement than before, and conventions from vertically oriented screens no longer apply: the top of the screen becomes the area furthest away, and therefore the least prominent instead of the most important.
  • With group-based interactions, it becomes tricky to determine who to address and where to address him or her. Sometimes the system should address the group as a whole. When 5 people are standing around a table, text-based interfaces become problematic, since what is legible from one end of the table is unintelligible from the other. New conventions need to be developed for this as well. Alexander and I philosophized about placing text along circles and animating it so that it circulates around the table, for instance (sketched in code after this list).
  • Besides these, many other interface challenges present themselves. One crucial piece of information for solving many of them is knowing where people are located around the table. This issue can be approached from different angles. By incorporating sensors in the table, detection could be automated and interfaces could be made to adapt automatically. This is the techno-centric angle. I am not convinced this is the way to go, because it diminishes people’s control over the experience. I would prefer to make the interface itself adjustable in natural ways, so that people can mold the representation to suit their context. With situated technologies like this, auto-magical adaptation is an “AI-hard” problem, and the price of failure is a severely degraded user experience from which people cannot recover because the system won’t let them.
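To make that circulating-text idea a bit more concrete, here is a rough Python sketch of laying a label out along a circle. The function name and spacing constant are mine, not from any existing toolkit, and real typesetting would use font metrics rather than a fixed angle per glyph.

```python
import math

def circular_label(text, radius, start_angle=0.0, spacing=0.06):
    """Lay the characters of a label out along a circle, each glyph rotated
    to stay tangent to it. Returns (char, x, y, rotation) tuples.
    A sketch only: spacing is a fixed angle per glyph."""
    placements = []
    for i, char in enumerate(text):
        angle = start_angle + i * spacing
        x = radius * math.cos(angle)
        y = radius * math.sin(angle)
        rotation = angle - math.pi / 2  # tangent to the circle
        placements.append((char, x, y, rotation))
    return placements

# Animating the label is a matter of advancing start_angle every frame,
# so the text slowly circulates past each person at the table.
for char, x, y, rot in circular_label("hello", radius=1.0):
    print(char, round(x, 2), round(y, 2), round(math.degrees(rot), 1))
```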

All in all the workshop was a wonderful day of tinkering with like-minded individuals from radically different backgrounds. As a designer, I think this is one of the best ways to be involved with open source projects. On a day like this, technologists can be exposed to new interaction concepts while they are hacking away. At the same time, designers get that rare opportunity to play around with technology as it is shaped. Quick-and-dirty sketches like the ones Alexander and I came up with are definitely the way to communicate ideas. The goal is to suggest, not to describe, after all. Technologists should feel free to elaborate and build on what designers come up with, and vice versa. I am curious to see which parts of what we came up with will find their way into future RoomWare projects.

Embodied interaction and improvised information displays

Recently a good friend of mine became a dad. It made me feel really old, but it also led to an encounter with an improvised information display, which I’d like to tell you about because it illustrates some of the things I have learnt from reading Paul Dourish’s Where the Action Is.

My friend’s son was born a bit too early, so we went to see him (the son) at the neonatology ward of the local hospital. It was there that I saw this whiteboard with stickers, writing and the familiar magnets on it:

Tracing of a photo of an improvised information display in a hospital neonatology ward consisting of a whiteboard, magnets, stickers and writing

(I decided to trace the photo I took of it and replace the names with fictional ones.)

Now, at first I only noticed parts of what was there. I saw the patient names on the left-hand side, and recognised the name of my friend’s son. I also noticed that on the right-hand side, the names of all the nurses on duty were there. I did not think much more of it.

Before leaving, my friend walked up to the whiteboard and said something along the lines of “yes, this is correct,” and touched one of the green magnets that was in the middle of the board as if to confirm this. It was then that my curiosity was piqued, and I asked my friend to explain what the board meant.

It turns out it was a wonderful thing, something I’ll call an improvised information display, for lack of a better term. What I had not seen the first time around, but what my friend pointed out:

  1. There is a time axis along the top of the board. By placing a green magnet in line with a child’s name somewhere along this axis, parents can let the staff know when they intend to visit. This is important for many reasons, one being that it helps the nurses time a child’s feeding so that the parents can be present. So in the example, the parents of ‘Faramond’ will be visiting around 21:00.
  2. There are different-coloured magnets behind the children’s names and behind the nurses’ names; matching colours show which nurse is responsible for which child. For instance, ‘Charlotte’ is in charge of ‘Once’s’ care.
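Purely to illustrate how much of the board’s meaning lives in user conventions rather than in the components, here is a toy model in code. It is my own construction, obviously not anything the hospital uses: nothing in the magnet itself encodes either rule; the time-axis and colour-pairing conventions exist only in the shared practice of the nurses and parents.

```python
from dataclasses import dataclass
from typing import Optional

# A toy model of the board, using the fictional names from the tracing.
# The designed components are deliberately generic: a magnet only has a
# colour and a place on the board. Both conventions below are user-made.

@dataclass
class Magnet:
    colour: str
    row: str                # whose row (a child's or nurse's name) it is in
    hour: Optional[float]   # position along the time axis; None if off it

magnets = [
    Magnet("green", "Faramond", 21.0),  # convention 1: visit at 21:00
    Magnet("red", "Once", None),        # convention 2: colour pairing...
    Magnet("red", "Charlotte", None),   # ...links a child to a nurse
]

def visit_time(child):
    """Convention 1: a green magnet's column is the planned visit time."""
    for m in magnets:
        if m.row == child and m.colour == "green":
            return m.hour
    return None

def assigned_nurse(child, nurses=("Charlotte",)):
    """Convention 2: a child and a nurse marked with the same colour."""
    colours = {m.colour for m in magnets if m.row == child and m.colour != "green"}
    for nurse in nurses:
        if any(m.row == nurse and m.colour in colours for m in magnets):
            return nurse
    return None

print(visit_time("Faramond"))  # 21.0
print(assigned_nurse("Once"))  # Charlotte
```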

Dourish’s book has influenced the way I look at things like this, making me more aware of their unique value. Whereas before I would have thought that something like this could be done better by a proper designer, with digital means, I now think the grasp-able aspect of such a display is vital, as is the prominent role of users in shaping it. Dourish writes:1

“What embodied interaction adds to existing representational practice is the understanding that representations are also themselves artefacts. Not only do they allow users to “reach through” and act upon the entity being represented, but they can also themselves be acted upon—picked up, examined, manipulated and rearranged.”

Parents and nurses reach through the display I saw in the neonatology ward to act upon the information about visiting times and responsibility of care. But they also act on the components of the display itself to manipulate the meaning they have.

In fact, this is how the display was constructed in the first place! The designers’ role was limited to the components themselves: they were responsible for the affordances of the whiteboard, the magnets, the erasable markers and the stickers, which enabled users to produce the information display they needed. In the words of Dourish:2

“Principle: Users, not designers, create and communicate meaning.”

“Principle: Users, not designers, manage coupling.”

It is the nurses and the parents and the social practice they together constitute that gives rise to the meaning of the display. What the board means is obvious to them, because they have ‘work’ that needs to be done together. It was not obvious to me, because I am not part of that group. It was not a designer that decided what the meaning of the different colours of the magnets were. It was a group of users who coupled meaning to the components they had available to them.

It might be a radical example, but I think this does demonstrate what people can do if the right components are made available to them, and they are allowed to make their own meaning with them. I think it is important for designers to realise this, and allow for this kind of manipulation of the products and services they shape. Clearly, Dourish’s notion of embodied interaction is a key to designing for adaptation and hacking. When it comes to this, today’s whiteboards, magnets and markers seem to do a better job than many of our current digital technologies.

  1. Page 169
  2. Page 170

Storyboarding multi-touch interactions

I think it was around half a year ago that I wrote “UX designers should get into everyware”. Back then I did not expect to be part of a ubicomp project anytime soon. But here I am now, writing about work I did in the area of multi-touch interfaces.

Background

The people at InUse (Sweden’s premier interaction design consultancy firm) asked me to assist them with visualising potential uses of multi-touch technology in the context of a gated community. That’s right—an actual real-world physical real-estate development project. How cool is that?

InUse storyboard 1

This residential community is aimed at well-to-do seniors. As with most gated communities, it offers them convenience, security and prestige. You might shudder at the thought of living in one of these places (I know I have my reservations) but there’s not much use in judging people wanting to do so. Planned amenities include sports facilities, fine dining, onsite medical care, a cinema and on and on…

Social capital

One of the known issues with these ‘communities’ is that there is not much evidence of social capital being higher there than in any regular neighbourhood. In fact, some have argued that the global trend towards gated communities is detrimental to the build-up of social capital in their surroundings: they throw up physical barriers that prevent the free interaction of people. These are some of the things I tried to address: to see if we could support the emergence of community inside the residence using social tools, while at the same time counteracting the physical barriers to the outside world with “virtual inroads” that allow for free interaction between residents and people in the periphery.

Being in the world

Another concern I tried to address is the different ways multi-touch interfaces can play a role in people’s lives. Recently Matt Jones addressed this in a post on the iPhone and Nokia’s upcoming multi-touch phones. In a community like the one I was designing for, the worst thing I could do would be to make every instance of multi-touch technology an attention-grabbing presence demanding full immersion from its user. In many cases ‘my’ users would be better served by technology that behaves in an unobtrusive way, allowing almost unconscious use. In other words: I tried to balance being in the world with being in the screen—applying each paradigm based on how appropriate it was given the user’s context. (After all, sometimes people want or even need to be immersed.)

Process

InUse had already prepared several personas representative of the future residents of the community. We went through those together and examined each for scenarios that would make good candidates for storyboarding. We wanted to come up with a range of scenarios that not only showed how these personas could be supported with multi-touch interfaces, but also illustrated the different spaces the interactions could take place in (private, semi-private and public) and the scales at which the technology could operate (from small key-like tokens to full wall-screens).

InUse storyboard 2

I drafted each scenario as a textual outline and sketched the potential storyboards at thumbnail size. We went over those in a second workshop and refined them—making adjustments to better cover the concerns outlined above, as well as improving clarity. We wanted to end up with a set of storyboards that could be used in a presentation for the client (the real-estate development firm), so we needed to balance user goals with business objectives. To that end we thought about and included examples of API-like integration of the platform with service providers in the periphery of the community. We also tried to create self-service experiences that would feel like being waited on by a personal butler.

Outcome

I ended up drawing three scenarios of around nine panels each, digitising and cleaning them up on my Mac. Each scenario introduces a persona, the physical context of the interaction and the motivation that drives the persona to engage with the technology. The interactions visualised are a mix of gestures and engagements with multi-touch screens of different sizes. Usually the persona is supported in some way by a social dimension—fostering serendipity and the emergence of real relations.

InUse storyboard 3

All in all I have to say I am pretty pleased with the result of this short but sweet engagement. Collaboration with the people of InUse was smooth (as expected, since we are very much the same kind of animal) and there will be follow-up workshops with the client. It remains to be seen how much of this multi-touch stuff will find its way into the final gated community. That, as always, will depend on what makes business sense.

In any case it was a great opportunity for me to immerse myself fully in the interrelated topics of multi-touch, gesture, urbanism and sociality. And finally, it gave me the perfect excuse to sit down and do lots and lots of drawings.

Tangible — first of five IA Summit 2007 themes

I’ll be posting a top 5 of the themes I noticed during the 2007 IA Summit in Las Vegas. It’s a little late maybe, but hopefully it still offers some value. Here are the 5 themes; my thoughts on the first one (tangible) are below the list:

  1. Tangible (this post)
  2. Social
  3. Web of data
  4. Strategy
  5. Interface design

1. Tangible

The IA community is doing a strange dance around the topic of designing for physical spaces and objects. On the one hand, IAs seem reluctant to move away from the web; on the other, they seem very curious about what value they can bring to the table when designing buildings, appliances, etc.

The opening keynote was delivered by Joshua Prince-Ramus of REX (notes by Rob Fay and Jennifer Keach). He made some interesting points about how ‘real’ architects are struggling to include informational concerns in their practice. Michele Tepper, a designer at Frog, talked us through the creation of a specialized communications device for day traders, where industrial design, interaction design and information architecture went hand in hand.

More to come!

UX designers should get into everyware

I’ve been reading Adam Greenfield’s Everyware on and off, and one of the things it has me wondering about most lately is: are UX professionals making the move to designing for ubiquitous computing?

There are several places in the book where he explicitly mentions UX in relation to everyware. Let’s have a look at the ones I managed to retrieve using the book’s trusty index…

On page 14 Greenfield writes that with the emergence of ubicomp at the dawn of the new millennium, the user experience community took up the challenge with “varying degrees of enthusiasm, scepticism and critical distance”, trying to find a “language of interaction suited to a world where information processing would be everywhere in the human environment.”

So of course the UX community has already started considering what it means to design for ubicomp. This stuff is quite different to internet appliances and web sites though, as Greenfield points out in thesis 09 (pp.37-39):

“Consistently eliciting good user experiences means accounting for the physical design of the human interface, the flow of interaction between user and device, and the larger context in which that interaction is embedded. In not a single one of these dimensions is the experience of everyware anything like that of personal computing.” (p.37)

That’s a clear statement, which he elaborates on further, mentioning that traditional interactions usually follow a “call-and-response rhythm: user actions followed by system events,” whereas everyware interactions “can’t meaningfully be constructed as ‘task-driven.’ Nor does anything in the interplay between user and system […] correspond with […] information seeking.” (p.38)

So, UX designers moving into everyware have their work cut out for them. This is virgin territory:

“[…] it is […] a radically new situation that will require the development over time of a doctrine and a body of standards and conventions […]” (p.39)

Now, UX in traditional projects has been prone to what Greenfield calls ‘value engineering’. Commercial projects can only be two of these three things: fast, good and cheap. UX supports the second, but sadly it is often sacrificed for the sake of the other two. Not always, though; it usually depends on who is involved with the project:

“[…] it often takes an unusually dedicated, persistent, and powerful advocate […] to see a high-quality design project through to completion with everything that makes it excellent intact. […] the painstakingly detailed work of ensuring a good user experience is frequently hard to justify on a short-term ROI basis, and this is why it is often one of the first things to get value-engineered out of an extended development process. […] we’ve seen that getting everyware right will be orders of magnitude more complicated than achieving acceptable quality in a Web site, […] This is not the place for value engineers,” (p.166)

So if traditional projects need UX advocates on board with considerable influence, comparable to Steve Jobs’s role at Apple, to ensure a decent user experience, will it even be possible to create ubiquitous experiences that are enjoyable to use? If these projects are so complex, can they even be gotten ‘right’ in a commercial context? I’m sorry to say I think not…

Designers (used broadly) will be at the forefront of deciding what everyware looks like. If you don’t think they will, I’m sure at least that they should. They’re not the only ones to determine its shape, though; Greenfield points out that regulators and markets have important parts to play too (pp.172-173):

“[…] the interlocking influences of designer, regulator, and market will be most likely to result in beneficial outcomes if these parties all treat everyware as a present reality, and if the decision makers concerned act accordingly.” (p.173)

Now there’s an interesting notion. Having just come back from a premier venue for the UX community to talk about this topic, the IA Summit, I’m afraid to say I didn’t get the impression IAs are taking everyware seriously (yet). There were no talks really concerned with tangible, pervasive, ubiquitous or ambient technologies. Some basic fare on mobile web stuff, that’s all. Worrying, because as Greenfield points out:

“[UX designers] will best be able to intervene effectively if they develop appropriate insights, tools, and methodologies ahead of the actual deployment of ubiquitous systems.” (pp.173-174)

This stuff is real, and it is here. Greenfield points to the existence of systems such as Octopus in Hong Kong and E-ZPass in the US. Honestly, if you think beyond the tools and methods we’ve been using to communicate our designs, IxDs and IAs are well-equipped to handle everyware. No, you won’t be required to draw wireframes or sitemaps; but you’ll damn well need to put in a lot of the thinking designers do. And you’ll still need to be able to communicate those designs. It’s time to get our hands dirty:

“What fully operational systems such as Octopus and E-ZPass tell us is that privacy concerns, social implications, ethical questions, and practical details of the user experience are no longer matters for conjecture or supposition. With ubiquitous systems available for empirical enquiry, these things we need to focus on today.” (p.217)

So, to reiterate the question I started with: are there any UX designers out there that have made the switch from web-work to ubicomp? Anyone considering it? I’d love to hear about your experiences.

Albert Heijn RFID epiphany

I was standing in line at the local Albert Heijn1 the other day and had a futurist’s ‘epiphany’. I had three items in my basket. The couple in front of me had a shopping cart full of stuff. I had an empty stomach and was tired from a long day’s work. They were taking their time placing their items on the short conveyor belt. The cashier took her time scanning each individual item. The couple had a lot of stuff and only a few bags to put their stuff in. Did I mention this was taking a looong time?

I wasn’t being impatient though; I used the time to let my thoughts wander. For some reason my associative brain became occupied with RFID. Many of the items on the Albert Heijn shelves already have RFID tags in them, which are used to track inventory. Soon, all of the items will be tagged with these chips. That will make restocking easy. But it occurred to me that it might also make the situation I was in at that moment (standing there waiting for a large number of items to be moved from a cart, scanned, packed in bags and placed back in the cart again) history.

Imagine driving your overflowing shopping cart through a stall and having all the items read simultaneously. If you’d wanted to get rid of the friendly cashier you could put automatic gates on the cash register and have them open once all items were paid for (by old-fashioned debit or credit card or newfangled RFID enabled payment token). Walk up to the gate, swipe your token past a reader and have the gate open, no matter how many items you have with you.
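To spell out the flow I was imagining, here is a back-of-a-napkin sketch in Python. Every tag ID, price and function name is invented, and real RFID middleware and payment rails are vastly more involved; this only sketches the basic logic of one bulk read at the gate, one charge, one gate decision.

```python
# All tag IDs, prices and function names here are invented for illustration.

PRICES = {"tag-041": 1.29, "tag-173": 0.89, "tag-812": 3.49}  # per RFID tag

def read_cart(antenna_reads):
    """The gate antenna reports every tag it managed to read in one pass."""
    return [tag for tag in antenna_reads if tag in PRICES]

def checkout(antenna_reads, token_balance):
    """Total the cart, charge the payment token, open the gate on success."""
    tags = read_cart(antenna_reads)
    total = sum(PRICES[tag] for tag in tags)
    if total <= token_balance:
        return {"gate": "open", "charged": total, "items": len(tags)}
    return {"gate": "closed", "charged": 0.0, "items": len(tags)}

print(checkout(["tag-041", "tag-173", "tag-812"], token_balance=10.00))
# {'gate': 'open', 'charged': 5.67, 'items': 3}
```

Note that a tag the antenna fails to read simply never gets charged; that blind spot is exactly where the trust question below comes in.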

No more checking the receipt for items that were mistakenly scanned twice (or not scanned at all, if you’re that honest). No more waiting for people with too much stuff in their cart that they don’t really need. And no more underpaid pubescent cashiers to ruin your day with their bad manners!

Actually, would that ever happen? It would take a great deal of trust from everyone involved; there is a lot of trust implicit in the current exchange. Handing your stuff one item at a time to an actual human being and having that person scan it is a very physical, tangible way to get a sense of what you’re paying for, and of whether you’re getting your money’s worth. With completely automated RFID-enabled shopping, that would be lost.

It’s a banal, pedestrian and simple example of how this stuff could change your everyday life, I know, but something to think about, nonetheless.

1. Albert Heijn is the largest supermarket chain in the Netherlands.