The experience of playful IAs

Solving a Rubik's Cube

It’s time for a short update on my thinking about Playful IAs (the topic of my Euro IA Summit talk). One of the under-served aspects so far is the actual user experience of an architecture that is playful.

Brian Sutton-Smith describes a model of the ways in which games are experienced in his book Toys as Culture. I first came across this book (not surprisingly) in Rules of Play. He lists five aspects:

  1. Visual scanning
  2. Auditory discrimination
  3. Motor responses
  4. Concentration
  5. Perceptual patterns of learning

The fifth is the most important for my subject.

Game design, like the design of emergent IAs, is a second-order design problem: you can only shape the user’s experience indirectly. One of the most important sources of pleasure for users is the way you offer feedback on the ways they have explored and discovered the information space.

Obviously, I’m not saying you should make your service deliberately hard to use. What I am saying is that if you’re interested in offering a playful experience on the level of IA, then Sutton-Smith’s perceptual patterns of learning is the best-suited experiential dimension to work with.
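
To make this slightly more concrete, here is a minimal sketch (in Python, with entirely hypothetical page and topic names) of one way a system could remember which parts of an information space a visitor has touched and reflect that back to them. This is just one possible reading of what feedback on exploration could look like, not a prescription.

```python
from collections import defaultdict

# Hypothetical illustration: remember which pages of an information space a
# visitor has seen, grouped by topic, and turn that into gentle feedback on
# how much of the space they have explored so far.

# A toy information space: topics mapped to the pages that belong to them.
INFORMATION_SPACE = {
    "guitars": {"stratocaster", "telecaster", "les-paul"},
    "amplifiers": {"tube-amps", "modelling-amps"},
    "effects": {"delay", "reverb", "fuzz"},
}

class ExplorationTracker:
    """Remembers which pages a visitor has seen, per topic."""

    def __init__(self):
        self.seen = defaultdict(set)

    def visit(self, page):
        # Record the visit under every topic the page belongs to.
        for topic, pages in INFORMATION_SPACE.items():
            if page in pages:
                self.seen[topic].add(page)

    def feedback(self):
        """Return a per-topic summary of how much of the space was explored."""
        lines = []
        for topic, pages in INFORMATION_SPACE.items():
            explored = len(self.seen[topic])
            lines.append(f"{topic}: explored {explored} of {len(pages)} pages")
        return lines

# Example: a visitor wanders through three pages, then gets a summary back.
tracker = ExplorationTracker()
for page in ["stratocaster", "delay", "reverb"]:
    tracker.visit(page)

for line in tracker.feedback():
    print(line)
```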

Possibility spaces and algorithmic architectures

A screenshot of Sim City.

One of the concepts I plan on exploring in my talk at the Euro IA Summit in Barcelona is ‘possibility spaces’. It’s a term used by Will Wright to describe his view of what a game can be – a space that offers multiple routes and outcomes to its explorer. That idea maps nicely onto one definition of play that Zimmerman and Salen offer in Rules of Play: ‘free movement within a rigid structure’. Some examples of possibility spaces created by Wright are the well-known games SimCity and The Sims.

I think the idea of possibility spaces can help IAs get a firmer grip on ways to realize information spaces that are multi-dimensional and (to use a term put forward by Jesse James Garrett) algorithmic. Algorithmic architectures, according to Garrett, are created ‘on the fly’ based on a set of rules (algorithms) that get their input (ideally) from user behaviour. The example he uses to explain this concept is Amazon.
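
As a rough sketch of what ‘on the fly’ could mean in practice, here is a minimal Python example of one such rule: navigation links that are not hand-authored but generated from observed browsing behaviour, in the spirit of Amazon’s “people who viewed this also viewed…” This is my own toy illustration with made-up session data, not Garrett’s method or Amazon’s actual system.

```python
from collections import Counter, defaultdict
from itertools import combinations

# Hypothetical sketch of an algorithmic navigation rule: links are generated
# from user behaviour, here simple co-occurrence of items within a session.

# Made-up browsing sessions (each a list of item identifiers).
sessions = [
    ["telecaster", "tube-amps", "delay"],
    ["telecaster", "tube-amps"],
    ["stratocaster", "fuzz", "delay"],
    ["telecaster", "delay"],
]

# Count how often two items appear together in the same session.
co_views = defaultdict(Counter)
for session in sessions:
    for a, b in combinations(set(session), 2):
        co_views[a][b] += 1
        co_views[b][a] += 1

def related_links(item, limit=3):
    """Generate 'people who viewed this also viewed...' links on the fly."""
    return [other for other, _ in co_views[item].most_common(limit)]

print(related_links("telecaster"))  # e.g. ['tube-amps', 'delay']
```

The point is not the particular rule but that the architecture emerges from behaviour: change the sessions and the navigation changes with them.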

I’ve found myself in several projects recently that would have benefited from an algorithmic approach. The hard part is explaining its appeal to clients and getting a shared vision of what it means across to the design team. I believe games might be a useful analogy. What do you think?