Design and machine learning – an annotated reading list

Earlier this year I coached Design for Interaction master students at Delft University of Technology in the course Research Methodology. The students organised three seminars for which I provided the claims and assigned reading. In the seminars they argued about my claims using the Toulmin Model of Argumentation. The readings served as sources for backing and evidence.

The claims and readings were all related to my nascent research project about machine learning. We delved into both designing for machine learning, and using machine learning as a design tool.

Below are the readings I assigned, with some notes on each, which should help you decide if you want to dive into them yourself.

Hebron, Patrick. 2016. Machine Learning for Designers. Sebastopol: O’Reilly.

The only non-academic piece in this list. It served to get all students on the same page with regards to what machine learning is, its applications in interaction design, and the challenges commonly encountered. I still can’t think of another single resource that is as good a starting point for the subject as this one.

Fiebrink, Rebecca. 2016. “Machine Learning as Meta-Instrument: Human-Machine Partnerships Shaping Expressive Instrumental Creation.” In Musical Instruments in the 21st Century, 14:137–51. Singapore: Springer Singapore. doi:10.1007/978-981-10-2951-6_10.

Fiebrink’s Wekinator is groundbreaking, fun and inspiring, so I had to include some of her writing in this list. This is mostly of interest for those looking into the use of machine learning for design and other creative and artistic endeavours. An important idea explored here is that tools that make use of (interactive, supervised) machine learning can be thought of as instruments. Using such a tool is like playing or performing, exploring a possibility space, engaging in a dialogue with the tool. For a tool to feel like an instrument, a tight action-feedback loop is required.
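To make that idea a bit more tangible, here is a minimal sketch of such an interactive supervised learning loop. It is purely illustrative (Wekinator is not built this way, and the feature values and preset names are made up); it assumes Python with scikit-learn available.

    # A minimal interactive supervised learning loop, in the spirit of
    # Wekinator but not its actual implementation. Assumes scikit-learn.
    from sklearn.neighbors import KNeighborsClassifier

    examples, labels = [], []
    model = KNeighborsClassifier(n_neighbors=1)

    def demonstrate(features, label):
        """The performer adds one example; the model is retrained at once,
        which is what keeps the action-feedback loop tight."""
        examples.append(features)
        labels.append(label)
        model.fit(examples, labels)

    def perform(features):
        """Map live input (e.g. a gesture) to output using the current model."""
        return model.predict([features])[0]

    # Hypothetical mapping from a 2D gesture position to a sound preset.
    demonstrate([0.1, 0.2], "soft_pad")
    demonstrate([0.9, 0.8], "bright_lead")
    print(perform([0.2, 0.3]))  # -> soft_pad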

Dove, Graham, Kim Halskov, Jodi Forlizzi, and John Zimmerman. 2017. “UX Design Innovation: Challenges for Working with Machine Learning as a Design Material.” In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. New York, New York, USA: ACM. doi:10.1145/3025453.3025739.

A really good survey of how designers currently deal with machine learning. Key takeaways include that in most cases, the application of machine learning is still engineering-led as opposed to design-led, which hampers the creation of non-obvious machine learning applications. It also makes it hard for designers to consider ethical implications of design choices. A key reason for this is that at the moment, prototyping with machine learning is prohibitively cumbersome.

Fiebrink, Rebecca, Perry R. Cook, and Dan Trueman. 2011. “Human Model Evaluation in Interactive Supervised Learning.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 147. New York, New York, USA: ACM Press. doi:10.1145/1978942.1978965.

The second Fiebrink piece in this list, which is more of a deep dive into how people use Wekinator. As with the chapter listed above, this is required reading for those working on design tools which make use of interactive machine learning. An important finding here is that users of intelligent design tools might have very different criteria for evaluating the ‘correctness’ of a trained model than engineers do. Such criteria are likely subjective, and evaluation requires first-hand use of the model in real time.

Bostrom, Nick, and Eliezer Yudkowsky. 2014. “The Ethics of Artificial Intelligence.” In The Cambridge Handbook of Artificial Intelligence, edited by Keith Frankish and William M Ramsey, 316–34. Cambridge: Cambridge University Press. doi:10.1017/CBO9781139046855.020.

Bostrom is known for his somewhat crazy but thought-provoking book on superintelligence, and although a large part of this chapter is about the ethics of general artificial intelligence (which is at the very least still some way off), the first section discusses the ethics of current “narrow” artificial intelligence. It makes for a good checklist of things designers should keep in mind when they create new applications of machine learning. Key insight: when a machine learning system takes on work with social dimensions, work previously performed by humans, the system inherits that work’s social requirements.

Yang, Qian, John Zimmerman, Aaron Steinfeld, and Anthony Tomasic. 2016. “Planning Adaptive Mobile Experiences When Wireframing.” In Proceedings of the 2016 ACM Conference on Designing Interactive Systems. New York, New York, USA: ACM. doi:10.1145/2901790.2901858.

Finally, a feet-in-the-mud exploration of what it actually means to design for machine learning with the tools most commonly used by designers today: drawings and diagrams of various sorts. In this case the focus is on using machine learning to make an interface adaptive. It includes an interesting discussion of how to balance the use of implicit and explicit user inputs for adaptation, and how to deal with inference errors. Once again the limitations of current sketching and prototyping tools are mentioned, and related to the need for designers to develop tacit knowledge about machine learning. Such tacit knowledge will only be gained when designers can work with machine learning in a hands-on manner.

Supplemental material

Floyd, Christiane. 1984. “A Systematic Look at Prototyping.” In Approaches to Prototyping, 1–18. Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-642-69796-8_1.

I provided this to students so that they get some additional grounding in the various kinds of prototyping that are out there. It helps to prevent reductive notions of prototyping, and it makes for a nice complement to Buxton’s work on sketching.

Blevis, Eli, Youn-kyung Lim, and Erik Stolterman. 2006. “Regarding Software as a Material of Design.”

Some of the papers refer to machine learning as a “design material” and this paper helps to understand what that idea means. Software is a material without qualities (it is extremely malleable, it can simulate nearly anything). Yet, it helps to consider it as a physical material in the metaphorical sense because we can then apply ways of design thinking and doing to software programming.

Status update

This is not exactly a now page, but I thought I would write up what I am doing at the moment since last reporting on my status in my end-of-year report.

The majority of my workdays are spent doing freelance design consulting. My primary gig has been through Eend at the Dutch Victim Support Foundation, where until very recently I was part of a team building online services. I helped out with product strategy, setting up a lean UX design process, and getting an integrated agile design and development team up and running. The first services are now shipping so it is time for me to move on, after 10 months of very gratifying work. I really enjoy working in the public sector and I hope to be doing more of it in future.

So yes, this means I am available and you can hire me to do strategy and design for software products and services. Just send me an email.

Shortly before the Dutch national elections of this year, Iskander and I gathered a group of fellow tech workers under the banner of “Tech Solidarity NL” to discuss the concerning lurch to the right in national politics and what our field can do about it. This has developed into a small but active community who gather monthly to educate ourselves and develop plans for collective action. I am getting a huge boost out of this. Figuring out how to be a leftist in this day and age is not easy. The only way to do it is to practice, and for that, reflection with peers is invaluable. Building and facilitating a group like this is hugely educational too. I have learned a lot about how a community is bootstrapped and nurtured.

If you are in the Netherlands, your politics are left of center, and you work in technology, consider yourself invited to join.

And finally, the last major thing on my plate is a continuing effort to secure a PhD position for myself. I am getting great support from people at Delft University of Technology, in particular Gerd Kortuem. I am focusing on internet of things products that have features driven by machine learning. My ultimate aim is to develop prototyping tools for design and development teams that will help them create more innovative and more ethical solutions. The first step for this will be to conduct field research inside companies who are creating such products right now. So I am reaching out to people to see if I can secure a reasonable number of potential collaborators for this, which will go a long way in proving the feasibility of my whole plan.

If you know of any companies that develop consumer-facing products that have a connected hardware component and make use of machine learning to drive features, do let me know.

That’s about it. Freelance UX consulting, leftist tech-worker organising and design-for-machine-learning research. Quite happy with that mix, really.

Doing UX inside of Scrum

Some notes on how I am currently “doing user experience” inside of Scrum. This approach has evolved from my projects at Hubbub as well as, more recently, my work with ARTO and on a project at Edenspiekermann. So I have found it works with both startups and agency-style projects.

The starting point is to understand that Scrum is intended to be a container. It is a process framework. It should be able to hold any other activity you think you need as a team. So if we feel we need to add UX somehow, we should try to make it part of Scrum and not something that is tacked onto Scrum. Why not tack something on? Because it signals design is somehow distinct from development. And the whole point of doing agile is to have cross-functional teams. If you set up a separate process for design you are highly likely not to benefit from the full collective intelligence of the combined design and development team. So no, design needs to be inside of the Scrum container.

Staggered sprints are not the answer either, because you are still splitting the team into design and development, hampering cross-collaboration and transparency. You’re basically inviting Taylorism back into your process—the very thing you were trying to get away from.

When you are uncomfortable with putting designers and developers all in the same team and the same process, the answer is not to make your process more elaborate, parcel things up, and decrease “messy” interactions. The answer is to increase conversation, not eliminate it.

It turns out things aren’t remotely as complicated as they appear to be. The key is understanding Scrum’s events. The big event holding all other events is the sprint. The sprint outputs a releasable increment of “done” product. The development team does everything required to achieve the sprint goal that was collaboratively determined during sprint planning. Naturally this includes any design needed for the product. I think of this as the ‘production’ type of design. It typically consists mostly of UI design. There may already be some preliminary UI design available at the start of the sprint, but it does not have to be finished.

What about the kind of design that is required for figuring out what to build in the first place? It might not be obvious at first, but Scrum actually has an ongoing process which readily accommodates it: backlog refinement. These are all the activities required to get a product backlog item in shape for sprint planning. This is emphatically not a solo show for the product owner to conduct. It is something the whole team collaborates on, developers and designers alike. In my experience designers are great at facilitating backlog refinement sessions. At the whiteboard, figuring stuff out with the whole team, ‘Lean UX’ style.

I will admit product backlog refinement is Scrum’s weak point. Where it offers a lot of structure for the sprints, it offers hardly any for the backlog refinement (or grooming as some call it). But that’s okay, we can evolve our own.

I like to use Kanban to manage the process of backlog refinement. Items come into the pipeline as things we want to elaborate because we have decided we want to build them (in some form or other, possibly just as an experiment) in the next sprint or two. An item then goes through various stages of elaboration: at the very least capturing requirements in the form of user stories or job stories, doing sketches, a lo-fi prototype, mockups and a hi-fi prototype, and finally breaking the item down into work to be done and attaching an estimate to it. At that point it is ready to be part of a sprint. Crucially, during this lifecycle of an item as it is being refined, we can and should do user research if we feel we need more data, or user testing if we feel it is too risky to commit to a feature outright.
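Purely as an illustration, the pipeline can be thought of as a simple sequence of stages an item passes through before it is ‘ready’. A toy model in code (the stage names are my own, nothing standard about them):

    # A toy model of the backlog refinement pipeline described above.
    # Stage names are illustrative only.
    STAGES = ["captured", "sketched", "lo-fi prototype", "mockups",
              "hi-fi prototype", "broken down and estimated"]

    def advance(item):
        """Move a backlog item to the next stage of refinement."""
        i = STAGES.index(item["stage"])
        if i + 1 < len(STAGES):
            item["stage"] = STAGES[i + 1]
        return item

    def is_ready(item):
        """Ready for sprint planning once fully refined and estimated."""
        return item["stage"] == STAGES[-1] and item.get("estimate") is not None

    item = {"title": "Sign-up experiment", "stage": "captured", "estimate": None}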

For this kind of figuring stuff out, this ‘planning’ type of design, it makes no sense to have it be part of a sprint-like structure, because the work required to get an item to a ‘ready’ state is much more unpredictable. The looser refinement flow exists to eliminate that uncertainty by the time we commit to an item in a sprint.

So between the sprint and backlog refinement, Scrum readily accommodates design. ‘Production’ type design happens inside of the sprint and designers are considered part of the development team. ‘Planning’ type of design happens as part of backlog refinement.

So there is no need to tack on a separate process. Keeping design inside Scrum keeps the process simple and understandable, thus increasing transparency for the whole team. It prevents design from becoming a black box to others. And when we make design part of the container process framework that is Scrum, we reap the rewards of the team’s collective intelligence and we increase our agility.

Prototyping is a team sport

Lately I have been binging on books, presentations and articles related to ‘Lean UX’. I don’t like the term, but then I don’t like the tech industry’s love for inventing a new label for every damn thing. I do like the things it emphasises: shared understanding, deep collaboration, continuous user feedback. These are principles that have always implicitly guided the choices I made when leading teams at Hubbub and now also as a member of several teams in the role of product designer.

In all these lean UX readings a thing that keeps coming up again and again is prototyping. Prototypes are the go-to way of doing ‘experiments’, in lean-speak. Other things can be done as well—surveys, interviews, whatever—but more often than not, assumptions are tested with prototypes.

Which is great! And also unsurprising as prototyping has really been embraced by the tech world. And tools for rapid prototyping are getting a lot of attention and interest as a result. However, this comes with a couple of risks. For one, sometimes it is fine to stick to paper. But the lure of shiny prototyping tools is strong. You’d rather not show a crappy drawing to a user. What if they hate it? However, high fidelity prototyping is always more costly than paper. So although well-intentioned, prototyping tools can encourage wastefulness, the bane of lean.

There is a bigger danger which runs against the lean ethos, though. Some tools afford deep collaboration more than others. Let’s be real: none afford deeper collaboration than paper and whiteboards. There is one person behind the controls when prototyping with a tool. So in my view, one should only ever progress to that step once a team effort has been made to hash out the rough outlines of what is to be prototyped. Basically: always paper prototype the digital prototype. Together.

I have had a lot of fun lately playing with browser prototypes and with prototyping in Framer. But as I was getting back into all of this I did notice this risk: All of a sudden there is a person on the team who does the prototypes. Unless this solo prototyping is preceded by shared prototyping, this is a problem. Because the rest of the team is left out of the thinking-through-making which makes the prototyping process so valuable in addition to the testable artefacts it outputs.

It is, I think, a key oversight of the ‘should designers code’ debaters, and to an extent one made by all prototyping tool manufacturers: individuals don’t prototype, teams do. Prototyping is a team sport. And so the success of a tool depends not only on how well it supports individual prototyping activities but also on how well it embeds itself in collaborative workflows.

In addition to the tools themselves getting better at supporting collaborative workflows, I would also love to see more tutorials, both official and from the community, about how to use a prototyping tool within the larger context of a team doing some form of agile. Most tutorials now focus on “how do I make this thing with this tool”. Useful, up to a point. But a large part of prototyping is to arrive at “the thing” together.

One of the lean UX things I devoured was this presentation by Bill Scott in which he talks about aligning a prototyping and a development tech stack, so that the gap between design and engineering is bridged not just with processes but also with tooling. His example applies to web development and app development using web technologies. I wonder what a similar approach looks like for native mobile app development. But this is the sort of thing I am talking about: Smart thinking about how to actually do this lean thing in the real world. I believe organising ourselves so that we can prototype as a team is absolutely key. I will pick my tools and processes accordingly in future.

All of the above is as usual mostly a reminder to self: As a designer your role is not to go off and work solo on brilliant prototypes. Your role is to facilitate such efforts by the whole team. Sure, there will be solo deep designerly crafting happening. But it will not add up to anything if it is not embedded in a collaborative design and development framework.

Nobody does thoroughly argued presentations quite like Sebastian. This is good stuff on ethics and design.

I decided to share some thoughts it sparked via Twitter and ended up ranting a bit:

I recently talked about ethics to a bunch of “behavior designers” and found myself concluding that any designed system that does not allow for user appropriation is fundamentally unethical, because, as you rightly point out, what constitutes the good life is a personal matter. Imposing it is an inherently violent act. A lot of design is a form of technologically mediated violence. Getting people to do your bidding, however well intended. Which, given my own vocation and work in the past, is a kind of troubling thought to arrive at… Help?

Sebastian makes his best point on slides 113-114. Ethical design isn’t about doing the least harm, but about doing the most good. And, to come back to my Twitter rant, for me the ultimate good is for others to be free. Hence non-prescriptive design.

(via Designing the Good Life: Ethics and User Experience Design)

On sketching

Catching up with this slightly neglected blog (it’s been 6 weeks since the last proper post). I’d like to start by telling you about a small thing I helped out with last week. Peter Boersma1 asked me to help out with one of his UX Cocktail Hours. He was inspired by a recent IxDA Studio event where, instead of just chatting and drinking, designers actually made stuff. (Gasp!) Peter wanted to do a workshop where attendees collaborated on sketching a solution to a given design problem.

Part of my contribution to the evening was a short presentation on the theory and practice of sketching. On the theory side, I referenced Bill Buxton’s list of qualities that define what a sketch is2, and emphasized that this means a sketch can be done in any material, not necessarily pencil and paper. Furthermore I discussed why sketching works, using part of an article on embodied interaction3. The main point there, as far as I am concerned, is that when sketching, we as designers have the benefit of ‘backtalk’ from our materials, which can provide us with new insights. I wrapped up the presentation with a case study of a project I did a while back with the Amsterdam-based agency Info.nl4 for a social web start-up aimed at independent professionals. In the project I went quite far in using sketches to not only develop the design, but also collaboratively construct it with the client, technologists and others.

The whole thing was recorded; you can find a video of the talk at Vimeo (thanks to Iskander and Alper). I also uploaded the slides to SlideShare (sans notes).

The second, and most interesting, part of the evening was the workshop itself. This was set up as follows: Peter and I had prepared a fictional case concerning peer-to-peer energy. We used the Dutch company Qurrent as an example, and asked the participants to conceptualise a way to encourage use of Qurrent’s product range. The aim was to have people be more energy efficient, and share surplus energy they had generated with the Qurrent community. The participants split up into teams of around ten people each, and went to work. We gave them around one hour to design a solution, using only pen and paper. Afterwards, they presented the outcome of their work to each other. For each team, we asked one participant to critique the work by mentioning one thing he or she liked, and one thing that could be improved. The team was then given a chance to reply. We also asked each team to briefly reflect on their working process. At the end of the evening everyone was given a chance to vote for their favourite design. The winner received a prize.5

Wrapping up, I think what I liked most about the workshop was seeing the many different ways the teams approached the problem (many of the participants did not know each other beforehand). Group dynamics varied hugely. I think it was valuable to have each team share their experiences on this front with each other. One thing that I think we could improve was the case itself; next time I would like to provide participants with a more focused, more richly detailed briefing for them to sink their teeth into. That might result in an assignment that is more about structure and behaviour (or even interface) and less about concepts and values. It would be good to see how sketching functions in such a context.

  1. the Netherlands’ tallest IA and one of several famous Peters who work in UX
  2. taken from his wonderful book Sketching User Experiences
  3. titled How Bodies Matter (PDF) by Klemmer and Takayama
  4. who were also the hosts of this event
  5. I think it’s interesting to note that the winner had a remarkable concept, but in my opinion it was not the best example of the power of sketching. Apparently the audience valued product over process.

A day of playing around with multi-touch and RoomWare

Last Saturday I attended a RoomWare workshop. The people of CanTouch were there too, and brought one of their prototype multi-touch tables. The aim for the day was to come up with applications of RoomWare (open source software that can sense presence of people in spaces) and multi-touch. I attended primarily because it was a good opportunity to spend a day messing around with a table.

Attendance was multifaceted, so while programmers were putting together a proof-of-concept, designers (such as Alexander Zeh, James Burke and I) came up with concepts for new interactions. The proof-of-concept was up and running at the end of the day: The table could sense who was in the room and display his or her Flickr photos, which you could then move around, scale, rotate, etc. in the typical multi-touch fashion.

The concepts designers came up with mainly focused on pulling in Last.fm data (again using RoomWare’s sensing capabilities) and displaying it for group-based exploration. Here’s a storyboard I quickly whipped up of one such application:

RoomWare + CanTouch + Last.fm

The storyboard shows how you can add yourself from a list of people present in the room. Your top artists flock around you. When more people are added, lines are drawn between you. The thickness of the line represents how similar your tastes are, according to Last.fm’s taste-o-meter. Also, shared top artists flock in such a way as to be closest to all related people. Finally, artists can be acted on to listen to music.
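For what it’s worth, here is a minimal sketch of the two layout rules in that storyboard: line thickness proportional to taste similarity, and shared artists placed at the centre of the people who share them. The similarity score would come from Last.fm’s taste-o-meter; everything else (values, coordinates) is made up for illustration.

    # Toy layout rules for the tabletop visualisation in the storyboard.
    # Similarity is a 0..1 score (e.g. from Last.fm's taste-o-meter);
    # positions are (x, y) points on the table surface.

    def line_thickness(similarity, max_px=12):
        """Thicker connecting line for more similar tastes."""
        return 1 + similarity * (max_px - 1)

    def shared_artist_position(people_positions):
        """Place a shared artist at the centroid of the people who share it."""
        xs = [x for x, _ in people_positions]
        ys = [y for _, y in people_positions]
        return sum(xs) / len(xs), sum(ys) / len(ys)

    print(line_thickness(0.6))                                # -> 7.6
    print(shared_artist_position([(0.2, 0.5), (0.8, 0.5)]))   # -> (0.5, 0.5)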

When I was sketching this, it became apparent that the orientation of elements should follow very different rules than on regular screens. I chose to sketch things so that they all point outwards, with the middle of the table as the orientation point.
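A quick sketch of that orientation rule, just to show how simple it is in principle: rotate each element by the angle of the vector from the table’s centre to the element, so it reads upright for whoever is nearest that edge (the coordinate convention is an assumption).

    import math

    def outward_rotation(element_pos, table_centre=(0.5, 0.5)):
        """Rotation (degrees) that points an element away from the table centre."""
        dx = element_pos[0] - table_centre[0]
        dy = element_pos[1] - table_centre[1]
        return math.degrees(math.atan2(dy, dx))

    print(outward_rotation((1.0, 0.5)))  # right-hand edge -> 0.0
    print(outward_rotation((0.5, 1.0)))  # far edge -> 90.0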

By spending a day immersed in multi-touch stuff, some interesting design challenges became apparent:

  • With tabletop surfaces, stuff is closer or further away physically. Proximity of elements can be unintentionally interpreted as saying something about aspects such as importance, relevance, etc. Designers need to be even more aware of placement than before, plus conventions from vertically oriented screens no longer apply. Top-of-screen becomes furthest away and therefore least prominent instead of most important.
  • With group-based interactions, it becomes tricky to determine who to address and where to address him or her. Sometimes the system should address the group as a whole. When 5 people are standing around a table, text-based interfaces become problematic since what is legible from one end of the table is unintelligible from the other. New conventions need to be developed for this as well. Alexander and I philosophized about placing text along circles and animating them so that they circulate around the table, for instance.
  • Besides these, many other interface challenges present themselves. One crucial piece of information for solving many of these is knowing where people are located around the table. This issue can be approached from different angles. By incorporating sensors in the table, detection may be automated and interfaces could be made to adapt automatically. This is the techno-centric angle. I am not convinced this is the way to go, because it diminishes people’s control over the experience. I would prefer to make the interface itself adjustable in natural ways, so that people can mold the representation to suit their context. With situated technologies like this, auto-magical adaptation is an “AI-hard” problem, and the price of failure is a severely degraded user experience from which people cannot recover because the system won’t let them.

All in all the workshop was a wonderful day of tinkering with like-minded individuals from radically different backgrounds. As a designer, I think this is one of the best ways to be involved with open source projects. On a day like this, technologists can be exposed to new interaction concepts while they are hacking away. At the same time designers get that rare opportunity to play around with technology as it is shaped. Quick-and-dirty sketches like the ones Alexander and I came up with are definitely the way to communicate ideas. The goal is to suggest, not to describe, after all. Technologists should feel free to elaborate and build on what designers come up with and vice-versa. I am curious to see which parts of what we came up with will find their way into future RoomWare projects.

Urban procedural rhetorics — transcript of my TWAB 2008 talk

This is a transcript of my presentation at The Web and Beyond 2008: Mobility in Amsterdam on 22 May. Since the majority of paying attendees were local I presented in Dutch. However, English appears to be the lingua franca of the internet, so here I offer a translation. I have uploaded the slides to SlideShare and hope to be able to share a video recording of the whole thing soon.

Update: I have uploaded a video of the presentation to Vimeo. Many thanks to Almar van der Krogt for recording this.

In 1966 a number of members of Provo took to the streets of Amsterdam carrying blank banners. Provo was a nonviolent anarchist movement. They primarily occupied themselves with provoking the authorities in a “ludic” manner. Nothing was written on their banners because the mayor of Amsterdam had banned the slogans “freedom of speech”, “democracy” and “right to demonstrate”. Regardless, the members were arrested by police, showing that the authorities did not respect their right to demonstrate.1

Good afternoon everyone, my name is Kars Alfrink, I’m a freelance interaction designer. Today I’d like to talk about play in public space. I believe that with the arrival of ubiquitous computing in the city new forms of play will be made possible. The technologies we shape will be used for play whether we want them to be or not. As William Gibson writes in Burning Chrome:

“…the street finds its own uses for things”

For example: Skateboarding as we now know it — with its emphasis on aerial acrobatics — started in empty pools like this one. That was done without permission, of course…

Only later did half-pipes, ramps, verts (which, by the way, is derived from ‘vertical’) and skateparks arrive: areas where skateboarding is tolerated. Skateboarding would not be what it is today without those first few empty pools.2


  1. The website of Gramschap contains a chronology of the Provo movement in Dutch.
  2. For a vivid account of the emergence of the vertical style of skateboarding see the documentary film Dogtown and Z-Boys.

Sketching the experience of toys

A frame from the Sketch-A-Move video

“Play is the highest form of research.”

—Albert Einstein1

That’s what I always say when I’m playing games, too.

I really liked Bill Buxton’s book Sketching User Experiences. I like it because Buxton defends design as a legitimate profession separate from other disciplines—such as engineering—while at the same time showing that designers (no matter how brilliant) can only succeed in the right ecosystem. I also like the fact that he identifies sketching (in its many forms) as a defining activity of the design profession. The many examples he shows are very inspiring.

One in particular stood out for me, which is the project Sketch-A-Move by Anab Jain and Louise Klinker, done in 2004 at the RCA in London. The image above is taken from the video they created to illustrate their concept. It’s about cars auto-magically driving along trajectories that you draw on their roof. You can watch the video over at the book’s companion website. It’s a very good example of visualizing an interactive product in a very compelling way without actually building it. This was all faked; if you want to find out how, buy the book.2

The great thing about the video is that it not only illustrates how the concept works, it also gives you a sense of what the experience of using it would be like. As Buxton writes:3

“You see, toys are not about toys. Toys are about play and the experience of fun that they help foster. And that is what this video really shows. That, and the power of video to go beyond simply documenting a concept to communicating something about experience in a very visceral way.”

Not only does it communicate the fun you would have playing with it, I think this way of sketching actually helped the designers get a sense themselves of whether what they had come up with was fun. You can tell they are actually playing, being surprised by unexpected outcomes, etc.

The role of play in design is discussed by Buxton as well, although he admits he needed to be prompted by a friend of his: Alex Manu, a teacher at OCAD in Toronto, who writes in an email to Buxton:4

“Without play imagination dies.”

“Challenges to imagination are the keys to creativity. The skill of retrieving imagination resides in the mastery of play. The ecology of play is the ecology of the possible. Possibility incubates creativity.”

Which Buxton rephrases in one of his own personal mantras:5

“These things are far too important to take seriously.”

All of which has made me realize that if I’m not having some sort of fun while designing, I’m doing something wrong. It might be worth considering switching from one sketching technique to another. It might help me get a different perspective on the problem, and yield new possible solutions. Buxton’s book is a treasure trove of sketching techniques. There is no excuse for being bored while designing anymore.

  1. Sketching User Experiences p.349
  2. No, I’m not getting a commission to say that.
  3. Ibid. 1, at 325
  4. Ibid., at 263
  5. Ibid.

Notes on play, exploration, challenge and learning

(My reading notes are piling up so here’s an attempt to clear out at least a few of them.)

Part of the play experience of many digital games is figuring out how the damn thing works in the first place. In Rules of Play on page 210:

“[…] as the player plays with FLUID, interaction and observation reveals the underlying principles of the system. In this case the hidden information gradually revealed through play is the rules of the simulation itself. Part of the play of FLUID is the discovery of the game rules as information.”

(Sadly, I could not find a link to the game mentioned.)

I did not give Donald Norman all the credit he was due in my earlier post. He doesn’t have a blind spot for games. Quite the contrary. For instance, he explains how to make systems easier to learn and points to games in the process. On page 183 of The Design of Everyday Things:

“One important method of making systems easier to learn and to use is to make them explorable, to encourage the user to experiment and learn the possibilities through active exploration.”

The way to do this is through direct manipulation, writes Norman. He also reminds us that it’s not necessary to make every system explorable.1 But (on page 184):

“[…] if the job is critical, novel, or ill-specified, or if you do not yet know exactly what is to be done, then you need direct, first-person interaction.”

So much written after DOET seems to have added little to the conversation. I’m surprised how useful this classic still is.

I’m reminded of a section of Matt Jones’s Interaction 08 talk—which I watched yesterday. He went through a number of information visualisations and said he’d like to add more stuff like that into Dopplr, to allow people to play with their data. He even compared this act of play to Will Wright’s concept of possibility space.2 He also briefly mentioned that easily accessible tools for creating information visualisations might become valuable for designers working with complex sets of data.

Norman actually points to games for inspiration, by the way. On page 184 just before the previous quote:

“Some computer systems offer direct manipulation, first-person interactions, good examples being the driving, flying, and sports games that are commonplace in arcades and on home machines. In these games, the feeling of direct control over the actions is an essential part of the task.”

And so on.

One of the most useful parts of Dan Saffer’s book on interaction design is where he explains the differences between customisation, personalisation, adaptation and hacking. He notes that an adaptive system can be designed to induce flow—balancing challenge with the skill of the user. In games, there is something called dynamic difficulty adjustment (DDA) which has very similar aims.
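To make the comparison concrete, a dynamic difficulty adjustment loop might look something like this minimal sketch: nudge difficulty toward a target success rate so the player stays between boredom and frustration. This is not taken from Saffer’s book; the parameter values are illustrative only.

    # A deliberately simple dynamic difficulty adjustment (DDA) sketch.
    # Difficulty is a 0..1 value; recent_successes is a list of 0/1 outcomes.

    def adjust_difficulty(difficulty, recent_successes, target=0.7, step=0.05):
        """Raise difficulty when the player succeeds too often, lower it
        when they fail too often, aiming for the target success rate."""
        success_rate = sum(recent_successes) / len(recent_successes)
        if success_rate > target:
            difficulty += step
        elif success_rate < target:
            difficulty -= step
        return max(0.0, min(1.0, difficulty))

    # The player won 9 of the last 10 encounters: difficulty creeps up.
    print(adjust_difficulty(0.5, [1, 1, 1, 0, 1, 1, 1, 1, 1, 1]))  # -> 0.55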

Salen and Zimmerman have their doubts about DDA though. In Rules of Play on page 223 they write:

“Playing a game becomes less like learning an expressive language and more like being the sole audience member for a participatory, improvisational performance, where the performers adjust their actions to how you interact with them. Are you then playing the game, or is it playing you?”

Perhaps, but it all depends on what DDA actually adjusts. The technique might be objectionable in a game (where a large part of the point is overcoming challenge) but in other systems many of these objections do not apply.

“With a successful adaptive design, the product fits the user’s life and environment as though it were custom made.”

(Designing for Interaction, page 162.)

Adaptive systems explicitly anticipate transformative play. They allow themselves to be changed through a person’s interactions with them.3

A characteristic of good interaction design is playfulness, writes Mr. Saffer in his book on page 67:

“Through serious play, we seek out new products, services and features and then try them to see how they work. How many times have you pushed a button just to see what it did?”

The funny thing is, the conditions for play according to Saffer are very similar to some of the basic guidelines Norman offers: make users feel comfortable, reduce the chance for errors, and if errors do occur, make sure the consequences are small—by allowing users to undo, for instance.

Mr. Norman writes that in games “designers deliberately flout the laws of understandability and usability” (p.205). Although even in games: “[the] rules [of usability] must be applied intelligently, for ease of use or difficulty of use” (p.208).

By now, it should be clear making interactions playful is very different from making them game-like.

  1. Apparently, “explorable” isn’t a proper English word, but if it’s good enough for Mr. Norman it’s good enough for me.
  2. I blogged about possibility space before here.
  3. Yes, I know I blogged about adaptive design before. Also about flow and adaptation, it seems.