Designers make choices. They should be able to provide rationales for those choices. (Although sometimes they can’t.) Being able to explain the thinking that went into a design move to yourself, your teammates and clients is part of being a professional.
Move 37. This was the move AlphaGo made which took everyone by surprise because it appeared so wrong at first.
The interesting thing is that in hindsight AlphaGo appeared to have had good reasons for this move, based essentially on a calculation of odds.
If asked at the time, would AlphaGo have been able to provide this rationale?
It’s a thing that pops up in a lot of the reading I am doing around AI. This idea of transparency. In some fields you don’t just want an AI to provide you with a decision, but also with the arguments supporting that decision. An obvious example would be a system that helps diagnose disease. You want it to provide more than just the diagnosis. Because if it turns out to be wrong, you want to be able to say why at the time you thought it was right. This is a social, cultural and also legal requirement.
Although lives don’t depend on it, the same might apply to intelligent design tools. If I am working with a system and it is offering me design directions or solutions, I want to know why it is suggesting these things as well. Because my reason for picking one over the other depends not just on the surface level properties of the design but also the underlying reasons. It might be important because I need to be able to tell stakeholders about it.
A side effect of this is that a designer working with such a system is exposed to machine reasoning about design choices. This could inform their own future thinking too.
Transparent AI might help people improve themselves. A black box can’t teach you much about the craft it’s performing. Looking at outcomes can be inspirational or helpful, but the processes that lead up to them can be equally informative. If not more so.
Imagine working with an intelligent design tool and getting the equivalent of an AlphaGo move 37 moment. Hugely inspirational. Game changer.
This idea gets me much more excited than automating design tasks does.
“The holy grail of a conversational system would be one that’s aware of itself — one that knows its own model and internal structure and allows you to change all of that by talking to it. Imagine being able to tell Siri to tone it down a bit with the jokes and that it would then actually do that.”
His point stuck with me because I think this is of particular importance to creative tools. These need to be flexible so that a variety of people can use them in different circumstances. This adaptability is what lends a tool depth.
The depth I am thinking of in creative tools is similar to the one in games, which appears to be derived from a kind of semi-orderedness. In short, you’re looking for a sweet spot between too simple and too complex.
And of course, you need good defaults.
Back to adaptation. This can happen in at least two ways on the interface level: modal or modeless. A simple example of the former would be to go into a preferences window to change the behaviour of your drawing package. Similarly, modeless adaptation happens when you rearrange some panels to better suit the task at hand.
Returning to Siri, the equivalent of modeless adaptation would be to tell her to tone it down when her sense of humor irks you.
For the modal solution, imagine a humor slider in a settings screen somewhere. This would be a terrible solution because it offers a poor mapping of a control to a personality trait. Can you pinpoint on a scale of 1 to 10 your preferred amount of humor in your hypothetical personal assistant? And anyway, doesn’t it depend on a lot of situational things such as your mood, the particular task you’re trying to complete and so on? In short, this requires something more situated and adaptive.
So just being able to tell Siri to tone it down would be the equivalent of rearranging your Photoshop palettes. And in a next interaction Siri might carefully try some humor again to gauge your response. And if you encourage her, she might be more humorous again.
Enough about funny Siri for now because it’s a bit of a silly example.
Funny Siri, although she’s a bit of a silly example, does illustrate another problem I am trying to wrap my head around. How does an intelligent tool for creativity communicate its internal state? Because it is probabilistic, it can’t be easily mapped to a graphic information display. And so our old way of manipulating state, and more specifically adapting a tool to our needs, becomes very different too.
It seems to be best for an intelligent system to be open to suggestions from users about how to behave. Adapting an intelligent creative tool is less like rearranging your workspace and more like coordinating with a coworker.
My ideal is for this to be done in the same mode (and so using the same controls) as when doing the work itself. I expect this to allow for more fluid interactions, going back and forth between doing the work at hand, and meta-communication about how the system supports the work. I think if we look at how people collaborate this happens a lot, communication and meta-communication going on continuously in the same channels.
We don’t need a self-aware artificial intelligence to do this. We need to apply what computer scientists call supervised learning. The basic idea is to provide a system with example inputs and desired outputs, and let it infer the necessary rules from them. If the results are unsatisfactory, you simply continue training it until it performs well enough.
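The supervised-learning idea can be sketched in a few lines of code. This is a toy nearest-neighbour learner, not how Wekinator or any real system works internally; all names, labels and data points below are invented for illustration:

```python
# A minimal sketch of supervised learning: the "rules" are never written
# by hand; they are inferred from example input/output pairs. A tiny
# 1-nearest-neighbour classifier stands in for the learner.

def train(examples):
    """Training here is trivial: just remember the labelled examples."""
    return list(examples)

def predict(model, point):
    """Classify a new input by the label of its closest training example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(model, key=lambda ex: distance(ex[0], point))
    return nearest[1]

# Example inputs paired with desired outputs -- the only "programming" done.
training_data = [
    ((0.1, 0.2), "quiet"),
    ((0.2, 0.1), "quiet"),
    ((0.9, 0.8), "loud"),
    ((0.8, 0.9), "loud"),
]

model = train(training_data)
print(predict(model, (0.15, 0.15)))  # near the "quiet" examples
print(predict(model, (0.85, 0.85)))  # near the "loud" examples
```

If the predictions are unsatisfactory, you add more labelled examples and retrain; the behaviour is shaped by data, not code.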
A super fun example of this approach is the Wekinator, a piece of machine learning software for creating musical instruments. Below is a video in which Wekinator’s creator Rebecca Fiebrink performs several demos.
Here we have an intelligent system learning from examples. A person manipulating data instead of code to get to a particular desired behaviour. But what Wekinator lacks, and what I expect will be required for this type of thing to really catch on, is for the training to happen in the same mode or medium as the performance. The technology seems to be getting there, but there are many interaction design problems remaining to be solved.
Boris pointed me to CreativeAI, an interesting article about creativity and artificial intelligence. It offers a really nice overview of the development of the idea of augmenting human capabilities through technology. One of the claims the authors make is that artificial intelligence is making creativity more accessible. Because tools with AI in them support humans in a range of creative tasks, shortcutting the long practice traditionally required to acquire the necessary technical skills.
For example, ShadowDraw (PDF) is a program that helps people with freehand drawing by guessing what they are trying to create and showing a dynamically updated ‘shadow image’ on the canvas which people can use as a guide.
It is an interesting idea and in some ways these kinds of software indeed lower the threshold for people to engage in creative tasks. They are good examples of artificial intelligence as partner instead of master or servant.
While reading CreativeAI I wasn’t entirely comfortable, though, and I think my discomfort was caused by two things.
One is that I care about creativity and I think that a good understanding of it and a daily practice at it—in the broad sense of the word—improves lives. I am also in some ways old-fashioned about it and I think the joy of creativity stems from the infinitely high skill ceiling involved and the never-ending practice it affords. Let’s call it the Jiro perspective, after the sushi chef made famous by a wonderful documentary.
So, claiming that creative tools with AI in them can shortcut all of this life-long joyful toil produces a degree of panic in me. Although that is probably a Pastoral worldview, which I would do better to abandon. In a world eaten by software, it’s better to be a Promethean.
The second reason might hold more water but really is more of an open question than something I have researched in any meaningful way. I think there is more to creativity than just the technical skill required and as such the CreativeAI story runs the risk of being reductionist. While reading the article I was also slowly but surely making my way through one of the final chapters of James C. Scott’s Seeing Like a State, which is about the concept of metis.
It is probably the most interesting chapter of the whole book. Scott introduces metis as a form of knowledge different from that produced by science. Here are some quick excerpts from the book that provide a sense of what it is about. But I really can’t do the richness of his description justice here. I am trying to keep this short.
The kind of knowledge required in such endeavors is not deductive knowledge from first principles but rather what Greeks of the classical period called metis, a concept to which we shall return. […] metis is better understood as the kind of knowledge that can be acquired only by long practice at similar but rarely identical tasks, which requires constant adaptation to changing circumstances. […] It is to this kind of knowledge that [socialist writer] Luxemburg appealed when she characterized the building of socialism as “new territory” demanding “improvisation” and “creativity.”
Scott’s argument is about how authoritarian high-modernist schemes privilege scientific knowledge over metis. His exploration of what metis means is super interesting to anyone dedicated to honing a craft, or to cultivating organisations conducive to the development and application of craft in the face of uncertainty. There is a close link between metis and the concept of agility.
So circling back to artificially intelligent tools for creativity, I would be interested in exploring not only how we can diminish the need to acquire the technical skills required, but also how we can accelerate the acquisition of the practical knowledge required to apply such skills in the ever-changing real world. I suggest we expand our understanding of what it means to be creative, but without losing the link to actual practice.
For the ancient Greeks metis became synonymous with a kind of wisdom and cunning best exemplified by such figures as Odysseus and notably also Prometheus. The latter in particular exemplifies the use of creativity towards transformative ends. This is the real promise of AI for creativity in my eyes. Not to simply make it easier to reproduce things that used to be hard to create but to create new kinds of tools which have the capacity to surprise their users and to produce results that were impossible to create before.
I will use the flexibility afforded by this freeing up of time to take stock of where I have come from and where I am headed. ‘Orientation is the Schwerpunkt,’ as Boyd says. I have definitely cycled back through my meta-OODA-loop and am firmly back in the second O.
To make things more interesting I have exchanged the Netherlands for Singapore. I will be here until August. It is going to be fun to explore the things this city has to offer. I am curious what the technology and design scene is like when seen up close. So I hope to do some work locally.
I will take on short commitments. Let’s say no longer than two to three months. Anything goes really, but I am particularly interested in work related to creativity and learning. I am also keen on getting back into teaching.
So if you are in Singapore, work in technology or design and want to have a cup of coffee, drop me a line.
Location 3176 – Boyd introduces a very simple but fundamental reason for why we should care about decision making:
… a basic aim or goal, as individuals, is to improve our capacity for independent action
Location 3183 – the same applies to design and designers. We do not want to be controlled by our circumstances. Boyd was talking to a military audience, but the description below is true of any social situation, including the design practice:
In a real world of limited resources and skills, individuals and groups form, dissolve and reform their cooperative or competitive postures in a continuous struggle to remove or overcome physical and social environmental obstacles.
Against such a background, actions and decisions become critically important.
To make these timely decisions implies that we must be able to form mental concepts of observed reality, as we perceive it, and be able to change these concepts as reality itself appears to change.
Location 3195 – designers are asked to do nothing but the above. The success of our designs hinges on our understanding of reality and our skill at intervening in it. So the question below is of vital importance to us:
How do we generate or create the mental concepts to support this decision-making activity?
Location 3196 – in the next section of the essay Boyd starts to provide answers:
There are two ways in which we can develop and manipulate mental concepts to represent observed reality: We can start from a comprehensive whole and break it down to its particulars or we can start with the particulars and build towards a comprehensive whole.
… general-to-specific is related to deduction, analysis, and differentiation, while, specific-to-general is related to induction, synthesis, and integration.
… such an unstructuring or destruction of many domains – to break the correspondence of each with its respective constituents – is related to deduction, analysis, and differentiation. We call this kind of unstructuring a destructive deduction.
… creativity is related to induction, synthesis, and integration since we proceeded from unstructured bits and pieces to a new general pattern or concept. We call such action a creative or constructive induction.
Location 3227 – here Boyd starts to connect the two ways of creating concepts. I have always found it gratifying to immerse myself in a design’s domain and to start teasing apart its constituent elements, before moving on to acts of creation:
It is important to note that the crucial or key step that permits this creative induction is the separation of the particulars from their previous domains by the destructive deduction.
… the unstructuring and restructuring just shown reveals a way of changing our perception of reality.
Location 3237 – so far, so fairly straightforward. But Boyd gets increasingly sophisticated about this cycle of destruction and creation. For example, he suggests we should check for internal consistency of a new concept by tracing back its elements to the original sources:
… we check for reversibility as well as check to see which ideas and interactions match-up with our observations of reality.
Location 3240 – so this is not a two-step linear act, but a cyclical one, where we keep tuning parts and wholes of a concept (or design) and test them against reality:
Over and over again this cycle of Destruction and Creation is repeated until we demonstrate internal consistency and match-up with reality.
Location 3249 – in the next section, Boyd problematises the process he has proposed by showing that once we have formed a concept, its matchup to reality immediately starts to deteriorate:
… at some point, ambiguities, uncertainties, anomalies, or apparent inconsistencies may emerge to stifle a more general and precise match-up of concept with observed reality.
Location 3257 – the point below is one I can’t repeat often enough to clients and coworkers. We must work under the assumption of mismatches occurring sooner or later. It is an essential state of mind:
… we should anticipate a mismatch between phenomena observation and concept description of that observation.
Location 3266 – he brings in Gödel, Heisenberg and the second law of thermodynamics to explain why this is so:
Gödel’s Proof indirectly shows that in order to determine the consistency of any new system we must construct or uncover another system beyond it.
Back and forth, over and over again, we use observations to sharpen a concept and a concept to sharpen observations. Under these circumstances, a concept must be incomplete since we depend upon an ever-changing array of observations to shape or formulate it. Likewise, our observations of reality must be incomplete since we depend upon a changing concept to shape or formulate the nature of new inquiries and observations.
Location 3301 – so Gödel shows we need to continuously create new concepts to maintain the usefulness of prior ones due to the relationship between observed reality and mental concepts. Good news for designers! Our work is never done. It is also an interesting way to think about culture evolving by the building of increasingly complex networks of prior concepts into new ones. Next, Boyd brings in Heisenberg to explain why there is uncertainty involved when making observations of reality:
… the magnitude of the uncertainty values represent the degree of intrusion by the observer upon the observed.
… uncertainty values not only represent the degree of intrusion by the observer upon the observed but also the degree of confusion and disorder perceived by that observer.
Location 3308 – Heisenberg shows that the more we become entwined with observed reality, the more uncertainty increases. This is of note because as we design new things and introduce them into the environment, unexpected things start to happen. But also, we as designers ourselves are part of the environment. The more we are part of the same context we are designing for, the less able we will be to see things as they truly are. Finally, for the third move by which Boyd problematises the creation of new concepts, we arrive at the second law of thermodynamics:
High entropy implies a low potential for doing work, a low capacity for taking action or a high degree of confusion and disorder. Low entropy implies just the opposite.
Location 3312 – closed systems are those that don’t communicate with their environment. A successful design practice should be an open system, lest it succumb to entropy:
From this law it follows that entropy must increase in any closed system
… whenever we attempt to do work or take action inside such a system – a concept and its match-up with reality – we should anticipate an increase in entropy hence an increase in confusion and disorder.
Location 3317 – it’s important to note that Boyd’s ideas are equally applicable to design plans, design practices, design outcomes, any system involved in design, really. Confused? Not to worry, Boyd boils it down in the next and final section:
According to Gödel we cannot – in general – determine the consistency, hence the character or nature, of an abstract system within itself. According to Heisenberg and the Second Law of Thermodynamics any attempt to do so in the real world will expose uncertainty and generate disorder.
Location 3320 – the bit below is a pretty good summary of why “big design up front” does not work:
any inward-oriented and continued effort to improve the match-up of concept with observed reality will only increase the degree of mismatch.
Location 3329 – whenever we encounter chaos the instinct is to stick to our guns, but it is probably wiser to take a step back and reconsider our assumptions:
we find that the uncertainty and disorder generated by an inward-oriented system talking to itself can be offset by going outside and creating a new system.
Location 3330 – creativity or explorative design under pressure can seem like a waste of time, but once we have gone through the exercise, in hindsight we tend to find it more useful than we thought it would be:
Simply stated, uncertainty and related disorder can be diminished by the direct artifice of creating a higher and broader more general concept to represent reality.
I believe we have uncovered a Dialectic Engine that permits the construction of decision models needed by individuals and societies for determining and monitoring actions in an effort to improve their capacity for independent action.
the goal seeking effort itself appears to be the other side of a control mechanism that seems also to drive and regulate the alternating cycle of destruction and creation toward higher and broader levels of elaboration.
Location 3347 – chaos is a fact of life, and as such we should welcome it because it is as much a source of vitality as it is a threat:
Paradoxically, then, an entropy increase permits both the destruction or unstructuring of a closed system and the creation of a new system to nullify the march toward randomness and death.
Location 3350 – one of Boyd’s final lines is a fine description of what I think design should aspire to:
The result is a changing and expanding universe of mental concepts matched to a changing and expanding universe of observed reality.
It’s been a hugely enjoyable and rewarding intellectual trip. I feel like Boyd has given me some pretty sharp new tools-to-think-with. From his background you might think these tools are limited to warfare. But in fact they can be applied much more broadly, to any field in which we need to make decisions under uncertain circumstances.
As we go about our daily lives we are actually always dealing with this dynamic. But the stakes are usually low, so we mostly don’t really care about having a thorough understanding of how to do what we want to do. In warfare the stakes are obviously unusually high, so it makes sense for some of the most articulate thinking on the subject to emerge from it.
As a designer I have always been interested in how my profession makes decisions. Designers usually deal with high levels of uncertainty too. Although lives are rarely at stake, the continued viability of businesses and the quality of people’s lives usually are, at least in some way. Furthermore, there is always a leap of faith involved in any design decision. When we suggest a path forward with our sketches and prototypes, and we choose to proceed to development, we can never be entirely sure if our intended outcomes will pan out as we had hoped.
This uncertainty has always been present in any design act, but an argument could be made that technology has increased the amount of uncertainty in our world.
The way I see it, the methods of user centred design, interaction design, user experience, etc are all attempts to “deal with” uncertainty in various ways. The same can be said for the techniques of agile software development.
These methods can be divided into roughly two categories, which more or less correspond to the upper two quadrants of this two-by-two by Venkatesh. Borrowing the diagram’s labels, one is called Spore. It is risk-averse and focuses on sustainability. The other is called Hydra and it is risk-savvy and about anti-fragility. Spore tries to limit the negative consequences of unexpected events, and Hydra tries to maximise their positive consequences.
An example of a Spore-like design move would be to insist on thorough user research at the start of a project. We expend significant resources to diminish the amount of unknowns about our target audience. An example of a Hydra-like design move is the kind of playtesting employed by many game designers. We leave open the possibility of surprising acts from our target audience and hope to subsequently use those as the basis for new design directions.
It is interesting to note that these upper two quadrants are strategies for dealing with uncertainty based on synthesis. The other two rely on analysis. We typically associate synthesis with creativity and by extension with design. But as Boyd frequently points out, invention requires both analysis and synthesis, which he liked to call destruction and creation. When I reflect on my own way of working, particularly in the early stages of a project, the so-called fuzzy front end, I too rely on a cycle of destruction and creation to make progress.
I do not see one of the two approaches, Spore or Hydra, as inherently superior. But my personal preference is most definitely the Hydra approach. I think this is because a risk-savvy stance is most helpful when trying to invent new things, and when trying to design for play and playfulness.
The main thing I learned from Boyd for my own design practice is to be aware of uncertainty in the first place, and to know how to deal with it in an agile way. You might not be willing to do all the reading I did, but I would recommend to at least peruse the one long-form essay Boyd wrote, titled Destruction and Creation (PDF), about how to be creative and decisive in the face of uncertainty.
“The really good creative people are always organized, it’s true. The difference is efficiency. If you have an agenda—a schedule—you will be better. In order to have moments of chaos and anarchy and creativity, you have to be very ordered so that when the moment arrives it doesn’t put things out of whack.”
MMOGs have not progressed since 1990. Neither has social software.
Well, maybe a little, but not much. At least that’s what I’m led to believe after reading another wonderful essay in The Game Design Reader—a book I like to dip into once in a while to read whatever catches my fancy.
In The Lessons of Lucasfilm’s Habitat1 Messrs Farmer and Morningstar share their experiences building possibly one of the first graphical MMOGs ever. The game’s front-end ran on a Commodore 64 and looked something like this:
It’s striking how many of the lessons summed up by the authors have not been (fully) taken to heart by MMOG designers. Bitching aside, their article offers as much useful advice to game designers as to designers of any piece of social software. Since this post has grown unexpectedly long (again), I’ll sum them up here:
“The implementation platform is relatively unimportant.” — on loosely coupling a world’s conceptual model and its representation
“Detailed central planning is impossible; don’t even try.” — on relinquishing control as designers, co-design and evolutionary systems
“Work within the system.” — on facilitating world creation by players and moderation from within the world
Let’s look at each in more detail:
“The implementation platform is relatively unimportant.”
Meaning that how you describe the world and how you present it can or should be loosely coupled. The advantage of this is that with one world model you can serve clients with a wide range of (graphical) capabilities and scale into the future without having to change the model. Their example is of a tree, which can be rendered to one user as a string of text: “There is a tree here.” And to another user as a rich high resolution 3D animated image accompanied by sound.
“And these two users might be looking at the same tree in the same place in the same world and talking to each other as they do so.”
When I read this I instantly thought of Raph Koster’s Metaplace and wondered if the essay I was reading served as some sort of design guideline for it. What I understood from Raph’s GDC 2008 presentation2 was that they are trying to achieve exactly this, by applying the architectural model of the internet to the design of MMOGs.
Looking at social software in general, how many examples can you give of the current wave of social web apps that apply this principle? I’m reminded of Tom Coates’s Native to a Web of Data presentation—in which he argues that a service’s data should ideally be accessible through any number of channels.3
Similarly, web 2.0 poster child Dopplr is designed to be “a beautiful part of the web”, “a feature of a larger service, called the internet”.4 And they want to be everywhere, adding a little bit of value where it is most needed. Perhaps not exactly the same thing as what Farmer and Morningstar are alluding to, but based on similar principles.
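The loose coupling Farmer and Morningstar describe can be sketched very simply: one world model, multiple independent renderers. The class and function names below are invented for illustration; a real MMOG client would of course be far more involved:

```python
# One world model, several loosely coupled presentations of it.
# Both "clients" below look at the same object in the same world.

class Tree:
    """A world-model object: it knows nothing about how it is displayed."""
    def __init__(self, height_m):
        self.height_m = height_m

def render_text(obj):
    """A minimal client renders the object as a line of text."""
    return f"There is a {type(obj).__name__.lower()} here."

def render_rich(obj):
    """A richer client could render the same object as animated 3D;
    here we just describe what such a client would draw."""
    return f"<animated 3D {type(obj).__name__.lower()} mesh, {obj.height_m}m tall, with sound>"

tree = Tree(height_m=12)
print(render_text(tree))   # what a text-only user sees
print(render_rich(tree))   # what a graphical user sees
```

Because the model carries no presentation details, new kinds of clients can be added later without touching the world itself.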
As an aside, in MMOG land, there is one other major concern with this:
“Making the system fully distributed […] requires solving a number of difficult problems. The most significant of these is the prevention of cheating.”
Cheating aside, there is more useful (albeit familiar) advice for social software designers in the piece. For instance on the need to hand over (part of) the control over the system’s design to its users:
“Again and again we found that activities based on often unconscious assumptions about player behaviour had completely unexpected outcomes (when they were not simply outright failures). ”
They go on to say that they found it was more productive to work with the community:
“We could influence things, we could set up interesting situations, we could provide opportunities for things to happen, but we could not dictate the outcome. Social engineering is, at best, an inexact science […] we shifted into a style of operations in which we let the players themselves drive the direction of the design.”
Again, familiar advice perhaps, but they describe in some detail how they actually went about this, which makes for enlightening reading. That this practice of co-design goes against ‘common’ software development practices is not left unaddressed either:
“[…] the challenge posed by large systems are prompting some researchers to question the centralized, planning dominated attitude that we have criticized here, and to propose alternative approaches based on evolutionary and market principles. These principles appear applicable to complex systems of all types […]”
“the Web in 2008 has some entirely new qualities: more than ever it’s an ecology of separate but highly interconnected services. Its fiercely competitive, rapid development means differentiating innovations are quickly copied and spread. Attention from users is scarce. The fittest websites survive.”
(Again, emphasis mine.) I think the challenge that now lies before us is to not only as designers practice co-design with our users, but to go one step further, and encode rules for autonomous evolution into our systems. These are the adaptive systems I’ve been blogging about recently. An important note is that systems can adapt to individual users, but also—in the case of social software—to aggregate behaviour of user groups.5
This can be extended to a world’s governance. Here is one of the ideas I find most exciting in the context of social software, one I have seen very few examples of so far.
“[…] our view is that a virtual world need not be set up with a “default” government, but can instead evolve as needed.”
I cannot think of one MMOG that is designed to allow for a model of governance to emerge from player interactions. The best example I can think of from the world of social software is this article by Tom Coates at the Barbelith wiki. Barbelith is a somewhat ‘old school’ online community comprised of message boards (remember those?). In the piece (titled TriPolitica) he writes:
“Imagine a message board with three clear identities, colour-schemes and names. Each has a generic set of basic initial forums on a clearly defined range of subjects (say — Politics / Science / Entertainment). Each forum starts with a certain structure — one Monarchic, one Parliamentary Democracy and one Distributed Anarchy. All the rules that it takes to run each community have been sufficiently abstracted so that they can be turned on or off at will BY the community concerned. Moreover, the rules are self-reflexive — ie. the community can also create structures to govern how those rules are changed. This would operate by a bill-like structure where an individual can propose a new rule or a change to an existing rule that then may or may not require one or more forms of ratification. There would be the ability to create a rule governing who could propose a new bill, how often and what areas it might be able to change or influence.”
He goes on to give examples of how this would work—what user types you’d need and what actions would need to be available to those users. I’m pretty sure this was never implemented at Barbelith (which, by the way, is a fun community to browse through if you’re into countercultural geekery). Actually, I’m pretty sure I know of no online space that has a system like this in place. Any interaction designers out there who are willing to take up the gauntlet?
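The self-reflexive part of Coates’s idea, rules governing how rules are changed, can be sketched in a few lines. Everything here (names, thresholds, the voting shape) is a hypothetical illustration, not a description of Barbelith or any real system:

```python
# A sketch of self-reflexive community governance: rules are stored as
# plain data, and changed through "bills" whose ratification threshold
# is itself one of the rules.

class Forum:
    def __init__(self, rules):
        # Rules are data, so the community can inspect and change them.
        self.rules = dict(rules)

    def propose(self, rule, value, votes_for, voters):
        """A bill: change a rule if enough of the community ratifies it.
        The threshold is itself a rule, so the community can also change
        how rules get changed."""
        if votes_for / voters >= self.rules["ratify_threshold"]:
            self.rules[rule] = value
            return True
        return False

forum = Forum({"moderation": "parliamentary", "ratify_threshold": 0.5})

# A simple majority changes a governance rule...
forum.propose("moderation", "anarchic", votes_for=6, voters=10)

# ...and can also tighten the meta-rule governing change itself,
forum.propose("ratify_threshold", 0.75, votes_for=6, voters=10)

# ...after which the same 60% majority is no longer enough.
forum.propose("moderation", "monarchic", votes_for=6, voters=10)
```

The interesting design work, as Coates notes, is in deciding which rules are exposed, to whom, and through what ratification structures.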
“Work within the system.”
This is the final lesson offered in the essay I’d like to look at, one that is multifaceted. On the one hand, Messrs Farmer and Morningstar propose that world building should be part of the system itself (and therefore accessible to regular players):
“One of the goals of a next generation Habitat-like system ought to be to permit far greater creative involvement by the participants without requiring them to ascend to full-fledged guru-hood to do so.”
And, further on:
“This requires finding ways to represent design and creation of regions and objects as part of the underlying fantasy.”
I do not think any MMOG has achieved this in a meaningful sense so far. Second Life may offer world creation tools to users, but they are far from accessible, and certainly not part of the “underlying fantasy”. In web-based social software, suspension of disbelief is of less concern. It can be argued that Flickr, for instance, successfully offers world creation at an accessible level: each Flickr user contributes to the photographic tapestry that is the Flickr ‘photoverse’. Wikipedia, too, offers relatively simple tools for contribution, albeit text-based. In the gaming sphere, there are examples such as SFZero, a Collaborative Production Game, in which players add tasks for others to complete, essentially creating the game collaboratively with the designers.
Like I said, the lesson “work within the system” applies to more than one aspect. The other is moderation. The authors share an amusing anecdote about players exploiting a loophole introduced by new characters and objects (the players gained access to an unusually powerful weapon). The anecdote shows that it is always better to moderate disputes within the shared fantasy of the world, instead of resorting to external measures that break the players’ suspension of disbelief. Players will consider the latter cheating on the part of administrators:
“Operating within the participants’ world model produced a very satisfactory result. On the other hand, what seemed like the expedient course, which involved violating this model, provoked upset and dismay.”
Allowing people to change parts of your product is playful. It has also always seemed to me like simply a good thing to do. You see this with people who become passionate about a thing they use often: they want to take it apart, see how it works, put it back together again, maybe add some stuff, replace something else… I’ve always liked the idea of passionate people wanting to change something about a thing I designed. And it’s always been a disappointment to find out that they did not, or worse—wanted to but weren’t able to.
Apparently this is what people call adaptive design. But if you Google that, you won’t find much. In fact, there’s remarkably little written about it. I was put on the term’s trail by Matt Webb and from there found my way to Dan Hill’s site. There’s a lot on the topic there, but if I can recommend one piece it’s the interview he did for Dan Saffer’s book on interaction design. Read it. It’s full of wonderful ideas articulated 100 times better than I’ll ever be able to.
So why is adaptive design conducive to the playfulness of a user experience? I’m not sure. One aspect of it might be that as a designer you explicitly relinquish some control over the final experience people have with your…stuff.1 As Webb noted in an end-of-the-year post, instead of saying to people, “Here’s something I made. Go on—play with it,” you say, “Here’s something I made—let’s play with it together.”
This makes a lot of sense if you don’t think of the thing under design as something that’ll be consumed but something that will be used to create. It sounds easy but again is surprisingly hard. It’s like we have been infected with this hard-to-kill idea that makes us think we can only consume whereas we are actually all very much creative beings.2 I think that’s what Generation C is really about.
A sidetrack: in digital games, development has long been towards games as media that can be consumed. The real changes in digital games are: one—there’s a renewed interest in games as activities (particularly in the form of casual games). And two—there’s an increase in games that allow themselves to be changed in meaningful ways. These developments make the term “replay value” seem ready for extinction. How can you even call something that isn’t interesting to replay a game?3
In Rules of Play, Salen and Zimmerman describe the phenomenon of transformative play—where the “free movement within a more rigid structure” changes that structure itself (whether intended or not). They hold it to be one of the most powerful forms of play. Think of a simple house rule you made up the last time you played a game with friends. The fact that on the web the rules that make up the structures we design are codified in software should not be an excuse to prevent people from changing them.
That’s true literacy: When you can both read and write in a medium (as Alan Kay would have it). I’d like to enable people to do that. It might be hopelessly naive, but I don’t care—it’s a very interesting challenge.
That’s a comfortable idea to all of the—cough—web 2.0 savvy folk out there. But it certainly still is an uncomfortable thought to many. And I think it’d surprise you to find out how many people who claim to be “hip to the game” will still refuse to let go. [↩]
Note I’m not saying we can all be designers, but I do think people can all create meaningful things for themselves and others. [↩]
Today Playyoo went beta. Playyoo is a mobile games community I have been involved with as a freelance interaction designer since July of this year. I don’t have time for an elaborate post-mortem, but here are some preliminary notes on what Playyoo is and what part I’ve played in its conception.
Playyoo brings some cool innovations to the mobile games space. It allows you to snack on free casual mobile games while on the go, using a personalized mobile web page. It stores your high scores and allows you to interact with your friends (and foes) on an accompanying regular web site. Playyoo is a platform for indie mobile game developers. Anyone can publish their Flash Lite game on it. Best of all — even if you’re not a mobile games developer, you can create a game of your own.
It’s that last bit I’ve worked on the most. I took care of the interaction design for an application imaginatively called the Game Creator. It allows you to take well known games (such as Lunar Lander) and give them your own personal twist. Obviously this includes the game’s graphics, but we’ve gone one step further. You can change the way the game works as well.
So in the example of Lunar Lander you can make the spaceship look like whatever you want. But you can also change the gravity, controlling the speed with which your ship drops to the surface. Best of all, you can create your own planet surface, as easy as drawing a line on paper. This is why Lunar Lander in the Playyoo Game Creator is called Line Lander. (See? Another imaginative title!)
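To illustrate the kind of knob the Game Creator exposes, here is a minimal, hypothetical sketch (not Playyoo’s actual code, which would have been Flash Lite/ActionScript) of how a player-chosen gravity value might drive a Lunar Lander-style descent:

```python
def simulate_descent(altitude, gravity, dt=0.1, steps=10):
    """Integrate a ship's free fall under a player-chosen gravity setting."""
    velocity = 0.0
    for _ in range(steps):
        velocity += gravity * dt   # gravity is the knob the player tweaks
        altitude -= velocity * dt  # faster fall eats altitude more quickly
    return altitude

# Doubling gravity makes the ship drop further over the same interval:
low = simulate_descent(100.0, gravity=1.0)
high = simulate_descent(100.0, gravity=2.0)
```

The player-drawn planet surface would then just be a polyline the ship’s position is tested against each frame—which is why drawing the level can be as easy as drawing a line on paper.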
At the moment there are six games in the Game Creator: Tic-Tac-Toe, Pairs, Revenge, Snake, Ping-Pong, and the aforementioned Line Lander. There’s a long list of other games I’d like to put in there. I’m sure there will be more to come.
So although making a game is very different from playing one, I hope I managed to make it fun nonetheless. My ambition was to create a toy-like application that makes ‘creating’ a game a fun and engaging way to kill a few minutes—much like Mii creation on the Nintendo Wii, or playing with Spore’s editors (although we haven’t had the chance to actually play with the latter yet). And who knows, perhaps it’ll inspire a few people to start developing games of their own. That would probably be the ultimate compliment.
In any case, I’d love to hear your comments, both positive and negative. And if you have a Flash Lite compatible phone, be sure to sign up with Playyoo. There is no other place offering you an endless stream of snack sized casual games on your phone. Once you’ve had a taste of that, I’m sure you’ll wonder how you ever got by without it.