Move 37

Designers make choices. They should be able to provide rationales for those choices. (Although sometimes they can’t.) Being able to explain the thinking that went into a design move to yourself, your teammates and clients is part of being a professional.

Move 37. This was the move AlphaGo made in its second game against Lee Sedol, the one that took everyone by surprise because it appeared so wrong at first.

The interesting thing is that in hindsight AlphaGo appeared to have good reasons for this move, based, basically, on a calculation of odds.

If asked at the time, would AlphaGo have been able to provide this rationale?

It’s a thing that pops up in a lot of the reading I am doing around AI. This idea of transparency. In some fields you don’t just want an AI to provide you with a decision, but also with the arguments supporting that decision. An obvious example is a system that helps diagnose disease. You want it to provide more than just the diagnosis. Because if it turns out to be wrong, you want to be able to say why, at the time, you thought it was right. This is a social, cultural and also legal requirement.

It’s interesting.

Although lives don’t depend on it, the same might apply to intelligent design tools. If I am working with a system and it is offering me design directions or solutions, I want to know why it is suggesting these things as well. Because my reason for picking one over the other depends not just on the surface level properties of the design but also the underlying reasons. It might be important because I need to be able to tell stakeholders about it.

An added side effect of this is that a designer working with such a system is exposed to machine reasoning about design choices. This could inform their own future thinking too.

Transparent AI might help people improve themselves. A black box can’t teach you much about the craft it’s performing. Looking at outcomes can be inspirational or helpful, but the processes that lead up to them can be equally informative. If not more so.

Imagine working with an intelligent design tool and getting the equivalent of an AlphaGo move 37 moment. Hugely inspirational. Game changer.

This idea gets me much more excited than automating design tasks does.

Adapting intelligent tools for creativity

I read Alper’s book on conversational user interfaces over the weekend and was struck by this paragraph:

“The holy grail of a conversational system would be one that’s aware of itself — one that knows its own model and internal structure and allows you to change all of that by talking to it. Imagine being able to tell Siri to tone it down a bit with the jokes and that it would then actually do that.”

His point stuck with me because I think this is of particular importance to creative tools. These need to be flexible so that a variety of people can use them in different circumstances. This adaptability is what lends a tool depth.

The depth I am thinking of in creative tools is similar to the one in games, which appears to be derived from a kind of semi-orderedness. In short, you’re looking for a sweet spot between too simple and too complex.

And of course, you need good defaults.

Back to adaptation. This can happen in at least two ways on the interface level: modal or modeless. A simple example of the former would be to go into a preferences window to change the behaviour of your drawing package. Similarly, modeless adaptation happens when you rearrange some panels to better suit the task at hand.

Returning to Siri, the equivalent of modeless adaptation would be to tell her to tone it down when her sense of humor irks you.

For the modal solution, imagine a humor slider in a settings screen somewhere. This would be a terrible solution because it offers a poor mapping of a control to a personality trait. Can you pinpoint on a scale of 1 to 10 your preferred amount of humor in your hypothetical personal assistant? And anyway, doesn’t it depend on a lot of situational things such as your mood, the particular task you’re trying to complete and so on? In short, this requires something more situated and adaptive.

So just being able to tell Siri to tone it down would be the equivalent of rearranging your Photoshop palettes. And in a next interaction Siri might carefully try some humor again to gauge your response. And if you encourage her, she might be more humorous again.

Enough about funny Siri after this, because she’s a bit of a silly example. But she does illustrate another problem I am trying to wrap my head around: how does an intelligent tool for creativity communicate its internal state? Because that state is probabilistic, it can’t easily be mapped to a graphic information display. And so our old way of manipulating state, and more specifically of adapting a tool to our needs, becomes very different too.

It seems to be best for an intelligent system to be open to suggestions from users about how to behave. Adapting an intelligent creative tool is less like rearranging your workspace and more like coordinating with a coworker.

My ideal is for this to be done in the same mode (and so using the same controls) as when doing the work itself. I expect this to allow for more fluid interactions, going back and forth between doing the work at hand, and meta-communication about how the system supports the work. I think if we look at how people collaborate this happens a lot, communication and meta-communication going on continuously in the same channels.

We don’t need a self-aware artificial intelligence to do this. We need to apply what computer scientists call supervised learning. The basic idea is to provide a system with example inputs and desired outputs, and let it infer the necessary rules from them. If the results are unsatisfactory, you simply continue training it until it performs well enough.
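To make the idea concrete, here is a minimal sketch of supervised learning in the spirit described above: a 1-nearest-neighbour “learner” where training means remembering example input/output pairs and prediction means returning the output of the closest remembered example. The gesture-like inputs and labels are made up for illustration; nothing here is a real system.

```python
import math

def train(examples):
    """examples: list of (input_vector, desired_output) pairs.
    'Training' here is simply remembering the examples."""
    return list(examples)

def predict(model, x):
    """Return the output of the stored example closest to x."""
    _, label = min(model, key=lambda pair: math.dist(pair[0], x))
    return label

# Pretend these are sensor readings mapped to instrument settings.
model = train([
    ((0.1, 0.2), "quiet"),
    ((0.9, 0.8), "loud"),
])

print(predict(model, (0.2, 0.1)))  # nearest to the first example
print(predict(model, (0.8, 0.9)))  # nearest to the second example
```

If the results are unsatisfactory you add more examples and “train” again, which is exactly the loop Fiebrink’s Wekinator puts in front of a performer.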

A super fun example of this approach is the Wekinator, a piece of machine learning software for creating musical instruments. Below is a video in which Wekinator’s creator Rebecca Fiebrink performs several demos.

Here we have an intelligent system learning from examples. A person manipulating data instead of code to get to a particular desired behaviour. But what Wekinator lacks, and what I expect will be required for this type of thing to really catch on, is for the training to happen in the same mode or medium as the performance. The technology seems to be getting there, but there are many interaction design problems remaining to be solved.

Prototyping is a team sport

Lately I have been binging on books, presentations and articles related to ‘Lean UX’. I don’t like the term, but then I don’t like the tech industry’s love for inventing a new label for every damn thing. I do like the things it emphasises: shared understanding, deep collaboration, continuous user feedback. These are principles that have always implicitly guided the choices I made when leading teams at Hubbub, and now also as a member of several teams in the role of product designer.

In all these lean UX readings a thing that keeps coming up again and again is prototyping. Prototypes are the go-to way of doing ‘experiments’, in lean-speak. Other things can be done as well—surveys, interviews, whatever—but more often than not, assumptions are tested with prototypes.

Which is great! And also unsurprising as prototyping has really been embraced by the tech world. And tools for rapid prototyping are getting a lot of attention and interest as a result. However, this comes with a couple of risks. For one, sometimes it is fine to stick to paper. But the lure of shiny prototyping tools is strong. You’d rather not show a crappy drawing to a user. What if they hate it? However, high fidelity prototyping is always more costly than paper. So although well-intentioned, prototyping tools can encourage wastefulness, the bane of lean.

There is a bigger danger which runs against the lean ethos, though. Some tools afford deep collaboration more than others. Let’s be real: none afford deeper collaboration than paper and whiteboards. There is one person behind the controls when prototyping with a tool. So in my view, one should only ever progress to that step once a team effort has been made to hash out the rough outlines of what is to be prototyped. Basically: always paper prototype the digital prototype. Together.

I have had a lot of fun lately playing with browser prototypes and with prototyping in Framer. But as I was getting back into all of this I did notice this risk: All of a sudden there is a person on the team who does the prototypes. Unless this solo prototyping is preceded by shared prototyping, this is a problem. Because the rest of the team is left out of the thinking-through-making which makes the prototyping process so valuable in addition to the testable artefacts it outputs.

It is, I think, a key oversight of the ‘should designers code’ debaters, and to an extent one made by all prototyping tool manufacturers: individuals don’t prototype, teams do. Prototyping is a team sport. And so the success of a tool depends not only on how well it supports individual prototyping activities but also on how well it embeds itself in collaborative workflows.

In addition to the tools themselves getting better at supporting collaborative workflows, I would also love to see more tutorials, both official and from the community, about how to use a prototyping tool within the larger context of a team doing some form of agile. Most tutorials now focus on “how do I make this thing with this tool”. Useful, up to a point. But a large part of prototyping is to arrive at “the thing” together.

One of the lean UX things I devoured was this presentation by Bill Scott in which he talks about aligning a prototyping and a development tech stack, so that the gap between design and engineering is bridged not just with processes but also with tooling. His example applies to web development and app development using web technologies. I wonder what a similar approach looks like for native mobile app development. But this is the sort of thing I am talking about: Smart thinking about how to actually do this lean thing in the real world. I believe organising ourselves so that we can prototype as a team is absolutely key. I will pick my tools and processes accordingly in future.

All of the above is as usual mostly a reminder to self: As a designer your role is not to go off and work solo on brilliant prototypes. Your role is to facilitate such efforts by the whole team. Sure, there will be solo deep designerly crafting happening. But it will not add up to anything if it is not embedded in a collaborative design and development framework.

Artificial intelligence, creativity and metis

Boris pointed me to CreativeAI, an interesting article about creativity and artificial intelligence. It offers a really nice overview of the development of the idea of augmenting human capabilities through technology. One of the claims the authors make is that artificial intelligence is making creativity more accessible. Because tools with AI in them support humans in a range of creative tasks in a way that shortcuts the traditional requirements of long practice to acquire the necessary technical skills.

For example, ShadowDraw (PDF) is a program that helps people with freehand drawing by guessing what they are trying to create and showing a dynamically updated ‘shadow image’ on the canvas which people can use as a guide.

It is an interesting idea, and in some ways these kinds of software indeed lower the threshold for people to engage in creative tasks. They are good examples of artificial intelligence as partner instead of master or servant.

While reading CreativeAI I wasn’t entirely comfortable though and I think it may have been caused by two things.

One is that I care about creativity and I think that a good understanding of it and a daily practice at it—in the broad sense of the word—improves lives. I am also in some ways old-fashioned about it and I think the joy of creativity stems from the infinitely high skill ceiling involved and the never-ending practice it affords. Let’s call it the Jiro perspective, after the sushi chef made famous by a wonderful documentary.

So the claim that creative tools with AI in them can shortcut all of this life-long joyful toil produces a degree of panic in me. Although that is probably a Pastoral worldview, one which would be better to abandon. In a world eaten by software, it’s better to be a Promethean.

The second reason might hold more water but really is more of an open question than something I have researched in any meaningful way. I think there is more to creativity than just the technical skill required and as such the CreativeAI story runs the risk of being reductionist. While reading the article I was also slowly but surely making my way through one of the final chapters of James C. Scott’s Seeing Like a State, which is about the concept of metis.

It is probably the most interesting chapter of the whole book. Scott introduces metis as a form of knowledge different from that produced by science. Here are some quick excerpts from the book that provide a sense of what it is about. But I really can’t do the richness of his description justice here. I am trying to keep this short.

The kind of knowledge required in such endeavors is not deductive knowledge from first principles but rather what Greeks of the classical period called metis, a concept to which we shall return. […] metis is better understood as the kind of knowledge that can be acquired only by long practice at similar but rarely identical tasks, which requires constant adaptation to changing circumstances. […] It is to this kind of knowledge that [socialist writer] Luxemburg appealed when she characterized the building of socialism as “new territory” demanding “improvisation” and “creativity.”

Scott’s argument is about how authoritarian high-modernist schemes privilege scientific knowledge over metis. His exploration of what metis means is super interesting to anyone dedicated to honing a craft, or to cultivating organisations conducive to the development and application of craft in the face of uncertainty. There is a close link between metis and the concept of agility.

So, circling back to artificially intelligent tools for creativity, I would be interested in exploring not only how we can diminish the need for the acquisition of technical skills, but also how we can accelerate the acquisition of the practical knowledge required to apply such skills in the ever-changing real world. I suggest we expand our understanding of what it means to be creative, but without losing the link to actual practice.

For the ancient Greeks metis became synonymous with a kind of wisdom and cunning best exemplified by such figures as Odysseus and notably also Prometheus. The latter in particular exemplifies the use of creativity towards transformative ends. This is the real promise of AI for creativity in my eyes. Not to simply make it easier to reproduce things that used to be hard to create but to create new kinds of tools which have the capacity to surprise their users and to produce results that were impossible to create before.

Artificial intelligence as partner

Some notes on artificial intelligence, technology as partner and related user interface design challenges. Mostly notes to self, not sure I am adding much to the debate. Just summarising what I think is important to think about more. Warning: Dense with links.

Matt Jones writes about how artificial intelligence does not have to be a slave, but can also be partner.

I’m personally much more interested in machine intelligence as human augmentation rather than the oft-hyped AI assistant as a separate embodiment.

I would add a third possibility, which is AI as master. A common fear we humans have, and one I think is only growing as things like AlphaGo and new Boston Dynamics robots keep happening.

I have had a tweet pinned to my timeline for a while now, which is a quote from Play Matters.

“tech­no­logy is not a ser­vant or a mas­ter but a source of expres­sion, a way of being”

So this idea actually does not just apply to AI but to tech in general. Of course, as tech gets smarter and more independent from humans, the idea of a ‘third way’ only grows in importance.

More tweeting. A while back, shortly after AlphaGo’s victory, James tweeted:

On the one hand, we must insist, as Kasparov did, on Advanced Go, and then Advanced Everything Else https://en.wikipedia.org/wiki/Advanced_Chess

Advanced Chess is a clear example of humans and AI partnering. And it is also an example of technology as a source of expression and a way of being.

Also, in a WIRED article on AlphaGo, someone who had played the AI repeatedly says his game has improved tremendously.

So that is the promise: Artificially intelligent systems which work together with humans for mutual benefit.

Now of course these AIs don’t just arrive into the world fully formed. They are created by humans with particular goals in mind. So there is a design component there. We can design them to be partners but we can also design them to be masters or slaves.

As an aside: Maybe AIs that make use of deep learning are particularly well suited to this partner model? I do not know enough about it to say for sure. But I was struck by this piece on why Google ditched Boston Dynamics. There apparently is a significant difference between holistic and reductionist approaches, deep learning being holistic. I imagine reductionist AI might be more dependent on humans. But this is just wild speculation. I don’t know if there is anything there.

This insistence of James on “advanced everything else” is a world view. A politics. To allow ourselves to be increasingly entangled with these systems, to not be afraid of them. Because if we are afraid, we either want to subjugate them or they will subjugate us. It is also about not obscuring the systems we are part of. This is a sentiment also expressed by James in the same series of tweets I quoted from earlier:

These emergences are also the best model we have ever built for describing the true state of the world as it always already exists.

And there is overlap here with ideas expressed by Kevin in ‘Design as Participation’:

[W]e are no longer just using computers. We are using computers to use the world. The obscured and complex code and engineering now engages with people, resources, civics, communities and ecosystems. Should designers continue to privilege users above all others in the system? What would it mean to design for participants instead? For all the participants?

AI partners might help us to better see the systems the world is made up of and engage with them more deeply. This hope is expressed by Matt Webb, too:

with the re-emergence of artificial intelligence (only this time with a buddy-style user interface that actually works), this question of “doing something for me” vs “allowing me to do even more” is going to get even more pronounced. Both are effective, but the first sucks… or at least, it sucks according to my own personal politics, because I regard individual alienation from society and complex systems as one of the huge threats in the 21st century.

I am reminded of the mixed-initiative systems being researched in the area of procedural content generation for games. I wrote about these a while back on the Hubbub blog. Such systems are partners of designers. They give something like super powers. Now imagine such powers applied to other problems. Quite exciting.

Actually, in the aforementioned article I distinguish between tools for making things and tools for inspecting possibility spaces. In the first case designers manipulate more abstract representations of the intended outcome and the system generates the actual output. In the second case the system visualises the range of possible outcomes given a particular configuration of the abstract representation. These two are best paired.
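The pairing described above can be sketched in a few lines. This is a hypothetical toy, not any real mixed-initiative tool: the abstract representation is a dict of design parameters, `generate` plays the role of the tool for making things, and `possibility_space` plays the role of the tool for inspecting what that representation can reach.

```python
import itertools

# Abstract representation: a few high-level design parameters.
design_space = {
    "rooms": [4, 6, 8],
    "symmetry": ["mirrored", "rotational"],
    "difficulty": ["easy", "hard"],
}

def generate(config):
    """'Tool for making things': turn one abstract config into an outcome.
    Here the outcome is just a label; a real system would build a level."""
    return f"{config['rooms']}-room {config['symmetry']} {config['difficulty']} level"

def possibility_space(space):
    """'Tool for inspecting possibility spaces': enumerate every outcome
    reachable from the abstract representation."""
    keys = list(space)
    for values in itertools.product(*space.values()):
        yield generate(dict(zip(keys, values)))

outcomes = list(possibility_space(design_space))
print(len(outcomes))  # 3 rooms x 2 symmetries x 2 difficulties = 12
```

A real tool would of course visualise those twelve outcomes rather than print a count, but the split is the same: one function makes, the other maps what can be made.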

From a design perspective, a lot remains to be figured out. If I look at those mixed-initiative tools I am struck by how poorly they communicate what the AI is doing and what its capabilities are. There is a huge user interface design challenge there.

For stuff focused on getting information, a conversational UI seems to be the current local optimum for working with an AI. But for tools for creativity, to use the two-way split proposed by Victor, different UIs will be required.

What shape will they take? What visual language do we need to express the particular properties of artificial intelligence? What approaches can we take in addition to personifying AI as bots or characters? I don’t know and I can hardly think of any good examples that point towards promising approaches. Lots to be done.

Recess! 8 – Cardboard Inspiration

Recess! is a correspondence series with personal ruminations on games.

Dear Alper and Niels,

This morning I read the news that Jason Rohrer has won the final game design challenge at GDC. A Game For Someone is amazing—a boardgame buried in the Nevada desert, intended to be played in a few thousand years by those who finally find it after working down a humongous list of GPS coordinates. The game has never been played; it was designed using genetic algorithms. It’s made from incredibly durable materials.

I find it ironic that a boardgame wins a game design contest at an event whose attendants also drool over technofetishistic nonsense such as Oculus Rift.

And I love boardgames. I love playing big tactical shouty competitive ones at my house with friends on Saturday evenings. Or small, slow meditative strategic ones with my fiance on Sunday afternoons. I love their physicality, the shared nature of playing.

I also love them for the inspiration they offer me. Their inner workings are exposed. They’re a bit like the engines in those old cars I see some of my neighbours work on every weekend, just for fun. It’s so easy to pick out mechanics, study them and see how they may be of use to my own projects.

I recently sat down to revisit the game Cuba, because our own work on KAIGARA involved an engine building mechanic and Cuba does this really well. KAIGARA doesn’t involve any cardboard, but that doesn’t mean we can’t draw inspiration from it. On the contrary. It’s like James Wallis recently said in an interview at BoardGameGeek:

“My games collection isn’t a library, it’s a toolkit.”

Kars

A Playful Stance — my Game Design London 2008 talk

A while ago I was interviewed by Sam Warnaars. He’s researching people’s conference experiences; he asked me what my most favourite and least favourite conference of the past year was. I wish he’d asked me after my trip to Playful ’08, because it has been by far the best conference experience to date. Why? Because it was like Toby, Richard and the rest of the event’s producers had taken a peek inside my brain and came up with a program encompassing (almost) all my fascinations — games, interaction design, play, sociality, the web, products, physical interfaces, etc. Almost every speaker brought something interesting to the table. The audience was composed of people from many different backgrounds, and all seemed to, well, like each other. The venue was lovely and atmospheric (albeit a bit chilly). They had good tea. Drinks afterwards were tasty and fun, the tapas later on even more so. And the whiskey after that, well let’s just say I was glad to have a late flight the next day. Many thanks to my friends at Pixel-Lab for inviting me, and to Mr. Davies for the referral.

Below is a transcript plus slides of my contribution to the day. The slides are also on SlideShare. I have been told all talks have been recorded and will be published to the event’s Vimeo group.

Perhaps 1874 words is a bit too much for you? In that case, let me give you an executive summary of sorts:

  1. The role of design in rich forms of play, such as skateboarding, is facilitatory. Designers provide tools for people to play with.
  2. It is hard to predict what people will do exactly with your tools. This is OK. In fact it is best to leave room for unexpected uses.
  3. Underspecified, playful tools can be used for learning. People can use them to explore complex concepts on their own terms.

As always, I am interested in receiving constructive criticism, as well as good examples of the things I’ve discussed.

Continue reading A Playful Stance — my Game Design London 2008 talk

Tools for having fun


One of the nicer things about GDC was the huge stack of free magazines I took home with me. Among those was an issue of Edge, the glossy games magazine designed to look good on a coffee table next to the likes of Vogue (or whatever). I was briefly subscribed to Edge, but ended up not renewing because I could read reviews online and the articles weren’t all that good.

The January 2008 issue I brought home did have some nice bits in it—in particular an interview with Yoshinori Ono, the producer of Street Fighter IV. This latest incarnation of the game aims to go back to what made Street Fighter II great. What I liked about the interview was Ono’s clear dedication to players, not force-feeding them what the designers think would be cool. Something often lacking in game design.

“First of all, the most important thing about SFIV is ‘fair rules’, and by that I mean fair and clear rules that can be understood by everyone very easily. A lesson learned from the birth of modern videogaming: ‘Avoid missing ball for high score’.”

This of course is a reference to PONG. Allan Alcorn (the designer of the arcade coin-operated version of PONG) famously refused to include instructions with the game because he believed that if a game needed written instructions, it was crap.

Later on in the same article, Ono says:

“[…] what the game is — a tool for having fun. A tool to give the players a virtual fighting stage — an imaginary arena, if you like.”

(Emphasis mine.) I like the fact that he sees the game as something to be used, as opposed to something to be consumed. Admittedly, it is easier to think of a fighting game this way than for instance an adventure game—which has much more embedded narrative—but in any case I think it is a more productive view.

While we’re on the topic of magazines. A while back I read an enjoyable little piece in my favorite free magazine Vice about the alleged clash between ‘hardcore’ and ‘casual’ gamers:

“Casual games are taking off like never before, with half of today’s games being little fun quizzes or about playing tennis or golf by waving your arms around. The Hardcore crowd are shitting themselves that there might not be a Halo 4 if girls and old people carry on buying simple games where everyone’s a winner and all you have to do is wave a magic wand around and press a button every few times.”

Only half serious, to be sure, but could it be at least partly true? I wouldn’t mind if it were. I appreciate the rise of the casual game mainly for the way it brings focus back to player-centred game design. Similar to Yoshinori Ono’s attitude in redesigning Street Fighter.

Playyoo goes beta

Today Playyoo went beta. Playyoo is a mobile games community I have been involved with as a freelance interaction designer since July of this year. I don’t have time for an elaborate post-mortem, but here are some preliminary notes on what Playyoo is and what part I’ve played in its conception.

Playyoo's here

Playyoo brings some cool innovations to the mobile games space. It allows you to snack on free casual mobile games while on the go, using a personalized mobile web page. It stores your high scores and allows you to interact with your friends (and foes) on an accompanying regular web site. Playyoo is a platform for indie mobile game developers. Anyone can publish their Flash Lite game on it. Best of all — even if you’re not a mobile games developer, you can create a game of your own.

It’s that last bit I’ve worked on the most. I took care of the interaction design for an application imaginatively called the Game Creator. It allows you to take well known games (such as Lunar Lander) and give them your own personal twist. Obviously this includes the game’s graphics, but we’ve gone one step further. You can change the way the game works as well.

Screenshot of my lolcats pairs game on Playyoo

So in the example of Lunar Lander you can make the spaceship look like whatever you want. But you can also change the gravity, controlling the speed with which your ship drops to the surface. Best of all, you can create your own planet surface, as easy as drawing a line on paper. This is why Lunar Lander in the Playyoo Game Creator is called Line Lander. (See? Another imaginative title!)
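To give a sense of what tweaking gravity amounts to under the hood, here is a hypothetical sketch of the kind of physics loop a Lunar Lander-style game runs each tick. None of this is Playyoo’s actual code, and all the numbers are invented; the point is just that `GRAVITY` is one plain number, which is what makes it such an easy knob for the Game Creator to expose.

```python
GRAVITY = 1.5   # the player-tweakable knob: higher means a faster drop
THRUST = 4.0    # upward acceleration while the thrust button is held
DT = 0.1        # simulation timestep in seconds

def step(altitude, velocity, thrusting):
    """Advance the ship one tick; velocity is measured positive downward."""
    accel = GRAVITY - (THRUST if thrusting else 0.0)
    velocity += accel * DT
    altitude -= velocity * DT
    return altitude, velocity

# Let the ship free-fall for ten ticks.
altitude, velocity = 100.0, 0.0
for _ in range(10):
    altitude, velocity = step(altitude, velocity, thrusting=False)

print(round(velocity, 2))  # the ship has picked up downward speed
```

Changing the surface, the other Game Creator feature, would then just mean changing the terrain the lander is tested against on touchdown.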

At the moment there are six games in the Game Creator: Tic-Tac-Toe, Pairs, Revenge, Snake, Ping-Pong, and the aforementioned Line Lander. There’s a long list of other games I’d like to put in there. I’m sure there will be more to come.

Since today’s launch, people have already started creating crazy stuff with it. There’s a maze-like snake game, for instance. And a game where you need to land a spider crab on the head of some person called Rebecca… I decided to chip in with a pairs game full of lolcats (an idea I’ve had since doing the very first wireframe.) Anyway, the mind boggles to think of what people might come up with next! That’s the cool part about creating a tool for creative expression.

Screenshot of a Line Lander game in progress in the Playyoo Game Creator

So although making a game is very different from playing one, I hope I managed to make it fun nonetheless. My ambition was to create a toy-like application that makes ‘creating’ a game a fun and engaging way to kill a few minutes — much like Mii creation on the Nintendo Wii, or playing with Spore‘s editors (although we still haven’t had the chance to actually play with the latter yet). And who knows, perhaps it’ll inspire a few people to start developing games of their own. That would probably be the ultimate compliment.

In any case, I’d love to hear your comments, both positive and negative. And if you have a Flash Lite compatible phone, be sure to sign up with Playyoo. There is no other place offering you an endless stream of snack sized casual games on your phone. Once you’ve had a taste of that, I’m sure you’ll wonder how you ever got by without it.

Let’s see if we can post from IMified

So I’m giving IMified (www.imified.com) a spin and have just added the WordPress service to see if it works. For those that haven’t heard about IMified yet: it allows you to do a number of things through instant messaging (MSN, Google Talk, whatever). For instance, add stuff to your Backpack account or, like I’m doing now, write a blog post. Let’s publish this to see what happens, hitting ‘return’…

Update: Looks like it’s working! I had to manually insert the link to the website and also go into WordPress to add some categories, so it’s only really useful when you want to fire off a quick note. As a bonus, here’s the Adium window with a transcript of the IMified session.