Move 37

Designers make choices. They should be able to provide rationales for those choices. (Although sometimes they can’t.) Being able to explain the thinking that went into a design move to yourself, your teammates and clients is part of being a professional.

Move 37. This was the move AlphaGo made that took everyone by surprise because it appeared so wrong at first.

The interesting thing is that, in hindsight, AlphaGo appeared to have good reasons for this move, based essentially on a calculation of odds.

If asked at the time, would AlphaGo have been able to provide this rationale?

This idea of transparency pops up in a lot of the reading I am doing around AI. In some fields you don’t just want an AI to provide you with a decision, but also with the arguments supporting that decision. An obvious example would be a system that helps diagnose disease. You want it to provide more than just the diagnosis, because if it turns out to be wrong, you want to be able to say why you thought it was right at the time. This is a social, cultural and also legal requirement.

It’s interesting.

Although lives don’t depend on it, the same might apply to intelligent design tools. If I am working with a system and it is offering me design directions or solutions, I want to know why it is suggesting these things as well, because my reason for picking one over the other depends not just on the surface-level properties of the design but also on the underlying reasons. It might be important because I need to be able to tell stakeholders about it.

An added side effect is that a designer working with such a system is exposed to machine reasoning about design choices. This could inform their own future thinking too.

Transparent AI might help people improve themselves. A black box can’t teach you much about the craft it’s performing. Looking at outcomes can be inspirational or helpful, but the processes that lead up to them can be equally informative, if not more so.

Imagine working with an intelligent design tool and getting the equivalent of an AlphaGo move 37 moment. Hugely inspirational. A game changer.

This idea gets me much more excited than automating design tasks does.

Adapting intelligent tools for creativity

I read Alper’s book on conversational user interfaces over the weekend and was struck by this paragraph:

“The holy grail of a conversational system would be one that’s aware of itself — one that knows its own model and internal structure and allows you to change all of that by talking to it. Imagine being able to tell Siri to tone it down a bit with the jokes and that it would then actually do that.”

His point stuck with me because I think this is of particular importance to creative tools. These need to be flexible so that a variety of people can use them in different circumstances. This adaptability is what lends a tool depth.

The depth I am thinking of in creative tools is similar to the one in games, which appears to be derived from a kind of semi-orderedness. In short, you’re looking for a sweet spot between too simple and too complex.

And of course, you need good defaults.

Back to adaptation. This can happen in at least two ways at the interface level: modal or modeless. A simple example of the former would be going into a preferences window to change the behaviour of your drawing package. Similarly, modeless adaptation happens when you rearrange some panels to better suit the task at hand.

Returning to Siri, the equivalent of modeless adaptation would be to tell her to tone it down when her sense of humor irks you.

For the modal solution, imagine a humor slider in a settings screen somewhere. This would be terrible, because it offers a poor mapping of a control to a personality trait. Can you pinpoint, on a scale of 1 to 10, your preferred amount of humor in your hypothetical personal assistant? And anyway, doesn’t it depend on a lot of situational things, such as your mood, the particular task you’re trying to complete and so on? In short, this requires something more situated and adaptive.

So just being able to tell Siri to tone it down would be the equivalent of rearranging your Photoshop palettes. And in a later interaction Siri might carefully try some humor again to gauge your response. If you encourage her, she might be more humorous again.
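To make this concrete, here is a minimal sketch (entirely hypothetical, not how Siri actually works) of what such modeless, feedback-driven adaptation could look like in code: a humor level that is nudged down when the user objects and cautiously probed upward again when later jokes land well.

```python
# A hypothetical sketch of feedback-driven humour adaptation; the names,
# numbers and update rules are made up for illustration.
class HumourSetting:
    def __init__(self, level: float = 0.5):
        self.level = level  # 0.0 = deadpan, 1.0 = constant jokes

    def tone_it_down(self) -> None:
        """Modeless adaptation: the user says so mid-conversation."""
        self.level = max(0.0, self.level - 0.2)

    def try_joke(self) -> float:
        """Later on, cautiously try slightly more humour than the current level."""
        return min(1.0, self.level + 0.05)

    def feedback(self, encouraged: bool) -> None:
        """Keep the probed level if it was encouraged; otherwise back off a little."""
        self.level = self.try_joke() if encouraged else max(0.0, self.level - 0.05)


assistant = HumourSetting()
assistant.tone_it_down()              # "Tone it down a bit with the jokes"
assistant.feedback(encouraged=True)   # a later, careful joke lands well
print(round(assistant.level, 2))      # 0.35: a bit more humorous again
```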

Enough about funny Siri for now, because it’s a bit of a silly example.

Funny Siri does illustrate another problem I am trying to wrap my head around, though. How does an intelligent tool for creativity communicate its internal state? Because that state is probabilistic, it can’t easily be mapped to a graphic information display. And so our old way of manipulating state, and more specifically of adapting a tool to our needs, becomes very different too.

It seems best for an intelligent system to be open to suggestions from users about how to behave. Adapting an intelligent creative tool is less like rearranging your workspace and more like coordinating with a coworker.

My ideal is for this to be done in the same mode (and so using the same controls) as the work itself. I expect this to allow for more fluid interactions, going back and forth between doing the work at hand and meta-communication about how the system supports that work. If we look at how people collaborate, this happens a lot: communication and meta-communication go on continuously in the same channels.

We don’t need a self-aware artificial intelligence to do this. We need to apply what computer scientists call supervised learning. The basic idea is to provide a system with example inputs and desired outputs, and let it infer the necessary rules from them. If the results are unsatisfactory, you simply continue training it until it performs well enough.
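As a rough illustration of that idea (a generic sketch with made-up data, not tied to any particular product), the whole workflow fits in a few lines with an off-the-shelf library: pair example inputs with desired outputs, fit a model, and add corrected examples whenever the predictions are not yet good enough.

```python
# A generic supervised-learning sketch: example inputs paired with desired
# outputs, a model that infers the mapping, and more examples added whenever
# the results are unsatisfactory. The data here is made up.
from sklearn.neighbors import KNeighborsClassifier

examples = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.3]]  # inputs
desired = ["calm", "calm", "playful", "playful"]             # desired outputs

model = KNeighborsClassifier(n_neighbors=1)
model.fit(examples, desired)

print(model.predict([[0.15, 0.85]]))  # -> ['calm']

# Unhappy with a prediction? Add the corrected example and train again.
examples.append([0.5, 0.5])
desired.append("calm")
model.fit(examples, desired)
```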

A super fun example of this approach is the Wekinator, a piece of machine learning software for creating musical instruments. Below is a video in which Wekinator’s creator Rebecca Fiebrink performs several demos.

Here we have an intelligent system learning from examples, a person manipulating data instead of code to get to a particular desired behaviour. But what the Wekinator lacks, and what I expect will be required for this type of thing to really catch on, is for the training to happen in the same mode or medium as the performance. The technology seems to be getting there, but there are many interaction design problems remaining to be solved.
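The workflow behind such tools boils down to something like the sketch below (my own toy approximation, not Wekinator’s actual code): demonstrate input/output pairs, train a mapping, then let that mapping drive the instrument during performance.

```python
# A toy approximation of a Wekinator-style workflow: demonstrate pairs of
# sensor input and synth parameters, train a mapping, then perform with it.
# All names and values are invented for illustration.
from sklearn.neighbors import KNeighborsRegressor

examples, targets = [], []

def demonstrate(sensor_values, synth_params):
    """Training mode: pair the current gesture with the desired sound."""
    examples.append(sensor_values)
    targets.append(synth_params)

def train():
    model = KNeighborsRegressor(n_neighbors=1)
    model.fit(examples, targets)
    return model

def perform(model, sensor_values):
    """Performance mode: the learned mapping drives the instrument."""
    return model.predict([sensor_values])[0]

demonstrate([0.0, 0.1], [220.0, 0.2])  # low gesture -> low, quiet note
demonstrate([0.9, 0.8], [880.0, 0.9])  # high gesture -> high, loud note
model = train()
print(perform(model, [0.8, 0.7]))      # -> close to [880.0, 0.9]
```

The gap the paragraph above points at shows up even in this sketch: demonstrate() and perform() are still separate modes, and collapsing them into one fluid interaction is the remaining design challenge.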

Prototyping is a team sport

Lately I have been binging on books, presentations and articles related to ‘Lean UX’. I don’t like the term, but then I don’t like the tech industry’s love for inventing a new label for every damn thing. I do like the things it emphasises: shared understanding, deep collaboration, continuous user feedback. These are principles that have always implicitly guided the choices I made when leading teams at Hubbub, and now also as a member of several teams in the role of product designer.

In all these lean UX readings, a thing that keeps coming up again and again is prototyping. Prototypes are the go-to way of doing ‘experiments’, in lean-speak. Other things can be done as well—surveys, interviews, whatever—but more often than not, assumptions are tested with prototypes.

Which is great! And also unsurprising, as prototyping has really been embraced by the tech world, and tools for rapid prototyping are getting a lot of attention and interest as a result. However, this comes with a couple of risks. For one, sometimes it is fine to stick to paper, but the lure of shiny prototyping tools is strong. You’d rather not show a crappy drawing to a user. What if they hate it? Yet high-fidelity prototyping is always more costly than paper. So although well-intentioned, prototyping tools can encourage wastefulness, the bane of lean.

There is a bigger danger that runs against the lean ethos, though. Some tools afford deep collaboration more than others, and let’s be real: none afford deeper collaboration than paper and whiteboards. There is one person behind the controls when prototyping with a tool. So in my view, one should only ever progress to that step once a team effort has been made to hash out the rough outlines of what is to be prototyped. Basically: always paper prototype the digital prototype. Together.

I have had a lot of fun lately playing with browser prototypes and with prototyping in Framer. But as I was getting back into all of this I did notice this risk: all of a sudden there is one person on the team who does the prototypes. Unless this solo prototyping is preceded by shared prototyping, that is a problem, because the rest of the team is left out of the thinking-through-making which makes the prototyping process so valuable in addition to the testable artefacts it outputs.

It is, I think, a key oversight of the ‘should designers code’ debaters, and to an extent one made by all prototyping tool manufacturers: individuals don’t prototype, teams do. Prototyping is a team sport. And so the success of a tool depends not only on how well it supports individual prototyping activities but also on how well it embeds itself in collaborative workflows.

In addition to the tools themselves getting better at supporting collaborative workflows, I would also love to see more tutorials, both official and from the community, about how to use a prototyping tool within the larger context of a team doing some form of agile. Most tutorials now focus on “how do I make this thing with this tool”. Useful, up to a point. But a large part of prototyping is arriving at “the thing” together.

One of the lean UX things I devoured was this presentation by Bill Scott, in which he talks about aligning a prototyping and a development tech stack, so that the gap between design and engineering is bridged not just with processes but also with tooling. His example applies to web development and to app development using web technologies. I wonder what a similar approach looks like for native mobile app development. But this is the sort of thing I am talking about: smart thinking about how to actually do this lean thing in the real world. I believe organising ourselves so that we can prototype as a team is absolutely key. I will pick my tools and processes accordingly in future.

All of the above is, as usual, mostly a reminder to self: as a designer, your role is not to go off and work solo on brilliant prototypes. Your role is to facilitate such efforts by the whole team. Sure, there will be solo deep designerly crafting happening. But it will not add up to anything if it is not embedded in a collaborative design and development framework.

Artificial intelligence, creativity and metis

Boris pointed me to CreativeAI, an interesting article about creativity and artificial intelligence. It offers a really nice overview of the development of the idea of augmenting human capabilities through technology. One of the claims the authors make is that artificial intelligence is making creativity more accessible, because tools with AI in them support humans in a range of creative tasks in a way that shortcuts the traditional requirement of long practice to acquire the necessary technical skills.

For example, ShadowDraw (PDF) is a program that helps people with freehand drawing by guessing what they are trying to create and showing a dynamically updated ‘shadow image’ on the canvas which people can use as a guide.

It is an interesting idea, and in some ways this kind of software does indeed lower the threshold for people to engage in creative tasks. It is a good example of artificial intelligence as partner instead of master or servant.

While reading CreativeAI I wasn’t entirely comfortable, though, and I think that may have been caused by two things.

One is that I care about creativity and I think that a good understanding of it, and a daily practice at it—in the broad sense of the word—improves lives. I am also in some ways old-fashioned about it, and I think the joy of creativity stems from the infinitely high skill ceiling involved and the never-ending practice it affords. Let’s call it the Jiro perspective, after the sushi chef made famous by a wonderful documentary.

So the claim that creative tools with AI in them can shortcut all of this lifelong joyful toil produces a degree of panic in me. Although that is probably a Pastoral worldview which I would do better to abandon. In a world eaten by software, it’s better to be a Promethean.

The second reason might hold more water, but it is really more of an open question than something I have researched in any meaningful way. I think there is more to creativity than the technical skill required, and as such the CreativeAI story runs the risk of being reductionist. While reading the article I was also slowly but surely making my way through one of the final chapters of James C. Scott’s Seeing Like a State, which is about the concept of metis.

It is probably the most interesting chapter of the whole book. Scott introduces metis as a form of knowledge different from that produced by science. Here are some quick excerpts from the book that provide a sense of what it is about, but I really can’t do the richness of his description justice here. I am trying to keep this short.

The kind of knowledge required in such endeavors is not deductive knowledge from first principles but rather what Greeks of the classical period called metis, a concept to which we shall return. […] metis is better understood as the kind of knowledge that can be acquired only by long practice at similar but rarely identical tasks, which requires constant adaptation to changing circumstances. […] It is to this kind of knowledge that [socialist writer] Luxemburg appealed when she characterized the building of socialism as “new territory” demanding “improvisation” and “creativity.”

Scott’s argument is about how authoritarian high-modernist schemes privilege scientific knowledge over metis. His exploration of what metis means is super interesting to anyone dedicated to honing a craft, or to cultivating organisations conducive to the development and application of craft in the face of uncertainty. There is a close link between metis and the concept of agility.

So, circling back to artificially intelligent tools for creativity: I would be interested in exploring not only how we can diminish the need to acquire the necessary technical skills, but also how we can accelerate the acquisition of the practical knowledge required to apply those skills in the ever-changing real world. I suggest we expand our understanding of what it means to be creative, but without losing the link to actual practice.

For the ancient Greeks, metis became synonymous with a kind of wisdom and cunning best exemplified by such figures as Odysseus and, notably, also Prometheus. The latter in particular exemplifies the use of creativity towards transformative ends. This is the real promise of AI for creativity in my eyes: not to simply make it easier to reproduce things that used to be hard to create, but to create new kinds of tools which have the capacity to surprise their users and to produce results that were impossible to create before.

Artificial intelligence as partner

Some notes on artificial intelligence, technology as partner and related user interface design challenges. Mostly notes to self; I am not sure I am adding much to the debate, just summarising what I think is important to think about more. Warning: dense with links.

Matt Jones writes about how artificial intelligence does not have to be a slave, but can also be a partner.

I’m personally much more interested in machine intelligence as human augmentation rather than the oft-hyped AI assistant as a separate embodiment.

I would add a third possibility, which is AI as master. A common fear we humans have, and one I think is only growing as things like AlphaGo and new Boston Dynamics robots keep happening.

I have had a tweet pinned to my timeline for a while now, which is a quote from Play Matters:

“technology is not a servant or a master but a source of expression, a way of being”

So this idea actually does not just apply to AI but to tech in general. Of course, as tech gets smarter and more independent from humans, the idea of a ‘third way’ only grows in importance.

More tweeting. A while back, shortly after AlphaGo’s victory, James tweeted:

On the one hand, we must insist, as Kasparov did, on Advanced Go, and then Advanced Everything Else https://en.wikipedia.org/wiki/Advanced_Chess

Advanced Chess is a clear example of humans and AI partnering. And it is also an example of technology as a source of expression and a way of being.

Also, in a WIRED article on AlphaGo, someone who had played the AI repeatedly says his game has improved tremendously.

So that is the promise: artificially intelligent systems which work together with humans for mutual benefit.

Now of course these AIs don’t just arrive in the world fully formed. They are created by humans with particular goals in mind. So there is a design component there. We can design them to be partners, but we can also design them to be masters or slaves.

As an aside: maybe AIs that make use of deep learning are particularly well suited to this partner model? I do not know enough about it to say for sure, but I was struck by this piece on why Google ditched Boston Dynamics. There apparently is a significant difference between holistic and reductionist approaches, deep learning being holistic. I imagine reductionist AI might be more dependent on humans. But this is just wild speculation; I don’t know if there is anything there.

This insistence of James on “advanced everything else” is a worldview. A politics. To allow ourselves to be increasingly entangled with these systems, to not be afraid of them. Because if we are afraid, we either want to subjugate them or they will subjugate us. It is also about not obscuring the systems we are part of. This is a sentiment also expressed by James in the same series of tweets I quoted from earlier:

These emergences are also the best model we have ever built for describing the true state of the world as it always already exists.

And there is overlap here with ideas expressed by Kevin in ‘Design as Participation’:

[W]e are no longer just using computers. We are using computers to use the world. The obscured and complex code and engineering now engages with people, resources, civics, communities and ecosystems. Should designers continue to privilege users above all others in the system? What would it mean to design for participants instead? For all the participants?

AI partners might help us to better see the systems the world is made up of and engage with them more deeply. This hope is expressed by Matt Webb, too:

with the re-emergence of artificial intelligence (only this time with a buddy-style user interface that actually works), this question of “doing something for me” vs “allowing me to do even more” is going to get even more pronounced. Both are effective, but the first sucks… or at least, it sucks according to my own personal politics, because I regard individual alienation from society and complex systems as one of the huge threats in the 21st century.

I am reminded of the mixed-initiative systems being researched in the area of procedural content generation for games. I wrote about these a while back on the Hubbub blog. Such systems are partners of designers. They give something like super powers. Now imagine such powers applied to other problems. Quite exciting.

Actually, in the aforementioned article I distinguish between tools for making things and tools for inspecting possibility spaces. In the first case, designers manipulate more abstract representations of the intended outcome and the system generates the actual output. In the second case, the system visualises the range of possible outcomes given a particular configuration of the abstract representation. These two are best paired.
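A toy sketch of that pairing, under entirely made-up rules and names: a generator turns an abstract representation (a small config) into one concrete output, while an explorer samples the range of outcomes the same representation allows.

```python
# A toy sketch of the pairing described above: a tool for making (generate one
# output from an abstract representation) and a tool for inspecting (sample
# the possibility space of that representation). Rules and names are made up.
import random

def generate_level(config, seed):
    """Tool for making: abstract config in, one concrete level layout out."""
    rng = random.Random(seed)
    return [rng.randint(1, config["max_height"]) for _ in range(config["width"])]

def explore_possibility_space(config, samples=5):
    """Tool for inspecting: show the spread of outcomes a config can yield."""
    return [generate_level(config, seed) for seed in range(samples)]

config = {"width": 8, "max_height": 4}
print(generate_level(config, seed=0))        # one level the designer could keep
for level in explore_possibility_space(config):
    print(level)                             # the range the config affords
```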

From a design perspective, a lot remains to be figured out. If I look at those mixed-initiative tools, I am struck by how poorly they communicate what the AI is doing and what its capabilities are. There is a huge user interface design challenge there.

For stuff focused on getting information, a conversational UI seems to be the current local optimum for working with an AI. But for tools for creativity, to use the two-way split proposed by Victor, different UIs will be required.

What shape will they take? What visual language do we need to express the particular properties of artificial intelligence? What approaches can we take in addition to personifying AI as bots or characters? I don’t know, and I can hardly think of any good examples that point towards promising approaches. Lots to be done.

Recess! 8 – Cardboard Inspiration

Recess! is a correspondence series with personal ruminations on games.

Dear Alper and Niels,

This morning I read the news that Jason Rohrer has won the final game design challenge at GDC. A Game For Someone is amazing—a boardgame buried in the Nevada desert, intended to be played in a few thousand years by those who finally find it after working down a humongous list of GPS coordinates. The game has never been played; it was designed using genetic algorithms. It’s made from incredibly durable materials.

I find it ironic that a boardgame wins a game design contest at an event whose attendees also drool over technofetishistic nonsense such as the Oculus Rift.

And I love boardgames. I love playing big tactical shouty competitive ones at my house with friends on Saturday evenings. Or small, slow, meditative, strategic ones with my fiance on Sunday afternoons. I love their physicality, the shared nature of playing.

I also love them for the inspiration they offer me. Their inner workings are exposed. They’re a bit like the engines in those old cars I see some of my neighbours work on every weekend, just for fun. It’s so easy to pick out mechanics, study them and see how they may be of use to my own projects.

I recently sat down to revisit the game Cuba, because our own work on KAIGARA involved an engine-building mechanic and Cuba does this really well. KAIGARA doesn’t involve any cardboard, but that doesn’t mean we can’t draw inspiration from it. On the contrary. It’s like James Wallis recently said in an interview at BoardGameGeek:

“My games collection isn’t a library, it’s a toolkit.”

Kars

A Playful Stance — my Game Design London 2008 talk

A while ago I was interviewed by Sam Warnaars. He’s researching people’s conference experiences; he asked me what my favourite and least favourite conferences of the past year were. I wish he’d asked me after my trip to Playful ’08, because it has been by far my best conference experience to date. Why? Because it was as if Toby, Richard and the rest of the event’s producers had taken a peek inside my brain and come up with a program encompassing (almost) all my fascinations — games, interaction design, play, sociality, the web, products, physical interfaces, etc. Almost every speaker brought something interesting to the table. The audience was composed of people from many different backgrounds, and all seemed to, well, like each other. The venue was lovely and atmospheric (albeit a bit chilly). They had good tea. Drinks afterwards were tasty and fun, the tapas later on even more so. And the whiskey after that, well, let’s just say I was glad to have a late flight the next day. Many thanks to my friends at Pixel-Lab for inviting me, and to Mr. Davies for the referral.

Below is a transcript plus slides of my contribution to the day. The slides are also on SlideShare. I have been told all talks were recorded and will be published to the event’s Vimeo group.

Perhaps 1874 words is a bit too much for you? In that case, let me give you an executive summary of sorts:

  1. The role of design in rich forms of play, such as skateboarding, is facilitatory. Designers provide tools for people to play with.
  2. It is hard to predict exactly what people will do with your tools. This is OK. In fact, it is best to leave room for unexpected uses.
  3. Underspecified, playful tools can be used for learning. People can use them to explore complex concepts on their own terms.

As always, I am interested in receiving constructive criticism, as well as good examples of the things I’ve discussed.


Tools for having fun

ZoneTag Photo Friday 11:40 am 4/18/08 Copenhagen, Hovedstaden

One of the nicer things about GDC was the huge stack of free magazines I took home with me. Among those was an issue of Edge, the glossy games magazine designed to look good on a coffee table next to the likes of Vogue (or whatever). I was briefly subscribed to Edge, but ended up not renewing because I could read reviews online and the articles weren’t all that good.

The January 2008 issue I brought home did have some nice bits in it—in particular an interview with Yoshinori Ono, the producer of Street Fighter IV. This latest incarnation of the game aims to go back to what made Street Fighter II great. What I liked about the interview was Ono’s clear dedication to players, not force-feeding them what the designers think would be cool. Something often lacking in game design.

“First of all, the most important thing about SFIV is ‘fair rules’, and by that I mean fair and clear rules that can be understood by everyone very easily. A lesson learned from the birth of modern videogaming: ‘Avoid missing ball for high score’.”

This of course is a reference to PONG. Allan Alcorn (the designer of the arcade coin-operated version of PONG) famously refused to include instructions with the game because he believed that if a game needed written instructions, it was crap.

Later on in the same article, Ono says:

“[…] what the game is — a tool for having fun. A tool to give the players a virtual fighting stage — an imaginary arena, if you like.”

(Emphasis mine.) I like the fact that he sees the game as something to be used, as opposed to something to be consumed. Admittedly, it is easier to think of a fighting game this way than, for instance, an adventure game—which has much more embedded narrative—but in any case I think it is a more productive view.

While we’re on the topic of magazines: a while back I read an enjoyable little piece in my favourite free magazine Vice about the alleged clash between ‘hardcore’ and ‘casual’ gamers:

“Casual games are taking off like never before, with half of today’s games being little fun quizzes or about playing tennis or golf by waving your arms around. The Hardcore crowd are shitting themselves that there might not be a Halo 4 if girls and old people carry on buying simple games where everyone’s a winner and all you have to do is wave a magic wand around and press a button every few times.”

Only half serious, to be sure, but could it be at least partly true? I wouldn’t mind it being so. I appreciate the rise of the casual game mainly for the way it brings focus back to player-centred game design, similar to Yoshinori Ono’s attitude in redesigning Street Fighter.

Playyoo goes beta

Today Playyoo went beta. Playyoo is a mobile games community I have been involved with as a freelance interaction designer since July of this year. I don’t have time for an elaborate post-mortem, but here are some preliminary notes on what Playyoo is and what part I’ve played in its conception.

Playyoo's here

Playyoo brings some cool innovations to the mobile games space. It allows you to snack on free casual mobile games while on the go, using a personalized mobile web page. It stores your high scores and allows you to interact with your friends (and foes) on an accompanying regular web site. Playyoo is also a platform for indie mobile game developers: anyone can publish their Flash Lite game on it. Best of all — even if you’re not a mobile games developer, you can create a game of your own.

It’s that last bit I’ve worked on the most. I took care of the interaction design for an application imaginatively called the Game Creator. It allows you to take well-known games (such as Lunar Lander) and give them your own personal twist. Obviously this includes the game’s graphics, but we’ve gone one step further: you can change the way the game works as well.

Screenshot of my lolcats pairs game on Playyoo

So in the example of Lunar Lander, you can make the spaceship look like whatever you want. But you can also change the gravity, controlling the speed with which your ship drops to the surface. Best of all, you can create your own planet surface, as easily as drawing a line on paper. This is why Lunar Lander in the Playyoo Game Creator is called Line Lander. (See? Another imaginative title!)
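To give a feel for what that kind of customisation amounts to under the hood, here is a made-up sketch (not Playyoo’s actual implementation): the gravity setting and the drawn surface are just data that a generic Line Lander update step consumes.

```python
# A made-up sketch of how Game Creator customisations could be represented:
# gravity and the drawn surface are plain data fed into a generic update step.
# Not Playyoo's actual implementation.
from dataclasses import dataclass

@dataclass
class LineLanderConfig:
    gravity: float                       # how fast the ship drops
    surface: list[tuple[float, float]]   # the line the player drew

def step(altitude: float, velocity: float, thrust: float,
         config: LineLanderConfig, dt: float = 0.1):
    """One physics tick: gravity pulls the ship down, thrust pushes it back up."""
    velocity += (thrust - config.gravity) * dt
    return altitude + velocity * dt, velocity

config = LineLanderConfig(gravity=1.6, surface=[(0, 10), (50, 25), (100, 12)])
altitude, velocity = 100.0, 0.0
altitude, velocity = step(altitude, velocity, thrust=0.0, config=config)
print(round(altitude, 2), round(velocity, 2))  # 99.98 -0.16
```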

At the moment there are six games in the Game Creator: Tic-Tac-Toe, Pairs, Revenge, Snake, Ping-Pong and the aforementioned Line Lander. There’s a long list of other games I’d like to put in there. I’m sure there will be more to come.

Since today’s launch, people have already started creating crazy stuff with it. There’s a maze-like snake game, for instance. And a game where you need to land a spider crab on the head of some person called Rebecca… I decided to chip in with a pairs game full of lolcats (an idea I’ve had since doing the very first wireframe). Anyway, the mind boggles to think of what people might come up with next! That’s the cool part about creating a tool for creative expression.

Screenshot of a Line Lander game in progress in the Playyoo Game Creator

So although making a game is very different from playing one, I hope I managed to make it fun nonetheless. My ambition was to create a toy-like application that makes ‘creating’ a game a fun and engaging way to kill a few minutes — much like Mii creation on the Nintendo Wii, or playing with Spore’s editors (although we still haven’t had the chance to actually play with the latter yet). And who knows, perhaps it’ll inspire a few people to start developing games of their own. That would probably be the ultimate compliment.

In any case, I’d love to hear your comments, both positive and negative. And if you have a Flash Lite-compatible phone, be sure to sign up with Playyoo. There is no other place offering you an endless stream of snack-sized casual games on your phone. Once you’ve had a taste of that, I’m sure you’ll wonder how you ever got by without it.

Let’s see if we can post from IMified

So I’m giving IMified (www.imified.com) a spin and have just added the WordPress service to see if it works. For those who haven’t heard about IMified yet: it allows you to do a number of things through instant messaging (MSN, Google Talk, whatever). For instance, add stuff to your Backpack account or, like I’m doing now, write a blog post. Let’s publish this to see what happens, hitting ‘return’…

Update: Looks like it’s working! I had to manually insert the link to the website and also go into WordPress to add some categories, so it’s only really useful when you want to fire off a quick note. As a bonus, here’s the Adium window with a transcript of the IMified session.