Doing UX inside of Scrum

Some notes on how I am currently “doing user experience” inside of Scrum. This approach has evolved from my projects at Hubbub as well as more recently my work with ARTO and on a project at Edenspiekermann. So I have found it works with both startups and agency-style projects.

The starting point is to understand that Scrum is intended to be a container. It is a process framework. It should be able to hold any other activity you think you need as a team. So if we feel we need to add UX somehow, we should try to make it part of Scrum and not something that is tacked onto Scrum. Why not tack something on? Because it signals design is somehow distinct from development. And the whole point of doing agile is to have cross-functional teams. If you set up a separate process for design you are highly likely not to benefit from the full collective intelligence of the combined design and development team. So no, design needs to be inside of the Scrum container.

Staggered sprints are not the answer either, because you are still splitting the team into design and development, hampering cross-collaboration and transparency. You’re basically inviting Taylorism back into your process—the very thing you were trying to get away from.

When you are uncomfortable with putting designers and developers all in the same team and the same process, the answer is not to make your process more elaborate, parcel things up, and decrease “messy” interactions. The answer is increasing conversation, not eliminating it.

It turns out things aren’t remotely as complicated as they appear to be. The key is understanding Scrum’s events. The big event holding all other events is the sprint. The sprint outputs a releasable increment of “done” product. The development team does everything required to achieve the sprint goal collaboratively determined during sprint planning. Naturally this includes any design needed for the product. I think of this as the ‘production’ type of design. It typically consists mostly of UI design. There may already be some preliminary UI design available at the start of the sprint but it does not have to be finished.

What about the kind of design that is required for figuring out what to build in the first place? It might not be obvious at first, but Scrum actually has an ongoing process which readily accommodates it: backlog refinement. This covers all the activities required to get a product backlog item in shape for sprint planning. This is emphatically not a solo show for the product manager to conduct. It is something the whole team collaborates on. Developers and designers. In my experience designers are great at facilitating backlog refinement sessions. At the whiteboard, figuring stuff out with the whole team, ‘Lean UX’ style.

I will admit product backlog refinement is Scrum’s weak point. Where it offers a lot of structure for the sprints, it offers hardly any for backlog refinement (or grooming, as some call it). But that’s okay, we can evolve our own.

I like to use Kanban to manage the process of backlog refinement. Items come into the pipeline as something we want to elaborate because we have decided we want to build it (in some form or other, can be just an experiment) in the next sprint or two. It then goes through various stages of elaboration. At the very least capturing requirements in the form of user stories or job stories, doing sketches, a lo-fi prototype, mockups and a hi-fi prototype, and finally breaking the item down into work to be done and attaching an estimate to it. At this point it is ready to be part of a sprint. Crucially, during this lifecycle of an item as it is being refined, we can and should do user research if we feel we need more data, or user testing if we feel it is too risky to commit to a feature outright.

For this kind of figuring stuff out, this ‘planning’ type of design, it makes no sense to have it be part of a sprint-like structure because the work required to get it to a ‘ready’ state is much more unpredictable. The point of having a looser grooming flow is that it eliminates uncertainty before we commit to an item in a sprint.

So between the sprint and backlog refinement, Scrum readily accommodates design. ‘Production’ type design happens inside of the sprint and designers are considered part of the development team. ‘Planning’ type of design happens as part of backlog refinement.

So no need to tack on a separate process. It keeps the process simple and understandable, thus increasing transparency for the whole team. It prevents design from becoming a black box to others. And when we make design part of the container process framework that is Scrum, we reap the rewards of the team’s collective intelligence and we increase our agility.

Prototyping is a team sport

Lately I have been binging on books, presentations and articles related to ‘Lean UX’. I don’t like the term, but then I don’t like the tech industry’s love for inventing a new label for every damn thing. I do like the things it emphasises: shared understanding, deep collaboration, continuous user feedback. These are principles that have always implicitly guided the choices I made when leading teams at Hubbub and now also as a member of several teams in the role of product designer.

In all these lean UX readings a thing that keeps coming up again and again is prototyping. Prototypes are the go-to way of doing ‘experiments’, in lean-speak. Other things can be done as well—surveys, interviews, whatever—but more often than not, assumptions are tested with prototypes.

Which is great! And also unsurprising, as prototyping has really been embraced by the tech world. And tools for rapid prototyping are getting a lot of attention and interest as a result. However, this comes with a couple of risks. For one, sometimes it is fine to stick to paper. But the lure of shiny prototyping tools is strong. You’d rather not show a crappy drawing to a user. What if they hate it? However, high fidelity prototyping is always more costly than paper. So although well-intentioned, prototyping tools can encourage wastefulness, the bane of lean.

There is a bigger danger which runs against the lean ethos, though. Some tools afford deep collaboration more than others. Let’s be real: none afford deeper collaboration than paper and whiteboards. There is one person behind the controls when prototyping with a tool. So in my view, one should only ever progress to that step once a team effort has been made to hash out the rough outlines of what is to be prototyped. Basically: always paper prototype the digital prototype. Together.

I have had a lot of fun lately playing with browser prototypes and with prototyping in Framer. But as I was getting back into all of this I did notice this risk: all of a sudden there is a person on the team who does the prototypes. Unless this solo prototyping is preceded by shared prototyping, this is a problem. Because the rest of the team is left out of the thinking-through-making which makes the prototyping process so valuable in addition to the testable artefacts it outputs.

It is, I think, a key oversight of the ‘should designers code’ debaters, and to an extent one made by all prototyping tool manufacturers: individuals don’t prototype, teams do. Prototyping is a team sport. And so the success of a tool depends not only on how well it supports individual prototyping activities but also on how well it embeds itself in collaborative workflows.

In addition to the tools themselves getting better at supporting collaborative workflows, I would also love to see more tutorials, both official and from the community, about how to use a prototyping tool within the larger context of a team doing some form of agile. Most tutorials now focus on “how do I make this thing with this tool”. Useful, up to a point. But a large part of prototyping is to arrive at “the thing” together.

One of the lean UX things I devoured was this presentation by Bill Scott in which he talks about aligning a prototyping and a development tech stack, so that the gap between design and engineering is bridged not just with processes but also with tooling. His example applies to web development and app development using web technologies. I wonder what a similar approach looks like for native mobile app development. But this is the sort of thing I am talking about: smart thinking about how to actually do this lean thing in the real world. I believe organising ourselves so that we can prototype as a team is absolutely key. I will pick my tools and processes accordingly in future.

All of the above is as usual mostly a reminder to self: as a designer your role is not to go off and work solo on brilliant prototypes. Your role is to facilitate such efforts by the whole team. Sure, there will be solo deep designerly crafting happening. But it will not add up to anything if it is not embedded in a collaborative design and development framework.

Running

Running the bay

I have started running. I never thought I would. I think I was put off running in high school when during physical education they would occasionally force us to run 5k out of the blue. I remember it was torture. I never did particularly well and was sore afterwards, so I came to associate it with failure and tedium and so on.

When we moved to Singapore I decided I would do some more exercise and at some point read this thing about how running is a cheap and easy way to get some exercise in and that it is particularly fun to do with your partner.

My wife has always been a runner so I suggested we start running together. She was surprised, I think, but agreed it would be a good idea. The next weekend I went and got a pair of shoes and that same day we were off on our first run.

In the beginning it was hard to find good routes. I could not manage much more than 3k, which did not help. But 3k turned into 5k. Around that time we found a field in the neighbourhood we could do laps around. When that became boring we decided to switch to the bay, a very popular spot, and we started doing 6–8k runs there. We’ve been sticking to that ever since.

Now we run roughly two to three times a week. Sometimes we try one of the parks and run some trails. I find I enjoy trail running even more because it is less about speed and more about awareness. And there is more to see so it never gets boring.

We were on holiday on Bali the other week and I brought my gear and ran some trails through rice fields and along the beach there as well, and it struck me that this is possibly the greatest thing about running. You can do it anywhere you go, and once you build up a decent amount of stamina, your legs will take you wherever it is you want to go. It is an incredibly liberating feeling.

It took a bit of doing to get to that point. But now that I’ve reached it, I’m hooked.

ARTO

Time for a status update on my stay in Singapore. I have already entered the final three months of my time here. Time flies when you’re having fun eating everything in sight, it turns out.

On the work front I have indeed found the time to do some thinking about what my next big thing will be. Nothing has firmed up to the point where I feel like sharing it here, but I am enjoying the conversations I am having with various people about it.

In the meantime, I have been keeping busy working with a local startup called ARTO. I have taken on the role of product designer and I am also responsible for product management of the user-facing parts of the thing we are building.

That “thing” is about art. There are many people who are interested in art but don’t know where to start when it comes to finding, enjoying and acquiring it. We’re building a mobile and TV app that should make that a whole lot easier and more fun.

When I say art I mean commercial, popular and contemporary art of the 2D variety. So painting, illustration, photography, etc. Things you might buy originals or prints of and put on your living room wall. Others are doing a fine job on the high end of the art market. We think there are parts remaining that have been underserved to date.

There are many moving parts to this product, from a recommendation engine and content management system to a mobile app, a TV app and more, so I am never bored. There is always something to figure out in terms of what to build and how it should work and look. For the past couple of years I was always too busy managing the studio to really get into the details of design, but now I can totally focus on that and it really is a pleasure.

On the people side we have a small but growing team of brilliant individuals hailing from various parts of the region, including Vietnam, Myanmar and India. This lends an additional layer of fun challenge to the goings-on as we constantly negotiate our differences but also discover the many commonalities afforded by the globalised tech industry. I also get to travel to Ho Chi Minh City regularly, which is a nice change from the extreme order that is Singapore.

It is early days, so I not only get to help shape the product from the very start but also the company itself. This includes figuring out and maintaining design and development processes. For this I find my Boydian explorations quite useful, paired with what is now more than 13 years of industry experience (how did that happen?). I have also conducted more hiring interviews in the past few months than I did in the ten years before.

In a month or two a first version of the product should be in the market. When we’ve gotten to that point I will do another of these updates. In the meantime just know I am up to my armpits in thinking-through-making about art discovery and enjoyment on screens small and large. If you have anything related to share, or would like to be one of the first to test-drive the thing when it arrives, let me know.

Artificial intelligence, creativity and metis

Boris pointed me to CreativeAI, an interesting article about creativity and artificial intelligence. It offers a really nice overview of the development of the idea of augmenting human capabilities through technology. One of the claims the authors make is that artificial intelligence is making creativity more accessible. Because tools with AI in them support humans in a range of creative tasks in a way that shortcuts the traditional requirements of long practice to acquire the necessary technical skills.

For example, ShadowDraw (PDF) is a program that helps people with freehand drawing by guessing what they are trying to create and showing a dynamically updated ‘shadow image’ on the canvas which people can use as a guide.

It is an interesting idea and in some ways these kinds of software indeed lower the threshold for people to engage in creative tasks. They are good examples of artificial intelligence as partner instead of master or servant.

While reading CreativeAI I wasn’t entirely comfortable though, and I think it may have been caused by two things.

One is that I care about creativity and I think that a good understanding of it and a daily practice at it—in the broad sense of the word—improves lives. I am also in some ways old-fashioned about it and I think the joy of creativity stems from the infinitely high skill ceiling involved and the never-ending practice it affords. Let’s call it the Jiro perspective, after the sushi chef made famous by a wonderful documentary.

So, claiming that creative tools with AI in them can shortcut all of this life-long joyful toil produces a degree of panic for me. Although it’s probably a Pastoral worldview which would be better to abandon. In a world eaten by software, it’s better to be a Promethean.

The second reason might hold more water but really is more of an open question than something I have researched in any meaningful way. I think there is more to creativity than just the technical skill required and as such the CreativeAI story runs the risk of being reductionist. While reading the article I was also slowly but surely making my way through one of the final chapters of James C. Scott’s Seeing Like a State, which is about the concept of metis.

It is probably the most interesting chapter of the whole book. Scott introduces metis as a form of knowledge different from that produced by science. Here are some quick excerpts from the book that provide a sense of what it is about. But I really can’t do the richness of his description justice here. I am trying to keep this short.

The kind of knowledge required in such endeavors is not deductive knowledge from first principles but rather what Greeks of the classical period called metis, a concept to which we shall return. […] metis is better understood as the kind of knowledge that can be acquired only by long practice at similar but rarely identical tasks, which requires constant adaptation to changing circumstances. […] It is to this kind of knowledge that [socialist writer] Luxemburg appealed when she characterized the building of socialism as “new territory” demanding “improvisation” and “creativity.”

Scott’s argument is about how authoritarian high-modernist schemes privilege scientific knowledge over metis. His exploration of what metis means is super interesting to anyone dedicated to honing a craft, or to cultivating organisations conducive to the development and application of craft in the face of uncertainty. There is a close link between metis and the concept of agility.

So circling back to artificially intelligent tools for creativity, I would be interested in exploring not only how we can diminish the need for the acquisition of the technical skills required, but also how we can accelerate the acquisition of the practical knowledge required to apply such skills in the ever-changing real world. I suggest we expand our understanding of what it means to be creative, but without losing the link to actual practice.

For the ancient Greeks metis became synonymous with a kind of wisdom and cunning best exemplified by such figures as Odysseus and notably also Prometheus. The latter in particular exemplifies the use of creativity towards transformative ends. This is the real promise of AI for creativity in my eyes. Not to simply make it easier to reproduce things that used to be hard to create, but to create new kinds of tools which have the capacity to surprise their users and to produce results that were impossible to create before.

Artificial intelligence as partner

Some notes on artificial intelligence, technology as partner and related user interface design challenges. Mostly notes to self, not sure I am adding much to the debate. Just summarising what I think is important to think about more. Warning: dense with links.

Matt Jones writes about how artificial intelligence does not have to be a slave, but can also be a partner.

I’m personally much more interested in machine intelligence as human augmentation rather than the oft-hyped AI assistant as a separate embodiment.

I would add a third possibility, which is AI as master. A common fear we humans have, and one that I think is only growing as things like AlphaGo and new Boston Dynamics robots keep happening.

I have had a tweet pinned to my timeline for a while now, which is a quote from Play Matters.

“technology is not a servant or a master but a source of expression, a way of being”

So this idea actually does not just apply to AI but to tech in general. Of course, as tech gets smarter and more independent from humans, the idea of a ‘third way’ only grows in importance.

More tweeting. A while back, shortly after AlphaGo’s victory, James tweeted:

On the one hand, we must insist, as Kasparov did, on Advanced Go, and then Advanced Everything Else https://en.wikipedia.org/wiki/Advanced_Chess

Advanced Chess is a clear example of humans and AI partnering. And it is also an example of technology as a source of expression and a way of being.

Also, in a WIRED article on AlphaGo, someone who had played the AI repeatedly says his game has improved tremendously.

So that is the promise: artificially intelligent systems which work together with humans for mutual benefit.

Now of course these AIs don’t just arrive into the world fully formed. They are created by humans with particular goals in mind. So there is a design component there. We can design them to be partners but we can also design them to be masters or slaves.

As an aside: maybe AIs that make use of deep learning are particularly well suited to this partner model? I do not know enough about it to say for sure. But I was struck by this piece on why Google ditched Boston Dynamics. There apparently is a significant difference between holistic and reductionist approaches, deep learning being holistic. I imagine reductionist AI might be more dependent on humans. But this is just wild speculation. I don’t know if there is anything there.

This insistence of James on “advanced everything else” is a worldview. A politics. To allow ourselves to be increasingly entangled with these systems, to not be afraid of them. Because if we are afraid, we either want to subjugate them or they will subjugate us. It is also about not obscuring the systems we are part of. This is a sentiment also expressed by James in the same series of tweets I quoted from earlier:

These emergences are also the best model we have ever built for describing the true state of the world as it always already exists.

And there is overlap here with ideas expressed by Kevin in ‘Design as Participation’:

[W]e are no longer just using computers. We are using computers to use the world. The obscured and complex code and engineering now engages with people, resources, civics, communities and ecosystems. Should designers continue to privilege users above all others in the system? What would it mean to design for participants instead? For all the participants?

AI partners might help us to better see the systems the world is made up of and engage with them more deeply. This hope is expressed by Matt Webb, too:

with the re-emergence of artificial intelligence (only this time with a buddy-style user interface that actually works), this question of “doing something for me” vs “allowing me to do even more” is going to get even more pronounced. Both are effective, but the first sucks… or at least, it sucks according to my own personal politics, because I regard individual alienation from society and complex systems as one of the huge threats in the 21st century.

I am reminded of the mixed-initiative systems being researched in the area of procedural content generation for games. I wrote about these a while back on the Hubbub blog. Such systems are partners of designers. They give something like super powers. Now imagine such powers applied to other problems. Quite exciting.

Actually, in the aforementioned article I distinguish between tools for making things and tools for inspecting possibility spaces. In the first case designers manipulate more abstract representations of the intended outcome and the system generates the actual output. In the second case the system visualises the range of possible outcomes given a particular configuration of the abstract representation. These two are best paired.

From a design perspective, a lot remains to be figured out. If I look at those mixed-initiative tools I am struck by how poorly they communicate what the AI is doing and what its capabilities are. There is a huge user interface design challenge there.

For stuff focused on getting information, a conversational UI seems to be the current local optimum for working with an AI. But for tools for creativity, to use the two-way split proposed by Victor, different UIs will be required.

What shape will they take? What visual language do we need to express the particular properties of artificial intelligence? What approaches can we take in addition to personifying AI as bots or characters? I don’t know and I can hardly think of any good examples that point towards promising approaches. Lots to be done.

Prototyping in the browser

When you are designing a web site or web app I think you should prototype in the browser. Why? You might as well ask why prototype at all. Answer: to enable continuous testing and refinement of your design. Since you are designing for the web it makes sense to do this testing and refinement with an artefact composed of the web’s material.

There are many ways to do prototyping. A common way is to make wireframes and then make them ‘clickable’. But when I am designing a web site or a web app and I get to the point where it is time to do wireframes I often prefer to go straight to the browser.

Before this step I have sketched out all the screens on paper, of course. I have done multiple sketches of each page. I’ve had them critiqued by team members and I have reworked them.

Drawing pictures of web pages

But then I open my drawing program—Sketch, in my case—and my heart sinks. Not because Sketch sucks. Sketch is great. But it somehow feels wrong to draw pictures of web pages on my screen. I find it cumbersome. My drawing program does not behave like a browser. That is to say, instead of defining a bunch of rules for elements and having the browser figure out how to render them on a page together, I need to follow those rules myself in my head as I put each element in its place.

And don’t get me started on how wireframes are supposed to be without visual design. That is nonsense. If you are using contrast, repetition, alignment and proximity, you are doing layout. That is visual design. I can’t stand wireframes with a bad visual hierarchy.

If I persevere, and I have a set of wireframes in my drawing program, they are static. I can’t use them. I then need to export them to some other often clunky program to make the pictures clickable. Which always results in a poor resemblance of the actual experience. (I use Marvel. It’s okay but it is hardly a joy to use. For mobile apps I still use it, for web sites I prefer not to.)

Prototyping in the browser

When I prototype in the browser I don’t have to deal with these issues. I am doing layout in a way that is native to the medium. And once I have some pages set up they are immediately usable. So I can hand it to someone, a team member or a test participant, and let them play with it.

That is why, for web sites and web apps, I skip wireframes altogether and prototype in the browser. I do not know how common this is in the industry nowadays. So I thought I would share my approach here. It may be of use to some.

It used to be the case that it was quite a bit of hassle to get up and running with a browser prototype, so naturally opening a drawing package seemed more attractive. Not so anymore. Tools have come a long way. Case in point: my setup nowadays involves zero screwing around on the command line.

CodeKit

The core of it is a paid-for Mac app called CodeKit, a so-called task manager. It allows you to install a front-end development framework I like called Zurb Foundation with a couple of clicks and has a built-in web server so you can play with your prototype on any device on your local network. As you make changes to the code of your prototype it gets automatically updated on all your devices. No more manual refreshing. Saves a huge amount of time.

I know you can do most of what CodeKit does for you with stuff like Grunt but that involves tedious configuration and working the command line. This is fine when you’re a developer, but not fine when you are a designer. I want to be up and running as fast as possible. CodeKit allows me to do that and has some other features built in that are ideal for prototyping which I will talk about more below. Long story short: CodeKit has saved me a huge amount of time and is well worth the money.

Okay so on with the show. Yes, this whole prototyping in the browser thing involves ‘coding’. But honestly, if you can’t write some HTML and CSS you really shouldn’t be doing design for the web in the first place. I don’t care if you consider yourself a UX designer and somehow above all this lowly technical stuff. You are not. Nobody is saying you should become a frontend developer but you need to have an acquaintance with the materials your product is made of. Follow a few courses on Codecademy or something. There really isn’t an excuse anymore these days for not knowing this stuff. If you want to level up, learn SASS.

Zurb Foundation

I like Zurb Foundation because it offers a coherent and comprehensive library of elements which covers almost all the common patterns found in web sites and apps. It offers a grid and some default typography styles as well. All of it doesn’t look flashy at all, which is how I like it when I am prototyping. A prototype at this stage does not require personality yet. Just a clear visual hierarchy. Working with Foundation is almost like playing with LEGO. You just click together the stuff you need. It’s painless and looks and works great.
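
To give a sense of what that clicking together looks like, a section of a prototype page might read something like the sketch below. The class names are written from memory and the content is made up, so check them against the docs for whatever Foundation version CodeKit installs for you.

    <!-- A simple two-column section built from Foundation's grid. -->
    <!-- Class names are from memory; verify against your Foundation version. -->
    <div class="row">
      <div class="small-12 medium-8 columns">
        <h2>Search results</h2>
        <p>Placeholder copy, just enough to judge the visual hierarchy.</p>
      </div>
      <div class="small-12 medium-4 columns">
        <a href="detail.html" class="button">View artwork</a>
      </div>
    </div>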

I hardly do any styling but the few changes I do want to make I can easily add to Foundation’s app.scss using SASS. I usually have a few styles in there for tweaking some margins on particular elements, for example a footer. But I try to focus on the structure and behaviour of my pages and for that I am mostly doing HTML.
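
For what it is worth, the kind of override I am talking about is rarely more involved than this, a hypothetical footer tweak in app.scss:

    // app.scss: the only custom styling in the prototype.
    // A hypothetical spacing tweak on top of Foundation's defaults.
    .site-footer {
      margin-top: 2rem;

      small {
        display: block;
        color: #777;
      }
    }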

GitHub

Testing locally I already mentioned. For that, CodeKit has you covered. Of course, you want to be able to share your prototype with others. For this I like to use GitHub and their Pages feature. Once again, using their desktop client, this involves zero command line work. You just add the folder with your CodeKit project as a new repository and sync it with GitHub. Then you need to add a branch named ‘gh-pages’ and do ‘update from master’. Presto, your prototype is now on the web for anyone with the URL to see and use. Perfect if you’re working in a distributed team.

Don’t be intimidated by using GitHub. Their on-boarding is pretty impressive nowadays. You’ll be up and running in no time. Using version control, even if it is just you working on the prototype, adds some much needed structure and control over changes. And when you are collaborating on your prototype with team members it is indispensable.

But in most cases I am the only one building the prototype, so I just work on the master branch, and every once in a while I update the gh-pages branch from master and sync it and I am done. If you use Slack you can add a GitHub bot to a channel and have your team members receive an automatic update every time you change the prototype.

The Kit Language

If your project is of any size beyond the very small you will likely have repeating elements in your design. Headers, footers, recurring widgets and so on. CodeKit has recently added support for something called the Kit Language. This adds support for imports and variables to regular HTML. It is absolutely great for prototyping. For each repeating element you create a ‘partial’ and import it wherever you need it. Variables are great for changing the contents of such repeating elements. CodeKit compiles it all into plain static HTML for you so your prototype runs anywhere.
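
As an illustration, a page written in the Kit language looks roughly like this. I am writing the syntax from memory, so double-check the exact form of imports and variables against CodeKit’s documentation; the file and variable names are just examples.

    <!-- index.kit: CodeKit compiles this to plain index.html -->
    <!-- Syntax from memory; check CodeKit's Kit language docs. -->
    <!-- $pageTitle = Artwork overview -->
    <!-- @import "_header.kit" -->

    <div class="row">
      <div class="small-12 columns">
        <h1><!-- $pageTitle --></h1>
      </div>
    </div>

    <!-- @import "_footer.kit" -->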

The Kit Language really was the missing piece of the puzzle for me. With it in place I am very comfortable recommending this way of working to anyone.

So that’s my setup: CodeKit, Zurb Foundation and GitHub. Together they make for a very pleasant and productive way to do prototyping in the browser. I don’t imagine myself going back to drawing pictures of web pages anytime soon.

Writing for conversational user interfaces

Last year at Hubbub we worked on two projects featuring a conversational user interface. I thought I would share a few notes on how we did the writing for them. Because for conversational user interfaces a large part of the design is in the writing.

At the moment, there aren’t really that many tools well suited for doing this. Twine comes to mind but it is really more focused on publishing as opposed to authoring. So while we were working on these projects we just grabbed whatever we were familiar with and felt would get the job done.

I actually think there is an opportunity here. If this conversational UI thing takes off, designers would benefit a lot from better tools to sketch and prototype them. After all, this is the only way to figure out if a conversational user interface is suitable for a particular project. In the words of Bill Buxton:

“Everything is best for something and worst for something else.”

Okay, so below are my notes. The two projects are KOKORO (a codename) and Free Birds. We have yet to publish extensively on both, so a quick description is in order.

KOKORO is a digital coach for teenagers to help them manage and improve their mental health. It is currently a prototype mobile web app, not publicly available. (The engine we built to drive it is available on GitHub, though.)

Free Birds (Vrije Vogels in Dutch) is a game about civil liberties for families visiting a war and resistance museum in the Netherlands. It is a location-based iOS app currently available on the Dutch app store and playable in Airborne Museum Hartenstein in Oosterbeek.


For KOKORO we used Gingko to write the conversation branches. This is good enough for a prototype but it becomes unwieldy at scale. And anyway, you don’t want to be limited to a tree structure. You want to at least be able to loop back to a parent branch, something that isn’t supported by Gingko. And maybe you don’t want to use the branching pattern at all.

Free Birds’ story has a very linear structure. So in this case we just wrote our conversations in Quip with some basic rules for formatting, not unlike a screenplay.

In Free Birds, player choices ‘colour’ the events that come immediately after, but the path stays the same.

This approach was inspired by the Walking Dead games. Those are super clever at giving players a sense of agency without the need for sprawling story trees. I remember seeing the creators present this strategy at PRACTICE and something clicked for me. The important point is, choices don’t have to branch out in different directions to feel meaningful.

KOKORO’s choices did have to lead to different paths so we had to build a tree structure. But we also kept track of things a user says. This allows the app to “learn” about the user. Subsequent segments of the conversation are adapted based on this learning. This allows for more flexibility and it scales better. A section of a conversation has various states between which we switch depending on what a user has said in the past.

We did something similar in Free Birds but used it to a far more limited degree, really just to once again colour certain pieces of dialogue. This is already enough to give a player a sense of agency.


As you can see, it’s all far from rocket surgery but you can get surprisingly good results just by sticking to these simple patterns. If I were to investigate more advanced strategies I would look into NLP for input and procedural generation for output. Who knows, maybe I will get to work on a project involving those things some time in the future.

Hardware interfaces for tuning the feel of microinteractions

In Digital Ground Malcolm McCullough talks about how tuning is a central part of interaction design practice. How part of the challenge of any project is to get to a point where you can start tweaking the variables that determine the behaviour of your interface for the best feel.

“Feel” is a word I borrow from game design. There is a book on it by Steve Swink. It is a funny term. We are trying to simulate sensations that are derived from the physical realm. We are trying to make things that are purely visual behave in such a way that they evoke these sensations. There are many games that heavily depend on getting feel right. Basically all games that are built on a physics simulation of some kind require good feel for a good player experience to emerge.

Physics simulations have been finding their way into non-game software products for some time now and they are becoming an increasing part of what makes a product, er, feel great. They are often at the foundation of signature moments that set a product apart from the pack. These signature moments are also known as microinteractions. To get them just right, being able to tune well is very important.

The behaviour of microinteractions based on physics simulations is determined by variables. For example, the feel of a spring is determined by the mass of the weight attached to the spring, the spring’s stiffness and the friction that resists the motion of the weight. These variables interact in ways that are hard to model in your head, so you need to make repeated changes to each variable and try the simulation to get it just right. This is time-consuming, cumbersome and resists the easy exploration of alternatives essential to a good design process.
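
(For the mathematically inclined: this is the standard damped spring model, in which mass times acceleration balances the stiffness and friction forces, m·a = −k·x − c·v. Whether the result feels bouncy or sluggish comes down to the damping ratio c / 2√(m·k), which is exactly the kind of relationship that is easier to find by turning a knob than by working out in your head.)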

In The Setup, game designer Bennett Foddy talks about a way to improve on this workflow. Many of his games (if not all of them) are playable physics simulations with punishingly hard controls. He suggests using a hardware interface (a MIDI controller) to tune the variables that determine the feel of his game while it runs. In this way the loop between changing a variable and seeing its effect in game is dramatically shortened and many different combinations of values can be explored easily. Once a satisfactory set of values for the variables has been found they can be written back to the software for future use.

I do believe such a setup is still non-trivial to make work with today’s tools. A quick check verifies that Framer does not have OSC support, for example. There is an opportunity here for prototyping environments such as Framer and others to support it. The approach is not limited to motion-based microinteractions but can be extended to the tuning of variables that control other aspects of an app’s behaviour.

For example, when we were making Standing, we would have benefited hugely from hardware controls to tweak the sensitivity of its motion-sensing functions as we were using the app. We were forced to do it by repeatedly changing numbers in the code and building the app again and again. It was quite a pain to get right. To this day I have the feeling we could have made it better if only we had had the tools to do it.

Judging from snafus such as the poor feel of the latest Twitter desktop client, there is a real need for better tools for tuning microinteractions. Just as pen tablets have become indispensable for those designing the form of user interfaces on screens, I think we might soon find a small set of hardware knobs on the desks of those designers working on the behaviour of user interfaces.

Design without touching the surface

I am preparing two classes at the moment. One is an introduction to user experience design, the other to user interface design. I did not come up with this division, it was part of the assignment. I thought it was odd at first. I wasn’t sure where one discipline ends and the other begins. I still am not sure. But I made a pragmatic decision to have the UX class focus on the high level process of designing (software) products, and the UI class focus on the visual aspects of a product’s interface. The UI class deals with a product’s surface, form, and to some extent also its behaviour, but on a micro level. Whereas the UX class focuses on behaviour on the macro level. Simply speaking: the UX class is about behaviour across screens, the UI class is about behaviour within screens.

The solution is workable. But I am still not entirely comfortable with it. I am not comfortable with the idea of being able to practice UX without ‘touching the surface’, so to speak. And it seems my two classes are advocating this. Also, I am pretty sure this is everyday reality for many UX practitioners. Notice I say “practitioner”, because I am not sure ‘designer’ is the right term in these cases. To be honest I do not think you can practice design without doing sketching and prototyping of some sort. (See Bill Buxton’s ‘Sketching User Experiences’ for an expanded argument on why this is.) And when it comes to designing software products this means touching the surface, the form.

Again, the reality is, ‘UX designer’ and ‘UI designer’ are common terms now. Certainly here in Singapore people know they need both to make good products. Some practitioners say they do both, others one or the other. The latter appears to be the most common and expected case. (By the way, in Singapore no-one I’ve met talks about interaction design.)

My concern is that by encouraging the practice of doing UX design without touching the surface of a product, we get shitty designs. In a process where UX and UI are seen as separate things the risk is one comes before the other. The UX designer draws the wireframes, the UI designer gets to turn them into pretty pictures, with no back-and-forth between the two. An iterative process can mitigate some of the damage such an artificial division of labour produces, but I think we still start out on the wrong foot. I think a better practice might entail including visual considerations from the very beginning of the design process (as we are sketching).

Two things I came across as I was preparing these classes are somehow in support of this idea. Both resulted from a call I did for resources on user interface design. I asked for books about visual aspects, but I got a lot more.

  1. In ‘Magic Ink’ Bret Victor writes about how the design of information software is hugely indebted to graphic design and more specifically information design in the tradition of Tufte. (He also mentions industrial design as an equally big progenitor of interaction design, but for software that is mainly about manipulation, not information.) The article is big, but the start of it is actually a pretty good if unorthodox general introduction to interaction design. For software that is about learning through looking at information, Victor says interaction should be a last resort. So that leaves us with a task that is 80% if not more visual design. Touching the surface. Which makes me think you might as well get to it as quickly as possible and start sketching and prototyping aimed not just at structure and behaviour but also form. (Hat tip to Pieter Diepenmaat for this one.)

  2. In ‘Jumping to the End’ Matt Jones rambles entertainingly about design fiction. He argues for paying attention to details and that a lot of the design he practices is about ‘signature moments’ aka micro-interactions. So yeah, again, I can’t imagine designing these effectively without doing sketching and prototyping of the sort that includes the visual. And in fact Matt mentions this more or less at one point, when he talks about the fact that his team’s deliverables at Google are almost all visual. They are high fidelity mockups, animations, videos, and so on. These then become the starting points for further development. (Hat tip to Alexander Zeh for this one.)

In summary, I think distinguishing UX design from UI design is nonsense. Because you cannot practice design without sketching and prototyping. And you cannot sketch and prototype a software product without touching its surface. Instead of taking visual design for granted, or talking about it like it is some innate talent, some kind of magical skill some people are born with and others aren’t, user experience practitioners should consider being less enamoured with acquiring more skills from business, marketing and engineering, and instead practice the skills that define the fields user experience design is most indebted to: graphic design and industrial design. In other words, you can’t do user experience design without touching the surface.