Design × AI coffee meetup

If you work in the field of design or arti­fi­cial intel­li­gence and are inter­est­ed in explor­ing the oppor­tu­ni­ties at their inter­sec­tion, con­sid­er your­self invit­ed to an infor­mal cof­fee meet­up on Feb­ru­ary 15, 10am at Brix in Amsterdam.

Erik van der Pluijm and I have been carrying on a conversation about AI and design for a while now, and we felt it was time to expand the circle a bit. We are very curious who else out there shares our excitement.

Ques­tions we are mulling over include: How does the design process change when cre­at­ing intel­li­gent prod­ucts? And: How can teams col­lab­o­rate with intel­li­gent design tools to solve prob­lems in new and inter­est­ing ways?

Any­way, lots to chew on.

No need to sign up or any­thing, just show up and we’ll see what happens.

Move 37

Design­ers make choic­es. They should be able to pro­vide ratio­nales for those choic­es. (Although some­times they can’t.) Being able to explain the think­ing that went into a design move to your­self, your team­mates and clients is part of being a pro­fes­sion­al.

Move 37. This was the move AlphaGo made in its second game against Lee Sedol which took everyone by surprise because it appeared so wrong at first.

The inter­est­ing thing is that in hind­sight it appeared Alpha­Go had good rea­sons for this move. Based on a cal­cu­la­tion of odds, basically.

If asked at the time, would Alpha­Go have been able to pro­vide this rationale?

It’s a thing that pops up in a lot of the read­ing I am doing around AI. This idea of trans­paren­cy. In some fields you don’t just want an AI to pro­vide you with a deci­sion, but also with the argu­ments sup­port­ing that deci­sion. Obvi­ous exam­ples would include a sys­tem that helps diag­nose dis­ease. You want it to pro­vide more than just the diag­no­sis. Because if it turns out to be wrong, you want to be able to say why at the time you thought it was right. This is a social, cul­tur­al and also legal requirement.

It’s inter­est­ing.

Although lives don’t depend on it, the same might apply to intel­li­gent design tools. If I am work­ing with a sys­tem and it is offer­ing me design direc­tions or solu­tions, I want to know why it is sug­gest­ing these things as well. Because my rea­son for pick­ing one over the oth­er depends not just on the sur­face lev­el prop­er­ties of the design but also the under­ly­ing rea­sons. It might be impor­tant because I need to be able to tell stake­hold­ers about it.

An added side effect of this is that a designer working with such a system is exposed to machine reasoning about design choices. This could inform their own future thinking too.

Trans­par­ent AI might help peo­ple improve them­selves. A black box can’t teach you much about the craft it’s per­form­ing. Look­ing at out­comes can be inspi­ra­tional or help­ful, but the process­es that lead up to them can be equal­ly infor­ma­tive. If not more so.

Imag­ine work­ing with an intel­li­gent design tool and get­ting the equiv­a­lent of an Alpha­Go move 37 moment. Huge­ly inspi­ra­tional. Game changer.

This idea gets me much more excit­ed than automat­ing design tasks does.

Waiting for the smart city

Nowa­days when we talk about the smart city we don’t nec­es­sar­i­ly talk about smart­ness or cities.

I feel like when the term is used it often obscures more than it reveals. 

Here are a few reasons why.

To begin with, the term sug­gests some­thing that is yet to arrive. Some kind of tech-enabled utopia. But actu­al­ly, cur­rent day cities are already smart to a greater or less­er degree depend­ing on where and how you look.

This is impor­tant because too often we post­pone action as we wait for the smart city to arrive. We don’t have to wait. We can act to improve things right now.

Furthermore, ‘smart city’ suggests something monolithic that can be designed as a whole. But a smart city, like any city, is a huge mess of interconnected things. It resists top-down design.

His­to­ry is lit­tered with failed attempts at author­i­tar­i­an high-mod­ernist city design. Just stop it.

Smart­ness should not be an end but a means. 

I read ‘smart’ as a short­hand for ‘tech­no­log­i­cal­ly aug­ment­ed’. A smart city is a city eat­en by soft­ware. All cities are being eat­en (or have been eat­en) by soft­ware to a greater or less­er extent. Uber and Airbnb are obvi­ous exam­ples. Small­er more sub­tle ones abound.

The ques­tion is, smart to what end? Effi­cien­cy? Leg­i­bil­i­ty? Con­trol­la­bil­i­ty? Anti-fragili­ty? Playa­bil­i­ty? Live­abil­i­ty? Sus­tain­abil­i­ty? The answer depends on your outlook.

These are ways in which the smart city label obscures. It obscures agency. It obscures net­works. It obscures intent.

I’m not say­ing don’t ever use it. But in many cas­es you can get by with­out it. You can talk about spe­cif­ic parts that make up the whole of a city, spe­cif­ic tech­nolo­gies and spe­cif­ic aims. 


Post­script 1

We can do the same exer­cise with the ‘city’ part of the meme. 

The same process that is mak­ing cities smart (soft­ware eat­ing the world) is also mak­ing every­thing else smart. Smart towns. Smart coun­try­sides. The ends are dif­fer­ent. The net­works are dif­fer­ent. The process­es play out in dif­fer­ent ways.

It’s okay to think about cities but don’t think they have a monop­oly on ‘dis­rup­tion’.

Post­script 2

Some of this was inspired by clever things I heard Sebastian Quack say at Playful Design for Smart Cities and Usman Haque at ThingsCon Amsterdam.

Playful Design for Smart Cities

Ear­li­er this week I escaped the mis­er­able weath­er and food of the Nether­lands to spend a cou­ple of days in Barcelona, where I attend­ed the ‘Play­ful Design for Smart Cities’ work­shop at RMIT Europe.

I helped Jus­si Holopainen run a work­shop in which par­tic­i­pants from indus­try, gov­ern­ment and acad­e­mia togeth­er defined projects aimed at fur­ther explor­ing this idea of play­ful design with­in the con­text of smart cities, with­out falling into the trap of solu­tion­ism.

Before the work­shop I pre­sent­ed a sum­ma­ry of my chap­ter in The Game­ful World, along with some of my cur­rent think­ing on it. There were also great talks by Judith Ack­er­mann, Flo­ri­an ‘Floyd’ Müller, and Gilly Kar­jevsky and Sebas­t­ian Quack.

Below are the slides for my talk and links to all the arti­cles, books and exam­ples I explic­it­ly and implic­it­ly ref­er­enced throughout.

Adapting intelligent tools for creativity

I read Alper’s book on con­ver­sa­tion­al user inter­faces over the week­end and was struck by this paragraph:

“The holy grail of a conversational system would be one that’s aware of itself — one that knows its own model and internal structure and allows you to change all of that by talking to it. Imagine being able to tell Siri to tone it down a bit with the jokes and that it would then actually do that.”

His point stuck with me because I think this is of par­tic­u­lar impor­tance to cre­ative tools. These need to be flex­i­ble so that a vari­ety of peo­ple can use them in dif­fer­ent cir­cum­stances. This adapt­abil­i­ty is what lends a tool depth.

The depth I am think­ing of in cre­ative tools is sim­i­lar to the one in games, which appears to be derived from a kind of semi-ordered­ness. In short, you’re look­ing for a sweet spot between too sim­ple and too complex.

And of course, you need good defaults.

Back to adap­ta­tion. This can hap­pen in at least two ways on the inter­face lev­el: modal or mod­e­less. A sim­ple exam­ple of the for­mer would be to go into a pref­er­ences win­dow to change the behav­iour of your draw­ing pack­age. Sim­i­lar­ly, mod­e­less adap­ta­tion hap­pens when you rearrange some pan­els to bet­ter suit the task at hand.

Returning to Siri, the equivalent of modeless adaptation would be to tell her to tone it down when her sense of humor irks you.

For the modal solu­tion, imag­ine a humor slid­er in a set­tings screen some­where. This would be a ter­ri­ble solu­tion because it offers a poor map­ping of a con­trol to a per­son­al­i­ty trait. Can you pin­point on a scale of 1 to 10 your pre­ferred amount of humor in your hypo­thet­i­cal per­son­al assis­tant? And any­way, doesn’t it depend on a lot of sit­u­a­tion­al things such as your mood, the par­tic­u­lar task you’re try­ing to com­plete and so on? In short, this requires some­thing more sit­u­at­ed and adaptive. 

So just being able to tell Siri to tone it down would be the equivalent of rearranging your Photoshop palettes. And in a next interaction Siri might carefully try some humor again to gauge your response. And if you encourage her, she might be more humorous again.

Enough about fun­ny Siri for now because it’s a bit of a sil­ly example.

Funny Siri does illustrate another problem I am trying to wrap my head around, though. How does an intelligent tool for creativity communicate its internal state? Because it is probabilistic, it can’t be easily mapped to a graphic information display. And so our old way of manipulating state, and more specifically adapting a tool to our needs, becomes very different too.

It seems to be best for an intel­li­gent sys­tem to be open to sug­ges­tions from users about how to behave. Adapt­ing an intel­li­gent cre­ative tool is less like rear­rang­ing your work­space and more like coor­di­nat­ing with a coworker. 

My ide­al is for this to be done in the same mode (and so using the same con­trols) as when doing the work itself. I expect this to allow for more flu­id inter­ac­tions, going back and forth between doing the work at hand, and meta-com­mu­ni­ca­tion about how the sys­tem sup­ports the work. I think if we look at how peo­ple col­lab­o­rate this hap­pens a lot, com­mu­ni­ca­tion and meta-com­mu­ni­ca­tion going on con­tin­u­ous­ly in the same channels.

We don’t need a self-aware arti­fi­cial intel­li­gence to do this. We need to apply what com­put­er sci­en­tists call super­vised learn­ing. The basic idea is to pro­vide a sys­tem with exam­ple inputs and desired out­puts, and let it infer the nec­es­sary rules from them. If the results are unsat­is­fac­to­ry, you sim­ply con­tin­ue train­ing it until it per­forms well enough. 
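To make the supervised learning idea concrete (example inputs, desired outputs, inferred behaviour) here is a deliberately tiny sketch. Everything in it, from the data to the nearest-neighbour approach, is invented for illustration; real systems use far more sophisticated models.

```python
# A minimal sketch of supervised learning: a 1-nearest-neighbour
# "learner" that infers behaviour from example input/output pairs.
# All data here is made up for illustration.

def train(examples):
    """Training is trivial for nearest-neighbour: just keep the examples."""
    return list(examples)

def predict(model, x):
    """Return the output paired with the closest known input."""
    def distance(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    nearest_input, nearest_output = min(model, key=lambda pair: distance(pair[0], x))
    return nearest_output

# Example inputs (say, two slider positions) and desired outputs.
model = train([
    ((0.1, 0.2), "calm"),
    ((0.9, 0.8), "energetic"),
])

print(predict(model, (0.2, 0.1)))  # closest to the first example, prints: calm
```

If the results are unsatisfactory, you add more examples and "train" again: that is the whole loop the paragraph above describes.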

A super fun exam­ple of this approach is the Wek­ina­tor, a piece of machine learn­ing soft­ware for cre­at­ing musi­cal instru­ments. Below is a video in which Wekinator’s cre­ator Rebec­ca Fiebrink per­forms sev­er­al demos.

Here we have an intelligent system learning from examples. A person manipulating data instead of code to get to a particular desired behaviour. But what Wekinator lacks, and what I expect will be required for this type of thing to really catch on, is for the training to happen in the same mode or medium as the performance. The technology seems to be getting there, but there are many interaction design problems remaining to be solved.

Generating UI design variations

AI design tool for UI design alternatives

I am still think­ing about AI and design. How is the design process of AI prod­ucts dif­fer­ent? How is the user expe­ri­ence of AI prod­ucts dif­fer­ent? Can design tools be improved with AI?

When it comes to improv­ing design tools with AI my start­ing point is game design and devel­op­ment. What fol­lows is a quick sketch of one idea, just to get it out of my system.

‘Mixed-initiative’ tools for procedural generation (such as Tanagra) allow designers to create high-level structures which a machine uses to produce full-fledged game content (such as levels). It happens in real time. There is a continuous back-and-forth between designer and machine.

Software user interfaces, on mobile in particular, are increasingly assembled from ready-made components according to more or less well-described rules taken from design languages such as Material Design. These design languages are currently described primarily for human consumption. But it should be a small step to make a design language machine-readable.

So I see an oppor­tu­ni­ty here where a design­er might assem­ble a UI like they do now, and a machine can do sev­er­al things. For exam­ple it can test for adher­ence to design lan­guage rules, sug­gest cor­rec­tions or even auto-cor­rect as the design­er works.
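To make the idea of a machine-readable design language concrete, here is a minimal sketch. The rule values and component format are invented for illustration; a real design language such as Material Design is far richer than a couple of dictionary entries.

```python
# A hypothetical, minimal machine-readable design rule set and checker.
# The rules and values are illustrative, not an actual design language spec.

RULES = {
    "min_touch_target": 48,  # e.g. a minimum tappable size in dp
    "allowed_button_widths": {"full", "wrap"},
}

def check_component(component):
    """Return a list of rule violations for one UI component."""
    problems = []
    if component.get("height", 0) < RULES["min_touch_target"]:
        problems.append("touch target smaller than %d dp" % RULES["min_touch_target"])
    if component.get("type") == "button" and component.get("width") not in RULES["allowed_button_widths"]:
        problems.append("button width %r not in design language" % component.get("width"))
    return problems

def autocorrect(component):
    """Suggest a corrected copy of the component, as the designer works."""
    fixed = dict(component)
    fixed["height"] = max(fixed.get("height", 0), RULES["min_touch_target"])
    return fixed

button = {"type": "button", "width": "full", "height": 32}
print(check_component(button))        # flags the too-small touch target
print(autocorrect(button)["height"])  # prints: 48
```

Even a checker this crude could run continuously while the designer assembles a UI, which is the "test, suggest, auto-correct" loop described above.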

More interestingly, a machine might take one UI mockup and provide the designer with several more possible variations. To do this it could use different layouts, or alternative components that serve the same or a similar purpose to the ones used.
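A toy sketch of this variation idea: swap in components that serve a similar purpose and try alternative layouts. The component names and equivalences below are invented for illustration.

```python
# Sketch: generate design alternatives by enumerating every combination
# of layout and "equivalent" component. All names are invented.
from itertools import product

EQUIVALENTS = {
    "dropdown": ["dropdown", "radio-group", "segmented-control"],
    "text-link": ["text-link", "flat-button"],
}
LAYOUTS = ["single-column", "two-column"]

def variations(mockup):
    """Yield every combination of layout and equivalent components."""
    alternatives = [EQUIVALENTS.get(c, [c]) for c in mockup["components"]]
    for layout, combo in product(LAYOUTS, product(*alternatives)):
        yield {"layout": layout, "components": list(combo)}

mockup = {"layout": "single-column", "components": ["dropdown", "text-link"]}
print(len(list(variations(mockup))))  # 2 layouts x 3 x 2 components = 12
```

The combinatorics are the point: even two swappable components and two layouts yield a dozen alternatives, which is exactly the kind of divergence that gets skipped under time pressure.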

In high pres­sure work envi­ron­ments where time is scarce, cor­ners are often cut in the diver­gence phase of design. Machines could aug­ment design­ers so that gen­er­at­ing many design alter­na­tives becomes less labo­ri­ous both men­tal­ly and phys­i­cal­ly. Ide­al­ly, machines would sur­prise and even inspire us. And the final say would still be ours.

Engagement design worksheets

Engagement design workshop at General Assembly Singapore

In June/July of this year I helped Michael Fil­lié teach two class­es about engage­ment design at Gen­er­al Assem­bly Sin­ga­pore. The first was the­o­ret­i­cal and the sec­ond prac­ti­cal. For the prac­ti­cal class we cre­at­ed a cou­ple of work­sheets which par­tic­i­pants used in groups to grad­u­al­ly build a design con­cept for a new prod­uct or prod­uct improve­ment aimed at long-term engage­ment. Below are the work­sheets along with some notes on how to use them. I’m hop­ing they may be use­ful in your own practice.

A prac­ti­cal note: Each of these work­sheets is designed to be print­ed on A1 paper. (Click on the images to get the PDFs.) We worked on them using post-it notes so that it is easy to add, change or remove things as you go.

Problem statement and persona

01-problem-statement-and-persona

We start­ed with under­stand­ing the prob­lem and the user. This work­sheet is an adap­ta­tion of the per­sona sheet by Strat­e­gyz­er. To use it you begin at the top, flesh­ing out the prob­lem in the form of stat­ing the engage­ment chal­lenge, and the busi­ness goals. Then, you select a user seg­ment which is rel­e­vant to the problem.

The mid­dle sec­tion of the sheet is used to describe them in the form of a per­sona. Start with putting a face on them. Give the per­sona a name and add some demo­graph­ic details rel­e­vant for the user’s behav­iour. Then, move on to explor­ing what their envi­ron­ment looks and sounds like and what they are think­ing and feel­ing. Final­ly, try to describe what issues the user is hav­ing that are addressed by the prod­uct and what the user stands to gain from using the product.

The third sec­tion of this sheet is used to wrap up the first exer­cise by doing a quick gap analy­sis of what the busi­ness would like to see in terms of user behav­iour and what the user is cur­rent­ly doing. This will help pin down the engage­ment design con­cept fleshed out in the next exercises.

Engagement loop

02-engagement-loop

Exer­cise two builds on the under­stand­ing of the prob­lem and the user and offers a struc­tured way of think­ing through a pos­si­ble solu­tion. For this we use the engage­ment loop mod­el devel­oped by Sebas­t­ian Deter­d­ing. There are dif­fer­ent places we can start here but one that often works well is to start imag­in­ing the Big Hairy Auda­cious Goal the user is look­ing to achieve. This is the chal­lenge. It is a thing (usu­al­ly a skill) the user can improve at. Note this chal­lenge down in the mid­dle. Then, work­ing around the chal­lenge, describe a mea­sur­able goal the user can achieve on their way to mas­ter­ing the chal­lenge. Describe the action the user can take with the prod­uct towards that goal, and the feed­back the prod­uct will give them to let them know their action has suc­ceed­ed and how much clos­er it has got­ten them to the goal. Final­ly and cru­cial­ly, try to describe what kind of moti­va­tion the user is dri­ven by and make sure the goals, actions and feed­back make sense in that light. If not, adjust things until it all clicks.
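For readers who think in code, the engagement loop above can be sketched as a data structure. The field names follow the worksheet; the example content is invented.

```python
# The engagement loop as a data structure: challenge, goal, action,
# feedback, motivation. Example values are invented for illustration.
from dataclasses import dataclass

@dataclass
class EngagementLoop:
    challenge: str   # the Big Hairy Audacious Goal, a skill to improve at
    goal: str        # a measurable goal on the way to mastering the challenge
    action: str      # what the user does with the product towards that goal
    feedback: str    # how the product signals the action worked and progress made
    motivation: str  # the drive the goals, actions and feedback must make sense against

    def is_complete(self):
        """A loop only clicks if every part is filled in."""
        return all([self.challenge, self.goal, self.action, self.feedback, self.motivation])

loop = EngagementLoop(
    challenge="become a confident runner",
    goal="run 5k without stopping",
    action="follow a guided interval run",
    feedback="post-run summary showing the longest unbroken stretch",
    motivation="competence",
)
print(loop.is_complete())  # prints: True
```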

Storyboard

03-storyboard

The final exer­cise is devot­ed to visu­al­is­ing and telling a sto­ry about the engage­ment loop we devel­oped in the abstract in the pre­vi­ous block. It is a typ­i­cal sto­ry­board, but we have con­strained it to a set of sto­ry beats you must hit to build a sat­is­fy­ing nar­ra­tive. We go from intro­duc­ing the user and their chal­lenge, to how the prod­uct com­mu­ni­cates the goal and action to what a user does with it and how they get feed­back on that to (fast-for­ward) how they feel when they ulti­mate­ly mas­ter the chal­lenge. It makes the design con­cept relat­able to out­siders and can serve as a jump­ing off point for fur­ther design and development.

Use, adapt and share

Togeth­er, these three exer­cis­es and work­sheets are a great way to think through an engage­ment design prob­lem. We used them for teach­ing but I can also imag­ine teams using them to explore a solu­tion to a prob­lem they might be hav­ing with an exist­ing prod­uct, or as a way to kick­start the devel­op­ment of a new product.

We’ve built on oth­er people’s work for these so it only makes sense to share them again for oth­ers to use and build on. If you do use them I would love to hear about your experiences. 

Doing UX inside of Scrum

Some notes on how I am cur­rent­ly “doing user expe­ri­ence” inside of Scrum. This approach has evolved from my projects at Hub­bub as well as more recent­ly my work with ARTO and on a project at Eden­spiek­er­mann. So I have found it works with both star­tups and agency style projects.

The start­ing point is to under­stand that Scrum is intend­ed to be a con­tain­er. It is a process frame­work. It should be able to hold any oth­er activ­i­ty you think you need as a team. So if we feel we need to add UX some­how, we should try to make it part of Scrum and not some­thing that is tacked onto Scrum. Why not tack some­thing on? Because it sig­nals design is some­how dis­tinct from devel­op­ment. And the whole point of doing agile is to have cross-func­tion­al teams. If you set up a sep­a­rate process for design you are high­ly like­ly not to ben­e­fit from the full col­lec­tive intel­li­gence of the com­bined design and devel­op­ment team. So no, design needs to be inside of the Scrum container.

Staggered sprints are not the answer either because you are still splitting the team into design and development, hampering cross-collaboration and transparency. You’re basically inviting Taylorism back into your process—the very thing you were trying to get away from.

When you are uncom­fort­able with putting design­ers and devel­op­ers all in the same team and the same process the answer is not to make your process more elab­o­rate, par­cel things up, and decrease “messy” inter­ac­tions. The answer is increas­ing con­ver­sa­tion, not elim­i­nat­ing it.

It turns out things aren’t remote­ly as com­pli­cat­ed as they appear to be. The key is under­stand­ing Scrum’s events. The big event hold­ing all oth­er events is the sprint. The sprint out­puts a releasable incre­ment of “done” prod­uct. The devel­op­ment team does every­thing required to achieve the sprint goal col­lab­o­ra­tive­ly deter­mined dur­ing sprint plan­ning. Nat­u­ral­ly this includes any design need­ed for the prod­uct. I think of this as the ‘pro­duc­tion’ type of design. It typ­i­cal­ly con­sists most­ly of UI design. There may already be some pre­lim­i­nary UI design avail­able at the start of the sprint but it does not have to be finished. 

What about the kind of design that is required for figuring out what to build in the first place? It might not be obvious at first, but Scrum actually has an ongoing process which readily accommodates it: backlog refinement. These are all the activities required to get a product backlog item in shape for sprint planning. This is emphatically not a solo show for the product owner to conduct. It is something the whole team collaborates on. Developers and designers. In my experience designers are great at facilitating backlog refinement sessions. At the whiteboard, figuring stuff out with the whole team ‘Lean UX’ style.

I will admit prod­uct back­log refine­ment is Scrum’s weak point. Where it offers a lot of struc­ture for the sprints, it offers hard­ly any for the back­log refine­ment (or groom­ing as some call it). But that’s okay, we can evolve our own.

I like to use Kan­ban to man­age the process of back­log refine­ment. Items come into the pipeline as some­thing we want to elab­o­rate because we have decid­ed we want to build it (in some form or oth­er, can be just an exper­i­ment) in the next sprint or two. It then goes through var­i­ous stages of elab­o­ra­tion. At the very least cap­tur­ing require­ments in the form of user sto­ries or job sto­ries, doing sketch­es, a lo-fi pro­to­type, mock­ups and a hi-fi pro­to­type and final­ly break­ing the item down into work to be done and attach­ing an esti­mate to it. At this point it is ready to be part of a sprint. Cru­cial­ly, dur­ing this life­cy­cle of an item as it is being refined, we can and should do user research if we feel we need more data, or user test­ing if we feel it is too risky to com­mit to a fea­ture outright. 
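The refinement flow just described can be sketched as a simple Kanban pipeline. The stage names follow the text; the backlog item is invented.

```python
# A sketch of backlog refinement as a Kanban pipeline. Stage names
# follow the flow described above; item contents are invented.

STAGES = [
    "captured",                 # requirements as user stories / job stories
    "sketched",
    "lo-fi prototype",
    "mockups",
    "hi-fi prototype",
    "broken down & estimated",  # ready for sprint planning
]

def advance(item):
    """Move a backlog item to the next refinement stage."""
    i = STAGES.index(item["stage"])
    if i + 1 < len(STAGES):
        item["stage"] = STAGES[i + 1]
    return item

item = {"title": "save search filters", "stage": "captured"}
while item["stage"] != STAGES[-1]:
    advance(item)
print(item["stage"])  # prints: broken down & estimated
```

User research and user testing do not get a fixed stage on purpose: as the text says, they happen whenever an item needs more data or feels too risky.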

For this kind of fig­ur­ing stuff out, this ‘plan­ning’ type of design, it makes no sense to have it be part of a sprint-like struc­ture because the work required to get it to a ‘ready’ state is much more unpre­dictable. The point of hav­ing a loos­er groom­ing flow is that it exists to elim­i­nate uncer­tain­ty for when we com­mit to an item in a sprint.

So between the sprint and back­log refine­ment, Scrum read­i­ly accom­mo­dates design. ‘Pro­duc­tion’ type design hap­pens inside of the sprint and design­ers are con­sid­ered part of the devel­op­ment team. ‘Plan­ning’ type of design hap­pens as part of back­log refinement.

So no need to tack on a sep­a­rate process. It keeps the process sim­ple and under­stand­able, thus increas­ing trans­paren­cy for the whole team. It pre­vents design from becom­ing a black box to oth­ers. And when we make design part of the con­tain­er process frame­work that is Scrum, we reap the rewards of the team’s col­lec­tive intel­li­gence and we increase our agility.

Prototyping is a team sport

Lately I have been binging on books, presentations and articles related to ‘Lean UX’. I don’t like the term, but then I don’t like the tech industry’s love for inventing a new label for every damn thing. I do like the things it emphasises: shared understanding, deep collaboration, continuous user feedback. These are principles that have always implicitly guided the choices I made when leading teams at Hubbub and now also as a member of several teams in the role of product designer.

In all these lean UX read­ings a thing that keeps com­ing up again and again is pro­to­typ­ing. Pro­to­types are the go-to way of doing ‘exper­i­ments’, in lean-speak. Oth­er things can be done as well—surveys, inter­views, whatever—but more often than not, assump­tions are test­ed with prototypes. 

Which is great! And also unsur­pris­ing as pro­to­typ­ing has real­ly been embraced by the tech world. And tools for rapid pro­to­typ­ing are get­ting a lot of atten­tion and inter­est as a result. How­ev­er, this comes with a cou­ple of risks. For one, some­times it is fine to stick to paper. But the lure of shiny pro­to­typ­ing tools is strong. You’d rather not show a crap­py draw­ing to a user. What if they hate it? How­ev­er, high fideli­ty pro­to­typ­ing is always more cost­ly than paper. So although well-inten­tioned, pro­to­typ­ing tools can encour­age waste­ful­ness, the bane of lean. 

There is a big­ger dan­ger which runs against the lean ethos, though. Some tools afford deep col­lab­o­ra­tion more than oth­ers. Let’s be real: none afford deep­er col­lab­o­ra­tion than paper and white­boards. There is one per­son behind the con­trols when pro­to­typ­ing with a tool. So in my view, one should only ever progress to that step once a team effort has been made to hash out the rough out­lines of what is to be pro­to­typed. Basi­cal­ly: always paper pro­to­type the dig­i­tal pro­to­type. Together. 

I have had a lot of fun late­ly play­ing with brows­er pro­to­types and with pro­to­typ­ing in Framer. But as I was get­ting back into all of this I did notice this risk: All of a sud­den there is a per­son on the team who does the pro­to­types. Unless this solo pro­to­typ­ing is pre­ced­ed by shared pro­to­typ­ing, this is a prob­lem. Because the rest of the team is left out of the think­ing-through-mak­ing which makes the pro­to­typ­ing process so valu­able in addi­tion to the testable arte­facts it outputs.

It is I think a key over­sight of the ‘should design­ers code’ debaters and to an extent one made by all pro­to­typ­ing tool man­u­fac­tur­ers: Indi­vid­u­als don’t pro­to­type, teams do. Pro­to­typ­ing is a team sport. And so the suc­cess of a tool depends not only on how well it sup­ports indi­vid­ual pro­to­typ­ing activ­i­ties but also how well it embeds itself in col­lab­o­ra­tive workflows. 

In addi­tion to the tools them­selves get­ting bet­ter at sup­port­ing col­lab­o­ra­tive work­flows, I would also love to see more tuto­ri­als, both offi­cial and from the com­mu­ni­ty, about how to use a pro­to­typ­ing tool with­in the larg­er con­text of a team doing some form of agile. Most tuto­ri­als now focus on “how do I make this thing with this tool”. Use­ful, up to a point. But a large part of pro­to­typ­ing is to arrive at “the thing” together. 

One of the lean UX things I devoured was this pre­sen­ta­tion by Bill Scott in which he talks about align­ing a pro­to­typ­ing and a devel­op­ment tech stack, so that the gap between design and engi­neer­ing is bridged not just with process­es but also with tool­ing. His exam­ple applies to web devel­op­ment and app devel­op­ment using web tech­nolo­gies. I won­der what a sim­i­lar approach looks like for native mobile app devel­op­ment. But this is the sort of thing I am talk­ing about: Smart think­ing about how to actu­al­ly do this lean thing in the real world. I believe organ­is­ing our­selves so that we can pro­to­type as a team is absolute­ly key. I will pick my tools and process­es accord­ing­ly in future.

All of the above is as usu­al most­ly a reminder to self: As a design­er your role is not to go off and work solo on bril­liant pro­to­types. Your role is to facil­i­tate such efforts by the whole team. Sure, there will be solo deep design­er­ly craft­ing hap­pen­ing. But it will not add up to any­thing if it is not embed­ded in a col­lab­o­ra­tive design and devel­op­ment framework.

Prototyping in the browser

When you are design­ing a web site or web app I think you should pro­to­type in the brows­er. Why? You might as well ask why pro­to­type at all. Answer: To enable con­tin­u­ous test­ing and refine­ment of your design. Since you are design­ing for the web it makes sense to do this test­ing and refine­ment with an arte­fact com­posed of the web’s material.

There are many ways to do pro­to­typ­ing. A com­mon way is to make wire­frames and then make them ‘click­able’. But when I am design­ing a web site or a web app and I get to the point where it is time to do wire­frames I often pre­fer to go straight to the browser. 

Before this step I have sketched out all the screens on paper of course. I have done mul­ti­ple sketch­es of each page. I’ve had them cri­tiqued by team mem­bers and I have reworked them. 

Drawing pictures of web pages

But then I open my drawing program—Sketch, in my case—and my heart sinks. Not because Sketch sucks. Sketch is great. But it somehow feels wrong to draw pictures of web pages on my screen. I find it cumbersome. My drawing program does not behave like a browser. That is to say, instead of defining a bunch of rules for elements and having the browser figure out how to render them on a page together, I need to follow those rules myself in my head as I put each element in its place.

And don’t get me start­ed on how wire­frames are sup­posed to be with­out visu­al design. That is non­sense. If you are using con­trast, rep­e­ti­tion, align­ment and prox­im­i­ty, you are doing lay­out. That is visu­al design. I can’t stand wire­frames with a bad visu­al hierarchy.

If I persevere, and I have a set of wireframes in my drawing program, they are static. I can’t use them. I then need to export them to some other, often clunky, program to make the pictures clickable. Which always results in a poor approximation of the actual experience. (I use Marvel. It’s okay but it is hardly a joy to use. For mobile apps I still use it; for web sites I prefer not to.)

Prototyping in the browser

When I pro­to­type in the brows­er I don’t have to deal with these issues. I am doing lay­out in a way that is native to the medi­um. And once I have some pages set up they are imme­di­ate­ly usable. So I can hand it to some­one, a team mem­ber or a test par­tic­i­pant, and let them play with it.

That is why, for web sites and web apps, I skip wire­frames alto­geth­er and pro­to­type in the brows­er. I do not know how com­mon this is in the indus­try nowa­days. So I thought I would share my approach here. It may be of use to some. 

It used to be the case that it was quite a bit of has­sle to get up and run­ning with a brows­er pro­to­type so nat­u­ral­ly open­ing a draw­ing pack­age seemed more attrac­tive. Not so any­more. Tools have come a long way. Case in point: My set­up nowa­days involves zero screw­ing around on the com­mand line.

CodeKit

The core of it is a paid-for Mac app called CodeK­it, a so-called task man­ag­er. It allows you to install a front-end devel­op­ment frame­work I like called Zurb Foun­da­tion with a cou­ple of clicks and has a built in web serv­er so you can play with your pro­to­type on any device on your local net­work. As you make changes to the code of your pro­to­type it gets auto­mat­i­cal­ly updat­ed on all your devices. No more man­u­al refresh­ing. Saves a huge amount of time.

I know you can do most of what CodeK­it does for you with stuff like Grunt but that involves tedious con­fig­u­ra­tion and work­ing the com­mand line. This is fine when you’re a devel­op­er, but not fine when you are a design­er. I want to be up and run­ning as fast as pos­si­ble. CodeK­it allows me to do that and has some oth­er fea­tures built in that are ide­al for pro­to­typ­ing which I will talk about more below. Long sto­ry short: CodeK­it has saved me a huge amount of time and is well worth the money.

Okay, so on with the show. Yes, this whole prototyping in the browser thing involves ‘coding’. But honestly, if you can’t write some HTML and CSS you really shouldn’t be doing design for the web in the first place. I don’t care if you consider yourself a UX designer and somehow above all this lowly technical stuff. You are not. Nobody is saying you should become a frontend developer, but you need to have an acquaintance with the materials your product is made of. Follow a few courses on Codecademy or something. There really isn’t an excuse anymore these days for not knowing this stuff. If you want to level up, learn SASS.

Zurb Foundation

I like Zurb Foun­da­tion because it offers a coher­ent and com­pre­hen­sive library of ele­ments which cov­ers almost all the com­mon pat­terns found in web sites and apps. It offers a grid and some default typog­ra­phy styles as well. All of it doesn’t look flashy at all which is how I like it when I am pro­to­typ­ing. A pro­to­type at this stage does not require per­son­al­i­ty yet. Just a clear visu­al hier­ar­chy. Work­ing with Foun­da­tion is almost like play­ing with LEGO. You just click togeth­er the stuff you need. It’s pain­less and looks and works great.

I hardly do any styling but the few changes I do want to make I can easily add to Foundation’s app.scss using SASS. I usually have a few styles in there for tweaking some margins on particular elements, for example a footer. But I try to focus on the structure and behaviour of my pages and for that I am mostly doing HTML.

GitHub

Test­ing local­ly I already men­tioned. For that, CodeK­it has you cov­ered. Of course, you want to be able to share your pro­to­type with oth­ers. For this I like to use GitHub and their Pages fea­ture. Once again, using their desk­top client, this involves zero com­mand line work. You just add the fold­er with your CodeK­it project as a new repos­i­to­ry and sync it with GitHub. Then you need to add a branch named ‘gh-pages’ and do ‘update from mas­ter’. Presto, your pro­to­type is now on the web for any­one with the URL to see and use. Per­fect if you’re work­ing in a dis­trib­uted team. 

Don’t be intim­i­dat­ed by using GitHub. Their on-board­ing is pret­ty impres­sive nowa­days. You’ll be up and run­ning in no time. Using ver­sion con­trol, even if it is just you work­ing on the pro­to­type, adds some much need­ed struc­ture and con­trol over changes. And when you are col­lab­o­rat­ing on your pro­to­type with team mem­bers it is indispensable. 

But in most cases I am the only one building the prototype so I just work on the master branch, and every once in a while I update the gh-pages branch from master and sync it and I am done. If you use Slack you can add a GitHub bot to a channel and have your team members receive an automatic update every time you change the prototype.

The Kit Language

If your project is of any size beyond the very small you will like­ly have repeat­ing ele­ments in your design. Head­ers, foot­ers, recur­ring wid­gets and so on. CodeK­it has recent­ly added sup­port for some­thing called the Kit Lan­guage. This adds sup­port for imports and vari­ables to reg­u­lar HTML. It is absolute­ly great for pro­to­typ­ing. For each repeat­ing ele­ment you cre­ate a ‘par­tial’ and import it wher­ev­er you need it. Vari­ables are great for chang­ing the con­tents of such repeat­ing ele­ments. CodeK­it com­piles it all into plain sta­t­ic HTML for you so your pro­to­type runs anywhere.
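To give a feel for what partials and variables buy you, here is a toy sketch of the mechanism in Python. The comment-based placeholder syntax below is only an approximation of the real Kit Language (check CodeKit’s documentation for the actual syntax); the compile-to-plain-static-HTML idea is the same.

```python
# Toy illustration of Kit-style compilation: partials and variables
# written inside HTML comments get expanded into plain static HTML.
# This mimics the general mechanism, not CodeKit's actual compiler.
import re

def compile_kit(source, partials, variables):
    """Expand <!-- @import name --> and <!-- $var --> style placeholders."""
    def expand(match):
        kind, name = match.group(1), match.group(2)
        if kind == "@import":
            # Recursively compile the partial so it can use variables too.
            return compile_kit(partials[name], partials, variables)
        return variables[name]
    return re.sub(r"<!--\s*(@import|\$)\s*([\w.-]+)\s*-->", expand, source)

# A repeating element (a header) becomes a partial, with a variable
# so each page can change its contents.
partials = {"header.kit": "<header><h1><!-- $title --></h1></header>"}
page = "<!-- @import header.kit --><main>Body</main>"
print(compile_kit(page, partials, {"title": "Prototype"}))
# prints: <header><h1>Prototype</h1></header><main>Body</main>
```

The output is ordinary static HTML, which is why the compiled prototype runs anywhere, GitHub Pages included.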

The Kit Lan­guage real­ly was the miss­ing piece of the puz­zle for me. With it in place I am very com­fort­able rec­om­mend­ing this way of work­ing to anyone.

So that’s my set­up: CodeK­it, Zurb Foun­da­tion and GitHub. Togeth­er they make for a very pleas­ant and pro­duc­tive way to do pro­to­typ­ing in the brows­er. I don’t imag­ine myself going back to draw­ing pic­tures of web pages any­time soon.