Machine Learning for Designers’ workshop

On Wednesday Péter Kun, Holly Robbins and I taught a one-day workshop on machine learning at Delft University of Technology. We had about thirty master's students from the industrial design engineering faculty. The aim was to get them acquainted with the technology through hands-on tinkering, with the Wekinator as the central teaching tool.

Photo credits: Holly Robbins

Background

The reasoning behind this workshop is twofold.

On the one hand, I expect designers will find themselves working on projects involving machine learning more and more often. The technology has certain properties that differ from traditional software. Most importantly, machine learning is probabilistic instead of deterministic. It is important that designers understand this because otherwise they are likely to make bad decisions about its application.
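
To make that difference concrete, here is a minimal sketch in Python, assuming scikit-learn is available; the sensor readings and the lamp scenario are made up for illustration. A hand-written rule always gives the same answer for the same input, while a trained model gives a degree of belief.

```python
# Deterministic: the same input always yields the same output.
def threshold_rule(brightness):
    return "off" if brightness > 50 else "on"

# Probabilistic: a trained model yields a degree of belief instead.
from sklearn.linear_model import LogisticRegression

X = [[10], [20], [40], [60], [80], [90]]  # example inputs: light sensor readings
y = [1, 1, 1, 0, 0, 0]                    # desired outputs: lamp on (1) or off (0)
model = LogisticRegression().fit(X, y)

print(threshold_rule(50))                 # always "on"
print(model.predict_proba([[50]]))        # e.g. [[0.46, 0.54]] -- probabilities, not certainty
```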

The second reason is that I have a strong sense machine learning can play a role in the augmentation of the design process itself. So-called intelligent design tools could make designers more efficient and effective. They could also enable the creation of designs that would otherwise be impossible or very hard to achieve.

The workshop explored both ideas.

Photo credits: Holly Robbins

Format

The structure was roughly as follows:

In the morning we started out providing a very broad introduction to the technology. We talked about the very basic premise of (supervised) learning: providing examples of inputs and desired outputs and training a model based on those examples. To make these concepts tangible we then introduced the Wekinator and walked the students through getting it up and running using basic examples from the website. The final step was to invite them to explore alternative inputs and outputs (such as game controllers and Arduino boards).
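
For readers who want the gist of what the Wekinator wraps in a GUI, here is a rough sketch of that premise in Python, assuming scikit-learn; the recorded input/output pairs are made up for illustration.

```python
# Roughly the supervised learning premise: record example input/output pairs,
# train a model on them, then map new inputs to outputs live.
from sklearn.neural_network import MLPRegressor

# Made-up recorded examples: [x, y] controller position -> [pitch in Hz, volume]
inputs  = [[0.0, 0.0], [0.3, 0.9], [0.5, 0.2], [1.0, 0.8]]
outputs = [[220.0, 0.1], [440.0, 0.7], [330.0, 0.4], [660.0, 0.9]]

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(inputs, outputs)  # "train"

# "Run": unseen inputs are mapped to outputs continuously.
print(model.predict([[0.7, 0.5]]))  # an interpolated [pitch, volume]
```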

In the afternoon we provided a design brief, asking the students to prototype a data-enabled object with the set of tools they had acquired in the morning. We assisted with technical hurdles where necessary (of which there were more than a few) and closed out the day with demos and a group discussion reflecting on their experiences with the technology.

Photo credits: Holly Robbins

Results

As I tweeted on the way home that evening, the results were… interesting.

Not all groups managed to put something together in the admittedly short amount of time they were provided with. They were most often stymied by getting an Arduino to talk to the Wekinator. Max was often picked as a go-between because the Wekinator receives OSC messages over UDP, whereas the quickest way to get an Arduino to talk to a computer is over serial. But Max in my experience is a fickle beast and would more than once crap out on us.
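
For what it's worth, a small script can stand in for Max here. Below is a minimal sketch of such a serial-to-OSC bridge using the pyserial and python-osc packages; port 6448 and the /wek/inputs address are the Wekinator defaults, while the device path, baud rate and message format are assumptions you would adapt to your own setup.

```python
# A stand-in for Max: read comma-separated sensor values from the Arduino's
# serial port and forward them to the Wekinator as OSC over UDP.
import serial
from pythonosc.udp_client import SimpleUDPClient

ser = serial.Serial("/dev/ttyACM0", 9600)       # adjust device path and baud rate
wekinator = SimpleUDPClient("127.0.0.1", 6448)  # Wekinator's default input port

while True:
    line = ser.readline().decode(errors="ignore").strip()
    if not line:
        continue
    try:
        values = [float(v) for v in line.split(",")]  # e.g. "512,301" -> [512.0, 301.0]
    except ValueError:
        continue  # skip malformed lines
    wekinator.send_message("/wek/inputs", values)     # Wekinator's default input address
```

Start the Wekinator, plug in the Arduino, run the script, and inputs should arrive without Max in the loop.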

The groups that did build something mainly assembled prototypes from the examples on hand. Which is fine, but since we were mainly working with the examples from the Wekinator website they tended towards the interactive instrument side of things. We were hoping for explorations of IoT product concepts. For that, more hand-rolling was required, and this was only achievable for the students on the higher end of the technical expertise spectrum (and the more tenacious ones).

The discussion yielded some interesting insights into mental models of the technology and how they are affected by hands-on experience. A comment I heard more than once was: why is this considered learning at all? The Wekinator was not perceived to be learning anything. When challenged on this by reiterating the underlying principles, it became clear the black-box nature of the Wekinator hampers appreciation of some of the very real achievements of the technology. It seems (for our students at least) machine learning is stuck in a grey area between too-high expectations and too-low recognition of its capabilities.

Next steps

These results, and others, point towards some obvious improvements which can be made to the workshop format, and to teaching design students about machine learning more broadly.

  1. We can improve the toolset so that some of the heavy lifting involved with getting the various parts to talk to each other is made easier and more reliable.
  2. We can build examples that are geared towards the practice of designing IoT products and are ready for adaptation and hacking.
  3. And finally, and probably most challengingly, we can make the workings of machine learning more transparent so that it becomes easier to develop a feel for its capabilities and shortcomings.

We do intend to improve and teach the workshop again. If you're interested in hosting one (either in an educational or professional context) let me know. And stay tuned for updates on this and other efforts to get designers to work in a hands-on manner with machine learning.

Special thanks to the brilliant Ianus Keller for connecting me to Péter and for allowing us to pilot this crazy idea at IDE Academy.

References

Sources used during preparation and running of the workshop:

  • The Wekinator – the UI is infuriatingly poor, but when it comes to getting started with machine learning this tool is unmatched.
  • Arduino – I have become particularly fond of the MKR1000 board. Add a lithium-polymer battery and you have everything you need to prototype IoT products.
  • OSC for Arduino – CNMAT's implementation of the open sound control (OSC) encoding. A key puzzle piece for getting the above two tools talking to each other.
  • Machine Learning for Designers – my preferred introduction to the technology from a designerly perspective.
  • A Visual Introduction to Machine Learning – a very accessible visual explanation of the basic underpinnings of computers applying statistical learning.
  • Remote Control Theremin – an example project I prepared for the workshop demoing how to have the Wekinator talk to an Arduino MKR1000 with OSC over UDP.

Design × AI coffee meetup

If you work in the field of design or artificial intelligence and are interested in exploring the opportunities at their intersection, consider yourself invited to an informal coffee meetup on February 15, 10am at Brix in Amsterdam.

Erik van der Pluijm and I have for a while now been carrying on a conversation about AI and design, and we felt it was time to expand the circle a bit. We are very curious who else out there shares our excitement.

Questions we are mulling over include: How does the design process change when creating intelligent products? And: How can teams collaborate with intelligent design tools to solve problems in new and interesting ways?

Anyway, lots to chew on.

No need to sign up or anything, just show up and we'll see what happens.

Move 37

Designers make choices. They should be able to provide rationales for those choices. (Although sometimes they can't.) Being able to explain the thinking that went into a design move to yourself, your teammates and clients is part of being a professional.

Move 37. This was the move AlphaGo made which took everyone by surprise because it appeared so wrong at first.

The interesting thing is that in hindsight it appeared AlphaGo had good reasons for this move. Based on a calculation of odds, basically.

If asked at the time, would AlphaGo have been able to provide this rationale?

It's a thing that pops up in a lot of the reading I am doing around AI. This idea of transparency. In some fields you don't just want an AI to provide you with a decision, but also with the arguments supporting that decision. An obvious example would be a system that helps diagnose disease. You want it to provide more than just the diagnosis. Because if it turns out to be wrong, you want to be able to say why at the time you thought it was right. This is a social, cultural and also legal requirement.

It's interesting.

Although lives don't depend on it, the same might apply to intelligent design tools. If I am working with a system and it is offering me design directions or solutions, I want to know why it is suggesting these things as well. Because my reason for picking one over the other depends not just on the surface-level properties of the design but also on the underlying reasons. It might be important because I need to be able to tell stakeholders about it.

An added side effect of this is that a designer working with such a system is exposed to machine reasoning about design choices. This could inform their own future thinking too.

Transparent AI might help people improve themselves. A black box can't teach you much about the craft it's performing. Looking at outcomes can be inspirational or helpful, but the processes that lead up to them can be equally informative. If not more so.

Imagine working with an intelligent design tool and getting the equivalent of an AlphaGo move 37 moment. Hugely inspirational. Game changer.

This idea gets me much more excited than automating design tasks does.

Waiting for the smart city

Nowadays when we talk about the smart city we don't necessarily talk about smartness or cities.

I feel like when the term is used it often obscures more than it reveals. 

Here are a few reasons why.

To begin with, the term suggests something that is yet to arrive. Some kind of tech-enabled utopia. But actually, current-day cities are already smart to a greater or lesser degree depending on where and how you look.

This is important because too often we postpone action as we wait for the smart city to arrive. We don't have to wait. We can act to improve things right now.

Furthermore, ‘smart city’ suggests something monolithic that can be designed as a whole. But a smart city, like any city, is a huge mess of interconnected things. It resists top-down design.

History is littered with failed attempts at authoritarian high-modernist city design. Just stop it.

Smartness should not be an end but a means.

I read ‘smart’ as a shorthand for ‘technologically augmented’. A smart city is a city eaten by software. All cities are being eaten (or have been eaten) by software to a greater or lesser extent. Uber and Airbnb are obvious examples. Smaller, more subtle ones abound.

The question is, smart to what end? Efficiency? Legibility? Controllability? Anti-fragility? Playability? Liveability? Sustainability? The answer depends on your outlook.

These are ways in which the smart city label obscures. It obscures agency. It obscures networks. It obscures intent.

I'm not saying don't ever use it. But in many cases you can get by without it. You can talk about specific parts that make up the whole of a city, specific technologies and specific aims.


Postscript 1

We can do the same exercise with the ‘city’ part of the meme.

The same process that is making cities smart (software eating the world) is also making everything else smart. Smart towns. Smart countrysides. The ends are different. The networks are different. The processes play out in different ways.

It's okay to think about cities, but don't think they have a monopoly on ‘disruption’.

Postscript 2

Some of this was inspired by clever things I heard Sebastian Quack say at Playful Design for Smart Cities and Usman Haque say at ThingsCon Amsterdam.

Playful Design for Smart Cities

Earlier this week I escaped the miserable weather and food of the Netherlands to spend a couple of days in Barcelona, where I attended the ‘Playful Design for Smart Cities’ workshop at RMIT Europe.

I helped Jussi Holopainen run a workshop in which participants from industry, government and academia together defined projects aimed at further exploring this idea of playful design within the context of smart cities, without falling into the trap of solutionism.

Before the workshop I presented a summary of my chapter in The Gameful World, along with some of my current thinking on it. There were also great talks by Judith Ackermann, Florian ‘Floyd’ Müller, and Gilly Karjevsky and Sebastian Quack.

Below are the slides for my talk and links to all the articles, books and examples I explicitly and implicitly referenced throughout.

Adapting intelligent tools for creativity

I read Alper's book on conversational user interfaces over the weekend and was struck by this paragraph:

“The holy grail of a conversational system would be one that's aware of itself — one that knows its own model and internal structure and allows you to change all of that by talking to it. Imagine being able to tell Siri to tone it down a bit with the jokes and that it would then actually do that.”

His point stuck with me because I think this is of particular importance to creative tools. These need to be flexible so that a variety of people can use them in different circumstances. This adaptability is what lends a tool depth.

The depth I am thinking of in creative tools is similar to the one in games, which appears to be derived from a kind of semi-orderedness. In short, you're looking for a sweet spot between too simple and too complex.

And of course, you need good defaults.

Back to adaptation. This can happen in at least two ways on the interface level: modal or modeless. A simple example of the former would be to go into a preferences window to change the behaviour of your drawing package. Similarly, modeless adaptation happens when you rearrange some panels to better suit the task at hand.

Returning to Siri, the equivalent of modeless adaptation would be to tell her to tone it down when her sense of humor irks you.

For the modal solution, imagine a humor slider in a settings screen somewhere. This would be a terrible solution because it offers a poor mapping of a control to a personality trait. Can you pinpoint on a scale of 1 to 10 your preferred amount of humor in your hypothetical personal assistant? And anyway, doesn't it depend on a lot of situational things, such as your mood, the particular task you're trying to complete and so on? In short, this requires something more situated and adaptive.

So just being able to tell Siri to tone it down would be the equivalent of rearranging your Photoshop palettes. And in a next interaction Siri might carefully try some humor again to gauge your response. And if you encourage her, she might be more humorous again.

Funny Siri, although she's a bit of a silly example, does illustrate another problem I am trying to wrap my head around: how does an intelligent tool for creativity communicate its internal state? Because it is probabilistic, it can't be easily mapped to a graphic information display. And so our old way of manipulating state, and more specifically of adapting a tool to our needs, becomes very different too.

It seems best for an intelligent system to be open to suggestions from users about how to behave. Adapting an intelligent creative tool is less like rearranging your workspace and more like coordinating with a coworker.

My ideal is for this to be done in the same mode (and so using the same controls) as when doing the work itself. I expect this to allow for more fluid interactions, going back and forth between doing the work at hand and meta-communication about how the system supports the work. I think if we look at how people collaborate, this happens a lot: communication and meta-communication go on continuously in the same channels.

We don't need a self-aware artificial intelligence to do this. We need to apply what computer scientists call supervised learning. The basic idea is to provide a system with example inputs and desired outputs, and let it infer the necessary rules from them. If the results are unsatisfactory, you simply continue training it until it performs well enough.
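
To illustrate that train-until-good-enough loop, here is a minimal sketch in Python, assuming scikit-learn; the examples and the error threshold are made up.

```python
# Train on examples, check performance, keep training until it is good enough.
import numpy as np
from sklearn.linear_model import SGDRegressor

X = np.array([[0.0], [0.25], [0.5], [0.75], [1.0]])  # example inputs
y = np.array([0.1, 0.3, 0.55, 0.8, 1.0])             # desired outputs

model = SGDRegressor(random_state=0)
for step in range(1000):
    model.partial_fit(X, y)                          # one more round of training
    error = np.mean((model.predict(X) - y) ** 2)
    if error < 0.001:                                # "performs well enough"
        break

# Unsatisfied with the behaviour? Add examples and simply continue training.
model.partial_fit(np.array([[0.9]]), np.array([0.95]))
```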

A super fun example of this approach is the Wekinator, a piece of machine learning software for creating musical instruments. Below is a video in which Wekinator's creator Rebecca Fiebrink performs several demos.

Here we have an intelligent system learning from examples: a person manipulating data instead of code to get to a particular desired behaviour. But what the Wekinator lacks, and what I expect will be required for this type of thing to really catch on, is for the training to happen in the same mode or medium as the performance. The technology seems to be getting there, but there are many interaction design problems remaining to be solved.

Generating UI design variations

AI design tool for UI design alternatives

I am still thinking about AI and design. How is the design process of AI products different? How is the user experience of AI products different? Can design tools be improved with AI?

When it comes to improving design tools with AI, my starting point is game design and development. What follows is a quick sketch of one idea, just to get it out of my system.

‘Mixed-initiative’ tools for procedural generation (such as Tanagra) allow designers to create high-level structures which a machine uses to produce full-fledged game content (such as levels). It happens in real time. There is a continuous back-and-forth between designer and machine.

Software user interfaces, on mobile in particular, are increasingly frequently assembled from ready-made components according to more or less well-described rules taken from design languages such as Material Design. These design languages are currently primarily described for human consumption. But it should be a small step to make a design language machine-readable.

So I see an opportunity here where a designer might assemble a UI like they do now, and a machine can do several things. For example, it can test for adherence to design language rules, suggest corrections or even auto-correct as the designer works.

More interestingly, a machine might take one UI mockup and provide the designer with several more possible variations. To do this it could use different layouts, or alternative components that serve the same or a similar purpose to the ones used.
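
As a thought experiment, here is a naive sketch in Python of both ideas: a machine-readable design language expressed as data, a rule checker, and a generator that proposes variations by swapping in equivalent components. All the component names and rules are hypothetical.

```python
from itertools import product

# Components that serve the same or a similar purpose, per our imagined language
EQUIVALENTS = {
    "bottom_nav": ["bottom_nav", "tab_bar", "nav_drawer"],
    "fab":        ["fab", "toolbar_button"],
}

RULES = [
    # (description, predicate over a mockup)
    ("at most one primary action", lambda ui: ui.count("fab") <= 1),
    ("navigation present",         lambda ui: any(c in ui for c in EQUIVALENTS["bottom_nav"])),
]

def check(ui):
    """Return the rules a mockup violates."""
    return [desc for desc, ok in RULES if not ok(ui)]

def variations(ui):
    """Generate alternative mockups by swapping equivalent components."""
    options = [EQUIVALENTS.get(c, [c]) for c in ui]
    return [list(v) for v in product(*options) if not check(list(v))]

mockup = ["bottom_nav", "list", "fab"]
print(check(mockup))        # [] -- the mockup adheres to the rules
for alt in variations(mockup):
    print(alt)              # e.g. ['tab_bar', 'list', 'toolbar_button']
```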

In high-pressure work environments where time is scarce, corners are often cut in the divergence phase of design. Machines could augment designers so that generating many design alternatives becomes less laborious, both mentally and physically. Ideally, machines would surprise and even inspire us. And the final say would still be ours.

Engagement design worksheets

Engagement design workshop at General Assembly Singapore

In June/July of this year I helped Michael Fillié teach two classes about engagement design at General Assembly Singapore. The first was theoretical and the second practical. For the practical class we created a couple of worksheets which participants used in groups to gradually build a design concept for a new product or product improvement aimed at long-term engagement. Below are the worksheets along with some notes on how to use them. I'm hoping they may be useful in your own practice.

A practical note: each of these worksheets is designed to be printed on A1 paper. (Click on the images to get the PDFs.) We worked on them using post-it notes so that it is easy to add, change or remove things as you go.

Problem statement and persona

01-problem-statement-and-persona

We started with understanding the problem and the user. This worksheet is an adaptation of the persona sheet by Strategyzer. To use it you begin at the top, fleshing out the problem in the form of stating the engagement challenge and the business goals. Then you select a user segment which is relevant to the problem.

The middle section of the sheet is used to describe them in the form of a persona. Start with putting a face on them. Give the persona a name and add some demographic details relevant for the user's behaviour. Then move on to exploring what their environment looks and sounds like and what they are thinking and feeling. Finally, try to describe what issues the user is having that are addressed by the product and what the user stands to gain from using the product.

The third section of this sheet is used to wrap up the first exercise by doing a quick gap analysis of what the business would like to see in terms of user behaviour and what the user is currently doing. This will help pin down the engagement design concept fleshed out in the next exercises.

Engagement loop

02-engagement-loop

Exercise two builds on the understanding of the problem and the user and offers a structured way of thinking through a possible solution. For this we use the engagement loop model developed by Sebastian Deterding. There are different places we can start here, but one that often works well is to start imagining the Big Hairy Audacious Goal the user is looking to achieve. This is the challenge. It is a thing (usually a skill) the user can improve at. Note this challenge down in the middle.

Then, working around the challenge, describe a measurable goal the user can achieve on their way to mastering the challenge. Describe the action the user can take with the product towards that goal, and the feedback the product will give them to let them know their action has succeeded and how much closer it has gotten them to the goal. Finally and crucially, try to describe what kind of motivation the user is driven by and make sure the goals, actions and feedback make sense in that light. If not, adjust things until it all clicks.

Storyboard

03-storyboard

The final exercise is devoted to visualising and telling a story about the engagement loop we developed in the abstract in the previous block. It is a typical storyboard, but we have constrained it to a set of story beats you must hit to build a satisfying narrative. We go from introducing the user and their challenge, to how the product communicates the goal and action, to what the user does with it and how they get feedback on that, to (fast-forward) how they feel when they ultimately master the challenge. It makes the design concept relatable to outsiders and can serve as a jumping-off point for further design and development.

Use, adapt and share

Together, these three exercises and worksheets are a great way to think through an engagement design problem. We used them for teaching, but I can also imagine teams using them to explore a solution to a problem they might be having with an existing product, or as a way to kickstart the development of a new product.

We've built on other people's work for these, so it only makes sense to share them again for others to use and build on. If you do use them, I would love to hear about your experiences.

Doing UX inside of Scrum

Some notes on how I am currently “doing user experience” inside of Scrum. This approach has evolved from my projects at Hubbub as well as, more recently, my work with ARTO and on a project at Edenspiekermann. So I have found it works with both startups and agency-style projects.

The starting point is to understand that Scrum is intended to be a container. It is a process framework. It should be able to hold any other activity you think you need as a team. So if we feel we need to add UX somehow, we should try to make it part of Scrum and not something that is tacked onto Scrum. Why not tack something on? Because it signals design is somehow distinct from development. And the whole point of doing agile is to have cross-functional teams. If you set up a separate process for design you are highly likely not to benefit from the full collective intelligence of the combined design and development team. So no, design needs to be inside of the Scrum container.

Staggered sprints are not the answer either, because you are still splitting the team into design and development, hampering cross-collaboration and transparency. You're basically inviting Taylorism back into your process, the very thing you were trying to get away from.

When you are uncomfortable with putting designers and developers all in the same team and the same process, the answer is not to make your process more elaborate, parcel things up, and decrease “messy” interactions. The answer is increasing conversation, not eliminating it.

It turns out things aren't remotely as complicated as they appear to be. The key is understanding Scrum's events. The big event holding all other events is the sprint. The sprint outputs a releasable increment of “done” product. The development team does everything required to achieve the sprint goal collaboratively determined during sprint planning. Naturally this includes any design needed for the product. I think of this as the ‘production’ type of design. It typically consists mostly of UI design. There may already be some preliminary UI design available at the start of the sprint, but it does not have to be finished.

What about the kind of design that is required for figuring out what to build in the first place? It might not be obvious at first, but Scrum actually has an ongoing process which readily accommodates it: backlog refinement. Refinement covers all the activities required to get a product backlog item in shape for sprint planning. This is emphatically not a solo show for the product manager to conduct; it is something the whole team collaborates on, developers and designers included. In my experience designers are great at facilitating backlog refinement sessions: at the whiteboard, figuring stuff out with the whole team, ‘Lean UX’ style.

I will admit product backlog refinement is Scrum's weak point. Where it offers a lot of structure for the sprints, it offers hardly any for backlog refinement (or grooming, as some call it). But that's okay; we can evolve our own.

I like to use Kanban to manage the process of backlog refinement. Items come into the pipeline as something we want to elaborate because we have decided we want to build it (in some form or other; it can be just an experiment) in the next sprint or two. It then goes through various stages of elaboration: at the very least, capturing requirements in the form of user stories or job stories, doing sketches, a lo-fi prototype, mockups and a hi-fi prototype, and finally breaking the item down into work to be done and attaching an estimate to it. At this point it is ready to be part of a sprint. Crucially, during this lifecycle of an item as it is being refined, we can and should do user research if we feel we need more data, or user testing if we feel it is too risky to commit to a feature outright.

For this kind of figuring stuff out, this ‘planning’ type of design, it makes no sense to have it be part of a sprint-like structure, because the work required to get it to a ‘ready’ state is much more unpredictable. The point of having a looser grooming flow is that it exists to eliminate uncertainty for when we commit to an item in a sprint.

So between the sprint and backlog refinement, Scrum readily accommodates design. ‘Production’ type design happens inside of the sprint, and designers are considered part of the development team. ‘Planning’ type design happens as part of backlog refinement.

So no need to tack on a separate process. It keeps the process simple and understandable, thus increasing transparency for the whole team. It prevents design from becoming a black box to others. And when we make design part of the container process framework that is Scrum, we reap the rewards of the team's collective intelligence and we increase our agility.

Prototyping is a team sport

Lately I have been binging on books, presentations and articles related to ‘Lean UX’. I don't like the term, but then I don't like the tech industry's love for inventing a new label for every damn thing. I do like the things it emphasises: shared understanding, deep collaboration, continuous user feedback. These are principles that have always implicitly guided the choices I made when leading teams at Hubbub and now also as a member of several teams in the role of product designer.

In all these lean UX readings a thing that keeps coming up again and again is prototyping. Prototypes are the go-to way of doing ‘experiments’, in lean-speak. Other things can be done as well (surveys, interviews, whatever) but more often than not, assumptions are tested with prototypes.

Which is great! And also unsurprising, as prototyping has really been embraced by the tech world. And tools for rapid prototyping are getting a lot of attention and interest as a result. However, this comes with a couple of risks. For one, sometimes it is fine to stick to paper. But the lure of shiny prototyping tools is strong. You'd rather not show a crappy drawing to a user. What if they hate it? However, high-fidelity prototyping is always more costly than paper. So although well-intentioned, prototyping tools can encourage wastefulness, the bane of lean.

There is a bigger danger which runs against the lean ethos, though. Some tools afford deep collaboration more than others. Let's be real: none afford deeper collaboration than paper and whiteboards. There is one person behind the controls when prototyping with a tool. So in my view, one should only ever progress to that step once a team effort has been made to hash out the rough outlines of what is to be prototyped. Basically: always paper prototype the digital prototype. Together.

I have had a lot of fun lately playing with browser prototypes and with prototyping in Framer. But as I was getting back into all of this I did notice this risk: all of a sudden there is a person on the team who does the prototypes. Unless this solo prototyping is preceded by shared prototyping, this is a problem. Because the rest of the team is left out of the thinking-through-making which makes the prototyping process so valuable in addition to the testable artefacts it outputs.

It is, I think, a key oversight of the ‘should designers code’ debaters, and to an extent one made by all prototyping tool manufacturers: individuals don't prototype, teams do. Prototyping is a team sport. And so the success of a tool depends not only on how well it supports individual prototyping activities but also on how well it embeds itself in collaborative workflows.

In addition to the tools themselves getting better at supporting collaborative workflows, I would also love to see more tutorials, both official and from the community, about how to use a prototyping tool within the larger context of a team doing some form of agile. Most tutorials now focus on “how do I make this thing with this tool”. Useful, up to a point. But a large part of prototyping is to arrive at “the thing” together.

One of the lean UX things I devoured was this presentation by Bill Scott, in which he talks about aligning a prototyping and a development tech stack, so that the gap between design and engineering is bridged not just with processes but also with tooling. His example applies to web development and app development using web technologies. I wonder what a similar approach looks like for native mobile app development. But this is the sort of thing I am talking about: smart thinking about how to actually do this lean thing in the real world. I believe organising ourselves so that we can prototype as a team is absolutely key. I will pick my tools and processes accordingly in future.

All of the above is, as usual, mostly a reminder to self: as a designer your role is not to go off and work solo on brilliant prototypes. Your role is to facilitate such efforts by the whole team. Sure, there will be solo deep designerly crafting happening. But it will not add up to anything if it is not embedded in a collaborative design and development framework.