Doing UX inside of Scrum

Some notes on how I am currently “doing user experience” inside of Scrum. This approach has evolved from my projects at Hubbub as well as, more recently, my work with ARTO and on a project at Edenspiekermann. So I have found it works in both startup and agency-style projects.

The starting point is to understand that Scrum is intended to be a container. It is a process framework. It should be able to hold any other activity you think you need as a team. So if we feel we need to add UX somehow, we should try to make it part of Scrum and not something that is tacked onto Scrum. Why not tack something on? Because it signals design is somehow distinct from development. And the whole point of doing agile is to have cross-functional teams. If you set up a separate process for design you are highly likely not to benefit from the full collective intelligence of the combined design and development team. So no, design needs to be inside of the Scrum container.

Staggered sprints are not the answer either, because you are still splitting the team into design and development, hampering cross-collaboration and transparency. You’re basically inviting Taylorism back into your process—the very thing you were trying to get away from.

When you are uncomfortable with putting designers and developers in the same team and the same process, the answer is not to make your process more elaborate, parcel things up, and decrease “messy” interactions. The answer is to increase conversation, not eliminate it.

It turns out things aren’t remotely as complicated as they appear to be. The key is understanding Scrum’s events. The big event holding all other events is the sprint. The sprint outputs a releasable increment of “done” product. The development team does everything required to achieve the sprint goal collaboratively determined during sprint planning. Naturally this includes any design needed for the product. I think of this as the ‘production’ type of design. It typically consists mostly of UI design. There may already be some preliminary UI design available at the start of the sprint but it does not have to be finished.

What about the kind of design that is required for figuring out what to build in the first place? It might not be obvious at first, but Scrum actually has an ongoing process which readily accommodates it: backlog refinement. This covers all the activities required to get a product backlog item in shape for sprint planning. It is emphatically not a solo show for the product manager to conduct; it is something the whole team collaborates on, developers and designers alike. In my experience designers are great at facilitating backlog refinement sessions. At the whiteboard, figuring stuff out with the whole team ‘Lean UX’ style.

I will admit product backlog refinement is Scrum’s weak point. Where it offers a lot of structure for the sprints, it offers hardly any for the backlog refinement (or grooming as some call it). But that’s okay, we can evolve our own.

I like to use Kanban to manage the process of backlog refinement. An item enters the pipeline when we have decided we want to build it (in some form or other, possibly just as an experiment) in the next sprint or two. It then goes through various stages of elaboration: capturing requirements in the form of user stories or job stories, sketching, lo-fi prototyping, mockups, hi-fi prototyping, and finally breaking the item down into work to be done and attaching an estimate to it. At this point it is ready to be part of a sprint. Crucially, during this lifecycle of an item as it is being refined, we can and should do user research if we feel we need more data, or user testing if we feel it is too risky to commit to a feature outright.

For this kind of figuring stuff out, this ‘planning’ type of design, it makes no sense to have it be part of a sprint-like structure, because the work required to get an item to a ‘ready’ state is much more unpredictable. The looser grooming flow exists precisely to eliminate that uncertainty before we commit to an item in a sprint.

So between the sprint and backlog refinement, Scrum readily accommodates design. ‘Production’ type design happens inside of the sprint, with designers considered part of the development team. ‘Planning’ type design happens as part of backlog refinement.

So there is no need to tack on a separate process. Keeping design inside of Scrum keeps the process simple and understandable, increasing transparency for the whole team. It prevents design from becoming a black box to others. And when we make design part of the container process framework that is Scrum, we reap the rewards of the team’s collective intelligence and we increase our agility.

Prototyping is a team sport

Lately I have been bingeing on books, presentations and articles related to ‘Lean UX’. I don’t like the term, but then I don’t like the tech industry’s love for inventing a new label for every damn thing. I do like the things it emphasises: shared understanding, deep collaboration, continuous user feedback. These are principles that have always implicitly guided the choices I made when leading teams at Hubbub and now also as a member of several teams in the role of product designer.

In all these lean UX readings, one thing keeps coming up again and again: prototyping. Prototypes are the go-to way of doing ‘experiments’, in lean-speak. Other things can be done as well—surveys, interviews, whatever—but more often than not, assumptions are tested with prototypes.

Which is great! And also unsurprising as prototyping has really been embraced by the tech world. And tools for rapid prototyping are getting a lot of attention and interest as a result. However, this comes with a couple of risks. For one, sometimes it is fine to stick to paper. But the lure of shiny prototyping tools is strong. You’d rather not show a crappy drawing to a user. What if they hate it? However, high fidelity prototyping is always more costly than paper. So although well-intentioned, prototyping tools can encourage wastefulness, the bane of lean.

There is a bigger danger which runs against the lean ethos, though. Some tools afford deep collaboration more than others. Let’s be real: none afford deeper collaboration than paper and whiteboards. There is one person behind the controls when prototyping with a tool. So in my view, one should only ever progress to that step once a team effort has been made to hash out the rough outlines of what is to be prototyped. Basically: always paper prototype the digital prototype. Together.

I have had a lot of fun lately playing with browser prototypes and with prototyping in Framer. But as I was getting back into all of this I did notice this risk: All of a sudden there is a person on the team who does the prototypes. Unless this solo prototyping is preceded by shared prototyping, this is a problem. Because the rest of the team is left out of the thinking-through-making which makes the prototyping process so valuable in addition to the testable artefacts it outputs.

It is, I think, a key oversight of the ‘should designers code’ debaters, and to an extent one made by all prototyping tool manufacturers: individuals don’t prototype, teams do. Prototyping is a team sport. And so the success of a tool depends not only on how well it supports individual prototyping activities but also on how well it embeds itself in collaborative workflows.

In addition to the tools themselves getting better at supporting collaborative workflows, I would also love to see more tutorials, both official and from the community, about how to use a prototyping tool within the larger context of a team doing some form of agile. Most tutorials now focus on “how do I make this thing with this tool”. Useful, up to a point. But a large part of prototyping is to arrive at “the thing” together.

One of the lean UX things I devoured was this presentation by Bill Scott in which he talks about aligning a prototyping and a development tech stack, so that the gap between design and engineering is bridged not just with processes but also with tooling. His example applies to web development and app development using web technologies. I wonder what a similar approach looks like for native mobile app development. But this is the sort of thing I am talking about: Smart thinking about how to actually do this lean thing in the real world. I believe organising ourselves so that we can prototype as a team is absolutely key. I will pick my tools and processes accordingly in future.

All of the above is as usual mostly a reminder to self: As a designer your role is not to go off and work solo on brilliant prototypes. Your role is to facilitate such efforts by the whole team. Sure, there will be solo deep designerly crafting happening. But it will not add up to anything if it is not embedded in a collaborative design and development framework.

Running

Running the bay

I have started running. I never thought I would. I think I was put off running in high school when during physical education they would occasionally force us to run 5k out of the blue. I remember it was torture. I never did particularly well and was sore afterwards so I came to associate it with failure and tedium and so on.

When we moved to Singapore I decided I would do some more exercise and at some point read this thing about how running is a cheap and easy way to get some exercise in and that it is particularly fun to do with your partner.

My wife has always been a runner so I suggested we start running together. She was surprised, I think, but agreed it would be a good idea. The next weekend I went and got a pair of shoes and that same day we were off to our first run.

In the beginning it was hard to find good routes. I could not manage much more than 3k, which did not help. But 3k turned into 5k. Around that time we found a field in the neighbourhood we could do laps around. When that became boring we decided to switch to the bay, a very popular spot, and we started doing 6–8k runs there. We’ve been sticking to that ever since.

Now we run roughly two to three times a week. Sometimes we try one of the parks and run some trails. I find I enjoy trail running even more because it is less about speed and more about awareness. And there is more to see so it never gets boring.

We were on holiday in Bali the other week and I brought my gear and ran some trails through rice fields and along the beach there as well. It struck me that this is possibly the greatest thing about running: you can do it anywhere you go, and once you build up a decent amount of stamina, your legs will take you wherever it is you want to go. It is an incredibly liberating feeling.

It took a bit of doing to get to that point. But now that I’ve reached it, I’m hooked.

ARTO

Time for a status update on my stay in Singapore. I have already entered the final three months of my time here. Time flies when you’re having fun eating everything in sight, it turns out.

On the work front I have indeed found the time to do some thinking about what my next big thing will be. Nothing has firmed up to the point where I feel like sharing it here but I am enjoying the conversations I am having with various people about it.

In the meantime, I have been keeping busy working with a local startup called ARTO. I have taken on the role of product designer and I am also responsible for product management of the user-facing parts of the thing we are building.

That “thing” is about art. There are many people who are interested in art but don’t know where to start when it comes to finding, enjoying and acquiring it. We’re building a mobile and TV app that should make that a whole lot easier and more fun.

When I say art I mean commercial, popular and contemporary art of the 2D variety. So painting, illustration, photography, etc. Things you might buy originals or prints of and put on your living room wall. Others are doing a fine job on the high end of the art market. We think the remaining parts of it have been underserved to date.

There are many moving parts to this product, ranging from a recommendation engine and content management system to mobile and TV apps, so I am never bored. There is always something to figure out in terms of what to build and how it should work and look. For the past couple of years I was always too busy managing the studio to really get into the details of design, but now I can totally focus on that and it really is a pleasure.

On the people side we have a small but growing team of brilliant individuals hailing from various parts of the region, including Vietnam, Myanmar and India. This lends an additional layer of fun challenge to the goings-on as we constantly negotiate our differences but also discover the many commonalities afforded by the globalised tech industry. I also get to travel to Ho Chi Minh City regularly, which is a nice change from the extreme order that is Singapore.

It is early days, so I not only get to help shape the product from the very start but also the company itself. This includes figuring out and maintaining design and development processes. For this I find my Boydian explorations quite useful, paired with what is now more than 13 years of industry experience (how did that happen?). I have also conducted more hiring interviews in the past few months than I did in the ten years before.

In a month or two a first version of the product should be in the market. When we’ve gotten to that point I will do another of these updates. In the meantime just know I am up to my armpits in thinking-through-making about art discovery and enjoyment on screens small and large. If you have anything related to share, or would like to be one of the first to test-drive the thing when it arrives, let me know.

Artificial intelligence, creativity and metis

Boris pointed me to CreativeAI, an interesting article about creativity and artificial intelligence. It offers a really nice overview of the development of the idea of augmenting human capabilities through technology. One of the claims the authors make is that artificial intelligence is making creativity more accessible, because tools with AI in them support humans in a range of creative tasks in a way that shortcuts the traditional requirement of long practice to acquire the necessary technical skills.

For example, ShadowDraw (PDF) is a program that helps people with freehand drawing by guessing what they are trying to create and showing a dynamically updated ‘shadow image’ on the canvas which people can use as a guide.

It is an interesting idea and in some ways these kinds of software indeed lower the threshold for people to engage in creative tasks. They are good examples of artificial intelligence as partner instead of master or servant.

While reading CreativeAI I wasn’t entirely comfortable, though, and I think this discomfort may have been caused by two things.

One is that I care about creativity and I think that a good understanding of it and a daily practice at it—in the broad sense of the word—improves lives. I am also in some ways old-fashioned about it and I think the joy of creativity stems from the infinitely high skill ceiling involved and the never-ending practice it affords. Let’s call it the Jiro perspective, after the sushi chef made famous by a wonderful documentary.

So the claim that creative tools with AI in them can shortcut all of this life-long joyful toil produces a degree of panic in me. Although that is probably a Pastoral worldview which would be better to abandon. In a world eaten by software, it’s better to be a Promethean.

The second reason might hold more water but really is more of an open question than something I have researched in any meaningful way. I think there is more to creativity than just the technical skill required and as such the CreativeAI story runs the risk of being reductionist. While reading the article I was also slowly but surely making my way through one of the final chapters of James C. Scott’s Seeing Like a State, which is about the concept of metis.

It is probably the most interesting chapter of the whole book. Scott introduces metis as a form of knowledge different from that produced by science. Here are some quick excerpts from the book that provide a sense of what it is about. But I really can’t do the richness of his description justice here. I am trying to keep this short.

The kind of knowledge required in such endeavors is not deductive knowledge from first principles but rather what Greeks of the classical period called metis, a concept to which we shall return. […] metis is better understood as the kind of knowledge that can be acquired only by long practice at similar but rarely identical tasks, which requires constant adaptation to changing circumstances. […] It is to this kind of knowledge that [socialist writer] Luxemburg appealed when she characterized the building of socialism as “new territory” demanding “improvisation” and “creativity.”

Scott’s argument is about how authoritarian high-modernist schemes privilege scientific knowledge over metis. His exploration of what metis means is super interesting to anyone dedicated to honing a craft, or to cultivating organisations conducive to the development and application of craft in the face of uncertainty. There is a close link between metis and the concept of agility.

So circling back to artificially intelligent tools for creativity, I would be interested in exploring not only how we can diminish the need to acquire the technical skills required, but also how we can accelerate the acquisition of the practical knowledge required to apply such skills in the ever-changing real world. I suggest we expand our understanding of what it means to be creative, but without losing the link to actual practice.

For the ancient Greeks metis became synonymous with a kind of wisdom and cunning best exemplified by such figures as Odysseus and notably also Prometheus. The latter in particular exemplifies the use of creativity towards transformative ends. This is the real promise of AI for creativity in my eyes. Not to simply make it easier to reproduce things that used to be hard to create but to create new kinds of tools which have the capacity to surprise their users and to produce results that were impossible to create before.

Artificial intelligence as partner

Some notes on artificial intelligence, technology as partner and related user interface design challenges. Mostly notes to self, not sure I am adding much to the debate. Just summarising what I think is important to think about more. Warning: Dense with links.

Matt Jones writes about how artificial intelligence does not have to be a slave, but can also be a partner.

I’m personally much more interested in machine intelligence as human augmentation rather than the oft-hyped AI assistant as a separate embodiment.

I would add a third possibility, which is AI as master. A common fear we humans have, and one that I think is only growing as things like AlphaGo and new Boston Dynamics robots keep happening.

I have had a tweet pinned to my timeline for a while now, which is a quote from Play Matters.

“technology is not a servant or a master but a source of expression, a way of being”

So this idea actually does not just apply to AI but to tech in general. Of course, as tech gets smarter and more independent from humans, the idea of a ‘third way’ only grows in importance.

More tweeting. A while back, shortly after AlphaGo’s victory, James tweeted:

On the one hand, we must insist, as Kasparov did, on Advanced Go, and then Advanced Everything Else https://en.wikipedia.org/wiki/Advanced_Chess

Advanced Chess is a clear example of humans and AI partnering. And it is also an example of technology as a source of expression and a way of being.

Also, in a WIRED article on AlphaGo, someone who had played the AI repeatedly said his game had improved tremendously.

So that is the promise: Artificially intelligent systems which work together with humans for mutual benefit.

Now of course these AIs don’t just arrive into the world fully formed. They are created by humans with particular goals in mind. So there is a design component there. We can design them to be partners but we can also design them to be masters or slaves.

As an aside: Maybe AIs that make use of deep learning are particularly well suited to this partner model? I do not know enough about it to say for sure. But I was struck by this piece on why Google ditched Boston Dynamics. There apparently is a significant difference between holistic and reductionist approaches, deep learning being holistic. I imagine reductionist AI might be more dependent on humans. But this is just wild speculation. I don’t know if there is anything there.

This insistence of James on “advanced everything else” is a world view. A politics. To allow ourselves to be increasingly entangled with these systems, to not be afraid of them. Because if we are afraid, we either want to subjugate them or they will subjugate us. It is also about not obscuring the systems we are part of. This is a sentiment also expressed by James in the same series of tweets I quoted from earlier:

These emergences are also the best model we have ever built for describing the true state of the world as it always already exists.

And there is overlap here with ideas expressed by Kevin in ‘Design as Participation’:

[W]e are no longer just using computers. We are using computers to use the world. The obscured and complex code and engineering now engages with people, resources, civics, communities and ecosystems. Should designers continue to privilege users above all others in the system? What would it mean to design for participants instead? For all the participants?

AI partners might help us to better see the systems the world is made up of and engage with them more deeply. This hope is expressed by Matt Webb, too:

with the re-emergence of artificial intelligence (only this time with a buddy-style user interface that actually works), this question of “doing something for me” vs “allowing me to do even more” is going to get even more pronounced. Both are effective, but the first sucks… or at least, it sucks according to my own personal politics, because I regard individual alienation from society and complex systems as one of the huge threats in the 21st century.

I am reminded of the mixed-initiative systems being researched in the area of procedural content generation for games. I wrote about these a while back on the Hubbub blog. Such systems are partners of designers. They give designers something like superpowers. Now imagine such powers applied to other problems. Quite exciting.

Actually, in the aforementioned article I distinguish between tools for making things and tools for inspecting possibility spaces. In the first case designers manipulate more abstract representations of the intended outcome and the system generates the actual output. In the second case the system visualises the range of possible outcomes given a particular configuration of the abstract representation. These two are best paired.

From a design perspective, a lot remains to be figured out. If I look at those mixed-initiative tools I am struck by how poorly they communicate what the AI is doing and what its capabilities are. There is a huge user interface design challenge there.

For stuff focused on getting information, a conversational UI seems to be the current local optimum for working with an AI. But for tools for creativity, to use the two-way split proposed by Victor, different UIs will be required.

What shape will they take? What visual language do we need to express the particular properties of artificial intelligence? What approaches can we take in addition to personifying AI as bots or characters? I don’t know and I can hardly think of any good examples that point towards promising approaches. Lots to be done.

Prototyping in the browser

When you are designing a web site or web app I think you should prototype in the browser. Why? You might as well ask why prototype at all. Answer: To enable continuous testing and refinement of your design. Since you are designing for the web it makes sense to do this testing and refinement with an artefact composed of the web’s material.

There are many ways to do prototyping. A common way is to make wireframes and then make them ‘clickable’. But when I am designing a web site or a web app and I get to the point where it is time to do wireframes I often prefer to go straight to the browser.

Before this step I have sketched out all the screens on paper of course. I have done multiple sketches of each page. I’ve had them critiqued by team members and I have reworked them.

Drawing pictures of web pages

But then I open my drawing program—Sketch, in my case—and my heart sinks. Not because Sketch sucks. Sketch is great. But it somehow feels wrong to draw pictures of web pages on my screen. I find it cumbersome. My drawing program does not behave like a browser. That is to say, instead of defining a bunch of rules for elements and having the browser figure out how to render them on a page together, I need to apply those rules myself, in my head, as I put each element in its place.

And don’t get me started on how wireframes are supposed to be without visual design. That is nonsense. If you are using contrast, repetition, alignment and proximity, you are doing layout. That is visual design. I can’t stand wireframes with a bad visual hierarchy.

If I persevere, and I have a set of wireframes in my drawing program, they are static. I can’t use them. I then need to export them to some other, often clunky, program to make the pictures clickable. Which always results in a pale imitation of the actual experience. (I use Marvel. It’s okay but it is hardly a joy to use. For mobile apps I still use it, for web sites I prefer not to.)

Prototyping in the browser

When I prototype in the browser I don’t have to deal with these issues. I am doing layout in a way that is native to the medium. And once I have some pages set up they are immediately usable. So I can hand it to someone, a team member or a test participant, and let them play with it.

That is why, for web sites and web apps, I skip wireframes altogether and prototype in the browser. I do not know how common this is in the industry nowadays. So I thought I would share my approach here. It may be of use to some.

It used to be the case that it was quite a bit of hassle to get up and running with a browser prototype so naturally opening a drawing package seemed more attractive. Not so anymore. Tools have come a long way. Case in point: My setup nowadays involves zero screwing around on the command line.

CodeKit

The core of it is a paid-for Mac app called CodeKit, a so-called task manager. It allows you to install a front-end development framework I like called Zurb Foundation with a couple of clicks, and it has a built-in web server so you can play with your prototype on any device on your local network. As you make changes to the code of your prototype it gets automatically updated on all your devices. No more manual refreshing. This saves a huge amount of time.

I know you can do most of what CodeKit does for you with stuff like Grunt but that involves tedious configuration and working the command line. This is fine when you’re a developer, but not fine when you are a designer. I want to be up and running as fast as possible. CodeKit allows me to do that and has some other features built in that are ideal for prototyping which I will talk about more below. Long story short: CodeKit has saved me a huge amount of time and is well worth the money.

Okay, so on with the show. Yes, this whole prototyping-in-the-browser thing involves ‘coding’. But honestly, if you can’t write some HTML and CSS you really shouldn’t be doing design for the web in the first place. I don’t care if you consider yourself a UX designer and somehow above all this lowly technical stuff. You are not. Nobody is saying you should become a front-end developer but you need to have an acquaintance with the materials your product is made of. Follow a few courses on Codecademy or something. There really isn’t an excuse anymore these days for not knowing this stuff. If you want to level up, learn SASS.

Zurb Foundation

I like Zurb Foundation because it offers a coherent and comprehensive library of elements which covers almost all the common patterns found in web sites and apps. It offers a grid and some default typography styles as well. None of it looks flashy, which is how I like it when I am prototyping. A prototype at this stage does not require personality yet. Just a clear visual hierarchy. Working with Foundation is almost like playing with LEGO. You just click together the stuff you need. It’s painless, and it looks and works great.

I hardly do any styling but the few changes I do want to make I can easily add to Foundation’s app.scss using SASS. I usually have a few styles in there for tweaking some margins on particular elements, for example a footer. But I try to focus on the structure and behaviour of my pages and for that I am mostly doing HTML.
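For instance, the kind of tweak I mean looks like this (an invented example, not lifted from an actual project):

```scss
// app.scss: the handful of overrides layered on top of Foundation's defaults
footer {
  margin-top: 2rem; // give the footer some breathing room
  margin-bottom: 1rem;
}
```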

GitHub

Testing locally I already mentioned. For that, CodeKit has you covered. Of course, you want to be able to share your prototype with others. For this I like to use GitHub and their Pages feature. Once again, using their desktop client, this involves zero command line work. You just add the folder with your CodeKit project as a new repository and sync it with GitHub. Then you need to add a branch named ‘gh-pages’ and do ‘update from master’. Presto, your prototype is now on the web for anyone with the URL to see and use. Perfect if you’re working in a distributed team.

Don’t be intimidated by using GitHub. Their on-boarding is pretty impressive nowadays. You’ll be up and running in no time. Using version control, even if it is just you working on the prototype, adds some much needed structure and control over changes. And when you are collaborating on your prototype with team members it is indispensable.

But in most cases I am the only one building the prototype, so I just work on the master branch, and every once in a while I update the gh-pages branch from master, sync it, and I am done. If you use Slack you can add a GitHub bot to a channel and have your team members receive an automatic update every time you change the prototype.

The Kit Language

If your project is anything beyond very small you will likely have repeating elements in your design: headers, footers, recurring widgets and so on. CodeKit has recently added support for something called the Kit Language, which adds imports and variables to regular HTML. It is absolutely great for prototyping. For each repeating element you create a ‘partial’ and import it wherever you need it. Variables are great for changing the contents of such repeating elements. CodeKit compiles it all into plain static HTML for you so your prototype runs anywhere.
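To make that concrete, here is a minimal sketch of a partial and a page importing it, using Kit’s comment-based syntax as I understand it (the file names and contents are invented for illustration):

```html
<!-- _header.kit: a partial; the leading underscore tells CodeKit not to compile it on its own -->
<header>
  <h1><!-- $pageTitle --></h1>
</header>

<!-- index.kit: compiles to a plain, static index.html -->
<!-- $pageTitle: Prototype home -->
<!-- @import "_header.kit" -->
<main>
  <p>Page content goes here.</p>
</main>
```

Because the output is plain HTML, nothing about the prototype changes for whoever you hand it to.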

The Kit Language really was the missing piece of the puzzle for me. With it in place I am very comfortable recommending this way of working to anyone.

So that’s my setup: CodeKit, Zurb Foundation and GitHub. Together they make for a very pleasant and productive way to do prototyping in the browser. I don’t imagine myself going back to drawing pictures of web pages anytime soon.

Writing for conversational user interfaces

Last year at Hubbub we worked on two projects featuring a conversational user interface. I thought I would share a few notes on how we did the writing for them. Because for conversational user interfaces a large part of the design is in the writing.

At the moment, there aren’t really that many tools well suited for doing this. Twine comes to mind but it is really more focused on publishing as opposed to authoring. So while we were working on these projects we just grabbed whatever we were familiar with and felt would get the job done.

I actually think there is an opportunity here. If this conversational UI thing takes off, designers would benefit a lot from better tools to sketch and prototype them. After all, this is the only way to figure out if a conversational user interface is suitable for a particular project. In the words of Bill Buxton:

“Everything is best for something and worst for something else.”

Okay so below are my notes. The two projects are KOKORO (a codename) and Free Birds. We have yet to publish extensively on both, so a quick description is in order.

KOKORO is a digital coach for teenagers to help them manage and improve their mental health. It is currently a prototype mobile web app not publicly available. (The engine we built to drive it is available on GitHub, though.)

Free Birds (Vrije Vogels in Dutch) is a game about civil liberties for families visiting a war and resistance museum in the Netherlands. It is a location-based iOS app currently available on the Dutch app store and playable in Airborne Museum Hartenstein in Oosterbeek.


For KOKORO we used Gingko to write the conversation branches. This is good enough for a prototype but it becomes unwieldy at scale. And anyway you don’t want to be limited to a tree structure. You want to at least be able to loop back to a parent branch, something that isn’t supported by Gingko. And maybe you don’t want to use the branching pattern at all.

The story of Free Birds has a very linear structure. So in this case we just wrote our conversations in Quip with some basic rules for formatting, not unlike a screenplay.

In Free Birds player choices ‘colour’ the events that come immediately after, but the path stays the same.

This approach was inspired by the Walking Dead games. Those are super clever at giving players a sense of agency without the need for sprawling story trees. I remember seeing the creators present this strategy at PRACTICE and something clicked for me. The important point is, choices don’t have to branch out to different directions to feel meaningful.

KOKORO’s choices did have to lead to different paths so we had to build a tree structure. But we also kept track of things a user says. This allows the app to “learn” about the user. Subsequent segments of the conversation are adapted based on this learning. This allows for more flexibility and it scales better. A section of a conversation has various states between which we switch depending on what a user has said in the past.

We did something similar in Free Birds but used it to a far more limited degree, really just to once again colour certain pieces of dialogue. This is already enough to give a player a sense of agency.
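To illustrate the pattern (a minimal sketch with invented names, not the actual engine we built, which is on GitHub): remember what a user says at choice points, then pick the variant of a conversation section that matches that memory.

```typescript
// A user's answers at choice points, stored as simple key/value facts.
type Memory = Map<string, string>;

// A section of the conversation has several variants ("states").
// Each variant declares the remembered facts it requires.
interface SectionVariant {
  requires: Record<string, string>; // e.g. { mood: "anxious" }
  lines: string[];
}

interface Section {
  id: string;
  variants: SectionVariant[];
  fallback: string[]; // used when no variant matches what we remember
}

// Record what the user said at a choice point.
function remember(memory: Memory, key: string, value: string): void {
  memory.set(key, value);
}

// Pick the first variant whose requirements match the memory.
function linesFor(section: Section, memory: Memory): string[] {
  for (const variant of section.variants) {
    const matches = Object.entries(variant.requires).every(
      ([key, value]) => memory.get(key) === value
    );
    if (matches) return variant.lines;
  }
  return section.fallback;
}

// Usage: earlier in the conversation the user said they felt anxious...
const memory: Memory = new Map();
remember(memory, "mood", "anxious");

// ...so a later check-in section adapts its opening line.
const checkIn: Section = {
  id: "check-in",
  variants: [
    {
      requires: { mood: "anxious" },
      lines: ["Last time you said you felt anxious. How is that now?"],
    },
  ],
  fallback: ["How are you feeling today?"],
};

console.log(linesFor(checkIn, memory)); // → the adapted variant
```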


As you can see, it’s all far from rocket surgery but you can get surprisingly good results just by sticking to these simple patterns. If I were to investigate more advanced strategies I would look into NLP for input and procedural generation for output. Who knows, maybe I will get to work on a project involving those things some time in the future.

Hardware interfaces for tuning the feel of microinteractions

In Digital Ground Malcolm McCullough talks about how tuning is a central part of interaction design practice. How part of the challenge of any project is to get to a point where you can start tweaking the variables that determine the behaviour of your interface for the best feel.

“Feel” is a word I borrow from game design. There is a book on it by Steve Swink. It is a funny term. We are trying to simulate sensations that are derived from the physical realm. We are trying to make things that are purely visual behave in such a way that they evoke these sensations. There are many games that heavily depend on getting feel right. Basically all games that are built on a physics simulation of some kind require good feel for a good player experience to emerge.

Physics simulations have been finding their way into non-game software products for some time now and they are becoming an increasing part of what makes a product, er, feel great. They are often at the foundation of signature moments that set a product apart from the pack. These signature moments are also known as microinteractions. To get them just right, being able to tune well is very important.

The behaviour of microinteractions based on physics simulations is determined by variables. For example, the feel of a spring is determined by the mass of the weight attached to the spring, the spring’s stiffness and the friction that resists the motion of the weight. These variables interact in ways that are hard to model in your head so you need to make repeated changes to each variable and try the simulation to get it just right. This is time-consuming, cumbersome and resists the easy exploration of alternatives essential to a good design process.

In The Setup game designer Bennett Foddy talks about a way to improve on this workflow. Many of his games (if not all of them) are playable physics simulations with punishingly hard controls. He suggests using a hardware interface (a MIDI controller) to tune the variables that determine the feel of his game while it runs. In this way the loop between changing a variable and seeing its effect in game is dramatically shortened and many different combinations of values can be explored easily. Once a satisfactory set of values for the variables has been found they can be written back to the software for future use.
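Here is a sketch of what that might look like for a browser-based prototype (my own illustration; the knob numbers and parameter ranges are invented). Three knobs on a MIDI controller drive the mass, stiffness and friction of a running spring simulation via the Web MIDI API:

```typescript
// Spring parameters we want to tune live.
const params = { mass: 1, stiffness: 120, friction: 14 };

// One semi-implicit Euler step of a damped spring pulling x towards 0.
function springStep(x: number, v: number, dt: number): [number, number] {
  const force = -params.stiffness * x - params.friction * v;
  const acceleration = force / params.mass;
  const newV = v + acceleration * dt;
  return [x + newV * dt, newV];
}

// Map MIDI control-change knobs (values 0-127) onto the parameters.
const knobs: Record<number, (value: number) => void> = {
  1: (v) => (params.mass = 0.1 + (v / 127) * 9.9), // mass: 0.1-10
  2: (v) => (params.stiffness = 10 + (v / 127) * 490), // stiffness: 10-500
  3: (v) => (params.friction = (v / 127) * 50), // friction: 0-50
};

// Listen to every connected MIDI device (works in Chrome).
navigator.requestMIDIAccess().then((midi) => {
  for (const input of midi.inputs.values()) {
    input.onmidimessage = (msg) => {
      const data = msg.data;
      if (!data || data.length < 3) return;
      const [status, controller, value] = data;
      const isControlChange = (status & 0xf0) === 0xb0;
      if (isControlChange && knobs[controller]) knobs[controller](value);
    };
  }
});

// The simulation keeps running, so every twist of a knob is felt
// immediately instead of after yet another rebuild.
let x = 1;
let v = 0;
function tick() {
  [x, v] = springStep(x, v, 1 / 60);
  // ...position the animated element at x here...
  requestAnimationFrame(tick);
}
tick();
```

Once the feel is right, you read the final values out of params and write them back into the production code, just as Foddy describes.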

I do believe such a setup is still non-trivial to make work with today’s tools. A quick check verifies that Framer does not have OSC support, for example. There is an opportunity here for prototyping environments such as Framer and others to support it. The approach is not limited to motion-based microinteractions but can be extended to the tuning of variables that control other aspects of an app’s behaviour.

For example, when we were making Standing, we would have benefited hugely from hardware controls to tweak the sensitivity of its motion-sensing functions as we were using the app. We were forced to do it by repeatedly changing numbers in the code and building the app again and again. It was quite a pain to get right. To this day I have the feeling we could have made it better if only we had had the tools to do it.

Judging from snafus such as the poor feel of the latest Twitter desktop client, there is a real need for better tools for tuning microinteractions. Just as pen tablets have become indispensable for those designing the form of user interfaces on screens, I think we might soon find a small set of hardware knobs on the desks of those designers working on the behaviour of user interfaces.

Design without touching the surface

I am preparing two classes at the moment. One is an introduction to user experience design, the other to user interface design. I did not come up with this division, it was part of the assignment. I thought it was odd at first. I wasn’t sure where one discipline ends and the other begins. I still am not sure. But I made a pragmatic decision to have the UX class focus on the high level process of designing (software) products, and the UI class focus on the visual aspects of a product’s interface. The UI class deals with a product’s surface, form, and to some extent also its behaviour, but on a micro level. Whereas the UX class focuses on behaviour on the macro level. Simply speaking—the UX class is about behaviour across screens, the UI class is about behaviour within screens.

The solution is workable. But I am still not entirely comfortable with it. I am not comfortable with the idea of being able to practice UX without ‘touching the surface’, so to speak. And it seems my two classes are advocating this. Also, I am pretty sure this is everyday reality for many UX practitioners. Notice I say “practitioner”, because I am not sure ‘designer’ is the right term in these cases. To be honest I do not think you can practice design without doing sketching and prototyping of some sort. (See Bill Buxton’s ‘Sketching User Experiences’ for an expanded argument on why this is.) And when it comes to designing software products this means touching the surface, the form.

Again, the reality is, ‘UX designer’ and ‘UI designer’ are common terms now. Certainly here in Singapore people know they need both to make good products. Some practitioners say they do both, others one or the other. The latter appears to be the most common and expected case. (By the way, in Singapore no-one I’ve met talks about interaction design.)

My concern is that by encouraging the practice of doing UX design without touching the surface of a product, we get shitty designs. In a process where UX and UI are seen as separate things the risk is one comes before the other. The UX designer draws the wireframes, the UI designer gets to turn them into pretty pictures, with no back-and-forth between the two. An iterative process can mitigate some of the damage such an artificial division of labour produces, but I think we still start out on the wrong foot. I think a better practice might entail including visual considerations from the very beginning of the design process (as we are sketching).

Two things I came across as I was preparing these classes are somehow in support of this idea. Both resulted from a call I did for resources on user interface design. I asked for books about visual aspects, but I got a lot more.

  1. In ‘Magic Ink’ Bret Victor writes about how the design of information software is hugely indebted to graphic design and more specifically information design in the tradition of Tufte. (He also mentions industrial design as an equally big progenitor of interaction design, but for software that is mainly about manipulation, not information.) The article is big, but the start of it is actually a pretty good if unorthodox general introduction to interaction design. For software that is about learning through looking at information Victor says interaction should be a last resort. So that leaves us with a task that is 80% if not more visual design. Touching the surface. Which makes me think you might as well get to it as quickly as possible and start sketching and prototyping aimed not just at structure and behaviour but also form. (Hat tip to Pieter Diepenmaat for this one.)

  2. In ‘Jumping to the End’ Matt Jones rambles entertainingly about design fiction. He argues for paying attention to details, and a lot of the design he practices is about ‘signature moments’, aka microinteractions. So yeah, again, I can’t imagine designing these effectively without doing sketching and prototyping of the sort that includes the visual. And in fact Matt mentions this more or less at one point, when he talks about the fact that his team’s deliverables at Google are almost all visual. They are high fidelity mockups, animations, videos, and so on. These then become the starting points for further development. (Hat tip to Alexander Zeh for this one.)

In summary, I think distinguishing UX design from UI design is nonsense. Because you cannot practice design without sketching and prototyping. And you cannot sketch and prototype a software product without touching its surface. Instead of taking visual design for granted, or talking about it like it is some innate talent, some kind of magical skill some people are born with and others aren’t, user experience practitioners should be less enamoured with acquiring more skills from business, marketing and engineering, and instead practice the skills that define the fields user experience design is most indebted to: graphic design and industrial design. In other words, you can’t do user experience design without touching the surface.