‘Machine Learning for Designers’ workshop

On Wednesday Péter Kun, Holly Robbins and I taught a one-day workshop on machine learning at Delft University of Technology. We had about thirty master’s students from the industrial design engineering faculty. The aim was to get them acquainted with the technology through hands-on tinkering, with the Wekinator as the central teaching tool.

Photo credits: Holly Robbins

Background

The reasoning behind this workshop is twofold.

First of all, I expect designers will find themselves working on projects involving machine learning more and more often. The technology has certain properties that differ from traditional software. Most importantly, machine learning is probabilistic instead of deterministic. It is important that designers understand this, because otherwise they are likely to make bad decisions about its application.

The second reason is that I have a strong sense machine learning can play a role in the augmentation of the design process itself. So-called intelligent design tools could make designers more efficient and effective. They could also enable the creation of designs that would otherwise be impossible or very hard to achieve.

The workshop explored both ideas.

Photo credits: Holly Robbins

Format

The structure was roughly as follows:

In the morning we started out providing a very broad introduction to the technology. We talked about the very basic premise of (supervised) learning. Namely, providing examples of inputs and desired outputs and training a model based on those examples. To make these concepts tangible we then introduced the Wekinator and walked the students through getting it up and running using basic examples from the website. The final step was to invite them to explore alternative inputs and outputs (such as game controllers and Arduino boards).

In the afternoon we provided a design brief, asking the students to prototype a data-enabled object with the set of tools they had acquired in the morning. We assisted with technical hurdles where necessary (of which there were more than a few) and closed out the day with demos and a group discussion reflecting on their experiences with the technology.

Photo credits: Holly Robbins

Results

As I tweeted on the way home that evening, the results were… interesting.

Not all groups managed to put something together in the admittedly short amount of time they were provided with. They were most often stymied by getting an Arduino to talk to the Wekinator. Max was often picked as a go-between because the Wekinator receives OSC messages over UDP, whereas the quickest way to get an Arduino to talk to a computer is over serial. But Max in my experience is a fickle beast and would more than once crap out on us.
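For what it’s worth, the Max middleman can be replaced with a few lines of Python. Below is a minimal sketch of such a serial-to-OSC bridge, assuming the pyserial and python-osc packages, an Arduino printing comma-separated sensor values over serial, and the Wekinator listening on its default input address (/wek/inputs on port 6448); the serial port name will differ per machine.

```python
# Minimal serial-to-OSC bridge, sidestepping Max entirely.
# Assumes pyserial and python-osc, an Arduino printing comma-separated
# sensor values, and Wekinator's default input port (configurable in its UI).
import serial
from pythonosc.udp_client import SimpleUDPClient

SERIAL_PORT = "/dev/ttyACM0"  # adjust for your machine
BAUD_RATE = 9600

client = serial_bridge = SimpleUDPClient("127.0.0.1", 6448)  # Wekinator input port
arduino = serial.Serial(SERIAL_PORT, BAUD_RATE, timeout=1)

while True:
    line = arduino.readline().decode("ascii", errors="ignore").strip()
    if not line:
        continue
    try:
        values = [float(v) for v in line.split(",")]
    except ValueError:
        continue  # skip malformed lines
    client.send_message("/wek/inputs", values)  # forward to Wekinator
```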

The groups that did build something mainly assembled prototypes from the examples on hand. Which is fine, but since we were mainly working with the examples from the Wekinator website they tended towards the interactive instrument side of things. We were hoping for explorations of IoT product concepts. For that more hand-rolling was required and this was only achievable for the students on the higher end of the technical expertise spectrum (and the more tenacious ones).

The discussion yielded some interesting insights into mental models of the technology and how they are affected by hands-on experience. A comment I heard more than once was: why is this considered learning at all? The Wekinator was not perceived to be learning anything. When we challenged this view by reiterating the underlying principles, it became clear that the Wekinator’s black-box nature hampers appreciation of some of the very real achievements of the technology. It seems (for our students at least) machine learning is stuck in a grey area between too-high expectations and too-low recognition of its capabilities.

Next steps

These results, and others, point towards some obvious improvements which can be made to the workshop format, and to teaching design students about machine learning more broadly.

  1. We can improve the toolset so that some of the heavy lifting involved with getting the various parts to talk to each other is made easier and more reliable.
  2. We can build examples that are geared towards the practice of designing IoT products and are ready for adaptation and hacking.
  3. And finally, and probably most challengingly, we can make the workings of machine learning more transparent so that it becomes easier to develop a feel for its capabilities and shortcomings.

We do intend to improve and teach the workshop again. If you’re interested in hosting one (either in an educational or professional context) let me know. And stay tuned for updates on this and other efforts to get designers to work in a hands-on manner with machine learning.

Special thanks to the brilliant Ianus Keller for connecting me to Péter and for allowing us to pilot this crazy idea at IDE Academy.

References

Sources used during preparation and running of the workshop:

  • The Wekinator – the UI is infuriatingly poor but when it comes to getting started with machine learning this tool is unmatched.
  • Arduino – I have become particularly fond of the MKR1000 board. Add a lithium-polymer battery and you have everything you need to prototype IoT products.
  • OSC for Arduino – CNMAT’s implementation of the open sound control (OSC) encoding. Key puzzle piece for getting the above two tools talking to each other.
  • Machine Learning for Designers – my preferred introduction to the technology from a designerly perspective.
  • A Visual Introduction to Machine Learning – a very accessible visual explanation of the basic underpinnings of computers applying statistical learning.
  • Remote Control Theremin – an example project I prepared for the workshop demoing how to have the Wekinator talk to an Arduino MKR1000 with OSC over UDP.

Design × AI coffee meetup

If you work in the field of design or artificial intelligence and are interested in exploring the opportunities at their intersection, consider yourself invited to an informal coffee meetup on February 15, 10am at Brix in Amsterdam.

Erik van der Pluijm and I have for a while now been carrying on a conversation about AI and design, and we felt it was time to expand the circle a bit. We are very curious who else out there shares our excitement.

Questions we are mulling over include: How does the design process change when creating intelligent products? And: How can teams collaborate with intelligent design tools to solve problems in new and interesting ways?

Anyway, lots to chew on.

No need to sign up or anything, just show up and we’ll see what happens.

Move 37

Designers make choices. They should be able to provide rationales for those choices. (Although sometimes they can’t.) Being able to explain the thinking that went into a design move to yourself, your teammates and clients is part of being a professional.

Move 37. This was the move AlphaGo made in its second game against Lee Sedol, the one that took everyone by surprise because it appeared so wrong at first.

The interesting thing is that in hindsight it appeared AlphaGo had good reasons for this move. Based on a calculation of odds, basically.

If asked at the time, would AlphaGo have been able to provide this rationale?

It’s a thing that pops up in a lot of the reading I am doing around AI: this idea of transparency. In some fields you don’t just want an AI to provide you with a decision, but also with the arguments supporting that decision. An obvious example would be a system that helps diagnose disease. You want it to provide more than just the diagnosis, because if it turns out to be wrong, you want to be able to say why, at the time, you thought it was right. This is a social, cultural and also legal requirement.

It’s interesting.

Although lives don’t depend on it, the same might apply to intelligent design tools. If I am working with a system and it is offering me design directions or solutions, I want to know why it is suggesting these things as well. My reason for picking one direction over another depends not just on the surface-level properties of the design but also on the underlying rationale. It might be important because I need to be able to tell stakeholders about it.

A side effect of this is that a designer working with such a system is exposed to machine reasoning about design choices. This could inform their own future thinking too.

Transparent AI might help people improve themselves. A black box can’t teach you much about the craft it’s performing. Looking at outcomes can be inspirational or helpful, but the processes that lead up to them can be equally informative. If not more so.

Imagine working with an intelligent design tool and getting the equivalent of an AlphaGo move 37 moment. Hugely inspirational. Game changer.

This idea gets me much more excited than automating design tasks does.

Waiting for the smart city

Nowadays when we talk about the smart city we don’t necessarily talk about smartness or cities.

I feel like when the term is used it often obscures more than it reveals.

Here are a few reasons why.

To begin with, the term suggests something that is yet to arrive. Some kind of tech-enabled utopia. But actually, present-day cities are already smart to a greater or lesser degree, depending on where and how you look.

This is important because too often we postpone action as we wait for the smart city to arrive. We don’t have to wait. We can act to improve things right now.

Furthermore, ‘smart city’ suggests something monolithic that can be designed as a whole. But a smart city, like any city, is a huge mess of interconnected things. It resists top-down design.

History is littered with failed attempts at authoritarian high-modernist city design. Just stop it.

Smartness should not be an end but a means.

I read ‘smart’ as a shorthand for ‘technologically augmented’. A smart city is a city eaten by software. All cities are being eaten (or have been eaten) by software to a greater or lesser extent. Uber and Airbnb are obvious examples. Smaller more subtle ones abound.

The question is, smart to what end? Efficiency? Legibility? Controllability? Anti-fragility? Playability? Liveability? Sustainability? The answer depends on your outlook.

These are ways in which the smart city label obscures. It obscures agency. It obscures networks. It obscures intent.

I’m not saying don’t ever use it. But in many cases you can get by without it. You can talk about specific parts that make up the whole of a city, specific technologies and specific aims.


Postscript 1

We can do the same exercise with the ‘city’ part of the meme.

The same process that is making cities smart (software eating the world) is also making everything else smart. Smart towns. Smart countrysides. The ends are different. The networks are different. The processes play out in different ways.

It’s okay to think about cities but don’t think they have a monopoly on ‘disruption’.

Postscript 2

Some of this was inspired by clever things I heard Sebastian Quack say at Playful Design for Smart Cities and Usman Haque say at ThingsCon Amsterdam.

Playful Design for Smart Cities

Earlier this week I escaped the miserable weather and food of the Netherlands to spend a couple of days in Barcelona, where I attended the ‘Playful Design for Smart Cities’ workshop at RMIT Europe.

I helped Jussi Holopainen run a workshop in which participants from industry, government and academia together defined projects aimed at further exploring this idea of playful design within the context of smart cities, without falling into the trap of solutionism.

Before the workshop I presented a summary of my chapter in The Gameful World, along with some of my current thinking on it. There were also great talks by Judith Ackermann, Florian ‘Floyd’ Müller, and Gilly Karjevsky and Sebastian Quack.

Below are the slides for my talk and links to all the articles, books and examples I explicitly and implicitly referenced throughout.

Adapting intelligent tools for creativity

I read Alper’s book on conversational user interfaces over the weekend and was struck by this paragraph:

“The holy grail of a conversational system would be one that’s aware of itself — one that knows its own model and internal structure and allows you to change all of that by talking to it. Imagine being able to tell Siri to tone it down a bit with the jokes and that it would then actually do that.”

His point stuck with me because I think this is of particular importance to creative tools. These need to be flexible so that a variety of people can use them in different circumstances. This adaptability is what lends a tool depth.

The depth I am thinking of in creative tools is similar to the one in games, which appears to be derived from a kind of semi-orderedness. In short, you’re looking for a sweet spot between too simple and too complex.

And of course, you need good defaults.

Back to adaptation. This can happen in at least two ways on the interface level: modal or modeless. A simple example of the former would be going into a preferences window to change the behaviour of your drawing package. Modeless adaptation, by contrast, happens when you rearrange some panels to better suit the task at hand.

Returning to Siri, the equivalent of modeless adaptation would be to tell her to tone it down when her sense of humor irks you.

For the modal route, imagine a humor slider in a settings screen somewhere. This would be a terrible solution because it offers a poor mapping of a control onto a personality trait. Can you pinpoint, on a scale of 1 to 10, your preferred amount of humor in your hypothetical personal assistant? And anyway, doesn’t it depend on a lot of situational things, such as your mood, the particular task you’re trying to complete, and so on? In short, this requires something more situated and adaptive.

So just being able to tell Siri to tone it down would be the equivalent of rearranging your Photoshop palettes. And in a later interaction Siri might carefully try some humor again to gauge your response. And if you encourage her, she might be more humorous again.
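To make this adaptive behaviour slightly more concrete, here is a toy sketch of the feedback loop I have in mind. Everything in it is hypothetical, the numbers most of all; the point is that the persona adjusts through conversation rather than through a settings slider.

```python
import random

# Toy sketch of modeless adaptation: a humor parameter nudged by
# in-conversation feedback instead of a settings slider.
# All names and numbers here are hypothetical.

class AssistantPersona:
    def __init__(self, humor: float = 0.5):
        self.humor = humor  # 0.0 = deadpan, 1.0 = relentless joker

    def on_feedback(self, reaction: str) -> None:
        if reaction == "tone it down":
            self.humor = max(0.0, self.humor - 0.2)  # back off quickly
        elif reaction == "laughed":
            self.humor = min(1.0, self.humor + 0.05)  # re-try cautiously

    def should_joke(self) -> bool:
        return random.random() < self.humor

persona = AssistantPersona()
persona.on_feedback("tone it down")  # humor drops to 0.3
persona.on_feedback("laughed")       # creeps back up to 0.35
```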

Enough about funny Siri, though, because it’s a bit of a silly example.

Silly as she is, funny Siri does illustrate another problem I am trying to wrap my head around: how does an intelligent tool for creativity communicate its internal state? Because that state is probabilistic, it can’t easily be mapped to a graphic information display. And so our old way of manipulating state, and more specifically of adapting a tool to our needs, becomes very different too.

It seems to be best for an intelligent system to be open to suggestions from users about how to behave. Adapting an intelligent creative tool is less like rearranging your workspace and more like coordinating with a coworker.

My ideal is for this to be done in the same mode (and so using the same controls) as when doing the work itself. I expect this to allow for more fluid interactions, going back and forth between doing the work at hand, and meta-communication about how the system supports the work. I think if we look at how people collaborate this happens a lot, communication and meta-communication going on continuously in the same channels.

We don’t need a self-aware artificial intelligence to do this. We need to apply what computer scientists call supervised learning. The basic idea is to provide a system with example inputs and desired outputs, and let it infer the necessary rules from them. If the results are unsatisfactory, you simply continue training it until it performs well enough.
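As an aside, this loop is easy to demonstrate outside the Wekinator too. The sketch below uses scikit-learn (one of many possible libraries) to train a simple classifier on a handful of made-up example inputs and desired outputs, in the same spirit as the Wekinator’s gesture-to-sound mappings.

```python
# Minimal supervised learning loop: example inputs plus desired outputs
# in, inferred mapping out. The data is invented for illustration.
from sklearn.neighbors import KNeighborsClassifier

# Example inputs: (x, y) positions of a hand, say from a webcam tracker.
examples = [(0.1, 0.2), (0.15, 0.25), (0.8, 0.9), (0.85, 0.8)]
# Desired outputs: which sound each position should trigger.
labels = ["low_note", "low_note", "high_note", "high_note"]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(examples, labels)  # "training" is one line

print(model.predict([(0.2, 0.2)]))  # -> ['low_note']

# Unsatisfied with the results? Add more examples and train again.
examples.append((0.5, 0.5))
labels.append("high_note")
model.fit(examples, labels)
```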

A super fun example of this approach is the Wekinator, a piece of machine learning software for creating musical instruments. Below is a video in which Wekinator’s creator Rebecca Fiebrink performs several demos.

Here we have an intelligent system learning from examples. A person manipulating data instead of code to get to a particular desired behaviour. But what the Wekinator lacks, and what I expect will be required for this type of thing to really catch on, is for the training to happen in the same mode or medium as the performance. The technology seems to be getting there, but there are many interaction design problems remaining to be solved.

Generating UI design variations

A sketch of an AI design tool for generating UI design alternatives

I am still thinking about AI and design. How is the design process of AI products different? How is the user experience of AI products different? Can design tools be improved with AI?

When it comes to improving design tools with AI my starting point is game design and development. What follows is a quick sketch of one idea, just to get it out of my system.

‘Mixed-initiative’ tools for procedural generation (such as Tanagra) allow designers to create high-level structures which a machine uses to produce full-fledged game content (such as levels). It happens in real time: there is a continuous back-and-forth between designer and machine.

Software user interfaces, particularly on mobile, are increasingly assembled from ready-made components according to more or less well-described rules taken from design languages such as Material Design. These design languages are currently primarily described for human consumption. But it should be a small step to make a design language machine-readable.

So I see an opportunity here: a designer might assemble a UI like they do now, while a machine does several things alongside them. For example, it can test for adherence to design-language rules, suggest corrections, or even auto-correct as the designer works.
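Sketching it in code shows how small that step might be. In the fragment below, one Material-Design-flavoured rule is expressed as data and checked against a mockup; the 8dp spacing grid is real, but the encoding and the mockup format are my own invention.

```python
# Hypothetical machine-readable design rule checked against a mockup.
# The 8dp spacing grid comes from Material Design; the encoding is invented.

MOCKUP = [
    {"type": "button", "x": 16, "y": 24, "width": 120, "height": 48},
    {"type": "label",  "x": 16, "y": 83, "width": 200, "height": 24},
]

def check_spacing_grid(elements, grid=8):
    """Return (type, property, value) for anything off the spacing grid."""
    violations = []
    for el in elements:
        for prop in ("x", "y", "width", "height"):
            if el[prop] % grid != 0:
                violations.append((el["type"], prop, el[prop]))
    return violations

for element, prop, value in check_spacing_grid(MOCKUP):
    # An auto-correcting tool would round to the nearest grid line instead.
    print(f"{element}.{prop} = {value} is off the 8dp grid")
# -> label.y = 83 is off the 8dp grid
```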

More interestingly, a machine might take one UI mockup and provide the designer with several more possible variations. To do this it could use different layouts, or alternative components that serve the same or a similar purpose to the ones used.
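A naive version of such a variation generator could be little more than a cartesian product over interchangeable parts. The sketch below enumerates variations of a mockup by swapping in alternative layouts and components; all the names are hypothetical placeholders.

```python
# Naive variation generator: enumerate alternative layouts and components
# that serve the same role. All names are hypothetical.
from itertools import product

LAYOUTS = ["list", "grid", "cards"]
ALTERNATIVES = {
    "navigation": ["tab_bar", "nav_drawer"],
    "action":     ["fab", "toolbar_button"],
}

def generate_variations(base_mockup):
    """Yield every combination of layout and interchangeable components."""
    for layout, nav, action in product(
        LAYOUTS, ALTERNATIVES["navigation"], ALTERNATIVES["action"]
    ):
        yield {**base_mockup, "layout": layout,
               "navigation": nav, "action": action}

for variation in generate_variations({"screen": "inbox"}):
    print(variation)  # 3 x 2 x 2 = 12 candidates for the designer to review
```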

In high pressure work environments where time is scarce, corners are often cut in the divergence phase of design. Machines could augment designers so that generating many design alternatives becomes less laborious both mentally and physically. Ideally, machines would surprise and even inspire us. And the final say would still be ours.

Engagement design worksheets

Engagement design workshop at General Assembly Singapore

In June/July of this year I helped Michael Fillié teach two classes about engagement design at General Assembly Singapore. The first was theoretical and the second practical. For the practical class we created a couple of worksheets which participants used in groups to gradually build a design concept for a new product or product improvement aimed at long-term engagement. Below are the worksheets along with some notes on how to use them. I’m hoping they may be useful in your own practice.

A practical note: Each of these worksheets is designed to be printed on A1 paper. (Click on the images to get the PDFs.) We worked on them using post-it notes so that it is easy to add, change or remove things as you go.

Problem statement and persona

Worksheet 1: Problem statement and persona

We started with understanding the problem and the user. This worksheet is an adaptation of the persona sheet by Strategyzer. To use it, you begin at the top, fleshing out the problem by stating the engagement challenge and the business goals. Then, you select a user segment which is relevant to the problem.

The middle section of the sheet is used to describe them in the form of a persona. Start by putting a face on them: give the persona a name and add some demographic details relevant to the user’s behaviour. Then, move on to exploring what their environment looks and sounds like and what they are thinking and feeling. Finally, try to describe what issues the user is having that are addressed by the product, and what the user stands to gain from using it.

The third section of this sheet is used to wrap up the first exercise by doing a quick gap analysis of what the business would like to see in terms of user behaviour and what the user is currently doing. This will help pin down the engagement design concept fleshed out in the next exercises.

Engagement loop

Worksheet 2: Engagement loop

Exercise two builds on the understanding of the problem and the user and offers a structured way of thinking through a possible solution. For this we use the engagement loop model developed by Sebastian Deterding. There are different places we can start here, but one that often works well is to imagine the Big Hairy Audacious Goal the user is looking to achieve. This is the challenge. It is a thing (usually a skill) the user can improve at. Note this challenge down in the middle. Then, working around the challenge, describe a measurable goal the user can achieve on their way to mastering the challenge. Describe the action the user can take with the product towards that goal, and the feedback the product will give them to let them know their action has succeeded and how much closer it has gotten them to the goal. Finally, and crucially, try to describe what kind of motivation the user is driven by, and make sure the goals, actions and feedback make sense in that light. If not, adjust things until it all clicks.

Storyboard

Worksheet 3: Storyboard

The final exercise is devoted to visualising and telling a story about the engagement loop we developed in the abstract in the previous block. It is a typical storyboard, but we have constrained it to a set of story beats you must hit to build a satisfying narrative. We go from introducing the user and their challenge, to how the product communicates the goal and action, to what the user does with it and how they get feedback on that, to (fast-forward) how they feel when they ultimately master the challenge. It makes the design concept relatable to outsiders and can serve as a jumping-off point for further design and development.

Use, adapt and share

Together, these three exercises and worksheets are a great way to think through an engagement design problem. We used them for teaching but I can also imagine teams using them to explore a solution to a problem they might be having with an existing product, or as a way to kickstart the development of a new product.

We’ve built on other people’s work for these so it only makes sense to share them again for others to use and build on. If you do use them I would love to hear about your experiences.

Doing UX inside of Scrum

Some notes on how I am currently “doing user experience” inside of Scrum. This approach has evolved from my projects at Hubbub as well as more recently my work with ARTO and on a project at Edenspiekermann. So I have found it works with both startups and agency style projects.

The starting point is to understand that Scrum is intended to be a container. It is a process framework. It should be able to hold any other activity you think you need as a team. So if we feel we need to add UX somehow, we should try to make it part of Scrum and not something that is tacked onto Scrum. Why not tack something on? Because it signals design is somehow distinct from development. And the whole point of doing agile is to have cross-functional teams. If you set up a separate process for design you are highly likely not to benefit from the full collective intelligence of the combined design and development team. So no, design needs to be inside of the Scrum container.

Staggered sprints are not the answer either, because you are still splitting the team into design and development, hampering cross-collaboration and transparency. You’re basically inviting Taylorism back into your process: the very thing you were trying to get away from.

When you are uncomfortable with putting designers and developers all in the same team and the same process the answer is not to make your process more elaborate, parcel things up, and decrease “messy” interactions. The answer is increasing conversation, not eliminating it.

It turns out things aren’t remotely as complicated as they appear to be. The key is understanding Scrum’s events. The big event holding all other events is the sprint. The sprint outputs a releasable increment of “done” product. The development team does everything required to achieve the sprint goal collaboratively determined during sprint planning. Naturally this includes any design needed for the product. I think of this as the ‘production’ type of design. It typically consists mostly of UI design. There may already be some preliminary UI design available at the start of the sprint, but it does not have to be finished.

What about the kind of design that is required for figuring out what to build in the first place? It might not be obvious at first, but Scrum actually has an ongoing process which readily accommodates it: backlog refinement. This covers all the activities required to get a product backlog item in shape for sprint planning. It is emphatically not a solo show for the product manager to conduct; it is something the whole team collaborates on, developers and designers alike. In my experience designers are great at facilitating backlog refinement sessions: at the whiteboard, figuring stuff out with the whole team, ‘Lean UX’ style.

I will admit product backlog refinement is Scrum’s weak point. Where it offers a lot of structure for the sprints, it offers hardly any for the backlog refinement (or grooming as some call it). But that’s okay, we can evolve our own.

I like to use Kanban to manage the process of backlog refinement. Items come into the pipeline as something we want to elaborate because we have decided we want to build it (in some form or other; it can be just an experiment) in the next sprint or two. An item then goes through various stages of elaboration: at the very least capturing requirements in the form of user stories or job stories, doing sketches, a lo-fi prototype, mockups and a hi-fi prototype, and finally breaking the item down into work to be done and attaching an estimate to it. At this point it is ready to be part of a sprint. Crucially, during this lifecycle of an item as it is being refined, we can and should do user research if we feel we need more data, or user testing if we feel it is too risky to commit to a feature outright.

For this kind of figuring stuff out, this ‘planning’ type of design, it makes no sense to be part of a sprint-like structure, because the work required to get an item to a ‘ready’ state is much more unpredictable. The point of a looser grooming flow is to eliminate uncertainty by the time we commit to an item in a sprint.

So between the sprint and backlog refinement, Scrum readily accommodates design. ‘Production’ type design happens inside of the sprint and designers are considered part of the development team. ‘Planning’ type of design happens as part of backlog refinement.

So no need to tack on a separate process. It keeps the process simple and understandable, thus increasing transparency for the whole team. It prevents design from becoming a black box to others. And when we make design part of the container process framework that is Scrum, we reap the rewards of the team’s collective intelligence and we increase our agility.

Prototyping is a team sport

Lately I have been binging on books, presentations and articles related to ‘Lean UX’. I don’t like the term, but then I don’t like the tech industry’s love for inventing a new label for every damn thing. I do like the things it emphasises: shared understanding, deep collaboration, continuous user feedback. These are principles that have always implicitly guided the choices I made when leading teams at Hubbub, and now also as a member of several teams in the role of product designer.

In all these lean UX readings a thing that keeps coming up again and again is prototyping. Prototypes are the go-to way of doing ‘experiments’, in lean-speak. Other things can be done as well—surveys, interviews, whatever—but more often than not, assumptions are tested with prototypes.

Which is great! And also unsurprising, as prototyping has really been embraced by the tech world, and tools for rapid prototyping are getting a lot of attention and interest as a result. However, this comes with a couple of risks. For one, sometimes it is fine to stick to paper, but the lure of shiny prototyping tools is strong. You’d rather not show a crappy drawing to a user; what if they hate it? Yet high-fidelity prototyping is always more costly than paper. So although well-intentioned, prototyping tools can encourage wastefulness, the bane of lean.

There is a bigger danger which runs against the lean ethos, though. Some tools afford deep collaboration more than others. Let’s be real: none afford deeper collaboration than paper and whiteboards. There is one person behind the controls when prototyping with a tool. So in my view, one should only ever progress to that step once a team effort has been made to hash out the rough outlines of what is to be prototyped. Basically: always paper prototype the digital prototype. Together.

I have had a lot of fun lately playing with browser prototypes and with prototyping in Framer. But as I was getting back into all of this I did notice this risk: All of a sudden there is a person on the team who does the prototypes. Unless this solo prototyping is preceded by shared prototyping, this is a problem. Because the rest of the team is left out of the thinking-through-making which makes the prototyping process so valuable in addition to the testable artefacts it outputs.

It is, I think, a key oversight of the ‘should designers code’ debaters, and to an extent one made by all prototyping tool manufacturers: individuals don’t prototype, teams do. Prototyping is a team sport. And so the success of a tool depends not only on how well it supports individual prototyping activities but also on how well it embeds itself in collaborative workflows.

In addition to the tools themselves getting better at supporting collaborative workflows, I would also love to see more tutorials, both official and from the community, about how to use a prototyping tool within the larger context of a team doing some form of agile. Most tutorials now focus on “how do I make this thing with this tool”. Useful, up to a point. But a large part of prototyping is to arrive at “the thing” together.

One of the lean UX things I devoured was this presentation by Bill Scott in which he talks about aligning a prototyping and a development tech stack, so that the gap between design and engineering is bridged not just with processes but also with tooling. His example applies to web development and app development using web technologies. I wonder what a similar approach looks like for native mobile app development. But this is the sort of thing I am talking about: Smart thinking about how to actually do this lean thing in the real world. I believe organising ourselves so that we can prototype as a team is absolutely key. I will pick my tools and processes accordingly in future.

All of the above is as usual mostly a reminder to self: As a designer your role is not to go off and work solo on brilliant prototypes. Your role is to facilitate such efforts by the whole team. Sure, there will be solo deep designerly crafting happening. But it will not add up to anything if it is not embedded in a collaborative design and development framework.