An Introduction to Value Sensitive Design

Phnom Bakheng

At a recent Tech Solidarity NL meetup we dove into Value Sensitive Design. This approach had been on my radar for a while, so when we concluded that it would be useful for our community to talk about how to practice ethical design and development of technology, I figured we should check it out.

Value Sensitive Design has been around for ages. The earliest article I came across is by Batya Friedman in a 1996 edition of Interactions magazine. Ironically, or tragically, I must say I have only heard about the approach from academics and design theory nerds. In industry at large, Value Sensitive Design appears to be—to me at least—basically unknown. (A recent exception would be this interesting marriage of design sprints with Value Sensitive Design by Cennydd Bowles.)

For the meetup, I read a handful of papers and cobbled together a deck which attempts to summarise this ‘framework’—the term favoured by its main proponents. I went through it and then we had a spirited discussion of how its ideas apply to our daily practice. A report of all of that can be found over at the Tech Solidarity NL website.

Below, I have attempted to pull together the most salient points from what is a rather dense twenty-plus-slides deck. I hope it is of some use to those professional designers and developers who are looking for better ways of building technology that serves the interest of the many, not the few.

What follows is mostly adapted from the chapter “Value Sensitive Design and Information Systems” in Human–computer interaction in management information systems: Foundations. All quotes are from there unless otherwise noted.

Background

The departure point is the observation that “there is a need for an overarching theoretical and methodological framework with which to handle the value dimensions of design work.” In other words, something that accounts for what we already know about how to deal with values in design work in terms of theory and concepts, as well as methods and techniques.

This is of course not a new concern. For example, famed cyberneticist Norbert Wiener argued that technology could help make us better human beings, and create a more just society. But for it to do so, he argued, we have to take control of the technology.

We have to reject the “worshiping [of] the new gadgets which are our own creation as if they were our masters.” (Wiener 1953)

We can find many more similar arguments throughout the history of information technology. Recently such concerns have flared up in industry as well as society at large. (Not always for the right reasons in my opinion, but that is something we will set aside for now.)

To address these concerns, Value Sensitive Design was developed. It is “a theoretically grounded approach to the design of technology that accounts for human values in a principled and comprehensive manner throughout the design process.” It has been applied successfully for over 20 years.

Defining Values

But what is a value? In the literature it is defined as “what a person or group of people consider important in life.” I like this definition because it is easy to grasp but also underlines the slippery nature of values. Some things to keep in mind when talking about values:

  • In a narrow sense, the word “value” refers simply to the economic worth of an object. This is not the meaning employed by Value Sensitive Design.
  • Values should not be conflated with facts (the “fact/value distinction”) especially insofar as facts do not logically entail value.
  • “Is” does not imply “ought” (the naturalistic fallacy).
  • Values cannot be motivated only by an empirical account of the external world, but depend substantively on the interests and desires of human beings within a cultural milieu. (So contrary to what some right-wingers like to say: “Facts do care about your feelings.”)

Investigations

Let’s dig into the way this all works. “Value Sensitive Design is an iterative methodology that integrates conceptual, empirical, and technical investigations.” So it distinguishes between three types of activities (“investigations”) and it prescribes cycling through these activities multiple times. Below are listed questions and notes that are relevant to each type of investigation. But in brief, this is how I understand them:

  1. Defining the specific values at play in a project;
  2. Observing, measuring, and documenting people’s behaviour and the context of use;
  3. Analysing the ways in which a particular technology supports or hinders particular values.

Conceptual Investigations

  • Who are the direct and indirect stakeholders affected by the design at hand?
  • How are both classes of stakeholders affected?
  • What values are implicated?
  • How should we engage in trade-offs among competing values in the design, implementation, and use of information systems (e.g., autonomy vs. security, or anonymity vs. trust)?
  • Should moral values (e.g., a right to privacy) have greater weight than, or even trump, non-moral values (e.g., aesthetic preferences)?

Empirical Investigations

  • How do stakeholders apprehend individual values in the interactive context?
  • How do they prioritise competing values in design trade-offs?
  • How do they prioritise individual values and usability considerations?
  • Are there differences between espoused practice (what people say) compared with actual practice (what people do)?

And, specifically focusing on organisations:

  • What are organisations’ motivations, methods of training and dissemination, reward structures, and economic incentives?

Technical Investigations

Not a list of questions here, but some notes:

Value Sensitive Design takes the position that technologies in general, and information and computer technologies in particular, have properties that make them more or less suitable for certain activities. A given technology more readily supports certain values while rendering other activities and values more difficult to realise.

Technical investigations involve the proactive design of systems to support values identified in the conceptual investigation.

Technical investigations focus on the technology itself. Empirical investigations focus on the individuals, groups, or larger social systems that configure, use, or are otherwise affected by the technology.

Significance

Below is a list of things that make Value Sensitive Design different from other approaches, particularly ones that preceded it such as Computer-Supported Cooperative Work and Participatory Design.

  1. Value Sensitive Design seeks to be proactive.
  2. Value Sensitive Design enlarges the arena in which values arise to include not only the workplace.
  3. Value Sensitive Design contributes a unique methodology that employs conceptual, empirical, and technical investigations, applied iteratively and integratively.
  4. Value Sensitive Design enlarges the scope of human values beyond those of cooperation (CSCW) and participation and democracy (Participatory Design) to include all values, especially those with moral import.
  5. Value Sensitive Design distinguishes between usability and human values with ethical import.
  6. Value Sensitive Design identifies and takes seriously two classes of stakeholders: direct and indirect.
  7. Value Sensitive Design is an interactional theory.
  8. Value Sensitive Design builds from the psychological proposition that certain values are universally held, although how such values play out in a particular culture at a particular point in time can vary considerably.

[ad 4] “By moral, we refer to issues that pertain to fairness, justice, human welfare and virtue, […] Value Sensitive Design also accounts for conventions (e.g., standardisation of protocols) and personal values”

[ad 5] “Usability refers to characteristics of a system that make it work in a functional sense, […] not all highly usable systems support ethical values”

[ad 6] “Often, indirect stakeholders are ignored in the design process.”

[ad 7] “values are viewed neither as inscribed into technology (an endogenous theory), nor as simply transmitted by social forces (an exogenous theory). […] the interactional position holds that while the features or properties that people design into technologies more readily support certain values and hinder others, the technology’s actual use depends on the goals of the people interacting with it. […] through human interaction, technology itself changes over time.”

[ad 8] “the more concretely (act-based) one conceptualises a value, the more one will be led to recognising cultural variation; conversely, the more abstractly one conceptualises a value, the more one will be led to recognising universals”

How-To

Value Sensitive Design doesn’t prescribe a particular process, which is fine by me, because I believe strongly in tailoring your process to the particular project at hand. Part of being a thoughtful designer is designing a project’s process as well. However, some guidance is offered for how to proceed in most cases. Here’s a list, plus some notes.

  1. Start with a value, technology, or context of use
  2. Identify direct and indirect stakeholders
  3. Identify benefits and harms for each stakeholder group
  4. Map benefits and harms onto corresponding values
  5. Conduct a conceptual investigation of key values
  6. Identify potential value conflicts
  7. Integrate value considerations into one’s organisational structure
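To make steps 2 through 4 a bit more concrete, here is a minimal sketch of how stakeholder, benefit/harm, and value mappings might be recorded. All names and mappings below are illustrative examples of mine, not part of the framework:

```python
from dataclasses import dataclass, field

@dataclass
class Stakeholder:
    name: str
    direct: bool                                    # interacts with the system itself?
    benefits: list[str] = field(default_factory=list)
    harms: list[str] = field(default_factory=list)

# Step 2: identify direct and indirect stakeholders (hypothetical health app).
patient = Stakeholder("patient", direct=True,
                      benefits=["faster diagnosis"], harms=["data exposure"])
relative = Stakeholder("family member", direct=False,
                       harms=["incidental data collection"])

# Step 4: map each benefit and harm onto a corresponding value.
value_map = {
    "faster diagnosis": "human welfare",
    "data exposure": "privacy",
    "incidental data collection": "informed consent",
}

def implicated_values(s: Stakeholder) -> set[str]:
    """Collect the values implicated for one stakeholder group (step 5 input)."""
    return {value_map[item] for item in s.benefits + s.harms}
```

The point is not the code itself but the discipline it encodes: every benefit or harm traces back to a named stakeholder group and a named value, which gives the conceptual investigation of step 5 something concrete to work with.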

[ad 1] “We suggest starting with the aspect that is most central to your work and interests.”

[ad 2] “direct stakeholders are those individuals who interact directly with the technology or with the technology’s output. Indirect stakeholders are those individuals who are also impacted by the system, though they never interact directly with it. […] Within each of these two overarching categories of stakeholders, there may be several subgroups. […] A single individual may be a member of more than one stakeholder group or subgroup. […] An organisational power structure is often orthogonal to the distinction between direct and indirect stakeholders.”

[ad 3] “one rule of thumb in the conceptual investigation is to give priority to indirect stakeholders who are strongly affected, or to large groups that are somewhat affected […] Attend to issues of technical, cognitive, and physical competency. […] personas have a tendency to lead to stereotypes because they require a list of “socially coherent” attributes to be associated with the “imagined individual.” […] we have deviated from the typical use of personas that maps a single persona onto a single user group, to allow for a single persona to map onto multiple stakeholder groups”

[ad 4] “In some cases, the corresponding values will be obvious, but not always.”

[ad 5] “the philosophical ontological literature can help provide criteria for what a value is, and thereby how to assess it empirically.”

[ad 6] “value conflicts should usually not be conceived of as “either/or” situations, but as constraints on the design space.”

[ad 7] “In the real world, of course, human values (especially those with ethical import) may collide with economic objectives, power, and other factors. However, even in such situations, Value Sensitive Design should be able to make positive contributions, by showing alternate designs that better support enduring human values.”

Considering Values

Human values with ethical import often implicated in system design

This table is a useful heuristic for values that might be considered. The authors note that it is not intended as a complete list of human values that might be implicated. Another, more elaborate tool of a similar sort is the set of Envisioning Cards.

For the ethics nerds, it may be interesting to note that most of the values in this table hinge on the deontological and consequentialist moral orientations. In addition, the authors have chosen several other values related to system design.

Interviewing Stakeholders

When doing the empirical investigations you’ll probably rely on stakeholder interviews quite heavily. Stakeholder interviews shouldn’t be a new thing to any design professional worth their salt. But the authors do offer some practical pointers to keep in mind.

First of all, keep the interview somewhat open-ended. This means conducting a semi-structured interview. This will allow you to ask the things you want to know, but also creates the opportunity for new and unexpected insights to emerge.

Laddering—repeatedly asking the question “Why?”—can get you quite far.

The most important thing, before interviewing stakeholders, is to have a good understanding of the subject at hand. Demarcate it using criteria that can be explained to outsiders. Use descriptions of issues or tasks for participants to engage in, so that the subject of the investigation becomes more concrete.

Technical Investigations

Two things I find interesting here. First of all, we are encouraged to map the relationship between design trade-offs, value conflicts and stakeholder groups. The goal of this exercise is to be able to see how stakeholder groups are affected in different ways.

The second useful suggestion for technical investigations is to build flexibility into a product or service’s technical infrastructure. The reason for this is that over time, new values and value conflicts can emerge. As designers we are not always around anymore once a system is deployed so it is good practice to enable the stakeholders to adapt our design to their evolving needs. (I was very much reminded of the approach advocated by Stewart Brand in How Buildings Learn.)

Conclusion

When discussing matters of ethics in design with peers I often notice a reluctance to widen the scope of our practice to include these issues. Frequently, folks argue that since it is impossible to foresee all the potential consequences of design choices, we can’t possibly be held accountable for all the terrible things that can happen as a result of a new technology being introduced into society.

I think that’s a misunderstanding of what ethical design is about. We may not always be directly responsible for the consequences of our design (both good and bad). But we are responsible for what we choose to make part of our concerns as we practice design. This should include the values considered important by the people impacted by our designs.

In the 1996 article mentioned at the start of this post, Friedman concludes as follows:

“As with the traditional criteria of reliability, efficiency, and correctness, we do not require perfection in value-sensitive design, but a commitment. And progress.” (Friedman 1996)

I think that is an apt note to end on here as well.

References

  • Friedman, Batya. “Value-sensitive design.” interactions 3.6 (1996): 16-23.
  • Friedman, Batya, Peter Kahn, and Alan Borning. “Value sensitive design: Theory and methods.” University of Washington technical report (2002): 02-12.
  • Le Dantec, Christopher A., Erika Shehan Poole, and Susan P. Wyche. “Values as lived experience: evolving value sensitive design in support of value discovery.” Proceedings of the SIGCHI conference on human factors in computing systems. ACM, 2009.
  • Borning, Alan, and Michael Muller. “Next steps for value sensitive design.” Proceedings of the SIGCHI conference on human factors in computing systems. ACM, 2012.
  • Friedman, B., P. Kahn, and A. Borning. “Value sensitive design and information systems.” Human–computer interaction in management information systems: Foundations (2006): 348-372.

Design and machine learning – an annotated reading list

Earlier this year I coached Design for Interaction master students at Delft University of Technology in the course Research Methodology. The students organised three seminars for which I provided the claims and assigned reading. In the seminars they argued about my claims using the Toulmin Model of Argumentation. The readings served as sources for backing and evidence.

The claims and readings were all related to my nascent research project about machine learning. We delved into both designing for machine learning, and using machine learning as a design tool.

Below are the readings I assigned, with some notes on each, which should help you decide if you want to dive into them yourself.

Hebron, Patrick. 2016. Machine Learning for Designers. Sebastopol: O’Reilly.

The only non-academic piece in this list. It served the purpose of getting all students on the same page with regard to what machine learning is, its applications in interaction design, and the common challenges encountered. I still can’t think of any other single resource that is as good a starting point for the subject.

Fiebrink, Rebecca. 2016. “Machine Learning as Meta-Instrument: Human-Machine Partnerships Shaping Expressive Instrumental Creation.” In Musical Instruments in the 21st Century, 14:137–51. Singapore: Springer Singapore. doi:10.1007/978-981-10-2951-6_10.

Fiebrink’s Wekinator is groundbreaking, fun and inspiring so I had to include some of her writing in this list. This is mostly of interest for those looking into the use of machine learning for design and other creative and artistic endeavours. An important idea explored here is that tools that make use of (interactive, supervised) machine learning can be thought of as instruments. Using such a tool is like playing or performing, exploring a possibility space, engaging in a dialogue with the tool. For a tool to feel like an instrument requires a tight action-feedback loop.
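To make the instrument idea concrete, here is a toy sketch of interactive supervised learning in the spirit of such tools. Wekinator itself offers several learning algorithms; the 1-nearest-neighbour lookup below is my own simplification, and the gesture-to-pitch mapping is invented for illustration:

```python
import math

# Toy sketch of interactive supervised learning: the performer demonstrates
# a few (gesture -> sound parameter) pairs, and the tool responds to new
# gestures by recalling the closest demonstration.
examples: list[tuple[list[float], float]] = []

def demonstrate(gesture: list[float], parameter: float) -> None:
    """Record a training example, as a performer would while 'playing' the tool."""
    examples.append((gesture, parameter))

def respond(gesture: list[float]) -> float:
    """Map a new gesture to a sound parameter via 1-nearest-neighbour lookup."""
    _, parameter = min(examples, key=lambda ex: math.dist(ex[0], gesture))
    return parameter

# Two demonstrations define a crude mapping from 2D gesture space to pitch.
demonstrate([0.0, 0.0], 220.0)  # low-left gesture  -> low pitch (A3)
demonstrate([1.0, 1.0], 880.0)  # high-right gesture -> high pitch (A5)
```

The tight loop of demonstrating, listening, and demonstrating again is what makes such a tool feel like an instrument rather than a batch training process.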

Dove, Graham, Kim Halskov, Jodi Forlizzi, and John Zimmerman. 2017. UX Design Innovation: Challenges for Working with Machine Learning as a Design Material. The 2017 CHI Conference. New York, New York, USA: ACM. doi:10.1145/3025453.3025739.

A really good survey of how designers currently deal with machine learning. Key takeaways include that in most cases, the application of machine learning is still engineering-led as opposed to design-led, which hampers the creation of non-obvious machine learning applications. It also makes it hard for designers to consider ethical implications of design choices. A key reason for this is that at the moment, prototyping with machine learning is prohibitively cumbersome.

Fiebrink, Rebecca, Perry R Cook, and Dan Trueman. 2011. “Human Model Evaluation in Interactive Supervised Learning.” In, 147. New York, New York, USA: ACM Press. doi:10.1145/1978942.1978965.

The second Fiebrink piece in this list, which is more of a deep dive into how people use Wekinator. As with the chapter listed above this is required reading for those working on design tools which make use of interactive machine learning. An important finding here is that users of intelligent design tools might have very different criteria for evaluating the ‘correctness’ of a trained model than engineers do. Such criteria are likely subjective and evaluation requires first-hand use of the model in real time.

Bostrom, Nick, and Eliezer Yudkowsky. 2014. “The Ethics of Artificial Intelligence.” In The Cambridge Handbook of Artificial Intelligence, edited by Keith Frankish and William M Ramsey, 316–34. Cambridge: Cambridge University Press. doi:10.1017/CBO9781139046855.020.

Bostrom is known for his somewhat crazy but thought-provoking book on superintelligence, and although a large part of this chapter is about the ethics of general artificial intelligence (which at the very least is still a long way off), the first section discusses the ethics of current “narrow” artificial intelligence. It makes for a good checklist of things designers should keep in mind when they create new applications of machine learning. Key insight: when a machine learning system takes on work with social dimensions—tasks previously performed by humans—the system inherits its social requirements.

Yang, Qian, John Zimmerman, Aaron Steinfeld, and Anthony Tomasic. 2016. Planning Adaptive Mobile Experiences When Wireframing. The 2016 ACM Conference. New York, New York, USA: ACM. doi:10.1145/2901790.2901858.

Finally, a feet-in-the-mud exploration of what it actually means to design for machine learning with the tools most commonly used by designers today: drawings and diagrams of various sorts. In this case the focus is on using machine learning to make an interface adaptive. It includes an interesting discussion of how to balance the use of implicit and explicit user inputs for adaptation, and how to deal with inference errors. Once again the limitations of current sketching and prototyping tools are mentioned, and related to the need for designers to develop tacit knowledge about machine learning. Such tacit knowledge will only be gained when designers can work with machine learning in a hands-on manner.

Supplemental material

Floyd, Christiane. 1984. “A Systematic Look at Prototyping.” In Approaches to Prototyping, 1–18. Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-642-69796-8_1.

I provided this to students so they could get some additional grounding in the various kinds of prototyping that are out there. It helps to prevent reductive notions of prototyping, and it makes for a nice complement to Buxton’s work on sketching.

Blevis, E, Y Lim, and E Stolterman. 2006. “Regarding Software as a Material of Design.”

Some of the papers refer to machine learning as a “design material” and this paper helps to understand what that idea means. Software is a material without qualities (it is extremely malleable, it can simulate nearly anything). Yet, it helps to consider it as a physical material in the metaphorical sense because we can then apply ways of design thinking and doing to software programming.

Generating UI design variations

AI design tool for UI design alternatives

I am still thinking about AI and design. How is the design process of AI products different? How is the user experience of AI products different? Can design tools be improved with AI?

When it comes to improving design tools with AI my starting point is game design and development. What follows is a quick sketch of one idea, just to get it out of my system.

‘Mixed-initiative’ tools for procedural generation (such as Tanagra) allow designers to create high-level structures which a machine uses to produce full-fledged game content (such as levels). This happens in real time: there is a continuous back-and-forth between designer and machine.

Software user interfaces, on mobile in particular, are increasingly assembled from ready-made components according to more or less well-described rules taken from design languages such as Material Design. These design languages are currently described primarily for human consumption. But it should be a small step to make a design language machine-readable.

So I see an opportunity here where a designer might assemble a UI like they do now, and a machine can do several things. For example it can test for adherence to design language rules, suggest corrections or even auto-correct as the designer works.
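As a rough sketch of what that might look like: a machine-readable design language could be a set of declarative rules checked against a description of the mockup. The component properties and rules below are invented for illustration, not taken from Material Design:

```python
# A mockup described as a flat list of components with properties.
mockup = [
    {"type": "button", "label": "OK", "min_touch_target": 40},
    {"type": "button", "label": "CANCEL", "min_touch_target": 48},
]

# Machine-readable design language rules: each maps a component type
# to named predicates the component must satisfy.
rules = {
    "button": [
        ("touch target >= 48px", lambda c: c["min_touch_target"] >= 48),
        ("label is uppercase", lambda c: c["label"].isupper()),
    ],
}

def lint(components):
    """Return (component index, rule name) for every violated rule."""
    violations = []
    for i, c in enumerate(components):
        for name, check in rules.get(c["type"], []):
            if not check(c):
                violations.append((i, name))
    return violations
```

Running `lint(mockup)` flags the undersized touch target on the first button—exactly the kind of correction a tool could also suggest or auto-apply as the designer works.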

More interestingly, a machine might take one UI mockup and provide the designer with several possible variations. To do this it could use different layouts, or alternative components that serve the same or a similar purpose to the ones used.

In high pressure work environments where time is scarce, corners are often cut in the divergence phase of design. Machines could augment designers so that generating many design alternatives becomes less laborious both mentally and physically. Ideally, machines would surprise and even inspire us. And the final say would still be ours.
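A naive sketch of the variation idea: if we record which components serve the same or a similar purpose, a machine can expand a single mockup into the full set of alternatives. The component names and equivalences below are made up for illustration:

```python
from itertools import product

# Components that serve the same or a similar purpose are interchangeable.
equivalents = {
    "dropdown": ["dropdown", "radio group", "segmented control"],
    "text field": ["text field", "text area"],
}

def variations(mockup: list[str]) -> list[list[str]]:
    """Expand one mockup into every combination of equivalent components."""
    options = [equivalents.get(c, [c]) for c in mockup]
    return [list(combo) for combo in product(*options)]

designs = variations(["dropdown", "text field", "button"])
# 3 dropdown alternatives x 2 text-field alternatives x 1 button = 6 designs
```

Even this trivial combinatorial expansion turns one mockup into six; a smarter system would rank or cluster the results before showing them to the designer, so divergence becomes cheap without becoming overwhelming.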

Artificial intelligence as partner

Some notes on artificial intelligence, technology as partner and related user interface design challenges. Mostly notes to self, not sure I am adding much to the debate. Just summarising what I think is important to think about more. Warning: Dense with links.

Matt Jones writes about how artificial intelligence does not have to be a slave, but can also be partner.

I’m personally much more interested in machine intelligence as human augmentation rather than the oft-hyped AI assistant as a separate embodiment.

I would add a third possibility: AI as master. This is a common fear we humans have, and one that I think is only growing as things like AlphaGo and new Boston Dynamics robots keep happening.

I have had a tweet pinned to my timeline for a while now, which is a quote from Play Matters.

“tech­no­logy is not a ser­vant or a mas­ter but a source of expres­sion, a way of being”

So this idea actually does not just apply to AI but to tech in general. Of course, as tech gets smarter and more independent from humans, the idea of a ‘third way’ only grows in importance.

More tweeting. A while back, shortly after AlphaGo’s victory, James tweeted:

On the one hand, we must insist, as Kasparov did, on Advanced Go, and then Advanced Everything Else https://en.wikipedia.org/wiki/Advanced_Chess

Advanced Chess is a clear example of humans and AI partnering. And it is also an example of technology as a source of expression and a way of being.

Also, in a WIRED article on AlphaGo, someone who had played the AI repeatedly says his game has improved tremendously.

So that is the promise: Artificially intelligent systems which work together with humans for mutual benefit.

Now of course these AIs don’t just arrive into the world fully formed. They are created by humans with particular goals in mind. So there is a design component there. We can design them to be partners but we can also design them to be masters or slaves.

As an aside: Maybe AIs that make use of deep learning are particularly well suited to this partner model? I do not know enough about it to say for sure. But I was struck by this piece on why Google ditched Boston Dynamics. There apparently is a significant difference between holistic and reductionist approaches, deep learning being holistic. I imagine reductionist AI might be more dependent on humans. But this is just wild speculation. I don’t know if there is anything there.

This insistence of James on “advanced everything else” is a world view. A politics. To allow ourselves to be increasingly entangled with these systems, to not be afraid of them. Because if we are afraid, we either want to subjugate them or they will subjugate us. It is also about not obscuring the systems we are part of. This is a sentiment also expressed by James in the same series of tweets I quoted from earlier:

These emergences are also the best model we have ever built for describing the true state of the world as it always already exists.

And there is overlap here with ideas expressed by Kevin in ‘Design as Participation’:

[W]e are no longer just using computers. We are using computers to use the world. The obscured and complex code and engineering now engages with people, resources, civics, communities and ecosystems. Should designers continue to privilege users above all others in the system? What would it mean to design for participants instead? For all the participants?

AI partners might help us to better see the systems the world is made up of and engage with them more deeply. This hope is expressed by Matt Webb, too:

with the re-emergence of artificial intelligence (only this time with a buddy-style user interface that actually works), this question of “doing something for me” vs “allowing me to do even more” is going to get even more pronounced. Both are effective, but the first sucks… or at least, it sucks according to my own personal politics, because I regard individual alienation from society and complex systems as one of the huge threats in the 21st century.

I am reminded of the mixed-initiative systems being researched in the area of procedural content generation for games. I wrote about these a while back on the Hubbub blog. Such systems are partners of designers. They give something like super powers. Now imagine such powers applied to other problems. Quite exciting.

Actually, in the aforementioned article I distinguish between tools for making things and tools for inspecting possibility spaces. In the first case designers manipulate more abstract representations of the intended outcome and the system generates the actual output. In the second case the system visualises the range of possible outcomes given a particular configuration of the abstract representation. These two are best paired.

From a design perspective, a lot remains to be figured out. If I look at those mixed-initiative tools I am struck by how poorly they communicate what the AI is doing and what its capabilities are. There is a huge user interface design challenge there.

For stuff focused on getting information, a conversational UI seems to be the current local optimum for working with an AI. But for tools for creativity, to use the two-way split proposed by Victor, different UIs will be required.

What shape will they take? What visual language do we need to express the particular properties of artificial intelligence? What approaches can we take in addition to personifying AI as bots or characters? I don’t know and I can hardly think of any good examples that point towards promising approaches. Lots to be done.

Prototyping in the browser

When you are designing a web site or web app I think you should prototype in the browser. Why? You might as well ask why prototype at all. Answer: To enable continuous testing and refinement of your design. Since you are designing for the web it makes sense to do this testing and refinement with an artefact composed of the web’s material.

There are many ways to do prototyping. A common way is to make wireframes and then make them ‘clickable’. But when I am designing a web site or a web app and I get to the point where it is time to do wireframes I often prefer to go straight to the browser.

Before this step I have sketched out all the screens on paper of course. I have done multiple sketches of each page. I’ve had them critiqued by team members and I have reworked them.

Drawing pictures of web pages

But then I open my drawing program—Sketch, in my case—and my heart sinks. Not because Sketch sucks. Sketch is great. But it somehow feels wrong to draw pictures of web pages on my screen. I find it cumbersome. My drawing program does not behave like a browser. That is to say, instead of defining a set of rules for elements and letting the browser figure out how to render them together on a page, I have to apply those rules myself, in my head, as I put each element in its place.

And don’t get me started on how wireframes are supposed to be without visual design. That is nonsense. If you are using contrast, repetition, alignment and proximity, you are doing layout. That is visual design. I can’t stand wireframes with a bad visual hierarchy.

If I persevere, and I end up with a set of wireframes in my drawing program, they are static. I can’t use them. I then need to export them to some other, often clunky program to make the pictures clickable, which always results in something that bears only a poor resemblance to the actual experience. (I use Marvel. It’s okay, but it is hardly a joy to use. For mobile apps I still use it; for web sites I prefer not to.)

Prototyping in the browser

When I prototype in the browser I don’t have to deal with these issues. I am doing layout in a way that is native to the medium. And once I have some pages set up they are immediately usable. So I can hand it to someone, a team member or a test participant, and let them play with it.

That is why, for web sites and web apps, I skip wireframes altogether and prototype in the browser. I do not know how common this is in the industry nowadays. So I thought I would share my approach here. It may be of use to some.

It used to be the case that it was quite a bit of hassle to get up and running with a browser prototype so naturally opening a drawing package seemed more attractive. Not so anymore. Tools have come a long way. Case in point: My setup nowadays involves zero screwing around on the command line.

CodeKit

The core of it is a paid-for Mac app called CodeKit, a so-called task manager. It allows you to install a front-end development framework I like called Zurb Foundation with a couple of clicks and has a built-in web server so you can play with your prototype on any device on your local network. As you make changes to the code of your prototype it gets automatically updated on all your devices. No more manual refreshing. Saves a huge amount of time.

I know you can do most of what CodeKit does for you with stuff like Grunt but that involves tedious configuration and working the command line. This is fine when you’re a developer, but not fine when you are a designer. I want to be up and running as fast as possible. CodeKit allows me to do that and has some other features built in that are ideal for prototyping which I will talk about more below. Long story short: CodeKit has saved me a huge amount of time and is well worth the money.

Okay so on with the show. Yes, this whole prototyping in the browser thing involves ‘coding’. But honestly, if you can’t write some HTML and CSS you really shouldn’t be doing design for the web in the first place. I don’t care if you consider yourself a UX designer and somehow above all this lowly technical stuff. You are not. Nobody is saying you should become a frontend developer but you need to have an acquaintance with the materials your product is made of. Follow a few courses on Codecademy or something. There really isn’t an excuse these days for not knowing this stuff. If you want to level up, learn SASS.

Zurb Foundation

I like Zurb Foundation because it offers a coherent and comprehensive library of elements which covers almost all the common patterns found in web sites and apps. It offers a grid and some default typography styles as well. None of it looks flashy, which is how I like it when I am prototyping. A prototype at this stage does not require personality yet. Just a clear visual hierarchy. Working with Foundation is almost like playing with LEGO. You just click together the stuff you need. It’s painless and looks and works great.

I hardly do any styling but the few changes I do want to make I can easily add to Foundation’s app.scss using SASS. I usually have a few styles in there for tweaking some margins on particular elements, for example a footer. But I try to focus on the structure and behaviour of my pages and for that I am mostly doing HTML.
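To give an idea of the scale of those tweaks, here is the kind of thing my app.scss tends to contain. The selectors are made up for illustration; use whatever your own pages contain.

```scss
// app.scss — a few small overrides on top of Foundation's defaults.
// Selector names here are hypothetical.
.site-footer {
  margin-top: 2rem;  // push the footer away from the main content
  padding: 1rem 0;
}

.intro p {
  margin-bottom: 1.5rem;  // a bit more breathing room for the intro copy
}
```

That is usually the extent of it; everything else comes straight from Foundation.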

GitHub

I already mentioned testing locally; for that, CodeKit has you covered. Of course, you also want to be able to share your prototype with others. For this I like to use GitHub and their Pages feature. Once again, using their desktop client, this involves zero command line work. You just add the folder with your CodeKit project as a new repository and sync it with GitHub. Then you add a branch named ‘gh-pages’ and do ‘update from master’. Presto, your prototype is now on the web for anyone with the URL to see and use. Perfect if you’re working in a distributed team.

Don’t be intimidated by using GitHub. Their onboarding is pretty impressive nowadays. You’ll be up and running in no time. Using version control, even if it is just you working on the prototype, adds some much-needed structure and control over changes. And when you are collaborating on your prototype with team members it is indispensable.

But in most cases I am the only one building the prototype, so I just work on the master branch. Every once in a while I update the gh-pages branch from master, sync it, and I am done. If you use Slack you can add a GitHub bot to a channel and have your team members receive an automatic update every time you change the prototype.

The Kit Language

If your project is of any size beyond the very small you will likely have repeating elements in your design. Headers, footers, recurring widgets and so on. CodeKit has recently added support for something called the Kit Language. This adds support for imports and variables to regular HTML. It is absolutely great for prototyping. For each repeating element you create a ‘partial’ and import it wherever you need it. Variables are great for changing the contents of such repeating elements. CodeKit compiles it all into plain static HTML for you so your prototype runs anywhere.
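To give a flavour of what that looks like: a page might declare a variable and pull in shared partials roughly like this. The file names are made up, and you should check the CodeKit documentation for the exact syntax.

```html
<!-- index.kit -->
<!-- $pageTitle: Home -->
<!-- @import "_header.kit" -->
<p>Page content goes here.</p>
<!-- @import "_footer.kit" -->

<!-- _header.kit (the shared partial) -->
<header>
  <h1><!-- $pageTitle --></h1>
</header>
```

CodeKit resolves the imports and variables at compile time, so what ends up in your output folder is ordinary HTML.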

The Kit Language really was the missing piece of the puzzle for me. With it in place I am very comfortable recommending this way of working to anyone.

So that’s my setup: CodeKit, Zurb Foundation and GitHub. Together they make for a very pleasant and productive way to do prototyping in the browser. I don’t imagine myself going back to drawing pictures of web pages anytime soon.

Writing for conversational user interfaces

Last year at Hubbub we worked on two projects featuring a conversational user interface. I thought I would share a few notes on how we did the writing for them. Because for conversational user interfaces a large part of the design is in the writing.

At the moment, there aren’t really that many tools well suited for doing this. Twine comes to mind but it is really more focused on publishing as opposed to authoring. So while we were working on these projects we just grabbed whatever we were familiar with and felt would get the job done.

I actually think there is an opportunity here. If this conversational UI thing takes off, designers would benefit a lot from better tools to sketch and prototype them. After all, this is the only way to figure out if a conversational user interface is suitable for a particular project. In the words of Bill Buxton:

“Everything is best for something and worst for something else.”

Okay so below are my notes. The two projects are KOKORO (a codename) and Free Birds. We have yet to publish extensively on both, so a quick description is in order.

KOKORO is a digital coach for teenagers to help them manage and improve their mental health. It is currently a prototype mobile web app not publicly available. (The engine we built to drive it is available on GitHub, though.)

Free Birds (Vrije Vogels in Dutch) is a game about civil liberties for families visiting a war and resistance museum in the Netherlands. It is a location-based iOS app currently available on the Dutch app store and playable in Airborne Museum Hartenstein in Oosterbeek.


For KOKORO we used Gingko to write the conversation branches. This is good enough for a prototype but it becomes unwieldy at scale. And anyway you don’t want to be limited to a tree structure. You want to at least be able to loop back to a parent branch, something that isn’t supported by Gingko. And maybe you don’t want to use the branching pattern at all.

Free Birds’ story has a very linear structure. So in this case we just wrote our conversations in Quip with some basic rules for formatting, not unlike a screenplay.

In Free Birds player choices ‘colour’ the events that come immediately after, but the path stays the same.

This approach was inspired by the Walking Dead games. Those are super clever at giving players a sense of agency without the need for sprawling story trees. I remember seeing the creators present this strategy at PRACTICE and something clicked for me. The important point is, choices don’t have to branch off in different directions to feel meaningful.

KOKORO’s choices did have to lead to different paths so we had to build a tree structure. But we also kept track of things a user says. This allows the app to “learn” about the user. Subsequent segments of the conversation are adapted based on this learning. This allows for more flexibility and it scales better. A section of a conversation has various states between which we switch depending on what a user has said in the past.
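This pattern is simple enough to sketch in a few lines. What follows is not the actual KOKORO engine, just a minimal illustration of the idea: record what a user says, then pick a variant of a later conversation segment based on those recorded facts.

```python
# Minimal sketch of the pattern described above (not the actual KOKORO
# engine): remember facts a user reveals, then adapt later segments.

class Conversation:
    def __init__(self):
        self.facts = {}  # things we have "learned" about the user

    def record(self, key, value):
        self.facts[key] = value

    def segment(self, variants, default):
        # Return the first variant whose conditions match the recorded facts.
        for condition, text in variants:
            if all(self.facts.get(k) == v for k, v in condition.items()):
                return text
        return default

convo = Conversation()
convo.record("mood", "stressed")  # the user told us this earlier

greeting = convo.segment(
    variants=[
        ({"mood": "stressed"}, "Last time you felt stressed. How are you now?"),
        ({"mood": "fine"}, "Good to see you again!"),
    ],
    default="Hi! How are you feeling today?",
)
print(greeting)  # → Last time you felt stressed. How are you now?
```

The nice property is that each segment stays self-contained: it declares the states it can be in, and the switching logic is the same everywhere, which is what makes the approach scale better than a raw tree.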

We did something similar in Free Birds but used it to a far more limited degree, really just to once again colour certain pieces of dialogue. This is already enough to give a player a sense of agency.


As you can see, it’s all far from rocket surgery but you can get surprisingly good results just by sticking to these simple patterns. If I were to investigate more advanced strategies I would look into NLP for input and procedural generation for output. Who knows, maybe I will get to work on a project involving those things some time in the future.

Hardware interfaces for tuning the feel of microinteractions

In Digital Ground Malcolm McCullough talks about how tuning is a central part of interaction design practice. How part of the challenge of any project is to get to a point where you can start tweaking the variables that determine the behaviour of your interface for the best feel.

“Feel” is a word I borrow from game design. There is a book on it by Steve Swink. It is a funny term. We are trying to simulate sensations that are derived from the physical realm. We are trying to make things that are purely visual behave in such a way that they evoke these sensations. There are many games that heavily depend on getting feel right. Basically all games that are built on a physics simulation of some kind require good feel for a good player experience to emerge.

Physics simulations have been finding their way into non-game software products for some time now and they are becoming an increasingly important part of what makes a product, er, feel great. They are often at the foundation of signature moments that set a product apart from the pack. These signature moments are also known as microinteractions. To get them just right, being able to tune well is very important.

The behaviour of microinteractions based on physics simulations is determined by variables. For example, the feel of a spring is determined by the mass of the weight attached to the spring, the spring’s stiffness and the friction that resists the motion of the weight. These variables interact in ways that are hard to model in your head so you need to make repeated changes to each variable and try the simulation to get it just right. This is time-consuming, cumbersome and resists the easy exploration of alternatives essential to a good design process.
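To make the interaction between those three variables concrete, here is a minimal damped spring simulation in Python. It is a sketch, not any particular product's animation code; the step size and parameter values are made up for illustration.

```python
# A minimal damped spring simulation: the feel is entirely determined by
# mass, stiffness and friction, which is why tuning them matters so much.

def simulate_spring(mass, stiffness, friction, x0=1.0, dt=0.016, steps=120):
    """Return the positions of a weight released from x0, one per frame."""
    x, v = x0, 0.0
    positions = []
    for _ in range(steps):
        # Hooke's law plus a friction force opposing the velocity.
        force = -stiffness * x - friction * v
        v += (force / mass) * dt  # semi-implicit Euler integration
        x += v * dt
        positions.append(x)
    return positions

# Two very different "feels" from the same code, just different variables:
snappy = simulate_spring(mass=1.0, stiffness=170.0, friction=26.0)
floaty = simulate_spring(mass=1.0, stiffness=40.0, friction=4.0)
```

The snappy variant settles almost immediately while the floaty one keeps oscillating past its rest position, and predicting that from the numbers alone is exactly the hard part: you have to run it and feel it.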

In The Setup game designer Bennett Foddy talks about a way to improve on this workflow. Many of his games (if not all of them) are playable physics simulations with punishingly hard controls. He suggests using a hardware interface (a MIDI controller) to tune the variables that determine the feel of his game while it runs. In this way the loop between changing a variable and seeing its effect in game is dramatically shortened and many different combinations of values can be explored easily. Once a satisfactory set of values for the variables has been found they can be written back to the software for future use.
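The mapping at the heart of such a setup is simple: a MIDI control change value runs from 0 to 127 and just needs rescaling onto each parameter's range. A sketch, with made-up knob assignments and parameter ranges:

```python
# Sketch of mapping raw MIDI controller values (0-127) onto tuning
# parameters, in the spirit of the workflow Foddy describes.
# Knob assignments and ranges are made up for illustration.

PARAMS = {
    # knob number -> (parameter name, min, max)
    1: ("stiffness", 10.0, 300.0),
    2: ("friction", 0.0, 50.0),
    3: ("mass", 0.1, 5.0),
}

def midi_to_param(knob, cc_value):
    """Rescale a 0-127 controller value to the knob's parameter range."""
    name, lo, hi = PARAMS[knob]
    t = max(0, min(127, cc_value)) / 127.0  # clamp, then normalise
    return name, lo + t * (hi - lo)

# Turning knob 1 all the way up sets stiffness to its maximum.
name, value = midi_to_param(1, 127)  # → ("stiffness", 300.0)
```

Hook a function like this up to incoming controller events and write the resulting values into the running simulation, and the change-try-change loop collapses to simply turning a knob.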

I do believe such a setup is still non-trivial to make work with today’s tools. A quick check verifies that Framer does not have OSC support, for example. There is an opportunity here for prototyping environments such as Framer and others to support it. The approach is not limited to motion-based microinteractions but can be extended to the tuning of variables that control other aspects of an app’s behaviour.

For example, when we were making Standing, we would have benefited hugely from hardware controls to tweak the sensitivity of its motion-sensing functions as we were using the app. We were forced to do it by repeatedly changing numbers in the code and building the app again and again. It was quite a pain to get right. To this day I have the feeling we could have made it better if only we had had the tools to do it.

Judging from snafus such as the poor feel of the latest Twitter desktop client, there is a real need for better tools for tuning microinteractions. Just as pen tablets have become indispensable for those designing the form of user interfaces on screens, I think we might soon find a small set of hardware knobs on the desks of those designers working on the behaviour of user interfaces.

Sources for my Creative Mornings Utrecht talk on education, games, and play

I was standing on the shoulders of giants for this one. Here’s a (probably incomplete) list of sources I referenced throughout the talk.

This happened – Utrecht #8, coming up

I have to say, number seven is still fresh in my mind. Even so, we’ve announced number eight. You’ll find the lineup below. I hope to see you in four weeks, on November 22 at the HKU Akademietheater.

Theseus

Rainer Kohlberger is an independent visual artist based in Berlin. The concept and installation design for the THESEUS Innovation Center Internet of Things was done in collaboration with Thomas Schrott and is the basis for the visual identity of the technology platform. The installation connects and visually creates hierarchy between knowledge, products and services with a combination of physical polygon objects and virtually projected information layers. This atmospheric piece transfers knowledge and guidance to the visitor but also leaves room for interpretation.

De Klessebessers

Helma van Rijn is an Industrial Design Engineering PhD candidate at the TU Delft ID-StudioLab, specialized in 'difficult to reach' user groups. De Klessebessers is an activity for people with dementia to actively recall memories together. The design won the first prize in design competition Vergeethenniet and was on show during the Dutch Design Week 2007. De Klessebessers is currently in use at De Landrijt in Eindhoven.

Wip 'n' Kip

FourceLabs talks about Wip 'n' Kip, a playful installation for Stekker Fest, an annual electronic music festival based in Utrecht. Players of Wip 'n' Kip use adult-sized spring riders to control a chicken on a large screen. They race each other to the finish while at the same time trying to stay ahead of a horde of pursuing monsters. Wip 'n' Kip is a strange but effective mashup of video game, carnival ride and performance. It is part of the PLAY Pilots project, commissioned by the city and province of Utrecht, which explores the applications of play in the cultural industry.

Smarthistory

Lotte Meijer talks about Smarthistory, an online art history resource. It aims to be an addition to, or even replacement of, traditional textbooks through the use of different media to discuss hundreds of Western art pieces from antiquity to the current day. Different browsing styles are supported by a number of navigation systems. Art works are contextualized using maps and timelines. The site's community is engaged using a number of social media. Smarthistory won a Webby Award in 2009 in the education category. Lotte has gone on to work as an independent designer on many interesting and innovative projects in the art world.

Ronald Rietveld is the fourth speaker at This happened – Utrecht #7

Vacant NL

I’m happy to say we have our fourth speaker confirmed for next Monday’s This happened. Here’s the blurb:

Landscape architect Ronald Rietveld talks about Vacant NL. The installation challenges the Dutch government to use the enormous potential of inspiring, unused buildings from the 17th, 18th, 19th, 20th and 21st century for creative entrepreneurship and innovation. The Dutch government wants to be in the top 5 of world knowledge economies by the end of 2020. Vacant NL takes this political ambition seriously and leverages vacancy to stimulate innovation within the creative knowledge economy. Vacant NL is the Dutch submission for the Venice Architecture Biennale 2010. It is made by Rietveld Landscape, which Ronald Rietveld founded after winning the Prix de Rome in Architecture 2006. In 2003 he graduated with honors from the Amsterdam Academy of Architecture.

At first sight this might be the odd one out: an architectural exhibition at an interaction design event. But both the subject of the installation and the design of the experience deal with interaction in many ways. So I am sure it will provide attendees with valuable insights.