Design and machine learning – an annotated reading list

Earlier this year I coached Design for Interaction master’s students at Delft University of Technology in the course Research Methodology. The students organised three seminars for which I provided the claims and assigned reading. In the seminars they argued about my claims using the Toulmin Model of Argumentation. The readings served as sources for backing and evidence.

The claims and readings were all related to my nascent research project about machine learning. We delved into both designing for machine learning, and using machine learning as a design tool.

Below are the readings I assigned, with some notes on each, which should help you decide if you want to dive into them yourself.

Hebron, Patrick. 2016. Machine Learning for Designers. Sebastopol: O’Reilly.

The only non-academic piece in this list. It served the purpose of getting all students on the same page with regard to what machine learning is, its applications in interaction design, and the common challenges encountered. I still can’t think of any other single resource that is as good a starting point for the subject as this one.

Fiebrink, Rebecca. 2016. “Machine Learning as Meta-Instrument: Human-Machine Partnerships Shaping Expressive Instrumental Creation.” In Musical Instruments in the 21st Century, 14:137–51. Singapore: Springer Singapore. doi:10.1007/978-981-10-2951-6_10.

Fiebrink’s Wekinator is groundbreaking, fun and inspiring, so I had to include some of her writing in this list. It is mostly of interest to those looking into the use of machine learning for design and other creative and artistic endeavours. An important idea explored here is that tools that make use of (interactive, supervised) machine learning can be thought of as instruments. Using such a tool is like playing or performing: exploring a possibility space, engaging in a dialogue with the tool. For a tool to feel like an instrument, a tight action-feedback loop is required.
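To make the idea of an interactive, supervised learning loop a bit more concrete, here is a minimal sketch of my own (not Fiebrink’s implementation, and far cruder than the Wekinator): every demonstration retrains the model on the spot, so training and performing collapse into a single activity. The gesture-to-pitch mapping and the use of scikit-learn’s k-nearest-neighbours regressor are assumptions purely for illustration.

```python
# A minimal sketch (mine, not Fiebrink's) of the interactive supervised
# learning loop that makes a tool feel like an instrument: add an example,
# retrain immediately, and hear or see the result right away.
from sklearn.neighbors import KNeighborsRegressor

examples, targets = [], []
model = KNeighborsRegressor(n_neighbors=2)

def add_example(inputs, desired_output):
    """Record one demonstration and retrain on the spot."""
    examples.append(inputs)
    targets.append(desired_output)
    model.fit(examples, targets)  # retraining is cheap enough to do per example

def run(inputs):
    """Map live inputs (e.g. sensor values) to an output (e.g. a synth parameter)."""
    return model.predict([inputs])[0]

# Demonstrate two gestures, then perform something in between them.
add_example([0.1, 0.2], 220.0)   # gesture A maps to a low pitch
add_example([0.9, 0.8], 880.0)   # gesture B maps to a high pitch
print(run([0.5, 0.5]))           # averages the two nearest examples: 550.0
```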

Dove, Graham, Kim Halskov, Jodi Forlizzi, and John Zimmerman. 2017. “UX Design Innovation: Challenges for Working with Machine Learning as a Design Material.” In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. New York: ACM. doi:10.1145/3025453.3025739.

A really good survey of how designers currently deal with machine learning. Key takeaways include that in most cases, the application of machine learning is still engineering-led as opposed to design-led, which hampers the creation of non-obvious machine learning applications. It also makes it hard for designers to consider ethical implications of design choices. A key reason for this is that at the moment, prototyping with machine learning is prohibitively cumbersome.

Fiebrink, Rebecca, Perry R. Cook, and Dan Trueman. 2011. “Human Model Evaluation in Interactive Supervised Learning.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’11), 147. New York: ACM Press. doi:10.1145/1978942.1978965.

The second Fiebrink piece in this list, which is more of a deep dive into how people use Wekinator. As with the chapter listed above this is required reading for those working on design tools which make use of interactive machine learning. An important finding here is that users of intelligent design tools might have very different criteria for evaluating the ‘correctness’ of a trained model than engineers do. Such criteria are likely subjective and evaluation requires first-hand use of the model in real time.

Bostrom, Nick, and Eliezer Yudkowsky. 2014. “The Ethics of Artificial Intelligence.” In The Cambridge Handbook of Artificial Intelligence, edited by Keith Frankish and William M. Ramsey, 316–34. Cambridge: Cambridge University Press. doi:10.1017/CBO9781139046855.020.

Bostrom is known for his somewhat crazy but thought-provoking book on superintelligence. Although a large part of this chapter is about the ethics of general artificial intelligence (which at the very least is still some way off), the first section discusses the ethics of current “narrow” artificial intelligence. It makes for a good checklist of things designers should keep in mind when they create new applications of machine learning. Key insight: when a machine learning system takes on work with social dimensions—tasks previously performed by humans—the system inherits its social requirements.

Yang, Qian, John Zimmerman, Aaron Steinfeld, and Anthony Tomasic. 2016. “Planning Adaptive Mobile Experiences When Wireframing.” In Proceedings of the 2016 ACM Conference on Designing Interactive Systems. New York: ACM. doi:10.1145/2901790.2901858.

Finally, a feet-in-the-mud exploration of what it actually means to design for machine learning with the tools most commonly used by designers today: drawings and diagrams of various sorts. In this case the focus is on using machine learning to make an interface adaptive. It includes an interesting discussion of how to balance the use of implicit and explicit user inputs for adaptation, and how to deal with inference errors. Once again the limitations of current sketching and prototyping tools are mentioned, and related to the need for designers to develop tacit knowledge about machine learning. Such tacit knowledge will only be gained when designers can work with machine learning in a hands-on manner.

Supplemental material

Floyd, Christiane. 1984. “A Systematic Look at Prototyping.” In Approaches to Prototyping, 1–18. Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-642-69796-8_1.

I provided this to the students so that they would get some additional grounding in the various kinds of prototyping that are out there. It helps to prevent reductive notions of prototyping, and it makes for a nice complement to Buxton’s work on sketching.

Blevis, E., Y. Lim, and E. Stolterman. 2006. “Regarding Software as a Material of Design.”

Some of the papers refer to machine learning as a “design material” and this paper helps to understand what that idea means. Software is a material without qualities (it is extremely malleable, it can simulate nearly anything). Yet, it helps to consider it as a physical material in the metaphorical sense because we can then apply ways of design thinking and doing to software programming.

Status update

This is not exactly a now page, but I thought I would write up what I am doing at the moment since last reporting on my status in my end-of-year report.

The majority of my workdays are spent doing freelance design consulting. My primary gig has been through Eend at the Dutch Victim Support Foundation, where until very recently I was part of a team building online services. I helped out with product strategy, setting up a lean UX design process, and getting an integrated agile design and development team up and running. The first services are now shipping so it is time for me to move on, after 10 months of very gratifying work. I really enjoy working in the public sector and I hope to be doing more of it in future.

So yes, this means I am available and you can hire me to do strategy and design for software products and services. Just send me an email.

Shortly before the Dutch national elections of this year, Iskander and I gathered a group of fellow tech workers under the banner of “Tech Solidarity NL” to discuss the concerning lurch to the right in national politics and what our field can do about it. This has developed into a small but active community who gather monthly to educate ourselves and develop plans for collective action. I am getting a huge boost out of this. Figuring out how to be a leftist in this day and age is not easy. The only way to do it is to practice, and for that, reflection with peers is invaluable. Building and facilitating a group like this is hugely educational too. I have learned a lot about how a community is bootstrapped and nurtured.

If you are in the Netherlands, your politics are left of center, and you work in technology, consider yourself invited to join.

And finally, the last major thing on my plate is a continuing effort to secure a PhD position for myself. I am getting great support from people at Delft University of Technology, in particular Gerd Kortuem. I am focusing on internet of things products that have features driven by machine learning. My ultimate aim is to develop prototyping tools for design and development teams that will help them create more innovative and more ethical solutions. The first step will be to conduct field research inside companies that are creating such products right now. So I am reaching out to people to see if I can secure a reasonable number of potential collaborators, which will go a long way towards proving the feasibility of my whole plan.

If you know of any companies that develop consumer-facing products that have a connected hardware component and make use of machine learning to drive features, do let me know.

That’s about it. Freelance UX consulting, leftist tech-worker organising and design-for-machine-learning research. Quite happy with that mix, really.

‘Hybrid Writing for Conversational Interfaces’ workshop

On May 24 of this year, Niels ’t Hooft and I ran a workshop titled ‘Hybrid Writing for Conversational Interfaces’ at TU Delft. Our aim was twofold: teach students about writing characters and dialog, and teach them how to prototype chat interfaces.

We spent a day with roughly thirty industrial design students, alternating between bits of theory, writing exercises and instruction in how to use Twine (our prototyping tool of choice), and closed out with a small project and a show and tell.

I was very pleased to see prototypes with quite a high level of complexity and sophistication at the end of the day. And throughout, I could tell students were enjoying themselves writing and building interactive conversations.

Here’s a rough outline of how the workshop was structured.

  1. After briefly introducing ourselves, Niels presented a mini-lecture on interactive fiction. A highlight for me was a two-by-two of the ways in which fiction and software can intersect.

Four types of software-fiction hybrids

  2. I then took over and did a show and tell of the absolute basics of using Twine: creating passages, linking them, creating branches, and testing and publishing your story.
  3. The first exercise after this was for students to take what they had just learned about Twine and try to create a very simple interactive story.
  4. After a coffee break, Niels presented his second mini-lecture, on the very basics of writing, with a particular focus on writing characters and dialog. This included a handy cheatsheet of things to consider while writing.

A cheatsheet for writing dialog

  5. In our second exercise students worked in pairs. Each first created a character, which they then described to the other. They then planned out the structure of an encounter between these two characters, and finally wrote the dialogue for this encounter together, sticking to Hollywood formatting. Niels and I did a reading of a few of the results (to the great amusement of all present) to close out the morning section of the workshop.
  6. After lunch Niels presented his third and final mini-lecture of the day, on conversational interfaces, relying heavily on the great work of our friend Alper in his book on the subject.
  7. I then took over for the second show and tell. Here we ramped up the challenge and introduced the Twine Texting Project – a framework for prototyping conversational interfaces in Twine. On GitHub, you can find the starter file I had prepared for this section.
  8. The third and final exercise of the day was for students to take what they had learned about writing dialog and prototyping chat interfaces, and to build an interactive prototype of a conversational interface or of interactive fiction in chat format. They could either build off of the dialog they had created in the previous exercise, or start from scratch.
  9. We finished the day with demos, where we put each Twine story on the big screen and as a group chose which options to select. After each demo the creator would open up the Twine file and walk us through how they had built it. It was pretty cool to see how many students had put what they had learned to very creative uses.

Reflecting on the workshop afterwards, we felt the structure was nicely balanced between theory and practice. The difficulty level was such that students learned some new things they could incorporate into future projects, while still building on skills they had already acquired. The choice of Twine worked out well too, since it is highly accessible. Non-technical students managed to create something interactive, and more advanced students could apply what they knew about code to produce more sophisticated prototypes.

For future workshops we felt we could do a better job of building a bridge between writing for interactive fiction and writing for the conversational interfaces of software products and services. This would require some adaptation of the mini-lectures and a slightly different emphasis in the exercises. The key would be to have students imagine existing products and services as characters, and then write dialog for interactions with them and prototype those interactions. This would be worth exploring in a future iteration of the workshop.

Many thanks to Ianus Keller for inviting us to teach this workshop at IDE Academy.

Curiosity is our product

A few weeks ago I facilitated a discussion on ‘advocacy in a post-truth era’ at the European Digital Rights Initiative’s annual general assembly. And last night I was part of a discussion on fake news at a behaviour design meetup in Amsterdam. This was a good occasion to pull together some of my notes and figure out what I think is true about the ‘fake news’ phenomenon.

There is plenty of good writing out there exploring the history and current state of post-truth political culture.

Kellyanne Conway’s “alternative facts” and Michael Gove’s “I think people have had enough of experts” are just two examples of the right’s appropriation of what I would call epistemological relativism. Post-modernism was fun while it worked to advance our leftist agenda. But now that the tables are turned we’re not enjoying it quite as much anymore, are we?

Part of the fact-free politics playbook goes back at least as far as big tobacco’s efforts to discredit the anti-smoking lobby. “Doubt is our product” still applies to modern-day reactionary movements such as climate change deniers and anti-vaxxers.

The double whammy of news industry commercialisation and internet platform consolidation has created fertile ground for coordinated efforts by various groups to turn the sowing of doubt all the way up to eleven.

There is Russia’s “firehose of falsehood” which sends a high volume of messages across a wide range of channels with total disregard for truth or even consistency in a rapid, continuous and repetitive fashion. They seem to be having fun destabilising western democracies — including the Netherlands — without any apparent end-goal in mind.

And then there is the outrage marketing leveraged by trolls both minor and major. Pissing off mainstream media builds an audience on the fringes and in the underground. Journalists are held hostage by figures such as Milo because they depend on stories that trigger strong emotions for distribution, eyeballs, clicks and ultimately revenue.

So, given all of this, what is to be done? First some bad news. Facts, the weapon of choice for liberals, don’t appear to work. This is empirically evident from recent events, but it also appears to be borne out by psychology.

Facts are often more complicated than the untruths they are supposed to counter. It is also easier to remember a simple lie than a complicated truth. Complicating matters further, facts tend to be boring. Finally, and most interestingly, there is something called the ‘backfire effect’: we become more entrenched in our views when confronted with contradicting facts, because they are threatening to our group identities.

More bad news. Given the speed at which falsehoods spread through our networks, fact-checking is useless. Fact-checking is after-the-fact-checking. Worse, when media fact-check falsehoods on their front pages they are simply providing even more airtime to them. From a strategic perspective, when you debunk, you allow yourself to be captured by your opponent’s frame, and you’re also on the defensive. In Boydian terms you are caught in their OODA loop, when you should be working to take back the initiative, and you should be offering an alternative narrative.

I am not hopeful mainstream media will save us from these dynamics given the realities of the business models they operate inside of. Journalists inside of these organisations are typically overworked, just holding on for dear life and churning out stories at a rapid clip. In short, there is no time to orient and manoeuvre. For bad-faith actors, they are sitting ducks.

What about literacy? If only people knew about churnalism, the attention economy, and filter bubbles, ‘they’ would become immune to the lies peddled by reactionaries and return to the liberal fold. Personally I find these claims highly unconvincing, not to mention condescending.

My current working theory is that we, all of us, buy into the stories that activate one or more of our group identities, regardless of whether they are fact-based or outright lies. This is called ‘motivated reasoning’. Since this is a fact of psychology, we are all susceptible to it, including liberals who are supposedly defenders of fact-based reasoning.

Seriously though, what about literacy? I’m sorry, no. There is evidence that scientific literacy actually increases polarisation. Motivated reasoning trumps whatever factual knowledge you may have. The same research shows, however, that curiosity in turn trumps motivated reasoning. The way I understand the distinction between literacy and curiosity is that the former is about knowledge while the latter is about attitude. Motivated reasoning isn’t counteracted by knowing stuff, but by wanting to know stuff.

This is a mixed bag. Offering facts is comparatively easy. Sparking curiosity requires storytelling which in turn requires imagination. If we’re presented with a fact we are not invited to ask questions. However, if we are presented with questions and those questions are wrapped up in stories that create emotional stakes, some of the views we hold might be destabilised.

In other words, if doubt is the product peddled by our opponents, then we should start trafficking in curiosity.


‘Machine Learning for Designers’ workshop

On Wednesday Péter Kun, Holly Robbins and I taught a one-day workshop on machine learning at Delft University of Technology. We had about thirty master’s students from the industrial design engineering faculty. The aim was to get them acquainted with the technology through hands-on tinkering, with the Wekinator as the central teaching tool.

Photo credits: Holly Robbins

Background

The reasoning behind this workshop is twofold.

On the one hand, I expect designers will find themselves working on projects involving machine learning more and more often. The technology has certain properties that differ from traditional software. Most importantly, machine learning is probabilistic instead of deterministic. It is important that designers understand this, because otherwise they are likely to make bad decisions about its application.

The second reason is that I have a strong sense machine learning can play a role in the augmentation of the design process itself. So-called intelligent design tools could make designers more efficient and effective. They could also enable the creation of designs that would otherwise be impossible or very hard to achieve.

The workshop explored both ideas.

Photo credits: Holly Robbins

Format

The structure was roughly as follows:

In the morning we started out by providing a very broad introduction to the technology. We talked about the basic premise of (supervised) machine learning: providing examples of inputs and desired outputs, and training a model based on those examples. To make these concepts tangible we then introduced the Wekinator and walked the students through getting it up and running using basic examples from its website. The final step was to invite them to explore alternative inputs and outputs (such as game controllers and Arduino boards).
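As an aside, for readers who want a feel for what ‘talking to the Wekinator’ amounts to, here is a minimal sketch in Python rather than the tools we used in class. It sends one frame of input features over OSC; the python-osc package, the port number and the OSC address are assumptions on my part (they should match the Wekinator’s defaults, but check your own setup).

```python
# Minimal sketch: send one frame of input features to a locally running
# Wekinator instance as OSC over UDP. Assumes the python-osc package and
# Wekinator's default input settings (port 6448, address /wek/inputs).
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 6448)

# Two made-up sensor readings scaled to the 0..1 range. With Wekinator set to
# record, frames like this (paired with the output values set in its GUI)
# become training examples; with it set to run, each frame produces a live
# prediction, which Wekinator sends back out as OSC on its output port.
client.send_message("/wek/inputs", [0.42, 0.87])
```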

In the afternoon we provided a design brief, asking the students to prototype a data-enabled object with the set of tools they had acquired in the morning. We assisted with technical hurdles where necessary (of which there were more than a few) and closed out the day with demos and a group discussion reflecting on their experiences with the technology.

Photo credits: Holly Robbins

Results

As I tweeted on the way home that evening, the results were… interesting.

Not all groups managed to put something together in the admittedly short amount of time they were provided with. They were most often stymied by getting an Arduino to talk to the Wekinator. Max was often picked as a go-between because the Wekinator receives OSC messages over UDP, whereas the quickest way to get an Arduino to talk to a computer is over serial. But Max in my experience is a fickle beast and would more than once crap out on us.
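For what it’s worth, the go-between role Max played can also be filled by a small script. Below is a rough sketch of such a serial-to-OSC bridge in Python. It is not what we used in the workshop, and the serial port name, baud rate, message format and Wekinator port are all assumptions you would need to adjust to your own setup.

```python
# Rough sketch of a serial-to-OSC bridge: read sensor values from an Arduino
# over serial and forward them to the Wekinator as OSC over UDP. Assumes the
# pyserial and python-osc packages, an Arduino that prints comma-separated
# numbers one line per frame, and Wekinator listening on its default port 6448.
import serial
from pythonosc.udp_client import SimpleUDPClient

SERIAL_PORT = "/dev/ttyACM0"  # adjust for your machine, e.g. "COM3" on Windows
BAUD_RATE = 9600

arduino = serial.Serial(SERIAL_PORT, BAUD_RATE, timeout=1)
wekinator = SimpleUDPClient("127.0.0.1", 6448)

while True:
    line = arduino.readline().decode("ascii", errors="ignore").strip()
    if not line:
        continue
    try:
        values = [float(v) for v in line.split(",")]
    except ValueError:
        continue  # skip malformed or partial lines
    wekinator.send_message("/wek/inputs", values)
```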

The groups that did build something mainly assembled prototypes from the examples on hand. Which is fine, but since we were mostly working with the examples from the Wekinator website, they tended towards the interactive-instrument side of things. We were hoping for explorations of IoT product concepts. That required more hand-rolling, which was only achievable for the students on the higher end of the technical expertise spectrum (and the more tenacious ones).

The discussion yielded some interesting insights into mental models of the technology and how they are affected by hands-on experience. A comment I heard more than once was: Why is this considered learning at all? The Wekinator was not perceived to be learning anything. When challenged on this by reiterating the underlying principles it became clear the black box nature of the Wekinator hampers appreciation of some of the very real achievements of the technology. It seems (for our students at least) machine learning is stuck in a grey area between too-high expectations and too-low recognition of its capabilities.

Next steps

These results, and others, point towards some obvious improvements which can be made to the workshop format, and to teaching design students about machine learning more broadly.

  1. We can improve the toolset so that some of the heavy lifting involved with getting the various parts to talk to each other is made easier and more reliable.
  2. We can build examples that are geared towards the practice of designing IoT products and are ready for adaptation and hacking.
  3. And finally, and probably most challengingly, we can make the workings of machine learning more transparent so that it becomes easier to develop a feel for its capabilities and shortcomings.

We do intend to improve and teach the workshop again. If you’re interested in hosting one (either in an educational or professional context) let me know. And stay tuned for updates on this and other efforts to get designers to work in a hands-on manner with machine learning.

Special thanks to the brilliant Ianus Keller for connecting me to Péter and for allowing us to pilot this crazy idea at IDE Academy.

References

Sources used during preparation and running of the workshop:

  • The Wekinator – the UI is infuriatingly poor but when it comes to getting started with machine learning this tool is unmatched.
  • Arduino – I have become particularly fond of the MKR1000 board. Add a lithium-polymer battery and you have everything you need to prototype IoT products.
  • OSC for Arduino – CNMAT’s implementation of the open sound control (OSC) encoding. Key puzzle piece for getting the above two tools talking to each other.
  • Machine Learning for Designers – my preferred introduction to the technology from a designerly perspective.
  • A Visual Introduction to Machine Learning – a very accessible visual explanation of the basic underpinnings of computers applying statistical learning.
  • Remote Control Theremin – an example project I prepared for the workshop demoing how to have the Wekinator talk to an Arduino MKR1000 with OSC over UDP.

Design × AI coffee meetup

If you work in the field of design or artificial intelligence and are interested in exploring the opportunities at their intersection, consider yourself invited to an informal coffee meetup on February 15, 10am at Brix in Amsterdam.

Erik van der Pluijm and I have for a while now been carrying on a conversation about AI and design, and we felt it was time to expand the circle a bit. We are very curious who else out there shares our excitement.

Questions we are mulling over include: How does the design process change when creating intelligent products? And: How can teams collaborate with intelligent design tools to solve problems in new and interesting ways?

Anyway, lots to chew on.

No need to sign up or anything, just show up and we’ll see what happens.

High-skill robots, low-skill workers

Some notes on what I think I understand about technology and inequality.

Let’s start with an obvious big question: is technology destroying jobs faster than they can be replaced? In the long term the evidence isn’t strong. Humans always appear to invent new things to do. There is no reason this time around should be any different.

But in the short term technology has contributed to an evaporation of mid-skilled jobs. Parts of these jobs are automated entirely; other parts can be done by fewer people because of the higher productivity gained from technology.

While productivity continues to grow, jobs are lagging behind. The year 2000 appears to have been a turning point. “Something” happened around that time. But no-one knows exactly what.

My hunch is that we’ve seen an emergence of a new class of pseudo-monopolies. Oligopolies. And this is compounded by a ‘winner takes all’ dynamic that technology seems to produce.

Others have pointed to globalisation but although this might be a contributing factor, the evidence does not support the idea that it is the major cause.

So what are we left with?

Historically, looking at previous technological upheavals, it appears education makes a big difference. People negatively affected by technological progress should have access to good education so that they have options. In the US, access to high-quality education is not equally distributed.

Apparently family income is associated with educational achievement. So if your family is rich, you are more likely to become a high-skilled individual. And high-skilled individuals are privileged by the tech economy.

And if Piketty is right, we are approaching a reality in which income from wealth rises faster than wages. So there is a feedback loop in place which only exacerbates the situation.

One more thing: if you think trickle-down economics (increasing the size of the pie) will help, you might be mistaken. It appears social mobility is helped more by decreasing inequality in the distribution of income growth.

So, some preliminary conclusions: a progressive tax on wealth alone won’t solve the issue; the education system will require reform, too.

I think this is the central irony of the whole situation: we are working hard to teach machines how to learn. But we are neglecting to improve how people learn.

Move 37

Designers make choices. They should be able to provide rationales for those choices. (Although sometimes they can’t.) Being able to explain the thinking that went into a design move to yourself, your teammates and clients is part of being a professional.

Move 37. This was the move AlphaGo made in its second game against Lee Sedol that took everyone by surprise because it appeared so wrong at first.

The interesting thing is that in hindsight it appeared AlphaGo had good reasons for this move. Based on a calculation of odds, basically.

If asked at the time, would AlphaGo have been able to provide this rationale?

It’s a thing that pops up in a lot of the reading I am doing around AI. This idea of transparency. In some fields you don’t just want an AI to provide you with a decision, but also with the arguments supporting that decision. Obvious examples would include a system that helps diagnose disease. You want it to provide more than just the diagnosis. Because if it turns out to be wrong, you want to be able to say why at the time you thought it was right. This is a social, cultural and also legal requirement.

It’s interesting.

Although lives don’t depend on it, the same might apply to intelligent design tools. If I am working with a system and it is offering me design directions or solutions, I want to know why it is suggesting these things as well. Because my reason for picking one over the other depends not just on the surface-level properties of the design but also on the underlying reasons. It might be important because I need to be able to tell stakeholders about it.

An added side effect of this is that a designer working with such a system is exposed to machine reasoning about design choices. This could inform their own future thinking too.

Transparent AI might help people improve themselves. A black box can’t teach you much about the craft it’s performing. Looking at outcomes can be inspirational or helpful, but the processes that lead up to them can be equally informative. If not more so.

Imagine working with an intelligent design tool and getting the equivalent of an AlphaGo move 37 moment. Hugely inspirational. Game changer.

This idea gets me much more excited than automating design tasks does.

Books I’ve read in 2016

I’ve read 32 books, which is four short of my goal and also four fewer than the previous year. It’s still not a bad score though, and quality-wise the list below contains many gems.

I resolved to read mostly books by women and minority authors. This led to quite a few surprising experiences which I am certainly grateful for. I think I’ll continue to push myself to seek out such books in the year to come.

There are only a few comics in the list. I sort of fell off the comics bandwagon this year mainly because I just can’t seem to find a good place to discover things to read.

Anyway, here’s the list, with links to my reviews on Goodreads. A * denotes a particular favourite.

Favourite music albums of 2016

I guess this year finally marked the end of my album listening behaviour. Spotify’s Discover and Daily Mix features were the one-two punch that knocked it out. In addition I somehow stopped scrobbling to Last.fm in March. It’s switched back on now but the damage is done.

So the data I do have is incomplete. I did still deliberately put on a number of albums this year. But I won’t post them in order of listens like I did last year. This is subjective, unsorted and hand-picked. I will even sneak in a few albums that were published towards the end of 2015.

My sources included Pitchfork’s list of best new albums, which used to be how I discovered new music and still wields some influence. I cross-referenced it with Spotify’s top songs of 2016.

So first Spotify tells me what to listen to and then it gives me a list of things I actually listened to. This is getting weird…

Anyway, here they are. A * marks a particular favourite.

  • A Tribe Called Quest – We Got It From Here… *
  • Solange – A Seat At the Table
  • Hamilton Leithauser + Rostam – I Had A Dream That You Were Mine
  • The Avalanches – Wildflower *
  • Blood Orange – Freetown Sound
  • Whitney – Light Upon the Lake
  • Car Seat Headrest – Teens Of Denial *
  • Chance The Rapper – Coloring Book *
  • ANOHNI – HOPELESSNESS
  • Moodymann – DJ-Kicks *
  • Grimes – Art Angels *
  • Floating Points – Elaenia
  • The Range – Potential *
  • Sepalcure – Folding Time
  • Jamila Woods – HEAVN

Here’s a playlist which includes a couple of more albums if you want to have a listen.