Artificial intelligence as partner

Some notes on artificial intelligence, technology as partner and related user interface design challenges. Mostly notes to self, not sure I am adding much to the debate. Just summarising what I think is important to think about more. Warning: Dense with links.

Matt Jones writes about how artificial intelligence does not have to be a slave, but can also be a partner.

I’m personally much more interested in machine intelligence as human augmentation rather than the oft-hyped AI assistant as a separate embodiment.

I would add a third possibility, which is AI as master. A common fear we humans have, and one that I think is only growing as things like AlphaGo and new Boston Dynamics robots keep happening.

I have had a tweet pinned to my timeline for a while now, which is a quote from Play Matters.

“technology is not a servant or a master but a source of expression, a way of being”

So this idea actually does not just apply to AI but to tech in general. Of course, as tech gets smarter and more independent from humans, the idea of a ‘third way’ only grows in importance.

More tweeting. A while back, shortly after AlphaGo’s victory, James tweeted:

On the one hand, we must insist, as Kasparov did, on Advanced Go, and then Advanced Everything Else https://en.wikipedia.org/wiki/Advanced_Chess

Advanced Chess is a clear example of humans and AI partnering. And it is also an example of technology as a source of expression and a way of being.

Also, in a WIRED article on AlphaGo, someone who had played the AI repeatedly said his game had improved tremendously.

So that is the promise: Artificially intelligent systems which work together with humans for mutual benefit.

Now of course these AIs don’t just arrive into the world fully formed. They are created by humans with particular goals in mind. So there is a design component there. We can design them to be partners but we can also design them to be masters or slaves.

As an aside: Maybe AIs that make use of deep learning are particularly well suited to this partner model? I do not know enough about it to say for sure. But I was struck by this piece on why Google ditched Boston Dynamics. There apparently is a significant difference between holistic and reductionist approaches, deep learning being holistic. I imagine reductionist AI might be more dependent on humans. But this is just wild speculation. I don’t know if there is anything there.

This insistence of James on “advanced everything else” is a world view. A politics. To allow ourselves to be increasingly entangled with these systems, to not be afraid of them. Because if we are afraid, we either want to subjugate them or they will subjugate us. It is also about not obscuring the systems we are part of. This is a sentiment also expressed by James in the same series of tweets I quoted from earlier:

These emergences are also the best model we have ever built for describing the true state of the world as it always already exists.

And there is overlap here with ideas expressed by Kevin in ‘Design as Participation’:

[W]e are no longer just using computers. We are using computers to use the world. The obscured and complex code and engineering now engages with people, resources, civics, communities and ecosystems. Should designers continue to privilege users above all others in the system? What would it mean to design for participants instead? For all the participants?

AI partners might help us to better see the systems the world is made up of and engage with them more deeply. This hope is expressed by Matt Webb, too:

with the re-emergence of artificial intelligence (only this time with a buddy-style user interface that actually works), this question of “doing something for me” vs “allowing me to do even more” is going to get even more pronounced. Both are effective, but the first sucks… or at least, it sucks according to my own personal politics, because I regard individual alienation from society and complex systems as one of the huge threats in the 21st century.

I am reminded of the mixed-initiative systems being researched in the area of procedural content generation for games. I wrote about these a while back on the Hubbub blog. Such systems are partners of designers. They give something like super powers. Now imagine such powers applied to other problems. Quite exciting.

Actually, in the aforementioned article I distinguish between tools for making things and tools for inspecting possibility spaces. In the first case designers manipulate more abstract representations of the intended outcome and the system generates the actual output. In the second case the system visualises the range of possible outcomes given a particular configuration of the abstract representation. These two are best paired.
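
To make that distinction a bit more tangible, here is a toy sketch in TypeScript. It is not taken from any of the tools mentioned above; the level parameters and function names are invented purely for illustration. A ‘tool for making’ turns the abstract representation into one concrete output, while a ‘tool for inspecting’ samples many outputs so you can survey the possibility space that representation implies.

```typescript
// Toy illustration, not any real tool: an abstract representation (the parameters
// a designer edits) from which a generator produces concrete levels, plus a sampler
// that surveys the possibility space those parameters imply.

interface LevelParams {
  rooms: number;
  corridorLength: [number, number]; // min and max corridor length
}

interface Level {
  roomCount: number;
  corridors: number[];
}

// "Tool for making": generate one concrete output from the abstract representation.
function generate(params: LevelParams, rng: () => number = Math.random): Level {
  const [min, max] = params.corridorLength;
  const corridors = Array.from(
    { length: Math.max(params.rooms - 1, 0) },
    () => min + Math.floor(rng() * (max - min + 1))
  );
  return { roomCount: params.rooms, corridors };
}

// "Tool for inspecting": sample many outcomes so the designer can see the range
// a given configuration produces before committing to it.
function sampleSpace(params: LevelParams, n = 100): Level[] {
  return Array.from({ length: n }, () => generate(params));
}
```

Pairing the two means the designer edits the parameters while keeping an eye on the range of outcomes those parameters can produce.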

From a design perspective, a lot remains to be figured out. If I look at those mixed-initiative tools I am struck by how poorly they communicate what the AI is doing and what its capabilities are. There is a huge user interface design challenge there.

For stuff focused on getting information, a conversational UI seems to be the current local optimum for working with an AI. But for tools for creativity, to use the two-way split proposed by Victor, different UIs will be required.

What shape will they take? What visual language do we need to express the particular properties of artificial intelligence? What approaches can we take in addition to personifying AI as bots or characters? I don’t know and I can hardly think of any good examples that point towards promising approaches. Lots to be done.

Writing for conversational user interfaces

Last year at Hubbub we worked on two projects featuring a conversational user interface. I thought I would share a few notes on how we did the writing for them. Because for conversational user interfaces a large part of the design is in the writing.

At the moment, there aren’t really that many tools well suited for doing this. Twine comes to mind but it is really more focused on publishing as opposed to authoring. So while we were working on these projects we just grabbed whatever we were familiar with and felt would get the job done.

I actually think there is an opportunity here. If this conversational UI thing takes off, designers would benefit a lot from better tools to sketch and prototype them. After all, this is the only way to figure out if a conversational user interface is suitable for a particular project. In the words of Bill Buxton:

“Everything is best for something and worst for something else.”

Okay so below are my notes. The two projects are KOKORO (a codename) and Free Birds. We have yet to publish extensively on both, so a quick description is in order.

KOKORO is a digital coach for teenagers to help them manage and improve their mental health. It is currently a prototype mobile web app not publicly available. (The engine we built to drive it is available on GitHub, though.)

Free Birds (Vrije Vogels in Dutch) is a game about civil liberties for families visiting a war and resistance museum in the Netherlands. It is a location-based iOS app currently available on the Dutch app store and playable in Airborne Museum Hartenstein in Oosterbeek.


For KOKORO we used Gingko to write the conversation branches. This is good enough for a prototype but it becomes unwieldy at scale. And anyway you don’t want to be limited to a tree structure. You want to at least be able to loop back to a parent branch, something that isn’t supported by Gingko. And maybe you don’t want to use the branching pattern at all.

The story of Free Birds has a very linear structure. So in this case we just wrote our conversations in Quip with some basic rules for formatting, not unlike a screenplay.

In Free Birds player choices ‘colour’ the events that come immediately after, but the path stays the same.

This approach was inspired by the Walking Dead games. Those are super clever at giving players a sense of agency without the need for sprawling story trees. I remember seeing the creators present this strategy at PRACTICE and something clicked for me. The important point is that choices don’t have to branch out in different directions to feel meaningful.

KOKORO’s choices did have to lead to different paths so we had to build a tree structure. But we also kept track of things a user says. This allows the app to “learn” about the user. Subsequent segments of the conversation are adapted based on this learning. This allows for more flexibility and it scales better. A section of a conversation has various states between which we switch depending on what a user has said in the past.

We did something similar in Free Birds but used it to a far more limited degree, really just to once again colour certain pieces of dialogue. This is already enough to give a player a sense of agency.
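
To make the pattern concrete, here is a minimal sketch in TypeScript. It is not the actual engine we built for KOKORO (which is on GitHub and differs in its details); all type and field names here are invented for illustration. The idea is simply a graph of conversation nodes (so you can loop back to earlier sections), a small store of remembered facts, and per-node variants selected by those facts.

```typescript
// Hypothetical sketch only, not the actual KOKORO engine.
// A conversation is a graph of nodes (loops back to earlier sections are allowed),
// and each node has variants picked according to facts remembered about the user.

type Facts = Record<string, string | boolean>;

interface Choice {
  label: string;    // what the user can tap or type
  next: string;     // id of the node to jump to; may point back to an earlier node
  remember?: Facts; // facts to store when this choice is picked
}

interface Variant {
  when?: (facts: Facts) => boolean; // condition on what we know about the user
  text: string;                     // the line the system says in this state
}

interface ConversationNode {
  id: string;
  variants: Variant[]; // the first variant whose condition holds is used
  choices: Choice[];
}

class Conversation {
  private facts: Facts = {};

  constructor(private nodes: Map<string, ConversationNode>, private current: string) {}

  // What the system says at the current node, adapted to what it has "learned".
  say(): string {
    const node = this.nodes.get(this.current)!;
    const variant = node.variants.find(v => !v.when || v.when(this.facts)) ?? node.variants[0];
    return variant.text;
  }

  // Apply the user's choice: remember any facts it carries and move on (or loop back).
  choose(index: number): void {
    const node = this.nodes.get(this.current)!;
    const choice = node.choices[index];
    Object.assign(this.facts, choice.remember ?? {});
    this.current = choice.next;
  }
}
```

That is really all there is to it: a little memory plus a handful of variants already makes a conversation feel adaptive, without a sprawling story tree.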


As you can see, it’s all far from rocket surgery but you can get surprisingly good results just by sticking to these simple patterns. If I were to investigate more advanced strategies I would look into NLP for input and procedural generation for output. Who knows, maybe I will get to work on a project involving those things some time in the future.

Design without touching the surface

I am preparing two classes at the moment. One is an introduction to user experience design, the other to user interface design. I did not come up with this division, it was part of the assignment. I thought it was odd at first. I wasn’t sure where one discipline ends and the other begins. I still am not sure. But I made a pragmatic decision to have the UX class focus on the high level process of designing (software) products, and the UI class focus on the visual aspects of a product’s interface. The UI class deals with a product’s surface, form, and to some extent also its behaviour, but on a micro level. Whereas the UX class focuses on behaviour on the macro level. Simply speaking—the UX class is about behaviour across screens, the UI class is about behaviour within screens.

The solution is workable. But I am still not entirely comfortable with it. I am not comfortable with the idea of being able to practice UX without ‘touching the surface’, so to speak. And it seems my two classes are advocating this. Also, I am pretty sure this is everyday reality for many UX practitioners. Notice I say “practitioner”, because I am not sure ‘designer’ is the right term in these cases. To be honest I do not think you can practice design without doing sketching and prototyping of some sort. (See Bill Buxton’s ‘Sketching User Experiences’ for an expanded argument on why this is.) And when it comes to designing software products this means touching the surface, the form.

Again, the reality is, ‘UX designer’ and ‘UI designer’ are common terms now. Certainly here in Singapore people know they need both to make good products. Some practitioners say they do both, others one or the other. The latter appears to be the most common and expected case. (By the way, in Singapore no-one I’ve met talks about interaction design.)

My concern is that by encouraging the practice of doing UX design without touching the surface of a product, we get shitty designs. In a process where UX and UI are seen as separate things the risk is one comes before the other. The UX designer draws the wireframes, the UI designer gets to turn them into pretty pictures, with no back-and-forth between the two. An iterative process can mitigate some of the damage such an artificial division of labour produces, but I think we still start out on the wrong foot. I think a better practice might entail including visual considerations from the very beginning of the design process (as we are sketching).

Two things I came across as I was preparing these classes somehow support this idea. Both resulted from a call I put out for resources on user interface design. I asked for books about visual aspects, but I got a lot more.

  1. In ‘Magic Ink’ Bret Victor writes about how the design of information software is hugely indebted to graphic design, and more specifically to information design in the tradition of Tufte. (He also mentions industrial design as an equally big progenitor of interaction design, but for software that is mainly about manipulation, not information.) The article is long, but the start of it is actually a pretty good, if unorthodox, general introduction to interaction design. For software that is about learning through looking at information, Victor says interaction should be a last resort. So that leaves us with a task that is 80%, if not more, visual design. Touching the surface. Which makes me think you might as well get to it as quickly as possible and start sketching and prototyping aimed not just at structure and behaviour but also at form. (Hat tip to Pieter Diepenmaat for this one.)

  2. In ‘Jumping to the End’ Matt Jones rambles entertainingly about design fiction. He argues for paying attention to details and that a lot of the design he practices is about ‘signature moments’ aka micro-interactions. So yeah, again, I can’t imagine designing these effectively without doing sketching and prototyping of the sort that includes the visual. And in fact Matt mentions this more or less at one point, when he talks about the fact that his team’s deliverables at Google are almost all visual. They are high fidelity mockups, animations, videos, and so on. These then become the starting points for further development. (Hat tip to Alexander Zeh for this one.)

In summary, I think distinguishing UX design from UI design is nonsense. Because you cannot practice design without sketching and prototyping. And you cannot sketch and prototype a software product without touching its surface. Instead of taking visual design for granted, or talking about it like it is some innate talent, some kind of magical skill some people are born with and others aren’t, user experience practitioners should consider being less enamoured with acquiring more skills from business, marketing and engineering, and instead practice the skills that define the fields user experience design is indebted to the most: graphic design and industrial design. In other words, you can’t do user experience design without touching the surface.

Plazes sidebar redesign

Plazes, my favourite underhyped European web application, have updated their sidebar. I like the fact that they changed the ranges shown (here: 1 km, 3 km, 10 km, 50 km). Now I can see all the way to Utrecht from Amsterdam. I also like the new “friends somewhere else” section (see the screenshot). They’ve also clustered people at the same plaze, and (all the way at the bottom) have now included a tag cloud-esque city list. Very web 2.0!

(Screenshot: Plazes sidebar outtake)

Yahoo! opens up some more

Some great new resources are now available, courtesy of everyone’s favorite web 2.0 company: Yahoo!

The Design Pattern Library contains a whole bunch of patterns for user interface designers to use and abuse. Martijn van Welie finally has some competition.

Of more interest to developers is the UI Library, containing “a set of utilities and controls, written in JavaScript, for building richly interactive web applications”. These code examples are frequently linked to from the pattern library.

I must say, these look like some excellent additions to the current body of knowledge available to designers and developers. Thanks a bunch Yahoo!

However, my paranoid mind can’t help but think: what’s the catch?

Via Jeremy Zawodny.

Delicious Library

This post at Engadget led me to Delicious Library, a Mac app that lets you scan the barcodes on your books, games and such using a webcam. It then creates a catalogue of all your stuff and does all kinds of cool things with it, such as giving you recommendations and letting you track which books you’ve loaned to your friends. There’s an interview with the makers too, including a look at how the UI evolved, but it’s offline right now. Too bad…
