Artificial intelligence as partner

Some notes on artificial intelligence, technology as partner, and related user interface design challenges. These are mostly notes to self; I am not sure I am adding much to the debate, just summarising what I think is important to think about more. Warning: dense with links.

Matt Jones writes about how artificial intelligence does not have to be a slave, but can also be a partner.

I’m personally much more interested in machine intelligence as human augmentation rather than the oft-hyped AI assistant as a separate embodiment.

I would add a third possibility, which is AI as master. That is a common fear we humans have, and one that I think is only growing as things like AlphaGo and the new Boston Dynamics robots keep making headlines.

I have had a tweet pinned to my timeline for a while now, which is a quote from Play Matters:

“technology is not a servant or a master but a source of expression, a way of being”

So this idea does not just apply to AI but to technology in general. Of course, as technology gets smarter and more independent of humans, the idea of a ‘third way’ only grows in importance.

More tweeting. A while back, shortly after AlphaGo’s victory, James tweeted:

On the one hand, we must insist, as Kasparov did, on Advanced Go, and then Advanced Everything Else https://en.wikipedia.org/wiki/Advanced_Chess

Advanced Chess is a clear example of humans and AI partnering. And it is also an example of technology as a source of expression and a way of being.

Also, in a WIRED article on AlphaGo, someone who had played the AI repeatedly said his game had improved tremendously.

So that is the promise: Artificially intelligent systems which work together with humans for mutual benefit.

Now of course these AIs don’t just arrive in the world fully formed. They are created by humans with particular goals in mind, so there is a design component to them. We can design them to be partners, but we can also design them to be masters or slaves.

As an aside: maybe AIs that make use of deep learning are particularly well suited to this partner model? I do not know enough about the technology to say for sure, but I was struck by this piece on why Google ditched Boston Dynamics. Apparently there is a significant difference between holistic and reductionist approaches to AI, with deep learning being holistic. I imagine reductionist AI might be more dependent on humans. But this is just wild speculation; I don’t know if there is anything there.

This insistence of James on “advanced everything else” is a world view. A politics. To allow ourselves to become increasingly entangled with these systems, to not be afraid of them. Because if we are afraid, we will either want to subjugate them or they will subjugate us. It is also about not obscuring the systems we are part of, a sentiment James expressed in the same series of tweets I quoted from earlier:

These emergences are also the best model we have ever built for describing the true state of the world as it always already exists.

And there is overlap here with ideas expressed by Kevin in ‘Design as Participation’:

[W]e are no longer just using computers. We are using computers to use the world. The obscured and complex code and engineering now engages with people, resources, civics, communities and ecosystems. Should designers continue to privilege users above all others in the system? What would it mean to design for participants instead? For all the participants?

AI partners might help us to better see the systems the world is made up of and engage with them more deeply. This hope is expressed by Matt Webb, too:

with the re-emergence of artificial intelligence (only this time with a buddy-style user interface that actually works), this question of “doing something for me” vs “allowing me to do even more” is going to get even more pronounced. Both are effective, but the first sucks… or at least, it sucks according to my own personal politics, because I regard individual alienation from society and complex systems as one of the huge threats in the 21st century.

I am reminded of the mixed-initiative systems being researched in the area of procedural content generation for games, which I wrote about a while back on the Hubbub blog. Such systems are partners of designers. They give designers something like superpowers. Now imagine such powers applied to other problems. Quite exciting.

Actually, in the aforementioned article I distinguish between tools for making things and tools for inspecting possibility spaces. With the first, designers manipulate a more abstract representation of the intended outcome and the system generates the actual output. With the second, the system visualises the range of possible outcomes given a particular configuration of the abstract representation. The two work best when paired, as the sketch below tries to show.
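To make that pairing concrete, here is a minimal sketch in TypeScript with entirely hypothetical names (none of this comes from the actual tools I wrote about): a ‘recipe’ is the abstract representation the designer manipulates, generate is the tool for making a concrete thing from it, and possibilitySpace is the tool for inspecting the range of outcomes.

```typescript
// Hypothetical sketch: a "recipe" is the abstract representation a
// designer manipulates; generate() derives one concrete artefact from it.

interface Recipe {
  rooms: number;        // abstract intent: roughly how many rooms
  connectivity: number; // 0..1, how densely rooms are linked
  seed: number;         // source of variation between outcomes
}

interface Level {
  roomCount: number;
  corridorCount: number;
}

// Tool for making things: one recipe in, one concrete artefact out.
function generate(recipe: Recipe): Level {
  const jitter = (recipe.seed % 7) - 3; // toy stand-in for randomness
  const roomCount = Math.max(1, recipe.rooms + jitter);
  const corridorCount = Math.round(roomCount * recipe.connectivity * 2);
  return { roomCount, corridorCount };
}

// Tool for inspecting possibility spaces: run many seeds through the
// same recipe so the designer sees the range of outcomes at a glance.
function possibilitySpace(recipe: Recipe, samples: number): Level[] {
  return Array.from({ length: samples }, (_, i) =>
    generate({ ...recipe, seed: i })
  );
}

console.log(possibilitySpace({ rooms: 8, connectivity: 0.5, seed: 0 }, 20));
```

The point of the pairing is that the designer tweaks the recipe while watching the whole space of outcomes shift, instead of judging one output at a time.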

From a design perspective, a lot remains to be figured out. If I look at those mixed-initiative tools I am struck by how poorly they communicate what the AI is doing and what its capabilities are. There is a huge user interface design challenge there.

For tasks focused on retrieving information, a conversational UI seems to be the current local optimum for working with an AI. But tools for creativity, to use the two-way split proposed by Victor, will require different UIs.

What shape will they take? What visual language do we need to express the particular properties of artificial intelligence? What approaches can we take in addition to personifying AI as bots or characters? I don’t know and I can hardly think of any good examples that point towards promising approaches. Lots to be done.

Writing for conversational user interfaces

Last year at Hubbub we worked on two projects featuring a conversational user interface. I thought I would share a few notes on how we did the writing for them, because for conversational user interfaces a large part of the design is in the writing.

At the moment, there aren’t really that many tools well suited to this. Twine comes to mind, but it is more focused on publishing than on authoring. So while we were working on these projects we just grabbed whatever we were familiar with and felt would get the job done.

I actually think there is an opportunity here. If this conversational UI thing takes off, designers will benefit a lot from better tools to sketch and prototype such interfaces. After all, that is the only way to figure out whether a conversational user interface is suitable for a particular project. In the words of Bill Buxton:

“Everything is best for something and worst for something else.”

Okay, so below are my notes. The two projects are KOKORO (a codename) and Free Birds. We have yet to publish extensively on either, so a quick description is in order.

KOKORO is a digital coach for teenagers to help them manage and improve their mental health. It is currently a prototype mobile web app not publicly available. (The engine we built to drive it is available on GitHub, though.)

Free Birds (Vrije Vogels in Dutch) is a game about civil liberties for families visiting a war and resistance museum in the Netherlands. It is a location-based iOS app currently available on the Dutch app store and playable in Airborne Museum Hartenstein in Oosterbeek.


For KOKORO we used Gingko to write the conversation branches. This is good enough for a prototype, but it becomes unwieldy at scale. And anyway, you don’t want to be limited to a tree structure: you want to at least be able to loop back to a parent branch, something Gingko does not support. And maybe you don’t want to use the branching pattern at all.
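To illustrate what I mean, here is a minimal sketch of the kind of structure you want instead of a tree: a graph of nodes whose choices point at other nodes by id, so a choice can loop back to an earlier point. All the names are hypothetical; this is a sketch, not the engine we published.

```typescript
// Hypothetical sketch of a conversation graph (not a tree): choices
// reference target nodes by id, so loops back to earlier nodes are fine.

interface Choice {
  label: string;
  next: string; // id of the target node; may point "back up" the graph
}

interface ConvoNode {
  id: string;
  line: string; // what the system says at this point
  choices: Choice[];
}

const graph: Record<string, ConvoNode> = {
  checkin: {
    id: "checkin",
    line: "How are you feeling today?",
    choices: [
      { label: "Pretty good", next: "good" },
      { label: "Not great", next: "bad" },
    ],
  },
  good: {
    id: "good",
    line: "Glad to hear it. Anything else on your mind?",
    choices: [{ label: "Actually, yes", next: "checkin" }], // loop back
  },
  bad: {
    id: "bad",
    line: "Sorry to hear that. Want to talk it through?",
    choices: [{ label: "Maybe later", next: "checkin" }], // loop back
  },
};

// Advance the conversation by following the chosen edge.
function step(nodeId: string, choiceIndex: number): string {
  return graph[nodeId].choices[choiceIndex].next;
}

console.log(step("good", 0)); // "checkin": we looped back to the parent
```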

Free Birds has a very linear story structure. So in this case we just wrote our conversations in Quip, with some basic rules for formatting, not unlike a screenplay.

In Free Birds player choices ‘colour’ the events that come immediately after, but the path stays the same.

This approach was inspired by the Walking Dead games, which are super clever at giving players a sense of agency without the need for sprawling story trees. I remember seeing the creators present this strategy at PRACTICE, and something clicked for me. The important point is that choices don’t have to branch out in different directions to feel meaningful.
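As a sketch of that pattern (a hypothetical structure, not our actual Quip format): the script is a flat list of beats, and each beat carries one line per ‘tone’ set by an earlier choice, so the text is coloured throughout but the path never branches.

```typescript
// Hypothetical sketch: a linear script where an earlier choice sets a
// tone that colours every later beat, without branching the path.

type Tone = "defiant" | "cautious";

interface Beat {
  lines: Record<Tone, string>; // same story beat, one line per tone
}

const script: Beat[] = [
  {
    lines: {
      defiant: "You slip past the checkpoint without looking back.",
      cautious: "You wait for the patrol to pass, then hurry on.",
    },
  },
  {
    lines: {
      defiant: "The others follow your lead, emboldened.",
      cautious: "The others keep close, grateful for your caution.",
    },
  },
];

// Everyone plays the same beats in the same order; only the colouring
// differs, which is enough to make the earlier choice feel meaningful.
function play(tone: Tone): string[] {
  return script.map((beat) => beat.lines[tone]);
}

console.log(play("defiant"));
```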

KOKORO’s choices did have to lead down different paths, so we had to build a tree structure. But we also kept track of things a user said, which allows the app to “learn” about the user: subsequent segments of the conversation are adapted based on this learning. This allows for more flexibility and it scales better. A section of a conversation has various states between which we switch, depending on what the user has said in the past.
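Here is a minimal sketch of that state-switching pattern, again with hypothetical names rather than the actual API of our engine: remembered facts are a simple key-value store, and a section is a list of states, each with a predicate that decides whether it applies.

```typescript
// Hypothetical sketch of the "learning" pattern: store facts the user
// has stated, then pick a variant of each section based on those facts.

type Facts = Record<string, string>;

interface SectionState {
  applies: (facts: Facts) => boolean; // does this variant fit the user?
  opening: string;
}

const sleepSection: SectionState[] = [
  {
    applies: (f) => f.sleep === "poor",
    opening: "Last time you mentioned sleeping badly. How has it been?",
  },
  {
    applies: () => true, // fallback when nothing specific is known yet
    opening: "Let's talk about sleep. How have you been sleeping?",
  },
];

// First matching state wins, so order specific states before fallbacks.
function pickState(states: SectionState[], facts: Facts): SectionState {
  return states.find((s) => s.applies(facts))!;
}

const facts: Facts = { sleep: "poor" }; // remembered from an earlier answer
console.log(pickState(sleepSection, facts).opening);
```

Ordering the states from specific to general keeps the fallback from shadowing the adapted variants.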

We did something similar in Free Birds but used it to a far more limited degree, really just to once again colour certain pieces of dialogue. This is already enough to give a player a sense of agency.


As you can see, it’s all far from rocket surgery, but you can get surprisingly good results just by sticking to these simple patterns. If I were to investigate more advanced strategies, I would look into natural language processing for input and procedural generation for output. Who knows, maybe I will get to work on a project involving those things sometime in the future.