Artificial intelligence as partner

Some notes on artificial intelligence, technology as partner and related user interface design challenges. Mostly notes to self; I am not sure I am adding much to the debate, just summarising what I think is important to think about more. Warning: dense with links.

Matt Jones writes about how artificial intelligence does not have to be a slave, but can also be a partner.

I’m personally much more interested in machine intelligence as human augmentation rather than the oft-hyped AI assistant as a separate embodiment.

I would add a third possibility, which is AI as master. This is a common fear we humans have, and one I think is only growing as things like AlphaGo and the new Boston Dynamics robots keep happening.

I have had a tweet pinned to my timeline for a while now, which is a quote from Play Matters.

“technology is not a servant or a master but a source of expression, a way of being”

So this idea actually does not just apply to AI but to tech in general. Of course, as tech gets smarter and more independent from humans, the idea of a ‘third way’ only grows in importance.

More tweeting. A while back, shortly after AlphaGo’s victory, James tweeted:

On the one hand, we must insist, as Kasparov did, on Advanced Go, and then Advanced Everything Else https://en.wikipedia.org/wiki/Advanced_Chess

Advanced Chess is a clear example of humans and AI partnering. And it is also an example of technology as a source of expression and a way of being.

Also, in a WIRED article on AlphaGo, someone who has played the AI repeatedly says his game has improved tremendously.

So that is the promise: artificially intelligent systems which work together with humans for mutual benefit.

Now of course these AIs don’t just arrive into the world fully formed. They are created by humans with particular goals in mind. So there is a design component there. We can design them to be partners but we can also design them to be masters or slaves.

As an aside: maybe AIs that make use of deep learning are particularly well suited to this partner model? I do not know enough about it to say for sure. But I was struck by this piece on why Google ditched Boston Dynamics. There apparently is a significant difference between holistic and reductionist approaches, deep learning being holistic. I imagine reductionist AI might be more dependent on humans. But this is just wild speculation. I don’t know if there is anything there.

This insistence of James’s on “advanced everything else” is a world view. A politics. To allow ourselves to be increasingly entangled with these systems, to not be afraid of them. Because if we are afraid, we either want to subjugate them or they will subjugate us. It is also about not obscuring the systems we are part of. This is a sentiment also expressed by James in the same series of tweets I quoted from earlier:

These emergences are also the best model we have ever built for describing the true state of the world as it always already exists.

And there is overlap here with ideas expressed by Kevin in ‘Design as Participation’:

[W]e are no longer just using computers. We are using computers to use the world. The obscured and complex code and engineering now engages with people, resources, civics, communities and ecosystems. Should designers continue to privilege users above all others in the system? What would it mean to design for participants instead? For all the participants?

AI partners might help us to better see the systems the world is made up of and engage with them more deeply. This hope is expressed by Matt Webb, too:

with the re-emergence of artificial intelligence (only this time with a buddy-style user interface that actually works), this question of “doing something for me” vs “allowing me to do even more” is going to get even more pronounced. Both are effective, but the first sucks… or at least, it sucks according to my own personal politics, because I regard individual alienation from society and complex systems as one of the huge threats in the 21st century.

I am reminded of the mixed-initiative systems being researched in the area of procedural content generation for games. I wrote about these a while back on the Hubbub blog. Such systems are partners of designers. They give designers something like superpowers. Now imagine such powers applied to other problems. Quite exciting.

Actually, in the aforementioned article I distinguish between tools for making things and tools for inspecting possibility spaces. In the first case designers manipulate more abstract representations of the intended outcome and the system generates the actual output. In the second case the system visualises the range of possible outcomes given a particular configuration of the abstract representation. These two are best paired.
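
To make that pairing a bit more concrete, here is a minimal sketch in TypeScript. Everything in it is hypothetical (the LevelParams shape, the generator, the sampler); it only illustrates the two tool types: one function turns an abstract representation into a single concrete output, the other surveys the range of outputs a given configuration can produce.

```typescript
// Hypothetical abstract representation a designer manipulates.
interface LevelParams {
  rooms: number;       // how many rooms to generate
  branchiness: number; // 0 = corridor, 1 = maze
  seed: number;
}

// Tool for making things: turn the abstract representation
// into one concrete output.
function generateLevel(params: LevelParams): string[] {
  const rooms: string[] = [];
  let state = params.seed + 1; // avoid the zero fixed point
  for (let i = 0; i < params.rooms; i++) {
    state = (state * 16807) % 2147483647; // Park–Miller PRNG
    const exits = 1 + Math.floor((state / 2147483647) * (1 + params.branchiness * 3));
    rooms.push(`room ${i}: ${exits} exit(s)`);
  }
  return rooms;
}

// Tool for inspecting the possibility space: for one configuration,
// show the range of outcomes across many seeds.
function sampleSpace(base: Omit<LevelParams, "seed">, samples: number): string[][] {
  return Array.from({ length: samples }, (_, seed) => generateLevel({ ...base, seed }));
}

// Paired use: tweak the representation, then survey what it can produce.
const outcomes = sampleSpace({ rooms: 5, branchiness: 0.7 }, 10);
console.log(`${outcomes.length} sample levels for this configuration`);
```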

From a design perspective, a lot remains to be figured out. If I look at those mixed-initiative tools I am struck by how poorly they communicate what the AI is doing and what its capabilities are. There is a huge user interface design challenge there.

For tools focused on getting information, a conversational UI seems to be the current local optimum for working with an AI. But for tools for creativity, to use the two-way split proposed by Victor, different UIs will be required.

What shape will they take? What visual language do we need to express the particular properties of artificial intelligence? What approaches can we take in addition to personifying AI as bots or characters? I don’t know and I can hardly think of any good examples that point towards promising approaches. Lots to be done.

Writing for conversational user interfaces

Last year at Hubbub we worked on two projects featuring a conversational user interface. I thought I would share a few notes on how we did the writing for them, because for conversational user interfaces a large part of the design is in the writing.

At the moment, there aren’t really that many tools well suited for doing this. Twine comes to mind but it is really more focused on publishing as opposed to authoring. So while we were working on these projects we just grabbed whatever we were familiar with and felt would get the job done.

I actually think there is an opportunity here. If this conversational UI thing takes off, designers would benefit a lot from better tools to sketch and prototype them. After all, this is the only way to figure out if a conversational user interface is suitable for a particular project. In the words of Bill Buxton:

“Everything is best for something and worst for something else.”

Okay, so below are my notes. The two projects are KOKORO (a codename) and Free Birds. We have yet to publish extensively on either, so a quick description of each is in order.

KOKORO is a digital coach for teenagers to help them manage and improve their mental health. It is currently a prototype mobile web app, not publicly available. (The engine we built to drive it is available on GitHub, though.)

Free Birds (Vrije Vogels in Dutch) is a game about civil liberties for families visiting a war and resistance museum in the Netherlands. It is a location-based iOS app currently available on the Dutch app store and playable in Airborne Museum Hartenstein in Oosterbeek.


For KOKORO we used Gingko to write the conversation branches. This is good enough for a prototype but it becomes unwieldy at scale. And anyway, you don’t want to be limited to a tree structure. You want to at least be able to loop back to a parent branch, something that isn’t supported by Gingko. And maybe you don’t want to use the branching pattern at all.
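
To illustrate what I mean by not being limited to a tree: a minimal sketch, in TypeScript, of a conversation modelled as a graph of nodes referencing each other by id. The node and choice shapes here are hypothetical, not the actual KOKORO engine; the point is just that an id reference can loop back to an earlier node, which a strict tree cannot express.

```typescript
// Hypothetical conversation graph. Because choices reference nodes
// by id, a choice can point back to an earlier node (a loop).
interface Choice {
  label: string;
  next: string; // id of the node this choice leads to
}

interface ConversationNode {
  id: string;
  line: string; // what the system says
  choices: Choice[];
}

const graph: Record<string, ConversationNode> = {
  intro: {
    id: "intro",
    line: "How are you feeling today?",
    choices: [
      { label: "Fine", next: "fine" },
      { label: "Not great", next: "notGreat" },
    ],
  },
  fine: {
    id: "fine",
    line: "Good to hear. Anything on your mind anyway?",
    choices: [
      { label: "Actually, yes", next: "notGreat" }, // cross-link between branches
      { label: "No", next: "intro" },               // loop back to the parent
    ],
  },
  notGreat: {
    id: "notGreat",
    line: "Sorry to hear that. Want to talk it through?",
    choices: [{ label: "Maybe later", next: "intro" }], // loop back
  },
};

// Walking the graph is just following ids.
function step(nodeId: string, choiceIndex: number): string {
  return graph[nodeId].choices[choiceIndex].next;
}
```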

Free Birds’ story has a very linear structure. So in this case we just wrote our conversations in Quip with some basic rules for formatting, not unlike a screenplay.

In Free Birds, player choices ‘colour’ the events that come immediately after, but the path stays the same.

This approach was inspired by the Walking Dead games. Those are super clever at giving players a sense of agency without the need for sprawling story trees. I remember seeing the creators present this strategy at PRACTICE and something clicked for me. The important point is that choices don’t have to branch out in different directions to feel meaningful.
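
A minimal sketch of this ‘colouring’ pattern, with all names hypothetical: the beats are a flat array, so the path never branches, but a beat can carry text variants keyed to an earlier choice.

```typescript
// Hypothetical linear story where an earlier choice only 'colours'
// later beats: the sequence of events is fixed, the wording is not.
type Tone = "defiant" | "cautious";

interface Beat {
  text: Record<Tone, string> | string; // varies by tone, or fixed
}

const beats: Beat[] = [
  { text: "A soldier blocks the road and demands your papers." },
  {
    text: {
      defiant: "You look him in the eye as you hand them over.",
      cautious: "You keep your eyes down and hand them over.",
    },
  },
  { text: "He waves you through. The story continues the same way." },
];

function render(beat: Beat, tone: Tone): string {
  return typeof beat.text === "string" ? beat.text : beat.text[tone];
}

// The player's choice sets the tone once; every later beat is tinted by it.
const tone: Tone = "defiant";
for (const beat of beats) console.log(render(beat, tone));
```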

KOKORO’s choices did have to lead to different paths so we had to build a tree structure. But we also kept track of things a user says. This allows the app to “learn” about the user. Subsequent segments of the conversation are adapted based on this learning. This allows for more flexibility and it scales better. A section of a conversation has various states between which we switch depending on what a user has said in the past.
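
Sketched out with hypothetical keys and section names, the pattern comes down to this: record answers in a memory object, and pick a section’s state by looking things up in that memory.

```typescript
// Hypothetical memory of what the user has said so far.
type Memory = Record<string, string>;

interface SectionState {
  applies: (memory: Memory) => boolean; // does this state match the user?
  lines: string[];
}

// One conversation section with two states; the engine picks the
// first state whose condition matches the accumulated memory.
const sleepSection: SectionState[] = [
  {
    applies: (m) => m.sleepsBadly === "yes",
    lines: ["Last time you mentioned sleeping badly. How is that going?"],
  },
  {
    applies: () => true, // default state, always matches
    lines: ["How have you been sleeping lately?"],
  },
];

function pickState(section: SectionState[], memory: Memory): SectionState {
  return section.find((state) => state.applies(memory)) ?? section[section.length - 1];
}

// Usage: an earlier answer changes how this section opens.
const memory: Memory = { sleepsBadly: "yes" };
console.log(pickState(sleepSection, memory).lines[0]);
```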

We did something similar in Free Birds but used it to a far more limited degree, really just to once again colour certain pieces of dialogue. This is already enough to give a player a sense of agency.


As you can see, it’s all far from rocket surgery, but you can get surprisingly good results just by sticking to these simple patterns. If I were to investigate more advanced strategies I would look into NLP for input and procedural generation for output. Who knows, maybe I will get to work on a project involving those things some time in the future.