Some notes on artificial intelligence, technology as partner, and related user interface design challenges. These are mostly notes to self; I am not sure I am adding much to the debate, just summarising what I think is important to think about more. Warning: dense with links.
Matt Jones writes about how artificial intelligence does not have to be a slave, but can also be a partner.
I’m personally much more interested in machine intelligence as human augmentation rather than the oft-hyped AI assistant as a separate embodiment.
I would add a third possibility, which is AI as master. This is a common fear we humans have, and one that I think is only growing as things like AlphaGo and new Boston Dynamics robots keep happening.
I have had a tweet pinned to my timeline for a while now, which is a quote from Play Matters.
“technology is not a servant or a master but a source of expression, a way of being”
So this idea does not just apply to AI but to tech in general. Of course, as tech gets smarter and more independent of humans, the idea of a ‘third way’ only grows in importance.
More tweeting. A while back, shortly after AlphaGo’s victory, James tweeted:
On the one hand, we must insist, as Kasparov did, on Advanced Go, and then Advanced Everything Else https://en.wikipedia.org/wiki/Advanced_Chess
Advanced Chess is a clear example of humans and AI partnering. And it is also an example of technology as a source of expression and a way of being.
Also, in a WIRED article on AlphaGo, someone who had played the AI repeatedly said his game had improved tremendously.
So that is the promise: artificially intelligent systems that work together with humans for mutual benefit.
Now of course these AIs don’t just arrive in the world fully formed. They are created by humans with particular goals in mind. So there is a design component there. We can design them to be partners, but we can also design them to be masters or slaves.
As an aside: maybe AIs that make use of deep learning are particularly well suited to this partner model? I do not know enough about it to say for sure, but I was struck by this piece on why Google ditched Boston Dynamics. Apparently there is a significant difference between holistic and reductionist approaches, deep learning being holistic. I imagine reductionist AI might be more dependent on humans. But this is just wild speculation; I don’t know if there is anything there.
James’s insistence on “advanced everything else” is a world view. A politics. To allow ourselves to become increasingly entangled with these systems, and to not be afraid of them. Because if we are afraid, we will either want to subjugate them, or they will subjugate us. It is also about not obscuring the systems we are part of. This is a sentiment James also expressed in the same series of tweets I quoted from earlier:
These emergences are also the best model we have ever built for describing the true state of the world as it always already exists.
And there is overlap here with ideas expressed by Kevin in ‘Design as Participation’:
[W]e are no longer just using computers. We are using computers to use the world. The obscured and complex code and engineering now engages with people, resources, civics, communities and ecosystems. Should designers continue to privilege users above all others in the system? What would it mean to design for participants instead? For all the participants?
AI partners might help us to better see the systems the world is made up of and engage with them more deeply. This hope is expressed by Matt Webb, too:
with the re-emergence of artificial intelligence (only this time with a buddy-style user interface that actually works), this question of “doing something for me” vs “allowing me to do even more” is going to get even more pronounced. Both are effective, but the first sucks… or at least, it sucks according to my own personal politics, because I regard individual alienation from society and complex systems as one of the huge threats in the 21st century.
I am reminded of the mixed-initiative systems being researched in the area of procedural content generation for games, which I wrote about a while back on the Hubbub blog. Such systems are partners of designers. They give designers something like superpowers. Now imagine such powers applied to other problems. Quite exciting.
Actually, in the aforementioned article I distinguish between tools for making things and tools for inspecting possibility spaces. In the first case, designers manipulate a more abstract representation of the intended outcome and the system generates the actual output. In the second case, the system visualises the range of possible outcomes given a particular configuration of the abstract representation. These two are best paired; a rough sketch of the distinction follows below.
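To make that distinction concrete, here is a minimal sketch in Python, using a toy dungeon generator. All the names here (DesignSpec, generate, sample_space) are my own hypothetical illustrations, not taken from any actual mixed-initiative tool.

```python
# A toy illustration of the two tool types: a generator that turns an
# abstract spec into concrete output, and an inspector that samples the
# possibility space the spec implies. Hypothetical names throughout.
import random
from dataclasses import dataclass

@dataclass
class DesignSpec:
    """The abstract representation the designer manipulates."""
    rooms: int           # number of rooms in the level
    connectivity: float  # 0.0 = tree-like, 1.0 = densely connected

def generate(spec: DesignSpec, seed: int) -> list[tuple[int, int]]:
    """Tool type 1 (making things): produce one concrete level,
    here a list of corridors between numbered rooms."""
    rng = random.Random(seed)
    # Guarantee every room is reachable via a simple path.
    corridors = {(i, i + 1) for i in range(spec.rooms - 1)}
    # Add extra corridors in proportion to the requested connectivity.
    for _ in range(int(spec.connectivity * spec.rooms)):
        a, b = rng.sample(range(spec.rooms), 2)
        corridors.add((min(a, b), max(a, b)))
    return sorted(corridors)

def sample_space(spec: DesignSpec, n: int = 100) -> dict[str, float]:
    """Tool type 2 (inspecting possibility spaces): sample many
    outcomes for one spec and summarise the range they cover."""
    counts = [len(generate(spec, seed)) for seed in range(n)]
    return {
        "min_corridors": min(counts),
        "max_corridors": max(counts),
        "mean_corridors": sum(counts) / n,
    }

spec = DesignSpec(rooms=8, connectivity=0.5)
print(generate(spec, seed=42))  # one concrete outcome
print(sample_space(spec))       # the range this spec can produce
```

The point of pairing the two: the designer tweaks a single abstract knob like connectivity, immediately sees the range of levels it can produce, and only then commits to generating concrete output.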
From a design perspective, a lot remains to be figured out. If I look at those mixed-initiative tools, I am struck by how poorly they communicate what the AI is doing and what its capabilities are. There is a huge user interface design challenge there.
For tasks focused on getting information, a conversational UI seems to be the current local optimum for working with an AI. But for tools for creativity, to use the two-way split proposed by Victor, different UIs will be required.
What shape will they take? What visual language do we need to express the particular properties of artificial intelligence? What approaches can we take in addition to personifying AI as bots or characters? I don’t know, and I can hardly think of any good examples that point towards promising approaches. Lots to be done.