Design and machine learning – an annotated reading list

Earlier this year I coached Design for Interaction master students at Delft University of Technology in the course Research Methodology. The students organised three seminars for which I provided the claims and assigned reading. In the seminars they argued about my claims using the Toulmin Model of Argumentation. The readings served as sources for backing and evidence.

The claims and readings were all related to my nascent research project about machine learning. We delved into both designing for machine learning, and using machine learning as a design tool.

Below are the readings I assigned, with some notes on each, which should help you decide if you want to dive into them yourself.

Hebron, Patrick. 2016. Machine Learning for Designers. Sebastopol: O’Reilly.

The only non-academic piece in this list. It served the purpose of getting all students on the same page with regards to what machine learning is, its applications in interaction design, and the challenges commonly encountered. I still can’t think of any other single resource that is as good a starting point for the subject as this one.

Fiebrink, Rebecca. 2016. “Machine Learning as Meta-Instrument: Human-Machine Partnerships Shaping Expressive Instrumental Creation.” In Musical Instruments in the 21st Century, 14:137–51. Singapore: Springer Singapore. doi:10.1007/978-981-10-2951-6_10.

Fiebrink’s Wekinator is groundbreaking, fun and inspiring, so I had to include some of her writing in this list. This piece is mostly of interest to those looking into the use of machine learning for design and other creative and artistic endeavours. An important idea explored here is that tools that make use of (interactive, supervised) machine learning can be thought of as instruments. Using such a tool is like playing or performing: exploring a possibility space, engaging in a dialogue with the tool. For a tool to feel like an instrument, a tight action-feedback loop is required.

Dove, Graham, Kim Halskov, Jodi Forlizzi, and John Zimmerman. 2017. “UX Design Innovation: Challenges for Working with Machine Learning as a Design Material.” In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. New York, New York, USA: ACM. doi:10.1145/3025453.3025739.

A really good survey of how designers currently deal with machine learning. Key takeaways include that in most cases, the application of machine learning is still engineering-led as opposed to design-led, which hampers the creation of non-obvious machine learning applications. It also makes it hard for designers to consider ethical implications of design choices. A key reason for this is that at the moment, prototyping with machine learning is prohibitively cumbersome.

Fiebrink, Rebecca, Perry R Cook, and Dan Trueman. 2011. “Human Model Evaluation in Interactive Supervised Learning.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’11), 147. New York, New York, USA: ACM Press. doi:10.1145/1978942.1978965.

The second Fiebrink piece in this list, which is more of a deep dive into how people use Wekinator. As with the chapter listed above this is required reading for those working on design tools which make use of interactive machine learning. An important finding here is that users of intelligent design tools might have very different criteria for evaluating the ‘correctness’ of a trained model than engineers do. Such criteria are likely subjective and evaluation requires first-hand use of the model in real time.

Bostrom, Nick, and Eliezer Yudkowsky. 2014. “The Ethics of Artificial Intelligence.” In The Cambridge Handbook of Artificial Intelligence, edited by Keith Frankish and William M Ramsey, 316–34. Cambridge: Cambridge University Press. doi:10.1017/CBO9781139046855.020.

Bostrom is known for his somewhat crazy but thought-provoking book on superintelligence. Although a large part of this chapter is about the ethics of general artificial intelligence (which, at the very least, is still some way off), the first section discusses the ethics of current “narrow” artificial intelligence. It makes for a good checklist of things designers should keep in mind when they create new applications of machine learning. Key insight: when a machine learning system takes on work with social dimensions (tasks previously performed by humans), the system inherits its social requirements.

Yang, Qian, John Zimmerman, Aaron Steinfeld, and Anthony Tomasic. 2016. “Planning Adaptive Mobile Experiences When Wireframing.” In Proceedings of the 2016 ACM Conference on Designing Interactive Systems. New York, New York, USA: ACM. doi:10.1145/2901790.2901858.

Finally, a feet-in-the-mud exploration of what it actually means to design for machine learning with the tools most commonly used by designers today: drawings and diagrams of various sorts. In this case the focus is on using machine learning to make an interface adaptive. It includes an interesting discussion of how to balance the use of implicit and explicit user inputs for adaptation, and how to deal with inference errors. Once again the limitations of current sketching and prototyping tools are mentioned, and related to the need for designers to develop tacit knowledge about machine learning. Such tacit knowledge will only be gained when designers can work with machine learning in a hands-on manner.

Supplemental material

Floyd, Christiane. 1984. “A Systematic Look at Prototyping.” In Approaches to Prototyping, 1–18. Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-642-69796-8_1.

I provided this to the students to give them some additional grounding in the various kinds of prototyping that are out there. It helps to prevent reductive notions of prototyping, and it makes for a nice complement to Buxton’s work on sketching.

Blevis, E, Y Lim, and E Stolterman. 2006. “Regarding Software as a Material of Design.”

Some of the papers refer to machine learning as a “design material” and this paper helps to explain what that idea means. Software is a material without qualities: it is extremely malleable and can simulate nearly anything. Yet it helps to consider it metaphorically as a physical material, because we can then apply established ways of design thinking and doing to software programming.

‘Machine Learning for Designers’ workshop

On Wednesday Péter Kun, Holly Robbins and I taught a one-day workshop on machine learning at Delft University of Technology. We had about thirty master’s students from the industrial design engineering faculty. The aim was to get them acquainted with the technology through hands-on tinkering, with the Wekinator as the central teaching tool.

Photo credits: Holly Robbins

Background

The reasoning behind this workshop is twofold.

On the one hand, I expect designers will find themselves working on projects involving machine learning more and more often. The technology has certain properties that differ from traditional software. Most importantly, machine learning is probabilistic instead of deterministic. It is important that designers understand this, because otherwise they are likely to make bad decisions about its application.

The second reason is that I have a strong sense machine learning can play a role in the augmentation of the design process itself. So-called intelligent design tools could make designers more efficient and effective. They could also enable the creation of designs that would otherwise be impossible or very hard to achieve.

The workshop explored both ideas.

Photo credits: Holly Robbins

Format

The structure was roughly as follows:

In the morning we started out by providing a very broad introduction to the technology. We talked about the very basic premise of (supervised) learning: providing examples of inputs and desired outputs, and training a model based on those examples. To make these concepts tangible we then introduced the Wekinator and walked the students through getting it up and running using basic examples from the website. The final step was to invite them to explore alternative inputs and outputs (such as game controllers and Arduino boards).
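To make that premise concrete for readers, here is a minimal sketch in Python. It is not the workshop material itself, and the sensor values are invented: we hand a model a handful of input/output example pairs and let it infer the mapping, which is roughly what the Wekinator does behind its interface.

```python
# A minimal sketch of the supervised learning premise: give a model example
# inputs paired with desired outputs, train it, then ask it for outputs on
# new inputs. The numbers below are made up for illustration.
from sklearn.neighbors import KNeighborsRegressor

# Example inputs: hypothetical sensor readings (say, two potentiometer values).
example_inputs = [
    [0.1, 0.9],
    [0.2, 0.8],
    [0.8, 0.2],
    [0.9, 0.1],
]
# Desired outputs: the control value we want the model to produce for each input.
desired_outputs = [0.0, 0.0, 1.0, 1.0]

model = KNeighborsRegressor(n_neighbors=1)
model.fit(example_inputs, desired_outputs)  # "training" on the examples

# The trained model now maps unseen inputs to outputs.
print(model.predict([[0.15, 0.85], [0.85, 0.15]]))  # -> roughly [0.0, 1.0]
```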

In the afternoon we provided a design brief, asking the students to prototype a data-enabled object with the set of tools they had acquired in the morning. We assisted with technical hurdles where necessary (of which there were more than a few) and closed out the day with demos and a group discussion reflecting on their experiences with the technology.

Photo credits: Holly Robbins

Results

As I tweeted on the way home that evening, the results were… interesting.

Not all groups managed to put something together in the admittedly short amount of time they were given. They were most often stymied by getting an Arduino to talk to the Wekinator. Max was often picked as a go-between because the Wekinator receives OSC messages over UDP, whereas the quickest way to get an Arduino to talk to a computer is over serial. But Max, in my experience, is a fickle beast and crapped out on us more than once.
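For what it’s worth, the go-between does not have to be Max; a small script can do the same job. The sketch below is not what we used in the workshop. It reads comma-separated values from an Arduino over serial and forwards them to the Wekinator as OSC over UDP using the pyserial and python-osc libraries. The serial port name is a placeholder, and the port number 6448 and the /wek/inputs address are assumptions based on the Wekinator defaults; check your own setup.

```python
# A small serial-to-OSC bridge, sketched as an alternative to Max.
# Assumes the Arduino prints comma-separated sensor values, one line per
# reading, and that the Wekinator listens on its default port 6448 at
# /wek/inputs (assumptions, not workshop code).
import serial                                      # pip install pyserial
from pythonosc.udp_client import SimpleUDPClient   # pip install python-osc

SERIAL_PORT = "/dev/ttyACM0"  # placeholder port name; yours will differ
BAUD_RATE = 9600

arduino = serial.Serial(SERIAL_PORT, BAUD_RATE)
wekinator = SimpleUDPClient("127.0.0.1", 6448)

while True:
    line = arduino.readline().decode("ascii", errors="ignore").strip()
    if not line:
        continue
    try:
        values = [float(v) for v in line.split(",")]
    except ValueError:
        continue  # skip malformed lines
    # Forward the readings as a single OSC message the Wekinator can train on.
    wekinator.send_message("/wek/inputs", values)
```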

The groups that did build something mainly assembled prototypes from the examples on hand. Which is fine, but since we were mainly working with the examples from the Wekinator website they tended towards the interactive instrument side of things. We were hoping for explorations of IoT product concepts. For that more hand-rolling was required and this was only achievable for the students on the higher end of the technical expertise spectrum (and the more tenacious ones).

The discussion yielded some interesting insights into mental models of the technology and how they are affected by hands-on experience. A comment I heard more than once was: Why is this considered learning at all? The Wekinator was not perceived to be learning anything. When challenged on this by reiterating the underlying principles it became clear the black box nature of the Wekinator hampers appreciation of some of the very real achievements of the technology. It seems (for our students at least) machine learning is stuck in a grey area between too-high expectations and too-low recognition of its capabilities.

Next steps

These results, and others, point towards some obvious improvements which can be made to the workshop format, and to teaching design students about machine learning more broadly.

  1. We can improve the toolset so that some of the heavy lifting involved with getting the various parts to talk to each other is made easier and more reliable.
  2. We can build examples that are geared towards the practice of designing IoT products and are ready for adaptation and hacking.
  3. And finally, and probably most challengingly, we can make the workings of machine learning more transparent so that it becomes easier to develop a feel for its capabilities and shortcomings.

We do intend to improve and teach the workshop again. If you’re interested in hosting one (either in an educational or professional context) let me know. And stay tuned for updates on this and other efforts to get designers to work in a hands-on manner with machine learning.

Special thanks to the brilliant Ianus Keller for connecting me to Péter and for allowing us to pilot this crazy idea at IDE Academy.

References

Sources used during preparation and running of the workshop:

  • The Wekinator – the UI is infuriatingly poor but when it comes to getting started with machine learning this tool is unmatched.
  • Arduino – I have become particularly fond of the MKR1000 board. Add a lithium-polymer battery and you have everything you need to prototype IoT products.
  • OSC for Arduino – CNMAT’s implementation of the open sound control (OSC) encoding. Key puzzle piece for getting the above two tools talking to each other.
  • Machine Learning for Designers – my preferred introduction to the technology from a designerly perspective.
  • A Visual Introduction to Machine Learning – a very accessible visual explanation of the basic underpinnings of computers applying statistical learning.
  • Remote Control Theremin – an example project I prepared for the workshop demoing how to have the Wekinator talk to an Arduino MKR1000 with OSC over UDP.

Design × AI coffee meetup

If you work in the field of design or artificial intelligence and are interested in exploring the opportunities at their intersection, consider yourself invited to an informal coffee meetup on February 15, 10am at Brix in Amsterdam.

Erik van der Pluijm and I have for a while now been carrying on a conversation about AI and design, and we felt it was time to expand the circle a bit. We are very curious who else out there shares our excitement.

Questions we are mulling over include: How does the design process change when creating intelligent products? And: How can teams collaborate with intelligent design tools to solve problems in new and interesting ways?

Anyway, lots to chew on.

No need to sign up or anything, just show up and we’ll see what happens.

High-skill robots, low-skill workers

Some notes on what I think I understand about technology and inequality.

Let’s start with an obvious big question: is technology destroying jobs faster than new ones are created? In the long term the evidence isn’t strong. Humans always appear to invent new things to do. There is no reason this time around should be any different.

But in the short term, technology has contributed to an evaporation of mid-skilled jobs. Parts of these jobs are automated entirely; other parts can be done by fewer people because of the higher productivity gained from tech.

While productivity continues to grow, jobs are lagging behind. The year 2000 appears to have been a turning point. “Something” happened around that time. But no-one knows exactly what.

My hunch is that we’ve seen an emergence of a new class of pseudo-monopolies. Oligopolies. And this is compounded by a ‘winner takes all’ dynamic that technology seems to produce.

Others have pointed to globalisation but although this might be a contributing factor, the evidence does not support the idea that it is the major cause.

So what are we left with?

Historically, looking at previous technological upsets, it appears education makes a big difference. People negatively affected by technological progress should have access to good education so that they have options. In the US, access to high-quality education is not equally distributed.

Apparently family income is associated with educational achievement. So if your family is rich, you are more likely to become a high-skilled individual. And high-skilled individuals are privileged by the tech economy.

And if Piketty is right, we are approaching a reality in which returns on wealth rise faster than wages. So there is a feedback loop in place which only exacerbates the situation.

One more point: if you think trickle-down economics (increasing the size of the pie) will help, you might be mistaken. It appears social mobility is helped more by decreasing inequality in the distribution of income growth.

So some preliminary conclusions: a progressive tax on wealth won’t solve the issue on its own. The education system will require reform, too.

I think this is the central irony of the whole situation: we are working hard to teach machines how to learn. But we are neglecting to improve how people learn.

Move 37

Designers make choices. They should be able to provide rationales for those choices. (Although sometimes they can’t.) Being able to explain the thinking that went into a design move to yourself, your teammates and clients is part of being a professional.

Move 37. This was the move AlphaGo made in its second game against Lee Sedol, which took everyone by surprise because it appeared so wrong at first.

The interesting thing is that in hindsight it appeared AlphaGo had good reasons for this move. Based on a calculation of odds, basically.

If asked at the time, would AlphaGo have been able to provide this rationale?

It’s a thing that pops up in a lot of the reading I am doing around AI: this idea of transparency. In some fields you don’t just want an AI to provide you with a decision, but also with the arguments supporting that decision. An obvious example is a system that helps diagnose disease. You want it to provide more than just the diagnosis. Because if it turns out to be wrong, you want to be able to say why, at the time, you thought it was right. This is a social, cultural and also legal requirement.

It’s interesting.

Although lives don’t depend on it, the same might apply to intelligent design tools. If I am working with a system and it is offering me design directions or solutions, I want to know why it is suggesting these things as well. Because my reason for picking one over the other depends not just on the surface-level properties of the design but also on the underlying reasoning. It might be important because I need to be able to tell stakeholders about it.

An added side effect of this is that a designer working with such a system is exposed to machine reasoning about design choices. This could inform their own future thinking too.

Transparent AI might help people improve themselves. A black box can’t teach you much about the craft it’s performing. Looking at outcomes can be inspirational or helpful, but the processes that lead up to them can be equally informative. If not more so.

Imagine working with an intelligent design tool and getting the equivalent of an AlphaGo move 37 moment. Hugely inspirational. Game changer.

This idea gets me much more excited than automating design tasks does.

Adapting intelligent tools for creativity

I read Alper’s book on conversational user interfaces over the weekend and was struck by this paragraph:

“The holy grail of a conversational system would be one that’s aware of itself — one that knows its own model and internal structure and allows you to change all of that by talking to it. Imagine being able to tell Siri to tone it down a bit with the jokes and that it would then actually do that.”

His point stuck with me because I think this is of particular importance to creative tools. These need to be flexible so that a variety of people can use them in different circumstances. This adaptability is what lends a tool depth.

The depth I am thinking of in creative tools is similar to the one in games, which appears to be derived from a kind of semi-orderedness. In short, you’re looking for a sweet spot between too simple and too complex.

And of course, you need good defaults.

Back to adaptation. This can happen in at least two ways on the interface level: modal or modeless. A simple example of the former would be to go into a preferences window to change the behaviour of your drawing package. Modeless adaptation, by contrast, happens when you rearrange some panels to better suit the task at hand.

Returning to Siri, the equivalent of modeless adaptation would be to tell her to tone it down when her sense of humor irks you.

For the modal solution, imagine a humor slider in a settings screen somewhere. This would be a terrible solution because it offers a poor mapping of a control to a personality trait. Can you pinpoint on a scale of 1 to 10 your preferred amount of humor in your hypothetical personal assistant? And anyway, doesn’t it depend on a lot of situational things such as your mood, the particular task you’re trying to complete and so on? In short, this requires something more situated and adaptive.

So just being able to tell Siri to tone it down would be the equivalent of rearranging your Photoshop palettes. And in a next interaction Siri might carefully try some humor again to gauge your response. And if you encourage her, she might be more humorous again.

Enough about funny Siri for now because it’s a bit of a silly example.

Funny Siri, although she’s a bit of a silly example, does illustrate another problem I am trying to wrap my head around. How does an intelligent tool for creativity communicate its internal state? Because it is probabilistic, it can’t easily be mapped to a graphic information display. And so our old way of manipulating state, and more specifically adapting a tool to our needs, becomes very different too.

It seems to be best for an intelligent system to be open to suggestions from users about how to behave. Adapting an intelligent creative tool is less like rearranging your workspace and more like coordinating with a coworker.

My ideal is for this to be done in the same mode (and so using the same controls) as the work itself. I expect this to allow for more fluid interactions, going back and forth between doing the work at hand and meta-communication about how the system supports that work. If we look at how people collaborate, this happens a lot: communication and meta-communication go on continuously in the same channels.

We don’t need a self-aware artificial intelligence to do this. We need to apply what computer scientists call supervised learning. The basic idea is to provide a system with example inputs and desired outputs, and let it infer the necessary rules from them. If the results are unsatisfactory, you simply continue training it until it performs well enough.
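To make that loop concrete, here is a rough sketch in Python of what “continue training until it performs well enough” can look like with an ordinary classifier: check the model against examples you care about, add corrective examples where it disagrees with you, and refit. It illustrates the principle only; it is not how the Wekinator or any other particular tool implements it, and the inputs and labels are made up.

```python
# A rough sketch of the train / evaluate / correct loop behind interactive
# supervised learning: when the model gets an input wrong, add that input
# with the desired output as a new training example and retrain.
from sklearn.neighbors import KNeighborsClassifier

inputs = [[0.0, 0.0], [1.0, 1.0]]   # initial example inputs (made up)
outputs = ["quiet", "loud"]         # desired labels for those examples

model = KNeighborsClassifier(n_neighbors=1)
model.fit(inputs, outputs)

def correct(example, desired):
    """If the model disagrees with us, add the example and retrain."""
    if model.predict([example])[0] != desired:
        inputs.append(example)
        outputs.append(desired)
        model.fit(inputs, outputs)

# The user decides this input should be "quiet" even though it sits closer
# to the "loud" example, so they supply a corrective example.
correct([0.8, 0.8], "quiet")
print(model.predict([[0.8, 0.8]]))  # -> ["quiet"] after the correction
```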

A super fun example of this approach is the Wekinator, a piece of machine learning software for creating musical instruments. Below is a video in which Wekinator’s creator Rebecca Fiebrink performs several demos.

Here we have an intelligent system learning from examples. A person manipulating data instead of code to get to a particular desired behaviour. But what the Wekinator lacks, and what I expect will be required for this type of thing to really catch on, is for the training to happen in the same mode or medium as the performance. The technology seems to be getting there, but many interaction design problems remain to be solved.

Generating UI design variations


I am still thinking about AI and design. How is the design process of AI products different? How is the user experience of AI products different? Can design tools be improved with AI?

When it comes to improving design tools with AI my starting point is game design and development. What follows is a quick sketch of one idea, just to get it out of my system.

‘Mixed-initiative’ tools for procedural generation (such as Tanagra) allow designers to create high-level structures which a machine uses to produce full-fledged game content (such as levels). This happens in real time, with a continuous back-and-forth between designer and machine.

Software user interfaces, on mobile in particular, are increasingly frequently assembled from ready-made components according to more or less well-described rules taken from design languages such as Material Design. These design languages are currently primarily described for human consumption. But it should be a small step to make a design language machine-readable.

So I see an opportunity here where a designer might assemble a UI like they do now, and a machine can do several things. For example it can test for adherence to design language rules, suggest corrections or even auto-correct as the designer works.
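As an illustration of the first idea, a rule checker needs little more than a machine-readable mockup plus some rules. The sketch below checks two made-up rules loosely inspired by Material Design (a minimum touch-target size and an 8-unit spacing grid); the mockup format and the specific numbers are assumptions for the sake of illustration.

```python
# A toy design-language rule checker: given a machine-readable UI mockup,
# flag components that break simple rules. The rules and the mockup format
# are illustrative assumptions, loosely inspired by Material Design.
MIN_TOUCH_TARGET = 48  # assumed minimum tappable size, in dp
GRID_UNIT = 8          # assumed spacing grid, in dp

mockup = [
    {"type": "button", "width": 40, "height": 40, "margin": 12},
    {"type": "button", "width": 96, "height": 48, "margin": 16},
    {"type": "label", "width": 120, "height": 20, "margin": 9},
]

def check(component):
    issues = []
    if component["type"] == "button" and (
        component["width"] < MIN_TOUCH_TARGET
        or component["height"] < MIN_TOUCH_TARGET
    ):
        issues.append(f"touch target smaller than {MIN_TOUCH_TARGET}dp")
    if component["margin"] % GRID_UNIT != 0:
        issues.append(f"margin {component['margin']}dp is off the {GRID_UNIT}dp grid")
    return issues

for index, component in enumerate(mockup):
    for issue in check(component):
        print(f"component {index} ({component['type']}): {issue}")
```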

More interestingly, a machine might take one UI mockup and provide the designer with several more possible variations. To do this it could use different layouts, or alternative components that serve the same or a similar purpose as the ones used.
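And as a sketch of that second idea: if every slot in a mockup has a set of interchangeable components that serve the same purpose, enumerating design alternatives is a matter of walking the combinations. The slot names and alternatives below are made up, and a real tool would need to rank or filter what it generates rather than dump every combination.

```python
# A toy variation generator: for each slot in a mockup, list the components
# that could fill it, then enumerate combinations as candidate alternatives.
# Slot names and alternatives are invented for illustration.
from itertools import islice, product

alternatives = {
    "navigation": ["tab bar", "navigation drawer", "bottom sheet"],
    "primary_action": ["floating action button", "full-width button"],
    "item_display": ["card list", "plain list", "grid"],
}

slots = list(alternatives)
variations = product(*(alternatives[slot] for slot in slots))

# Show a handful of generated alternatives; a real tool would rank or filter.
for variation in islice(variations, 5):
    print(dict(zip(slots, variation)))
```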

In high pressure work environments where time is scarce, corners are often cut in the divergence phase of design. Machines could augment designers so that generating many design alternatives becomes less laborious both mentally and physically. Ideally, machines would surprise and even inspire us. And the final say would still be ours.

Artificial intelligence, creativity and metis

Boris pointed me to CreativeAI, an interesting article about creativity and artificial intelligence. It offers a really nice overview of the development of the idea of augmenting human capabilities through technology. One of the claims the authors make is that artificial intelligence is making creativity more accessible, because tools with AI in them support humans in a range of creative tasks in a way that shortcuts the traditional requirement of long practice to acquire the necessary technical skills.

For example, ShadowDraw (PDF) is a program that helps people with freehand drawing by guessing what they are trying to create and showing a dynamically updated ‘shadow image’ on the canvas which people can use as a guide.

It is an interesting idea, and in some ways these kinds of software do indeed lower the threshold for people to engage in creative tasks. They are good examples of artificial intelligence as partner instead of master or servant.

While reading CreativeAI I wasn’t entirely comfortable, though, and I think my discomfort was caused by two things.

One is that I care about creativity and I think that a good understanding of it and a daily practice at it—in the broad sense of the word—improves lives. I am also in some ways old-fashioned about it and I think the joy of creativity stems from the infinitely high skill ceiling involved and the never-ending practice it affords. Let’s call it the Jiro perspective, after the sushi chef made famous by a wonderful documentary.

So the claim that creative tools with AI in them can shortcut all of this life-long, joyful toil produces a degree of panic in me. Although it is probably a pastoral worldview that would be better abandoned. In a world eaten by software, it’s better to be a Promethean.

The second reason might hold more water but really is more of an open question than something I have researched in any meaningful way. I think there is more to creativity than just the technical skill required and as such the CreativeAI story runs the risk of being reductionist. While reading the article I was also slowly but surely making my way through one of the final chapters of James C. Scott’s Seeing Like a State, which is about the concept of metis.

It is probably the most interesting chapter of the whole book. Scott introduces metis as a form of knowledge different from that produced by science. Here are some quick excerpts from the book that provide a sense of what it is about. But I really can’t do the richness of his description justice here. I am trying to keep this short.

The kind of knowledge required in such endeavors is not deductive knowledge from first principles but rather what Greeks of the classical period called metis, a concept to which we shall return. […] metis is better understood as the kind of knowledge that can be acquired only by long practice at similar but rarely identical tasks, which requires constant adaptation to changing circumstances. […] It is to this kind of knowledge that [socialist writer] Luxemburg appealed when she characterized the building of socialism as “new territory” demanding “improvisation” and “creativity.”

Scott’s argument is about how authoritarian high-modernist schemes privilege scientific knowledge over metis. His exploration of what metis means is super interesting to anyone dedicated to honing a craft, or to cultivating organisations conducive to the development and application of craft in the face of uncertainty. There is a close link between metis and the concept of agility.

So, circling back to artificially intelligent tools for creativity, I would be interested in exploring not only how we can diminish the need to acquire the technical skills required, but also how we can accelerate the acquisition of the practical knowledge required to apply such skills in the ever-changing real world. I suggest we expand our understanding of what it means to be creative, but without losing the link to actual practice.

For the ancient Greeks metis became synonymous with a kind of wisdom and cunning best exemplified by such figures as Odysseus and notably also Prometheus. The latter in particular exemplifies the use of creativity towards transformative ends. This is the real promise of AI for creativity in my eyes. Not to simply make it easier to reproduce things that used to be hard to create but to create new kinds of tools which have the capacity to surprise their users and to produce results that were impossible to create before.

Artificial intelligence as partner

Some notes on artificial intelligence, technology as partner and related user interface design challenges. Mostly notes to self, not sure I am adding much to the debate. Just summarising what I think is important to think about more. Warning: Dense with links.

Matt Jones writes about how artificial intelligence does not have to be a slave, but can also be a partner.

I’m personally much more interested in machine intelligence as human augmentation rather than the oft-hyped AI assistant as a separate embodiment.

I would add a third possibility, which is AI as master. This is a common fear we humans have, and one I think is only growing as things like AlphaGo and new Boston Dynamics robots keep happening.

I have had a tweet pinned to my timeline for a while now, which is a quote from Play Matters.

“technology is not a servant or a master but a source of expression, a way of being”

So this idea actually does not just apply to AI but to tech in general. Of course, as tech gets smarter and more independent from humans, the idea of a ‘third way’ only grows in importance.

More tweeting. A while back, shortly after AlphaGo’s victory, James tweeted:

On the one hand, we must insist, as Kasparov did, on Advanced Go, and then Advanced Everything Else https://en.wikipedia.org/wiki/Advanced_Chess

Advanced Chess is a clear example of humans and AI partnering. And it is also an example of technology as a source of expression and a way of being.

Also, in a WIRED article on AlphaGo, someone who had played the AI repeatedly said his game had improved tremendously.

So that is the promise: Artificially intelligent systems which work together with humans for mutual benefit.

Now of course these AIs don’t just arrive into the world fully formed. They are created by humans with particular goals in mind. So there is a design component there. We can design them to be partners but we can also design them to be masters or slaves.

As an aside: Maybe AIs that make use of deep learning are particularly well suited to this partner model? I do not know enough about it to say for sure. But I was struck by this piece on why Google ditched Boston Dynamics. There apparently is a significant difference between holistic and reductionist approaches, deep learning being holistic. I imagine reductionist AI might be more dependent on humans. But this is just wild speculation. I don’t know if there is anything there.

This insistence of James on “advanced everything else” is a world view. A politics. To allow ourselves to be increasingly entangled with these systems, to not be afraid of them. Because if we are afraid, we either want to subjugate them or they will subjugate us. It is also about not obscuring the systems we are part of. This is a sentiment also expressed by James in the same series of tweets I quoted from earlier:

These emergences are also the best model we have ever built for describing the true state of the world as it always already exists.

And there is overlap here with ideas expressed by Kevin in ‘Design as Participation’:

[W]e are no longer just using computers. We are using computers to use the world. The obscured and complex code and engineering now engages with people, resources, civics, communities and ecosystems. Should designers continue to privilege users above all others in the system? What would it mean to design for participants instead? For all the participants?

AI partners might help us to better see the systems the world is made up of and engage with them more deeply. This hope is expressed by Matt Webb, too:

with the re-emergence of artificial intelligence (only this time with a buddy-style user interface that actually works), this question of “doing something for me” vs “allowing me to do even more” is going to get even more pronounced. Both are effective, but the first sucks… or at least, it sucks according to my own personal politics, because I regard individual alienation from society and complex systems as one of the huge threats in the 21st century.

I am reminded of the mixed-initiative systems being researched in the area of procedural content generation for games. I wrote about these a while back on the Hubbub blog. Such systems are partners of designers. They give something like super powers. Now imagine such powers applied to other problems. Quite exciting.

Actually, in the aforementioned article I distinguish between tools for making things and tools for inspecting possibility spaces. In the first case designers manipulate more abstract representations of the intended outcome and the system generates the actual output. In the second case the system visualises the range of possible outcomes given a particular configuration of the abstract representation. These two are best paired.

From a design perspective, a lot remains to be figured out. If I look at those mixed-initiative tools I am struck by how poorly they communicate what the AI is doing and what its capabilities are. There is a huge user interface design challenge there.

For stuff focused on getting information, a conversational UI seems to be the current local optimum for working with an AI. But for tools for creativity, to use the two-way split proposed by Victor, different UIs will be required.

What shape will they take? What visual language do we need to express the particular properties of artificial intelligence? What approaches can we take in addition to personifying AI as bots or characters? I don’t know and I can hardly think of any good examples that point towards promising approaches. Lots to be done.