I am still thinking about AI and design. How is the design process of AI products different? How is the user experience of AI products different? Can design tools be improved with AI?
When it comes to improving design tools with AI, my starting point is game design and development. What follows is a quick sketch of one idea, just to get it out of my system.
‘Mixed-initiative’ tools for procedural generation (such as Tanagra) allow designers to create high-level structures which a machine uses to produce full-fledged game content (such as levels). This happens in real time: there is a continuous back-and-forth between designer and machine.
Software user interfaces, on mobile in particular, are increasingly assembled from ready-made components according to more or less well-defined rules taken from design languages such as Material Design. These design languages are currently written primarily for human consumption. But it should be a small step to make one machine-readable.
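To make that concrete, here is what a tiny machine-readable slice of a design language might look like. The schema and component names are my own invention for illustration; the 48dp touch-target minimum is borrowed from Material Design's accessibility guidance.

```typescript
// A hypothetical machine-readable slice of a design language.
// The schema and component names are invented for illustration;
// the 48dp touch-target minimum comes from Material Design guidance.

interface ComponentSpec {
  role: string;              // what the component is for, e.g. "action"
  minTouchTargetDp?: number; // smallest allowed tap area, in dp
  allowedParents?: string[]; // where the component may appear
}

interface DesignLanguage {
  name: string;
  components: Record<string, ComponentSpec>;
}

const materialish: DesignLanguage = {
  name: "Material-ish",
  components: {
    button: { role: "action", minTouchTargetDp: 48 },
    fab: { role: "action", minTouchTargetDp: 56, allowedParents: ["screen"] },
  },
};
```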
So I see an opportunity here: a designer assembles a UI much like they do now, and a machine does several things alongside them. For example, it can check the design for adherence to the design language's rules, suggest corrections, or even auto-correct as the designer works.
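A minimal sketch of what that rule-checking could look like, building on the toy schema above. The Element shape is an assumption about how a tool might represent a mockup internally.

```typescript
// A minimal sketch of checking a mockup against the schema above.
// The Element shape is an assumption about how a design tool
// might represent a mockup internally.

interface Element {
  component: string; // key into the design language's components
  widthDp: number;
  heightDp: number;
  children: Element[];
}

interface Violation {
  path: string;
  message: string;
}

function lint(el: Element, lang: DesignLanguage, path = "root"): Violation[] {
  const violations: Violation[] = [];
  const spec = lang.components[el.component];
  if (spec && spec.minTouchTargetDp !== undefined) {
    const min = spec.minTouchTargetDp;
    if (el.widthDp < min || el.heightDp < min) {
      violations.push({
        path,
        message: `${el.component} is ${el.widthDp}×${el.heightDp}dp; the minimum touch target is ${min}dp`,
      });
    }
  }
  el.children.forEach((child, i) => {
    violations.push(...lint(child, lang, `${path}/${child.component}[${i}]`));
  });
  return violations;
}
```

Auto-correction could start as the dumbest possible version of this: when a touch target is too small, grow the element to the minimum and show the designer what changed.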
More interestingly, a machine might take one UI mockup and offer the designer several possible variations. To do this it could use different layouts, or swap in alternative components that serve the same or a similar purpose.
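Here is a sketch of that variation idea, reusing the Element type above: if the design language also recorded which components are interchangeable, a machine could enumerate alternatives by swapping them. The equivalence table is made up for the sake of the example; a real design language would have to encode this.

```typescript
// A sketch of generating mockup variations by swapping
// interchangeable components, reusing the Element type above.
// The equivalence table is invented for illustration.

const equivalents: Record<string, string[]> = {
  button: ["text-button", "outlined-button", "fab"],
  "bottom-nav": ["tab-bar", "navigation-drawer"],
};

// Each variant differs from the original mockup in exactly one
// component, so the designer gets a set of distinct alternatives.
function variations(el: Element): Element[] {
  const variants: Element[] = [];
  for (const alt of equivalents[el.component] ?? []) {
    variants.push({ ...el, component: alt });
  }
  el.children.forEach((child, i) => {
    for (const variedChild of variations(child)) {
      const children = el.children.slice();
      children[i] = variedChild;
      variants.push({ ...el, children });
    }
  });
  return variants;
}
```

Layout variations would need more machinery, since the generator would have to respect the language's spacing and grid rules, but component swaps alone could already populate a small gallery of alternatives.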
In high-pressure work environments where time is scarce, corners are often cut in the divergence phase of design. Machines could augment designers so that generating many design alternatives becomes less laborious, both mentally and physically. Ideally, machines would surprise and even inspire us. And the final say would still be ours.