Design and machine learning – an annotated reading list

Earlier this year I coached Design for Interaction master students at Delft University of Technology in the course Research Methodology. The students organised three seminars for which I provided the claims and assigned reading. In the seminars they argued about my claims using the Toulmin Model of Argumentation. The readings served as sources for backing and evidence.

The claims and readings were all related to my nascent research project about machine learning. We delved into both designing for machine learning, and using machine learning as a design tool.

Below are the readings I assigned, with some notes on each, which should help you decide if you want to dive into them yourself.

Hebron, Patrick. 2016. Machine Learning for Designers. Sebastopol: O’Reilly.

The only non-academic piece in this list. This served the purpose of getting all students on the same page with regard to what machine learning is, its applications in interaction design, and the challenges commonly encountered. I still can’t think of any other single resource that is as good a starting point for the subject as this one.

Fiebrink, Rebecca. 2016. “Machine Learning as Meta-Instrument: Human-Machine Partnerships Shaping Expressive Instrumental Creation.” In Musical Instruments in the 21st Century, 14:137–51. Singapore: Springer Singapore. doi:10.1007/978-981-10-2951-6_10.

Fiebrink’s Wekinator is groundbreaking, fun and inspiring so I had to include some of her writing in this list. This is mostly of interest for those looking into the use of machine learning for design and other creative and artistic endeavours. An important idea explored here is that tools that make use of (interactive, supervised) machine learning can be thought of as instruments. Using such a tool is like playing or performing, exploring a possibility space, engaging in a dialogue with the tool. For a tool to feel like an instrument requires a tight action-feedback loop.
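To make the ‘instrument’ idea a bit more concrete, here is a minimal sketch of the interactive supervised learning loop such tools are built around. This is purely illustrative, not Wekinator’s actual code; the sensor values, the nearest-neighbour model and the two-example mapping are all assumptions for the sake of the example.

```python
# Minimal sketch of an interactive supervised learning loop, in the spirit of
# Wekinator-style tools: the designer records input/output examples, the model
# retrains instantly, and live inputs are mapped to outputs in real time.
# Illustrative only; feature values and the model choice are assumptions.
from sklearn.neighbors import KNeighborsRegressor

examples_in, examples_out = [], []   # training pairs recorded during a session
model = None

def record_example(sensor_values, desired_output):
    """Designer demonstrates: 'when the input looks like this, respond like that'."""
    global model
    examples_in.append(sensor_values)
    examples_out.append(desired_output)
    # Retraining is near-instant for small example sets, which keeps the
    # action-feedback loop tight enough to feel like playing an instrument.
    model = KNeighborsRegressor(n_neighbors=min(3, len(examples_in)))
    model.fit(examples_in, examples_out)

def respond(sensor_values):
    """Called every frame with live input, e.g. accelerometer or mouse data."""
    return model.predict([sensor_values])[0] if model else 0.0

# Usage: demonstrate a few mappings, then perform with them.
record_example([0.1, 0.2], 0.0)   # e.g. hand low  -> quiet sound
record_example([0.9, 0.8], 1.0)   # e.g. hand high -> loud sound
print(respond([0.5, 0.5]))        # interpolated response for a new gesture
```

The point of the sketch is the loop itself: every demonstration immediately changes how the tool responds, which is what makes using it feel like playing rather than programming.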

Dove, Graham, Kim Halskov, Jodi Forlizzi, and John Zimmerman. 2017. “UX Design Innovation: Challenges for Working with Machine Learning as a Design Material.” In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. New York: ACM. doi:10.1145/3025453.3025739.

A really good survey of how designers currently deal with machine learning. Key takeaways include that in most cases, the application of machine learning is still engineering-led as opposed to design-led, which hampers the creation of non-obvious machine learning applications. It also makes it hard for designers to consider ethical implications of design choices. A key reason for this is that at the moment, prototyping with machine learning is prohibitively cumbersome.

Fiebrink, Rebecca, Perry R. Cook, and Dan Trueman. 2011. “Human Model Evaluation in Interactive Supervised Learning.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 147. New York: ACM Press. doi:10.1145/1978942.1978965.

The second Fiebrink piece in this list, which is more of a deep dive into how people use Wekinator. As with the chapter listed above this is required reading for those working on design tools which make use of interactive machine learning. An important finding here is that users of intelligent design tools might have very different criteria for evaluating the ‘correctness’ of a trained model than engineers do. Such criteria are likely subjective and evaluation requires first-hand use of the model in real time.

Bostrom, Nick, and Eliezer Yudkowsky. 2014. “The Ethics of Artificial Intelligence.” In The Cambridge Handbook of Artificial Intelligence, edited by Keith Frankish and William M Ramsey, 316–34. Cambridge: Cambridge University Press. doi:10.1017/CBO9781139046855.020.

Bostrom is known for his somewhat crazy but thought-provoking book on superintelligence, and although a large part of this chapter is about the ethics of general artificial intelligence (which at the very least is still a long way off), the first section discusses the ethics of current “narrow” artificial intelligence. It makes for a good checklist of things designers should keep in mind when they create new applications of machine learning. Key insight: when a machine learning system takes on work with social dimensions (tasks previously performed by humans), the system inherits the social requirements of that work.

Yang, Qian, John Zimmerman, Aaron Steinfeld, and Anthony Tomasic. 2016. “Planning Adaptive Mobile Experiences When Wireframing.” In Proceedings of the 2016 ACM Conference on Designing Interactive Systems. New York: ACM. doi:10.1145/2901790.2901858.

Finally, a feet-in-the-mud exploration of what it actually means to design for machine learning with the tools most commonly used by designers today: drawings and diagrams of various sorts. In this case the focus is on using machine learning to make an interface adaptive. It includes an interesting discussion of how to balance the use of implicit and explicit user inputs for adaptation, and how to deal with inference errors. Once again the limitations of current sketching and prototyping tools are mentioned, and related to the need for designers to develop tacit knowledge about machine learning. Such tacit knowledge will only be gained when designers can work with machine learning in a hands-on manner.

Supplemental material

Floyd, Christiane. 1984. “A Systematic Look at Prototyping.” In Approaches to Prototyping, 1–18. Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-642-69796-8_1.

I provided this to students so that they get some additional grounding in the various kinds of prototyping that are out there. It helps to prevent reductive notions of prototyping, and it makes for a nice complement to Buxton’s work on sketching.

Blevis, E., Y. Lim, and E. Stolterman. 2006. “Regarding Software as a Material of Design.”

Some of the papers refer to machine learning as a “design material” and this paper helps to understand what that idea means. Software is a material without qualities (it is extremely malleable, it can simulate nearly anything). Yet, it helps to consider it as a physical material in the metaphorical sense because we can then apply ways of design thinking and doing to software programming.

Generating UI design variations

AI design tool for UI design alternatives

I am still thinking about AI and design. How is the design process of AI products different? How is the user experience of AI products different? Can design tools be improved with AI?

When it comes to improving design tools with AI my starting point is game design and development. What follows is a quick sketch of one idea, just to get it out of my system.

‘Mixed-initiative’ tools for procedural generation (such as Tanagra) allow designers to create high-level structures which a machine uses to produce full-fledged game content (such as levels). This happens in real time. There is a continuous back-and-forth between designer and machine.

Software user interfaces, on mobile in particular, are increasingly assembled from ready-made components according to more or less well-described rules taken from design languages such as Material Design. These design languages are currently described primarily for human consumption. But it should be a small step to make a design language machine-readable.

So I see an opportunity here where a designer might assemble a UI like they do now, and a machine can do several things. For example it can test for adherence to design language rules, suggest corrections or even auto-correct as the designer works.

More interestingly, a machine might take one UI mockup and provide the designer with several more possible variations. To do this it could use different layouts, or alternative components that serve the same or a similar purpose as the ones used.
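To make this a little more tangible, here is a rough sketch of the variation idea. Everything in it is invented for illustration: the component names, the roles and the single design-language rule are assumptions, not taken from any real design system.

```python
# Illustrative sketch only: generate UI variations by swapping components that
# serve a similar purpose, then keep the ones that pass simple design-language
# rules. Component names and rules are invented, not any real system.
from itertools import product

# Groups of roughly interchangeable components per role.
ALTERNATIVES = {
    "navigation": ["tab_bar", "hamburger_menu", "bottom_sheet"],
    "list":       ["card_list", "plain_list", "grid"],
    "action":     ["floating_button", "toolbar_button"],
}

def passes_rules(variant):
    # A made-up rule: a floating action button and a bottom sheet compete for
    # the same screen edge, so reject that combination.
    return not (variant["action"] == "floating_button"
                and variant["navigation"] == "bottom_sheet")

def variations(mockup):
    """Yield every rule-abiding variant of a designer-made mockup."""
    roles = list(mockup)
    for combo in product(*(ALTERNATIVES[role] for role in roles)):
        variant = dict(zip(roles, combo))
        if passes_rules(variant):
            yield variant

original = {"navigation": "tab_bar", "list": "card_list", "action": "floating_button"}
for v in variations(original):
    print(v)
```

Even something this crude hints at the interplay: the designer supplies one mockup, the machine enumerates alternatives, and the rules of the design language keep the output sensible.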

In high-pressure work environments where time is scarce, corners are often cut in the divergence phase of design. Machines could augment designers so that generating many design alternatives becomes less laborious, both mentally and physically. Ideally, machines would surprise and even inspire us. And the final say would still be ours.

Artificial intelligence as partner

Some notes on artificial intelligence, technology as partner and related user interface design challenges. Mostly notes to self, not sure I am adding much to the debate. Just summarising what I think is important to think about more. Warning: Dense with links.

Matt Jones writes about how artificial intelligence does not have to be a slave, but can also be a partner.

I’m personally much more interested in machine intelligence as human augmentation rather than the oft-hyped AI assistant as a separate embodiment.

I would add a third possibility, which is AI as master. It is a common fear we humans have, and one I think is only growing as things like AlphaGo and new Boston Dynamics robots keep making headlines.

I have had a tweet pinned to my timeline for a while now, which is a quote from Play Matters.

“technology is not a servant or a master but a source of expression, a way of being”

So this idea actually does not just apply to AI but to tech in general. Of course, as tech gets smarter and more independent from humans, the idea of a ‘third way’ only grows in importance.

More tweeting. A while back, shortly after AlphaGo’s victory, James tweeted:

On the one hand, we must insist, as Kasparov did, on Advanced Go, and then Advanced Everything Else https://en.wikipedia.org/wiki/Advanced_Chess

Advanced Chess is a clear example of humans and AI partnering. And it is also an example of technology as a source of expression and a way of being.

Also, in a WIRED article on AlphaGo, someone who had played the AI repeatedly said his game had improved tremendously.

So that is the promise: Artificially intelligent systems which work together with humans for mutual benefit.

Now of course these AIs don’t just arrive into the world fully formed. They are created by humans with particular goals in mind. So there is a design component there. We can design them to be partners but we can also design them to be masters or slaves.

As an aside: Maybe AIs that make use of deep learning are particularly well suited to this partner model? I do not know enough about it to say for sure. But I was struck by this piece on why Google ditched Boston Dynamics. There apparently is a significant difference between holistic and reductionist approaches, deep learning being holistic. I imagine reductionist AI might be more dependent on humans. But this is just wild speculation. I don’t know if there is anything there.

This insistence of James on “advanced everything else” is a world view. A politics. To allow ourselves to be increasingly entangled with these systems, to not be afraid of them. Because if we are afraid, we either want to subjugate them or they will subjugate us. It is also about not obscuring the systems we are part of. This is a sentiment also expressed by James in the same series of tweets I quoted from earlier:

These emergences are also the best model we have ever built for describing the true state of the world as it always already exists.

And there is overlap here with ideas expressed by Kevin in ‘Design as Participation’:

[W]e are no longer just using computers. We are using computers to use the world. The obscured and complex code and engineering now engages with people, resources, civics, communities and ecosystems. Should designers continue to privilege users above all others in the system? What would it mean to design for participants instead? For all the participants?

AI partners might help us to better see the systems the world is made up of and engage with them more deeply. This hope is expressed by Matt Webb, too:

with the re-emergence of artificial intelligence (only this time with a buddy-style user interface that actually works), this question of “doing something for me” vs “allowing me to do even more” is going to get even more pronounced. Both are effective, but the first sucks… or at least, it sucks according to my own personal politics, because I regard individual alienation from society and complex systems as one of the huge threats in the 21st century.

I am reminded of the mixed-initiative systems being researched in the area of procedural content generation for games. I wrote about these a while back on the Hubbub blog. Such systems are partners of designers. They give something like super powers. Now imagine such powers applied to other problems. Quite exciting.

Actually, in the aforementioned article I distinguish between tools for making things and tools for inspecting possibility spaces. In the first case designers manipulate more abstract representations of the intended outcome and the system generates the actual output. In the second case the system visualises the range of possible outcomes given a particular configuration of the abstract representation. These two are best paired.
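A toy example may help to clarify the distinction between the two kinds of tools. Everything below is invented for illustration; the ‘abstract representation’ is just a couple of level-design knobs.

```python
# Toy illustration of the two tool modes described above; all names and
# parameters are invented. The abstract representation is a spec with two knobs.
import random

def make(spec, seed=None):
    """Making: turn an abstract spec into one concrete artefact (a 'level')."""
    rng = random.Random(seed)
    tiles = ["platform"] * 4 + ["gap"] * 2 + ["enemy"] * spec["difficulty"]
    return [rng.choice(tiles) for _ in range(spec["length"])]

def inspect(spec, samples=5):
    """Inspecting: show a slice of the possibility space the same spec allows."""
    return [make(spec, seed=i) for i in range(samples)]

spec = {"length": 6, "difficulty": 3}
print(make(spec, seed=42))      # one level, for the designer to refine further
for level in inspect(spec):     # the range of levels this spec can produce
    print(level)
```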

From a design perspective, a lot remains to be figured out. If I look at those mixed-initiative tools I am struck by how poorly they communicate what the AI is doing and what its capabilities are. There is a huge user interface design challenge there.

For stuff focused on getting information, a conversational UI seems to be the current local optimum for working with an AI. But for tools for creativity, to use the two-way split proposed by Victor, different UIs will be required.

What shape will they take? What visual language do we need to express the particular properties of artificial intelligence? What approaches can we take in addition to personifying AI as bots or characters? I don’t know and I can hardly think of any good examples that point towards promising approaches. Lots to be done.

Prototyping in the browser

When you are designing a web site or web app I think you should prototype in the browser. Why? You might as well ask why prototype at all. Answer: To enable continuous testing and refinement of your design. Since you are designing for the web it makes sense to do this testing and refinement with an artefact composed of the web’s material.

There are many ways to do prototyping. A common way is to make wireframes and then make them ‘clickable’. But when I am designing a web site or a web app and I get to the point where it is time to do wireframes I often prefer to go straight to the browser.

Before this step I have sketched out all the screens on paper of course. I have done multiple sketches of each page. I’ve had them critiqued by team members and I have reworked them.

Drawing pictures of web pages

But then I open my drawing program—Sketch, in my case—and my heart sinks. Not because Sketch sucks. Sketch is great. But it somehow feels wrong to draw pictures of web pages on my screen. I find it cumbersome. My drawing program does not behave like a browser. That is to say, instead of defining a bunch of rules for elements and having the browser figure out how to render them on a page together, I need to follow those rules myself, in my head, as I put each element in its place.

And don’t get me started on how wireframes are supposed to be without visual design. That is nonsense. If you are using contrast, repetition, alignment and proximity, you are doing layout. That is visual design. I can’t stand wireframes with a bad visual hierarchy.

If I persevere, and I have a set of wireframes in my drawing program, they are static. I can’t use them. I then need to export them to some other often clunky program to make the pictures clickable. Which always results in a poor resemblance of the actual experience. (I use Marvel. It’s okay but it is hardly a joy to use. For mobile apps I still use it, for web sites I prefer not to.)

Prototyping in the browser

When I prototype in the browser I don’t have to deal with these issues. I am doing layout in a way that is native to the medium. And once I have some pages set up they are immediately usable. So I can hand it to someone, a team member or a test participant, and let them play with it.

That is why, for web sites and web apps, I skip wireframes altogether and prototype in the browser. I do not know how common this is in the industry nowadays. So I thought I would share my approach here. It may be of use to some.

It used to be the case that it was quite a bit of hassle to get up and running with a browser prototype so naturally opening a drawing package seemed more attractive. Not so anymore. Tools have come a long way. Case in point: My setup nowadays involves zero screwing around on the command line.

CodeKit

The core of it is a paid-for Mac app called CodeKit, a so-called task manager. It allows you to install a front-end development framework I like called Zurb Foundation with a couple of clicks, and has a built-in web server so you can play with your prototype on any device on your local network. As you make changes to the code of your prototype it gets automatically updated on all your devices. No more manual refreshing. Saves a huge amount of time.

I know you can do most of what CodeKit does for you with stuff like Grunt but that involves tedious configuration and working the command line. This is fine when you’re a developer, but not fine when you are a designer. I want to be up and running as fast as possible. CodeKit allows me to do that and has some other features built in that are ideal for prototyping which I will talk about more below. Long story short: CodeKit has saved me a huge amount of time and is well worth the money.

Okay, so on with the show. Yes, this whole prototyping in the browser thing involves ‘coding’. But honestly, if you can’t write some HTML and CSS you really shouldn’t be doing design for the web in the first place. I don’t care if you consider yourself a UX designer and somehow above all this lowly technical stuff. You are not. Nobody is saying you should become a front-end developer, but you need to have an acquaintance with the materials your product is made of. Follow a few courses on Codecademy or something. There really isn’t an excuse anymore these days for not knowing this stuff. If you want to level up, learn SASS.

Zurb Foundation

I like Zurb Foundation because it offers a coherent and comprehensive library of elements which covers almost all the common patterns found in web sites and apps. It offers a grid and some default typography styles as well. None of it looks flashy, which is how I like it when I am prototyping. A prototype at this stage does not require personality yet. Just a clear visual hierarchy. Working with Foundation is almost like playing with LEGO. You just click together the stuff you need. It’s painless and looks and works great.

I hardly do any styling but the few changes I do want to make I can easily add to Foundation’s app.scss using SASS. I usually have a few styles in there for tweaking some margins on particular elements, for example a footer. But I try to focus on the structure and behaviour of my pages and for that I am mostly doing HTML.
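To give a sense of scale, the kind of thing that ends up in my app.scss is rarely more than a few lines. The selector and values below are just an example, not Foundation defaults.

```scss
// Example of the small tweaks that end up in app.scss; the selector and
// values are an illustration only.
.site-footer {
  margin-top: 3rem;        // push the footer away from the main content
  padding-bottom: 1.5rem;
}
```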

GitHub

Testing locally I already mentioned. For that, CodeKit has you covered. Of course, you want to be able to share your prototype with others. For this I like to use GitHub and their Pages feature. Once again, using their desktop client, this involves zero command line work. You just add the folder with your CodeKit project as a new repository and sync it with GitHub. Then you need to add a branch named ‘gh-pages’ and do ‘update from master’. Presto, your prototype is now on the web for anyone with the URL to see and use. Perfect if you’re working in a distributed team.

Don’t be intimidated by using GitHub. Their on-boarding is pretty impressive nowadays. You’ll be up and running in no time. Using version control, even if it is just you working on the prototype, adds some much needed structure and control over changes. And when you are collaborating on your prototype with team members it is indispensable.

But in most cases I am the only one building the prototype, so I just work on the master branch, update the gh-pages branch from master every once in a while and sync it, and I am done. If you use Slack you can add a GitHub bot to a channel and have your team members receive an automatic update every time you change the prototype.

The Kit Language

If your project is of any size beyond the very small you will likely have repeating elements in your design. Headers, footers, recurring widgets and so on. CodeKit has recently added support for something called the Kit Language. This adds support for imports and variables to regular HTML. It is absolutely great for prototyping. For each repeating element you create a ‘partial’ and import it wherever you need it. Variables are great for changing the contents of such repeating elements. CodeKit compiles it all into plain static HTML for you so your prototype runs anywhere.
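To give an idea of what this looks like in practice, here is a small example. I am writing the syntax from memory, so treat it as an approximation and check the CodeKit documentation for the exact forms of imports and variables; the file names and the variable are my own invention.

```html
<!-- index.kit: compiles to plain index.html. Syntax shown from memory;
     check the CodeKit docs for the exact import and variable forms. -->
<!-- $pageTitle = Prototype home -->
<!-- @import "_header.kit" -->

<p>Page content goes here.</p>

<!-- @import "_footer.kit" -->

<!-- _header.kit: a repeating element ('partial'), used on every page. -->
<header>
  <h1><!-- $pageTitle --></h1>
</header>
```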

The Kit Language really was the missing piece of the puzzle for me. With it in place I am very comfortable recommending this way of working to anyone.

So that’s my setup: CodeKit, Zurb Foundation and GitHub. Together they make for a very pleasant and productive way to do prototyping in the browser. I don’t imagine myself going back to drawing pictures of web pages anytime soon.

Writing for conversational user interfaces

Last year at Hubbub we worked on two projects featuring a conversational user interface. I thought I would share a few notes on how we did the writing for them. Because for conversational user interfaces a large part of the design is in the writing.

At the moment, there aren’t really that many tools well suited for doing this. Twine comes to mind but it is really more focused on publishing as opposed to authoring. So while we were working on these projects we just grabbed whatever we were familiar with and felt would get the job done.

I actually think there is an opportunity here. If this conversational UI thing takes off, designers will benefit a lot from better tools to sketch and prototype them. After all, this is the only way to figure out if a conversational user interface is suitable for a particular project. In the words of Bill Buxton:

“Everything is best for something and worst for something else.”

Okay so below are my notes. The two projects are KOKORO (a codename) and Free Birds. We have yet to publish extensively on both, so a quick description is in order.

KOKORO is a digital coach for teenagers to help them manage and improve their mental health. It is currently a prototype mobile web app not publicly available. (The engine we built to drive it is available on GitHub, though.)

Free Birds (Vrije Vogels in Dutch) is a game about civil liberties for families visiting a war and resistance museum in the Netherlands. It is a location-based iOS app currently available on the Dutch app store and playable in Airborne Museum Hartenstein in Oosterbeek.


For KOKORO we used Gingko to write the conversation branches. This is good enough for a prototype but it becomes unwieldy at scale. And anyway you don’t want to be limited to a tree structure. You want to at least be able to loop back to a parent branch, something that isn’t supported by Gingko. And maybe you don’t want to use the branching pattern at all.

Free Birds’s story has a very linear structure. So in this case we just wrote our conversations in Quip with some basic rules for formatting, not unlike a screenplay.

In Free Birds player choices ‘colour’ the events that come immediately after, but the path stays the same.

This approach was inspired by the Walking Dead games. Those are super clever at giving players a sense of agency without the need for sprawling story trees. I remember seeing the creators present this strategy at PRACTICE and something clicked for me. The important point is, choices don’t have to branch out to different directions to feel meaningful.

KOKORO’s choices did have to lead to different paths so we had to build a tree structure. But we also kept track of things a user says. This allows the app to “learn” about the user. Subsequent segments of the conversation are adapted based on this learning. This allows for more flexibility and it scales better. A section of a conversation has various states between which we switch depending on what a user has said in the past.
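Here is a stripped-down sketch of that pattern. This is not our actual engine (which, as mentioned, is on GitHub), just an illustration; the profile keys, segment names and lines of dialogue are all made up.

```python
# Stripped-down illustration of the pattern described above, not the actual
# KOKORO engine: remember what the user has said, and pick the variant of a
# conversation segment that fits what has been learned so far.
profile = {}   # everything the app has 'learned' about the user

def remember(key, value):
    profile[key] = value

segments = {
    "check_in": {
        # Different states of the same segment; which one plays depends on
        # earlier answers rather than on a separate story branch.
        "default":     "How are you feeling today?",
        "slept_badly": "You mentioned sleeping badly. Did last night go any better?",
    },
}

def play(segment_name):
    variants = segments[segment_name]
    if profile.get("sleep") == "bad" and "slept_badly" in variants:
        return variants["slept_badly"]
    return variants["default"]

# Usage: an earlier exchange recorded a fact; a later segment adapts to it.
remember("sleep", "bad")
print(play("check_in"))
```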

We did something similar in Free Birds but used it to a far more limited degree, really just to once again colour certain pieces of dialogue. This is already enough to give a player a sense of agency.


As you can see, it’s all far from rocket surgery but you can get surprisingly good results just by sticking to these simple patterns. If I were to investigate more advanced strategies I would look into NLP for input and procedural generation for output. Who knows, maybe I will get to work on a project involving those things some time in the future.

Hardware interfaces for tuning the feel of microinteractions

In Digital Ground Malcolm McCullough talks about how tuning is a central part of interaction design practice. How part of the challenge of any project is to get to a point where you can start tweaking the variables that determine the behaviour of your interface for the best feel.

“Feel” is a word I borrow from game design. There is a book on it by Steve Swink. It is a funny term. We are trying to simulate sensations that are derived from the physical realm. We are trying to make things that are purely visual behave in such a way that they evoke these sensations. There are many games that heavily depend on getting feel right. Basically all games that are built on a physics simulation of some kind require good feel for a good player experience to emerge.

Physics simulations have been finding their way into non-game software products for some time now and they are becoming an increasing part of what makes a product, er, feel great. They are often at the foundation of signature moments that set a product apart from the pack. These signature moments are also known as microinteractions. To get them just right, being able to tune well is very important.

The behaviour of microinteractions based on physics simulations is determined by variables. For example, the feel of a spring is determined by the mass of the weight attached to the spring, the spring’s stiffness and the friction that resists the motion of the weight. These variables interact in ways that are hard to model in your head so you need to make repeated changes to each variable and try the simulation to get it just right. This is time-consuming, cumbersome and resists the easy exploration of alternatives essential to a good design process.

In The Setup game designer Bennett Foddy talks about a way to improve on this workflow. Many of his games (if not all of them) are playable physics simulations with punishingly hard controls. He suggests using a hardware interface (a MIDI controller) to tune the variables that determine the feel of his game while it runs. In this way the loop between changing a variable and seeing its effect in game is dramatically shortened and many different combinations of values can be explored easily. Once a satisfactory set of values for the variables has been found they can be written back to the software for future use.
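The core of such a setup is small: the simulation reads its variables from a shared table that something else, a MIDI or OSC listener for instance, is allowed to overwrite while the simulation keeps running. Below is a rough sketch; the parameter names and values are made up, and the knob callback stands in for whatever controller library you would actually use.

```python
# Rough sketch of live-tunable feel parameters: the spring simulation reads its
# variables from a shared dict on every step, so an external source (a MIDI
# controller callback, an OSC handler, a hot-reloaded file) can change them
# while the simulation keeps running. Values are made up.
params = {"mass": 1.0, "stiffness": 40.0, "friction": 4.0}

position, velocity = 1.0, 0.0   # start displaced from rest

def step(dt=1 / 60):
    """Advance a damped spring by one frame using the current parameter values."""
    global position, velocity
    force = -params["stiffness"] * position - params["friction"] * velocity
    velocity += (force / params["mass"]) * dt
    position += velocity * dt
    return position

def on_knob(name, value):
    # In a real setup a MIDI/OSC listener would call this on each knob turn.
    params[name] = value

for frame in range(120):           # two seconds at 60 fps
    if frame == 60:
        on_knob("friction", 12.0)  # twist a knob mid-run: the feel changes instantly
    step()
print(round(position, 3))
```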

I do believe such a setup is still non-trivial to make work with today’s tools. A quick check verifies that Framer does not have OSC support, for example. There is an opportunity here for prototyping environments such as Framer and others to support it. The approach is not limited to motion-based microinteractions but can be extended to the tuning of variables that control other aspects of an app’s behaviour.

For example, when we were making Standing, we would have benefited hugely from hardware controls to tweak the sensitivity of its motion-sensing functions as we were using the app. We were forced to do it by repeatedly changing numbers in the code and building the app again and again. It was quite a pain to get right. To this day I have the feeling we could have made it better if only we had had the tools to do it.

Judging from snafus such as the poor feel of the latest Twitter desktop client, there is a real need for better tools for tuning microinteractions. Just as pen tablets have become indispensable for those designing the form of user interfaces on screens, I think we might soon find a small set of hardware knobs on the desks of designers working on the behaviour of user interfaces.

Sources for my Creative Mornings Utrecht talk on education, games, and play

I was standing on the shoulders of giants for this one. Here’s a (probably incomplete) list of sources I referenced throughout the talk.

All of these are highly recommended.

Update: the slides are now up on Speaker Deck.

This happened – Utrecht #8, coming up

I have to say, number seven is still fresh in my mind. Even so, we’ve announced number eight. You’ll find the lineup below. I hope to see you in four weeks, on November 22 at the HKU Akademietheater.

Theseus

Rainer Kohlberger is an independent visual artist based in Berlin. The concept and installation design for the THESEUS Innovation Center Internet of Things was done in collaboration with Thomas Schrott and is the basis for the visual identity of the technology platform. The installation connects and visually creates hierarchy between knowledge, products and services with a combination of physical polygon objects and virtually projected information layers. This atmospheric piece transfers knowledge and guidance to the visitor but also leaves room for interpretation.

De Klessebessers

Helma van Rijn is an Industrial Design Engineering PhD candidate at the TU Delft ID-StudioLab, specialized in 'difficult to reach' user groups. De Klessebessers is an activity for people with dementia to actively recall memories together. The design won first prize in the design competition Vergeethenniet and was on show during Dutch Design Week 2007. De Klessebessers is currently in use at De Landrijt in Eindhoven.

Wip 'n' Kip

FourceLabs talk about Wip 'n' Kip, a playful installation for Stekker Fest, an annual electronic music festival based in Utrecht. Players of Wip 'n' Kip use adult-sized spring riders to control a chicken on a large screen. They race each other to the finish while at the same time trying to stay ahead of a horde of pursuing monsters. Wip 'n' Kip is a strange but effective mashup of video game, carnival ride and performance. It is part of the PLAY Pilots project, commissioned by the city and province of Utrecht, which explores the applications of play in the cultural industry.

Smarthistory

Lotte Meijer talks about Smarthistory, an online art history resource. It aims to be an addition to, or even replacement of, traditional text books through the use of different media to discuss hundreds of Western art pieces from antiquity to the current day. Different browsing styles are supported by a number of navigation systems. Art works are contextualized using maps and timelines. The site's community is engaged using a number of social media. Smarthistory won a Webby Award in 2009 in the education category. Lotte has gone on to work as an independent designer on many interesting and innovative projects in the art world.

Ronald Rietveld is the fourth speaker at This happened – Utrecht #7

Vacant NL

I’m happy to say we have our fourth speaker confirmed for next Monday’s This happened. Here’s the blurb:

Landscape architect Ronald Rietveld talks about Vacant NL. The installation challenges the Dutch government to use the enormous potential of inspiring, unused buildings from the 17th, 18th, 19th, 20th and 21st century for creative entrepreneurship and innovation. The Dutch government wants to be in the top 5 of world knowledge economies by the end of 2020. Vacant NL takes this political ambition seriously and leverages vacancy to stimulate innovation within the creative knowledge economy. Vacant NL is the Dutch submission for the Venice Architecture Biennale 2010. It is made by Rietveld Landscape, which Ronald Rietveld founded after winning the Prix de Rome in Architecture 2006. In 2003 he graduated with honors from the Amsterdam Academy of Architecture.

At first sight this might be an odd one out, an architectural exhibition at an interaction design event. But both the subject of the installation and the design of the experience deal with interaction in many ways. So I am sure it will provide attendees with valuable insights.

Playful street tiles, artful games and radioscapes at the next This happened – Utrecht

After a bit of a long summer break Alexander, Ianus and I are back with another edition of This happened – Utrecht. Read about the program of the seventh edition below. We’ll add a fourth speaker to the roster soon. The event is scheduled for Monday 4 October at Theater Kikker in Utrecht. Doors open at 7:30PM. The registration opens next week on Monday 20 September at 12:00PM.

The Patchingzone

Anne Nigten is director of The Patchingzone, a transdisciplinary laboratory for innovation where master's students, doctoral candidates, post-docs and professionals from different backgrounds create meaningful content. Earlier, Anne Nigten was manager of V2_lab and completed a PhD on a method for creative research and development. Go-for-IT! is a city game created together with citizens of South Rotterdam and launched in December 2009. On four playgrounds in the area, street tiles were equipped with LEDs. Locals could play games with their feet, similar to console game dance mats.

Ibb and Obb

Richard Boeser is an independent designer based in Rotterdam. His studio Sparpweed is currently working on the game Ibb and Obb, scheduled to launch for PlayStation Network and PC in August 2011. Ibb and Obb is a cooperative game for two players who together must find a way through a world where gravity is flipped across the horizon. Players move between both sides of the world through portals. They can surf on gravity, soulhop enemies and collect diamonds. The game is partly financed by the Game Fund, an arrangement that seeks to stimulate the development of artistic games in the Netherlands.

Radioscape

Edwin van der Heide studied sonology at the Royal Conservatory in The Hague. He now works as an artist in the field of sound, space and interaction. Radioscape transforms urban space into an acoustic labyrinth. Based on the fundamental principles of radio, each participant is equipped with a receiver, headphones and an antenna. Fifteen transmitters each broadcast their own composition. Inspired by short wave sounds, they overlap to form a metacomposition. By changing position, participants also change how the composition sounds to them.

A big thank you to our sponsors, Microsoft and Fier, for making this one happen.