ThingsCon 2018 workshop ‘Seeing Like a Bridge’

Workshop in progress with a view of Rotterdam’s Willemsbrug across the Maas.

In early December of last year, Alec Shuldiner and I ran a workshop at ThingsCon 2018 in Rotterdam.

Here’s the description as it was listed on the conference website:

In this workshop we will take a deep dive into some of the challenges of designing smart public infrastructure.

Smart city ideas are moving from hype into reality. The everyday things that our contemporary world runs on, such as roads, railways and canals are not immune to this development. Basic, “hard” infrastructure is being augmented with internet-connected sensing, processing and actuating capabilities. We are involved as practitioners and researchers in one such project: the MX3D smart bridge, a pedestrian bridge 3D printed from stainless steel and equipped with a network of sensors.

The question facing everyone involved with these developments, from citizens to professionals to policy makers is how to reap the potential benefits of these technologies, without degrading the urban fabric. For this to happen, information technology needs to become more like the city: open-ended, flexible and adaptable. And we need methods and tools for the diverse range of stakeholders to come together and collaborate on the design of truly intelligent public infrastructure.

We will explore these questions in this workshop by first walking you through the architecture of the MX3D smart bridge—offering a uniquely concrete and pragmatic view into a cutting edge smart city project. Subsequently we will together explore the question: What should a smart pedestrian bridge that is aware of itself and its surroundings be able to tell us? We will conclude by sharing some of the highlights from our conversation, and make note of particularly thorny questions that require further work.

The workshop’s structure was quite simple. After a round of introductions, Alec introduced the MX3D bridge to the participants. For a sense of what that introduction talk was like, I recommend viewing this recording of a presentation he delivered at a recent Pakhuis de Zwijger event.

We then ran three rounds of group discussion in the style of a world café. Each discussion was guided by one question. Participants were asked to write, draw and doodle on the large sheets of paper covering each table. At the end of each round, people moved to another table while one person remained to share the preceding round’s discussion with the new group.

The discussion questions were inspired by value-sensitive design. I was interested to see if people could come up with alternative uses for a sensor-equipped 3D-printed footbridge if they first considered what in their opinion made a city worth living in.

The questions we used were:

  1. What specific things do you like about your town? (Places, things to do, etc. Be specific.)
  2. What values underlie those things? (A value is what a person or group of people consider important in life.)
  3. How would you redesign the bridge to support those values?

At the end of the three discussion rounds we went around to each table and shared the highlights of what was produced. We then had a bit of a back and forth about the outcomes and the workshop approach, after which we wrapped up.

We did get to some interesting values by starting from personal experience. Participants came from a variety of countries, and that was reflected in the range of examples and related values. The design ideas for the bridge remained somewhat abstract; it turned out to be quite a challenge to make the jump from values to different types of smart bridges. Despite this, we did get nice ideas, such as having the bridge report on the water quality of the canal it crosses, derived from the value of care for the environment.

The response from participants afterwards was positive. People found it thought-provoking, which was definitely the point. People were also eager to learn even more about the bridge project. It remains a thing that captures people’s imagination. For that reason alone, it continues to be a very productive case to use for the grounding of these sorts of discussions.

Prototyping the Useless Butler: Machine Learning for IoT Designers

ThingsCon Amsterdam 2017, photo by nunocruzstreet.com

At ThingsCon Amsterdam 2017, Péter and I ran a second iteration of our machine learning workshop. We improved on our first attempt at TU Delft in a number of ways.

  • We prepared example code for communicating with Wekinator from a wifi-connected Arduino MKR1000 over OSC (a minimal sketch along those lines follows below this list).
  • We created a predefined breadboard setup.
  • We developed three exercises, one for each type of Wekinator output: regression, classification and dynamic time warping.
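For a sense of what that example code looks like, here is a minimal sketch of the sending side. It is a rough reconstruction rather than the exact code in the repository: the wifi credentials, the IP address of the machine running Wekinator and the analog pins for the LDRs are placeholder assumptions, while the /wek/inputs address and port 6448 are Wekinator’s defaults.

    #include <WiFi101.h>      // wifi support for the Arduino MKR1000
    #include <WiFiUdp.h>
    #include <OSCMessage.h>   // CNMAT's OSC library for Arduino

    // Placeholders: fill in your own network credentials.
    char ssid[] = "your-network-name";
    char pass[] = "your-network-password";

    // Assumption: the laptop running Wekinator sits at this address.
    IPAddress wekinatorIp(192, 168, 1, 100);
    const unsigned int wekinatorPort = 6448;  // Wekinator's default input port
    const unsigned int localPort = 9000;      // arbitrary local port for outgoing UDP

    WiFiUDP Udp;

    void setup() {
      // Keep trying until the MKR1000 has joined the network.
      while (WiFi.begin(ssid, pass) != WL_CONNECTED) {
        delay(500);
      }
      Udp.begin(localPort);
    }

    void loop() {
      // Assumption: the two LDRs are wired as voltage dividers on A1 and A2.
      int ldr1 = analogRead(A1);
      int ldr2 = analogRead(A2);

      // Wekinator listens for input features on /wek/inputs by default.
      OSCMessage msg("/wek/inputs");
      msg.add((float)ldr1);
      msg.add((float)ldr2);

      Udp.beginPacket(wekinatorIp, wekinatorPort);
      msg.send(Udp);    // serialise the OSC message into the UDP packet
      Udp.endPacket();
      msg.empty();      // clear the message so it can be reused

      delay(50);        // send roughly 20 readings per second
    }

Wekinator then treats each incoming message as one sample of two input features, which is all it needs to start recording examples and training a model.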

In contrast to the first version, we had two hours to run through the whole thing instead of a day. So we cut some corners and doubled down on walking participants through a number of exercises, so that they would come out of it with some readily applicable skills.

We dubbed the workshop ‘prototyping the useless butler’, with thanks to Philip van Allen for the suggestion to frame the exercises around building something non-productive, so that the focus shifted to play and exploration.

All of the code, the circuit diagram and slides are over on GitHub. But I’ll summarise things here.

  1. We spent a very short amount of time introducing machine learning. We used Google’s Teachable Machine as an example and contrasted regular programming with using machine learning algorithms to train models. The point was to provide folks with just enough conceptual scaffolding so that the rest of the workshop would make sense.
  2. We then introduced our ‘toolchain’, which consists of Wekinator, the Arduino MKR1000 module and the OSC protocol. The aim of this toolchain is to allow designers who work in the IoT space to get a feel for the material properties of machine learning through hands-on tinkering. We tried to create a toolchain with as few moving parts as possible, because each additional component would introduce another point of failure which might require debugging. The toolchain enables designers either to use machine learning to rapidly prototype interactive behaviour with minimal or no programming, or to prototype products that expose interactive machine learning features to end users. (For a speculative example of one such product, see Bjørn Karmann’s Objectifier.)
  3. Participants were then asked to set up all the required parts on their own workstation. A list can be found on the Useless Butler GitHub page.
  4. We then proceeded to build the circuit. We provided all the components and showed a Fritzing diagram to help people along. The basic idea of this circuit, the eponymous useless butler, was to have a set of inputs and outputs rich enough to play with and suited to all three types of Wekinator output. So we settled on a pair of photoresistors, or LDRs, as inputs and an RGB LED as output.
  5. With the prerequisites installed and the circuit built we were ready to walk through the examples. For regression we mapped the continuous stream of readings from the two LDRs to three outputs, one each for the red, green and blue of the LED. For classification we put the state of both LDRs into one of four categories, each switching the RGB LED to a specific color (cyan, magenta, yellow or white). And finally, for dynamic time warping, we asked Wekinator to recognise one of three gestures and switch the RGB LED to one of three states (red, green or off).
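To give a flavour of how the regression example drives the LED, here is a minimal sketch of the receiving side, again a reconstruction rather than the exact workshop code. It assumes Wekinator’s default /wek/outputs address and output port 12000, three regression outputs in the 0–1 range, and a common-cathode RGB LED on three PWM-capable pins; in the actual example code the sending and receiving halves would live together in a single sketch.

    #include <WiFi101.h>
    #include <WiFiUdp.h>
    #include <OSCMessage.h>

    char ssid[] = "your-network-name";      // placeholder credentials
    char pass[] = "your-network-password";

    WiFiUDP Udp;
    const unsigned int listenPort = 12000;  // Wekinator's default output port

    // Assumption: common-cathode RGB LED on three PWM-capable pins.
    const int redPin = 2;
    const int greenPin = 3;
    const int bluePin = 4;

    // Called for every /wek/outputs message; expects three floats between 0 and 1.
    void setColour(OSCMessage &msg) {
      analogWrite(redPin,   (int)(msg.getFloat(0) * 255));
      analogWrite(greenPin, (int)(msg.getFloat(1) * 255));
      analogWrite(bluePin,  (int)(msg.getFloat(2) * 255));
    }

    void setup() {
      pinMode(redPin, OUTPUT);
      pinMode(greenPin, OUTPUT);
      pinMode(bluePin, OUTPUT);
      while (WiFi.begin(ssid, pass) != WL_CONNECTED) {
        delay(500);
      }
      Udp.begin(listenPort);  // listen for Wekinator's output messages
    }

    void loop() {
      OSCMessage msg;
      int size = Udp.parsePacket();
      if (size > 0) {
        while (size--) {
          msg.fill(Udp.read());  // feed incoming bytes to the OSC parser
        }
        if (!msg.hasError()) {
          // Wekinator sends model outputs on /wek/outputs by default.
          msg.dispatch("/wek/outputs", setColour);
        }
      }
    }

The classification and dynamic time warping exercises follow the same pattern on the receiving side, only the handler interprets the incoming values as a class label or a recognised gesture instead of three continuous outputs.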

When we reflected on the workshop afterwards, we agreed we now have a proven concept. Participants were able to get the toolchain up and running and could play around with iteratively training and evaluating their model until it behaved as intended.

However, there is still plenty of room for improvement. On a practical note, quite a bit of time was taken up by building the circuit, which isn’t the point of the workshop. One way of dealing with this is to bring pre-built circuits to the workshop. Doing so would enable us to get to the machine learning quicker and would open up time and space to engage with the participants about the point of it all.

We’re keen on bringing this workshop to more settings in future. If we do, I’m sure we’ll find the opportunity to improve on things once more and I will report back here.

Many thanks to Iskander and the rest of the ThingsCon team for inviting us to the conference.

ThingsCon Amsterdam 2017, photo by nunocruzstreet.com