ThingsCon 2018 workshop ‘Seeing Like a Bridge’

Workshop in progress with a view of Rotterdam's Willemsbrug across the Maas.

In early December of last year, Alec Shuldiner and I ran a workshop at ThingsCon 2018 in Rotterdam.

Here's the description as it was listed on the conference website:

In this workshop we will take a deep dive into some of the challenges of designing smart public infrastructure.

Smart city ideas are moving from hype into reality. The everyday things that our contemporary world runs on, such as roads, railways and canals, are not immune to this development. Basic, "hard" infrastructure is being augmented with internet-connected sensing, processing and actuating capabilities. We are involved as practitioners and researchers in one such project: the MX3D smart bridge, a pedestrian bridge 3D printed from stainless steel and equipped with a network of sensors.

The question facing everyone involved with these developments, from citizens to professionals to policy makers, is how to reap the potential benefits of these technologies without degrading the urban fabric. For this to happen, information technology needs to become more like the city: open-ended, flexible and adaptable. And we need methods and tools for the diverse range of stakeholders to come together and collaborate on the design of truly intelligent public infrastructure.

We will explore these questions in this workshop by first walking you through the architecture of the MX3D smart bridge, offering a uniquely concrete and pragmatic view into a cutting-edge smart city project. Subsequently, we will together explore the question: what should a smart pedestrian bridge that is aware of itself and its surroundings be able to tell us? We will conclude by sharing some of the highlights from our conversation and make note of particularly thorny questions that require further work.

The workshop's structure was quite simple. After a round of introductions, Alec introduced the MX3D bridge to the participants. For a sense of what that introduction talk was like, I recommend viewing this recording of a presentation he delivered at a recent Pakhuis de Zwijger event.

We then ran three rounds of group discussion in the style of a world café. Each discussion was guided by one question. Participants were asked to write, draw and doodle on the large sheets of paper covering each table. At the end of each round, people moved to another table while one person remained to share the preceding round's discussion with the new group.

The discussion questions were inspired by value-sensitive design. I was interested to see if people could come up with alternative uses for a sensor-equipped 3D-printed footbridge if they first considered what, in their opinion, made a city worth living in.

The questions we used were:

  1. What specific things do you like about your town? (Places, things to do, etc. Be specific.)
  2. What values underlie those things? (A value is what a person or group of people consider important in life.)
  3. How would you redesign the bridge to support those values?

At the end of the three discussion rounds we went around to each table and shared the highlights of what was produced. We then had a bit of a back-and-forth about the outcomes and the workshop approach, after which we wrapped up.

We did get to some interesting values by starting from personal experience. Participants came from a variety of countries, and that was reflected in the range of examples and related values. The design ideas for the bridge remained somewhat abstract; it turned out to be quite a challenge to make the jump from values to different types of smart bridges. Despite this, we did get nice ideas, such as having the bridge report on the water quality of the canal it crosses, derived from the value of care for the environment.

The response from participants afterwards was positive. People found it thought-provoking, which was definitely the point. People were also eager to learn even more about the bridge project. It remains a thing that captures people's imagination. For that reason alone, it continues to be a very productive case to use for grounding these sorts of discussions.

Prototyping the Useless Butler: Machine Learning for IoT Designers

ThingsCon Amsterdam 2017, photo by nunocruzstreet.com

At ThingsCon Amsterdam 2017, Péter and I ran a second iteration of our machine learning workshop. We improved on our first attempt at TU Delft in a number of ways.

  • We prepared example code for communicating with Wekinator from a wifi-connected Arduino MKR1000 over OSC.
  • We created a predefined breadboard setup.
  • We developed three exercises, one for each type of Wekinator output: regression, classification and dynamic time warping.
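The first of those items hinges on getting sensor readings from the Arduino to Wekinator as OSC messages over UDP. As a rough sketch of what travels over the wire, here is a minimal pure-Python encoder for that kind of message. The address `/wek/inputs` and port 6448 are Wekinator's documented defaults, but treat the specifics as illustrative and check your own setup; the workshop's actual Arduino code is in the GitHub repo.

```python
import struct

def osc_string(s: str) -> bytes:
    """Encode a string the OSC way: NUL-terminated, padded to a 4-byte boundary."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * ((4 - len(b) % 4) % 4)

def osc_message(address: str, *values: float) -> bytes:
    """Build an OSC message carrying float arguments (type tag 'f' per value)."""
    packet = osc_string(address) + osc_string("," + "f" * len(values))
    for v in values:
        packet += struct.pack(">f", v)  # OSC floats are big-endian IEEE 754
    return packet

# Two normalised LDR readings, as the Arduino would send them each loop:
packet = osc_message("/wek/inputs", 0.42, 0.88)
# To actually deliver it to Wekinator's default listening port:
# import socket
# socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(packet, ("127.0.0.1", 6448))
```

On the Arduino itself an OSC library does this encoding for you; the point of the sketch is just that an OSC message is nothing more exotic than a padded address, a type tag string, and big-endian floats.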

In contrast to the first version, we had two hours to run through the whole thing instead of a day. So we cut some corners and doubled down on walking participants through a number of exercises, so that they would come out of it with some readily applicable skills.

We dubbed the workshop 'prototyping the useless butler', with thanks to Philip van Allen for the suggestion to frame the exercises around building something non-productive, so that the focus shifted to play and exploration.

All of the code, the circuit diagram and slides are over on GitHub. But I'll summarise things here.

  1. We spent a very short amount of time introducing machine learning. We used Google's Teachable Machine as an example and contrasted regular programming with using machine learning algorithms to train models. The point was to provide folks with just enough conceptual scaffolding so that the rest of the workshop would make sense.
  2. We then introduced our 'toolchain', which consists of Wekinator, the Arduino MKR1000 module and the OSC protocol. The aim of this toolchain is to allow designers who work in the IoT space to get a feel for the material properties of machine learning through hands-on tinkering. We tried to create a toolchain with as few moving parts as possible, because each additional component introduces another point of failure that might require debugging. This toolchain enables designers either to use machine learning to rapidly prototype interactive behaviour with minimal or no programming, or to prototype products that expose interactive machine learning features to end users. (For a speculative example of one such product, see Bjørn Karmann's Objectifier.)
  3. Participants were then asked to set up all the required parts on their own workstation. A list can be found on the Useless Butler GitHub page.
  4. We then proceeded to build the circuit. We provided all the components and showed a Fritzing diagram to help people along. The basic idea of this circuit, the eponymous useless butler, was to have a sufficiently rich set of inputs and outputs to play with, one that would suit all three types of Wekinator output. So we settled on a pair of photoresistors (LDRs) as inputs and an RGB LED as output.
  5. With the prerequisites installed and the circuit built, we were ready to walk through the examples. For regression, we mapped the continuous stream of readings from the two LDRs to three outputs, one each for the red, green and blue of the LED. For classification, we put the state of both LDRs into one of four categories, each switching the RGB LED to a specific colour (cyan, magenta, yellow or white). And finally, for dynamic time warping, we asked Wekinator to recognise one of three gestures and switch the RGB LED to one of three states (red, green or off).
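To make the last step's mappings concrete, here is a small Python sketch of the output side, i.e. what the firmware does with the values Wekinator sends back. The function names and the assignment of class numbers to colours are illustrative assumptions for this post, not the workshop code (which is in the GitHub repo).

```python
def regression_to_rgb(outputs):
    """Regression exercise: clamp three continuous Wekinator outputs (0.0-1.0)
    to 8-bit PWM values, one per LED channel (red, green, blue)."""
    return tuple(int(max(0.0, min(1.0, o)) * 255) for o in outputs)

# Classification exercise: one preset colour per class label.
# Which class number gets which colour is an assumption here.
CLASS_COLOURS = {
    1: (0, 255, 255),    # cyan
    2: (255, 0, 255),    # magenta
    3: (255, 255, 0),    # yellow
    4: (255, 255, 255),  # white
}

def classification_to_rgb(label):
    """Look up the LED colour for a Wekinator class label; off if unknown."""
    return CLASS_COLOURS.get(int(label), (0, 0, 0))
```

The dynamic time warping exercise works the same way as classification on the output side: a recognised gesture index selects one of a few fixed LED states.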

When we reflected on the workshop afterwards, we agreed we now have a proven concept. Participants were able to get the toolchain up and running and could play around with iteratively training and evaluating their model until it behaved as intended.

However, there is still quite a bit of room for improvement. On a practical note, quite a bit of time was taken up by building the circuit, which isn't the point of the workshop. One way of dealing with this is to bring pre-built circuits to the workshop. Doing so would let us get to the machine learning quicker and would open up time and space to also engage with participants about the point of it all.

We're keen on bringing this workshop to more settings in future. If we do, I'm sure we'll find the opportunity to improve on things once more, and I will report back here.

Many thanks to Iskander and the rest of the ThingsCon team for inviting us to the conference.

ThingsCon Amsterdam 2017, photo by nunocruzstreet.com