At a TU Delft spring symposium on AI education, Hosana and I ran a short workshop titled “AI pedagogy through a design lens.” In it, we identified some of the challenges facing AI teaching, particularly outside of computer science, and explored how design pedagogy, in particular the practices of studios and making, may help to address them. The AI & Society master elective I’ve been developing and teaching over the past five years served as a case study. The session was punctuated by brief rounds of brainstorming using an adapted version of the SQUID gamestorming technique. Below are the slides we used.
Prototyping the Useless Butler: Machine Learning for IoT Designers
At ThingsCon Amsterdam 2017, Péter and I ran a second iteration of our machine learning workshop. We improved on our first attempt at TU Delft in a number of ways.
- We prepared example code for communicating with Wekinator from a wifi-connected Arduino MKR1000 over OSC (a minimal sketch of the sending side follows after this list).
- We created a predefined breadboard setup.
- We developed three exercises, one for each type of Wekinator output: regression, classification and dynamic time warping.
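To give a feel for the example code mentioned in the first item, here is a minimal sketch of the sending side: reading the two analog inputs and forwarding them to Wekinator as OSC messages over UDP. The network credentials, pin numbers and IP address below are placeholders; the canonical version of the code lives in the GitHub repo linked below.

```cpp
// Minimal sketch (assumptions flagged below): send two sensor readings from
// an Arduino MKR1000 to Wekinator as OSC messages over UDP.
#include <WiFi101.h>
#include <WiFiUdp.h>
#include <OSCMessage.h>  // CNMAT's OSC library for Arduino

const char ssid[] = "workshop-network";   // placeholder credentials
const char pass[] = "workshop-password";
IPAddress wekinatorIp(192, 168, 1, 10);   // laptop running Wekinator (assumed)
const unsigned int wekinatorPort = 6448;  // Wekinator's default input port

WiFiUDP Udp;

void setup() {
  // Keep trying to join the network until it works.
  while (WiFi.begin(ssid, pass) != WL_CONNECTED) {
    delay(500);
  }
  Udp.begin(12000);  // local UDP port; the value is arbitrary here
}

void loop() {
  // Wekinator listens for input messages on the /wek/inputs address by default.
  OSCMessage msg("/wek/inputs");
  msg.add((float)analogRead(A1));  // LDR 1 (pin assumed)
  msg.add((float)analogRead(A2));  // LDR 2 (pin assumed)

  Udp.beginPacket(wekinatorIp, wekinatorPort);
  msg.send(Udp);
  Udp.endPacket();
  msg.empty();  // release the message's memory before the next loop

  delay(50);  // roughly twenty messages per second is plenty for tinkering
}
```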
In contrast to the first version, we had two hours to run through the whole thing instead of a day. So we had to cut some corners, and we doubled down on walking participants through a number of exercises so that they would come out of it with some readily applicable skills.
We dubbed the workshop ‘prototyping the useless butler’, with thanks to Philip van Allen for the suggestion to frame the exercises around building something non-productive, so that the focus shifted to play and exploration.
All of the code, the circuit diagram and slides are over on GitHub. But I’ll summarise things here.
- We spent a very short amount of time introducing machine learning. We used Google’s Teachable Machine as an example and contrasted regular programming with using machine learning algorithms to train models. The point was to provide folks with just enough conceptual scaffolding so that the rest of the workshop would make sense.
- We then introduced our ‘toolchain’, which consists of Wekinator, the Arduino MKR1000 module and the OSC protocol. The aim of this toolchain is to allow designers who work in the IoT space to get a feel for the material properties of machine learning through hands-on tinkering. We tried to create a toolchain with as few moving parts as possible, because each additional component introduces another point of failure which might require debugging. The toolchain enables designers to use machine learning to rapidly prototype interactive behaviour with minimal or no programming. It can also be used to prototype products that expose interactive machine learning features to end users. (For a speculative example of one such product, see Bjørn Karmann’s Objectifier.)
- Participants were then asked to set up all the required parts on their own workstation. A list can be found on the Useless Butler GitHub page.
- We then proceeded to build the circuit. We provided all the components and showed a Fritzing diagram to help people along. The basic idea of this circuit, the eponymous useless butler, was to offer a set of inputs and outputs rich enough to play with and suited to all three types of Wekinator output. So we settled on a pair of photoresistors (LDRs) as inputs and an RGB LED as output.
- With the prerequisites installed and the circuit built, we were ready to walk through the examples. For regression we mapped the continuous stream of readings from the two LDRs to three outputs, one each for the red, green and blue channels of the LED. For classification we put the state of both LDRs into one of four categories, each switching the RGB LED to a specific color (cyan, magenta, yellow or white). And finally, for dynamic time warping, we asked Wekinator to recognise one of three gestures and switch the RGB LED to one of three states (red, green or off). A minimal sketch of the receiving side of the regression example follows below.
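For the curious, here is the rough shape of the receiving side of the regression example: listening for Wekinator’s three output values (each between 0 and 1) and writing them to the red, green and blue channels of the LED. Pin numbers and network details are assumptions; the circuit diagram and code in the repo are authoritative.

```cpp
// Minimal sketch (pins and credentials assumed): receive Wekinator's three
// regression outputs over OSC/UDP and drive an RGB LED with them.
#include <WiFi101.h>
#include <WiFiUdp.h>
#include <OSCMessage.h>

const int redPin = 3, greenPin = 4, bluePin = 5;  // PWM pins (assumed)
WiFiUDP Udp;

// Called whenever a /wek/outputs message arrives.
void onWekOutputs(OSCMessage &msg) {
  analogWrite(redPin,   (int)(msg.getFloat(0) * 255));
  analogWrite(greenPin, (int)(msg.getFloat(1) * 255));
  analogWrite(bluePin,  (int)(msg.getFloat(2) * 255));
}

void setup() {
  while (WiFi.begin("workshop-network", "workshop-password") != WL_CONNECTED) {
    delay(500);  // placeholder credentials
  }
  Udp.begin(12000);  // Wekinator sends its outputs to port 12000 by default
}

void loop() {
  OSCMessage incoming;
  int size = Udp.parsePacket();
  if (size > 0) {
    while (size--) {
      incoming.fill(Udp.read());  // read the packet into the OSC message
    }
    if (!incoming.hasError()) {
      incoming.dispatch("/wek/outputs", onWekOutputs);
    }
  }
}
```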
When we reflected on the workshop afterwards, we agreed we now had a proven concept. Participants were able to get the toolchain up and running and could play around with iteratively training and evaluating their model until it behaved as intended.
However, there is still quite a bit of room for improvement. On a practical note, a lot of time was taken up by building the circuit, which isn’t the point of the workshop. One way of dealing with this is to bring pre-built circuits to the workshop. Doing so would enable us to get to the machine learning more quickly and would open up time and space to also engage with the participants about the point of it all.
We’re keen on bringing this workshop to more settings in future. If we do, I’m sure we’ll find the opportunity to improve on things once more and I will report back here.
Many thanks to Iskander and the rest of the ThingsCon team for inviting us to the conference.
‘Machine Learning for Designers’ workshop
On Wednesday Péter Kun, Holly Robbins and I taught a one-day workshop on machine learning at Delft University of Technology. We had about thirty master’s students from the industrial design engineering faculty. The aim was to get them acquainted with the technology through hands-on tinkering, with the Wekinator as the central teaching tool.
Background
The reasoning behind this workshop is twofold.
On the one hand, I expect designers will find themselves working on projects involving machine learning more and more often. The technology has certain properties that differ from traditional software. Most importantly, machine learning is probabilistic instead of deterministic. It is important that designers understand this, because otherwise they are likely to make bad decisions about its application.
The second reason is that I have a strong sense machine learning can play a role in the augmentation of the design process itself. So-called intelligent design tools could make designers more efficient and effective. They could also enable the creation of designs that would otherwise be impossible or very hard to achieve.
The workshop explored both ideas.
Format
The structure was roughly as follows:
In the morning we started out by providing a very broad introduction to the technology. We talked about the basic premise of (supervised) machine learning: providing examples of inputs and desired outputs, and training a model based on those examples. To make these concepts tangible we then introduced the Wekinator and walked the students through getting it up and running using basic examples from its website. The final step was to invite them to explore alternative inputs and outputs (such as game controllers and Arduino boards).
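To make the ‘examples in, behaviour out’ premise concrete, here is about the smallest illustration of it I can think of: a one-nearest-neighbour classifier that ‘trains’ by memorising example input and output pairs, and predicts by returning the output of the closest remembered example. This is just an illustration of the principle, not a description of how Wekinator works under the hood.

```cpp
// Toy illustration of supervised learning: a 1-nearest-neighbour classifier.
// Training is nothing more than memorising (input, desired output) examples;
// prediction returns the label of the closest remembered example.
#include <cmath>
#include <iostream>
#include <vector>

struct Example {
  float x, y;  // two input values, e.g. two sensor readings
  int label;   // the desired output for those inputs
};

class NearestNeighbour {
 public:
  void train(float x, float y, int label) {
    examples_.push_back({x, y, label});  // just remember the example
  }

  int predict(float x, float y) const {
    int best = -1;  // -1 means "no examples seen yet"
    float bestDist = 1e30f;
    for (const Example &e : examples_) {
      float d = std::hypot(e.x - x, e.y - y);
      if (d < bestDist) {
        bestDist = d;
        best = e.label;
      }
    }
    return best;
  }

 private:
  std::vector<Example> examples_;
};

int main() {
  NearestNeighbour model;
  // Provide examples of inputs and desired outputs...
  model.train(0.1f, 0.1f, 0);  // both sensors dark   -> class 0
  model.train(0.9f, 0.9f, 1);  // both sensors bright -> class 1
  // ...then ask the model about inputs it has never seen before.
  std::cout << model.predict(0.2f, 0.15f) << "\n";  // prints 0
  std::cout << model.predict(0.8f, 0.95f) << "\n";  // prints 1
}
```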
In the afternoon we provided a design brief, asking the students to prototype a data-enabled object with the set of tools they had acquired in the morning. We assisted with technical hurdles where necessary (of which there were more than a few) and closed out the day with demos and a group discussion reflecting on their experiences with the technology.
Results
As I tweeted on the way home that evening, the results were… interesting.
Not all groups managed to put something together in the admittedly short amount of time they were given. They were most often stymied by getting an Arduino to talk to the Wekinator. Max was often picked as a go-between because the Wekinator receives OSC messages over UDP, whereas the quickest way to get an Arduino to talk to a computer is over serial. But Max in my experience is a fickle beast and more than once crapped out on us.
The groups that did build something mostly assembled prototypes from the examples on hand. Which is fine, but since we were mainly working with the examples from the Wekinator website, they tended towards the interactive-instrument side of things. We were hoping for explorations of IoT product concepts. That required more hand-rolling, which was only achievable for the students on the higher end of the technical expertise spectrum (and the more tenacious ones).
The discussion yielded some interesting insights into mental models of the technology and how they are affected by hands-on experience. A comment I heard more than once was: Why is this considered learning at all? The Wekinator was not perceived to be learning anything. When we challenged this by reiterating the underlying principles, it became clear that the black-box nature of the Wekinator hampers appreciation of some of the very real achievements of the technology. It seems (for our students at least) machine learning is stuck in a grey area between too-high expectations and too-low recognition of its capabilities.
Next steps
These results, and others, point towards some obvious improvements which can be made to the workshop format, and to teaching design students about machine learning more broadly.
- We can improve the toolset so that some of the heavy lifting involved with getting the various parts to talk to each other is made easier and more reliable.
- We can build examples that are geared towards the practice of designing IoT products and are ready for adaptation and hacking.
- And finally, and probably most challengingly, we can make the workings of machine learning more transparent so that it becomes easier to develop a feel for its capabilities and shortcomings.
We do intend to improve and teach the workshop again. If you’re interested in hosting one (either in an educational or professional context) let me know. And stay tuned for updates on this and other efforts to get designers to work in a hands-on manner with machine learning.
Special thanks to the brilliant Ianus Keller for connecting me to Péter and for allowing us to pilot this crazy idea at IDE Academy.
References
Sources used during preparation and running of the workshop:
- The Wekinator – the UI is infuriatingly poor but when it comes to getting started with machine learning this tool is unmatched.
- Arduino – I have become particularly fond of the MKR1000 board. Add a lithium-polymer battery and you have everything you need to prototype IoT products.
- OSC for Arduino – CNMAT’s implementation of the open sound control (OSC) encoding. Key puzzle piece for getting the above two tools talking to each other.
- Machine Learning for Designers – my preferred introduction to the technology from a designerly perspective.
- A Visual Introduction to Machine Learning – a very accessible visual explanation of the basic underpinnings of computers applying statistical learning.
- Remote Control Theremin – an example project I prepared for the workshop demoing how to have the Wekinator talk to an Arduino MKR1000 with OSC over UDP.
Engagement design worksheets
In June/July of this year I helped Michael Fillié teach two classes about engagement design at General Assembly Singapore. The first was theoretical and the second practical. For the practical class we created a couple of worksheets which participants used in groups to gradually build a design concept for a new product or product improvement aimed at long-term engagement. Below are the worksheets along with some notes on how to use them. I’m hoping they may be useful in your own practice.
A practical note: Each of these worksheets is designed to be printed on A1 paper. (Click on the images to get the PDFs.) We worked on them using post-it notes so that it is easy to add, change or remove things as you go.
Problem statement and persona
We started with understanding the problem and the user. This worksheet is an adaptation of the persona sheet by Strategyzer. To use it, you begin at the top, fleshing out the problem by stating the engagement challenge and the business goals. Then you select a user segment that is relevant to the problem.
The middle section of the sheet is used to describe them in the form of a persona. Start by putting a face on them. Give the persona a name and add some demographic details relevant to the user’s behaviour. Then, move on to exploring what their environment looks and sounds like and what they are thinking and feeling. Finally, try to describe what issues the user is having that are addressed by the product and what the user stands to gain from using the product.
The third section of this sheet is used to wrap up the first exercise by doing a quick gap analysis of what the business would like to see in terms of user behaviour and what the user is currently doing. This will help pin down the engagement design concept fleshed out in the next exercises.
Engagement loop
Exercise two builds on the understanding of the problem and the user and offers a structured way of thinking through a possible solution. For this we use the engagement loop model developed by Sebastian Deterding. There are different places we can start here, but one that often works well is to imagine the Big Hairy Audacious Goal the user is looking to achieve. This is the challenge. It is a thing (usually a skill) the user can improve at. Note this challenge down in the middle. Then, working around the challenge, describe a measurable goal the user can achieve on their way to mastering the challenge. Describe the action the user can take with the product towards that goal, and the feedback the product will give them to let them know their action has succeeded and how much closer it has brought them to the goal. Finally, and crucially, try to describe what kind of motivation the user is driven by, and make sure the goals, actions and feedback make sense in that light. If not, adjust things until it all clicks.
Storyboard
The final exercise is devoted to visualising and telling a story about the engagement loop we developed in the abstract in the previous block. It is a typical storyboard, but we have constrained it to a set of story beats you must hit to build a satisfying narrative. We go from introducing the user and their challenge, to how the product communicates the goal and action, to what the user does with it and how they get feedback on that, to (fast-forward) how they feel when they ultimately master the challenge. It makes the design concept relatable to outsiders and can serve as a jumping-off point for further design and development.
Use, adapt and share
Together, these three exercises and worksheets are a great way to think through an engagement design problem. We used them for teaching but I can also imagine teams using them to explore a solution to a problem they might be having with an existing product, or as a way to kickstart the development of a new product.
We’ve built on other people’s work for these so it only makes sense to share them again for others to use and build on. If you do use them I would love to hear about your experiences.
Week 146
Crazy, crazy week, which I am glad to have survived. But wait, it’s not done yet. Tomorrow (Saturday) I’ll be running a workshop in Leidsche Rijn with local young folk, for Cultuur19. The aim is to design a little social game that’ll function as a viral marketing tactic for our upcoming urban games design workshop in the same district. This is a Hubbub mission, and I am glad to have the support of Karel who — besides cooking up crazy plans at FourceLabs — is an occasional agent of Hubbub.
This was my last week working on site with Layar, because I’m heading to Copenhagen on Sunday. I’ll be staying there for a few weeks, working — for Layar still, possibly for Social Square — lecturing at CIID, and apart from that just taking it a little slower. My apartment is around the corner from the Laundromat Café in Nørrebro, so that should be no problem.
I was at Waag Society’s beautiful Theatrum Anatomicum last Wednesday to co-host a workshop on games and architecture as part of the Best Scene in Town project initiated by 7scenes. I presented three bold predictions for the future of games in the city. Look for a write-up of that one on the Hubbub blog soon. The teams came up with interesting concepts for games in Amsterdam and I enjoyed working with all of them.
Going back to the start of this week, I turned 30 on Monday. A watershed moment of some sort, I guess. Somewhat appropriately, we announced This happened – Utrecht #6 that day too. Check out the program; I am really pleased with our speakers.
Now let’s just hope that volcano doesn’t mess with my flight on Sunday, and the next note will be coming to you from lovely CPH.
This pervasive games workshop I ran at this conference
A few things I got people to do at this year’s NLGD Festival of Games:
Fight each other with paper swords…
…and run around with lunch-boxes on their heads.1
This was all part of a workshop I ran, titled ‘Playful Tinkering’. The mysterious Mink ette — who amongst many things is a designer at Six to Start — and I got people to rapidly prototype pervasive games that would be played at the conference venue the day after. The best game won a magnificent trophy shaped like a spring rider.
Some exercises we did during the workshop:
- Play a name game Mink ette had made up in no time at all, shortly before the workshop. This is good for several things: physical warm-up, breaking the ice, and demonstrating the kinds of games the session is about.
- Walk around the room and write down imaginary game titles as well as names of games you used to play as a child. Good for emptying heads and warming up mentally.
- Walk around again and pick a post-it that intrigues you. Then guess what the game is about, and have others fill in the blanks where needed. Then play the game. This is mostly just for fun. (Nothing wrong with that.)
- Analyse the games, break them up into their basic parts. Change one of those parts and play the game again. See what effect the change has. This is to get a sense of what games design is about, and how changing a rule impacts the player experience.
Participants brainstorming game ideas
People then formed groups and worked on an original game. We pushed them to rapidly generate a first ruleset that could be playtested with the other groups. After this they did another design sprint, and playtested again outside the room, “in the wild”. All of this in less than four hours. Whew!
The games that were made:
- A game that involved hunting for people that matched the descriptions on post-its that were hidden around the venue. You first needed to find a post-it, then find the person that matched the description on it and finally take a photo of them for points. This game was so quick to play it already ran at the party, hours after the workshop finished.
- ‘Crowd Control’ — compete with other players to get the largest percentage of a group of people to do what you are doing (like nodding your head). This game won the trophy, in part because of the ferocious player recruitment style the runners employed during the playtest.
- A sailing game, where you tried to maneuver an imaginary boat from one end of a space to the other. Your movement was constrained by the “wind”, which was a function of the number of people on either side of your boat. It featured an ingenious measuring mechanic which used an improvised rope made from a torn-up conference tote bag.
- The lunchbox thing was improvised during the lunch before the playtest. A student also brought in a game he was working on for his graduation to playtest.
We set up the playtest itself as follows:
The room was open to anyone passing by. Each game got its own station where its makers could recruit players, explain the rules, keep score, etc. Mink ette and I handed each player a red, a blue and a yellow tiddlywink. They could use these to vote on their favorite game in three separate categories, by handing the runners a tiddlywink. People could play more than once, and vote as often as they liked. We also kept track of how many players each game got. We handed out prizes to winners in the different categories (a lucky dip box loaded with piñata fillers). The most played game got the grand prize — the spring rider trophy I created with help from my sister and fabricated at the local fablab.2
Spring rider trophy and tiddlywinks ready for some playtesting action
It was a pleasure to have the elusive Mink ette over for the ride. I loved the way she explained what pervasive games were all about — being able to play anytime, anywhere with anything. I was also impressed with the way she managed to get people to do strange things without thinking twice.
We had a very dedicated group of participants, most of whom stuck around for the whole session and returned again for the playtest the next day. I’m very grateful for their enthusiasm. The whole experience was very rewarding, I’m keen on doing this more often at events and applying what I learnt to the workshops I run as part of my own games design practice.
Happy winners of the spring rider trophy flanked by Mink ette and yours truly
On sketching
Catching up with this slightly neglected blog (it’s been six weeks since the last proper post), I’d like to start by telling you about a small thing I helped out with last week. Peter Boersma1 asked me to help out with one of his UX Cocktail Hours. He was inspired by a recent IxDA Studio event where, instead of just chatting and drinking, designers actually made stuff. (Gasp!) Peter wanted to do a workshop where attendees collaborated on sketching a solution to a given design problem.
Part of my contribution to the evening was a short presentation on the theory and practice of sketching. On the theory side, I referenced Bill Buxton’s list of qualities that define what a sketch is2, and emphasized that this means a sketch can be done in any material, not necessarily pencil and paper. Furthermore, I discussed why sketching works, using part of an article on embodied interaction3. The main point there, as far as I am concerned, is that when sketching, we as designers have the benefit of ‘backtalk’ from our materials, which can provide us with new insights. I wrapped up the presentation with a case study of a project I did a while back with the Amsterdam-based agency Info.nl4 for a social web start-up aimed at independent professionals. In that project I went quite far in using sketches not only to develop the design, but also to collaboratively construct it with the client, technologists and others.
The whole thing was recorded; you can find a video of the talk at Vimeo (thanks to Iskander and Alper). I also uploaded the slides to SlideShare (sans notes).
The second, and most interesting, part of the evening was the workshop itself. This was set up as follows: Peter and I had prepared a fictional case concerning peer-to-peer energy. We used the Dutch company Qurrent as an example, and asked the participants to conceptualise a way to encourage use of Qurrent’s product range. The aim was to have people be more energy efficient, and share surplus energy they had generated with the Qurrent community. The participants split up into teams of around ten people each, and went to work. We gave them around one hour to design a solution, using only pen and paper. Afterwards, they presented the outcome of their work to each other. For each team, we asked one participant to critique the work by mentioning one thing he or she liked, and one thing that could be improved. The team was then given a chance to reply. We also asked each team to briefly reflect on their working process. At the end of the evening everyone was given a chance to vote for their favourite design. The winner received a prize.5
Wrapping up, I think what I liked most about the workshop was seeing the many different ways the teams approached the problem (many of the participants did not know each other beforehand). Group dynamics varied hugely. I think it was valuable to have each team share their experiences on this front with the others. One thing I think we could improve is the case itself; next time I would like to provide participants with a more focused, more richly detailed briefing for them to sink their teeth into. That might result in an assignment that is more about structure and behaviour (or even interface) and less about concepts and values. It would be good to see how sketching functions in such a context.
- the Netherlands’ tallest IA and one of several famous Peters who work in UX [↩]
- taken from his wonderful book Sketching User Experiences [↩]
- titled How Bodies Matter (PDF) by Klemmer and Takayama [↩]
- who were also the hosts of this event [↩]
- I think it’s interesting to note that the winner had a remarkable concept, but in my opinion was not the best example of the power of sketching. Apparently the audience valued product over process. [↩]
The theory and practice of urban game design
A few weeks ago NLGD asked me to help out with an urban games ‘seminar’ that they had commissioned in collaboration with the Dutch Game Garden. A group of around 50 students from two game design courses at the Utrecht School of the Arts1 were asked to design a game for the upcoming Festival of Games in Utrecht. The workshop lasted a week. My involvement consisted of a short lecture, followed by several exercises designed to help the students get started on Monday. On Friday, I was part of the jury that determined which game would be played at the festival.
Lecture
In the lecture I briefly introduced some thinkers in urbanism whose work I find of interest to urban game designers. I talked about Jane Jacobs’ view of the city as a living organism that grows from the bottom up. I also mentioned Kevin Lynch’s work on wayfinding and the elements that make up people’s mental maps of cities. I touched upon the need to have a good grasp of social interaction patterns2. Finally, I advised the students to be frugal when it comes to the inclusion of technology in their game designs. A good question to always ask yourself is: can I have as much fun without this gadget?
I wrapped up the lecture by looking at 5 games, some well-known, others less so: Big Urban Game, ConQwest, Pac-Manhattan, The Soho Project and The Comfort of Strangers. There are many more good examples, of course, but each of these helped in highlighting a specific aspect of urban games design.
Workshop
Next, I ran a workshop of around 3 hours with the students, consisting of two exercises (plus one they could complete afterwards in their own time). The first one is the most interesting to discuss here. It’s a game-like elicitation technique called VNA3, which derives its name from the card types in the deck it is made up of: verbs, nouns and adjectives.
The way it works is that you take turns drawing a card from the deck and making up a one-sentence idea involving the term. The first person to go draws a verb, the second person a noun and the third an adjective. Each person builds on the idea of his or her predecessor. The concept that results from the three-card sequence is written down, and the next person draws a verb card again.4 The exercise resembles cadavre exquis, the biggest difference being that here the terms are predetermined.
VNA is a great ice-breaker. The students were divided into teams of five and, because a side-goal of the seminar was to encourage collaboration between students from the different courses, they often did not know each other. Thanks to this exercise they became acquainted, but within a creative context. The exercise also privileges volume of ideas over their quality, which is perfect in the early stages of conceptualization. Last but not least, it is a lot of fun; many students asked where they could get the deck of cards.
Jurying
On Friday, I (together with the other jury members) was treated to ten presentations by the students. Each team had prepared a video containing footage of prototyping and play-testing sessions, as well as an elevator pitch. A lot of them were quite good, especially considering that many students had not created an urban game before, or had not even played one. But one game really stood out for me. It employed a simple mechanic: making chains of people by holding hands. A chain was started by players, but required the help of passers-by to complete. Watching the videos of chains being completed evoked a strong positive emotional response, not only in me but also in my fellow jurors. What’s more important, though, is that the game clearly engendered happiness in its participants, including the people who joined in as it was being played.
In one video sequence, we see a near-completed chain of people in a mall, shouting requests at people to join in. A lone man has been observing the spectacle from a distance for some time. Suddenly, he steps forward, and joins hands with the others. The chain is completed. A huge cheer emerges from the group, hands are raised in the air and applause follows, the man joining in. Then he walks off towards the camera, grinning, two thumbs up. I could not help but grin back.5
- Game Design and Development and Design for Virtual Theatre and Games [↩]
- pointing to this resource, that was discussed at length on the IGDA ARG SIG [↩]
- developed by Annakaisa Kultima [↩]
- An interesting aside is that the deck was originally designed to be used for the creation of casual mobile games. The words were chosen accordingly. Despite this, or perhaps because of this, they are quite suitable to the design of urban games. [↩]
- To clarify, this was not the game that got selected for the Festival of Games. There were some issues with the game as a whole. It was short-listed though. Another excellent game, involving mechanics inspired by photo safari, was the winner. [↩]
A day of playing around with multi-touch and RoomWare
Last Saturday I attended a RoomWare workshop. The people of CanTouch were there too, and brought one of their prototype multi-touch tables. The aim for the day was to come up with applications of RoomWare (open source software that can sense presence of people in spaces) and multi-touch. I attended primarily because it was a good opportunity to spend a day messing around with a table.
Attendance was multifaceted, so while programmers were putting together a proof-of-concept, designers (such as Alexander Zeh, James Burke and I) came up with concepts for new interactions. The proof-of-concept was up and running at the end of the day: the table could sense who was in the room and display his or her Flickr photos, which you could then move around, scale, rotate, etc. in the typical multi-touch fashion.
The concepts designers came up with mainly focused on pulling in Last.fm data (again using RoomWare’s sensing capabilities) and displaying it for group-based exploration. Here’s a storyboard I quickly whipped up of one such application:
The storyboard shows how you can add yourself from a list of people present in the room. Your top artists flock around you. When more people are added, lines are drawn between you. The thickness of the line represents how similar your tastes are, according to Last.fm’s taste-o-meter. Also, shared top artists flock in such a way as to be closest to all related people. Finally, you can act on an artist to start listening to their music.
When I was sketching this, it became apparent that orientation of elements should follow very different rules from regular screens. I chose to sketch things so that they all point outwards, with the middle of the table as the orientation point.
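For reference, the geometry behind that rule is simple. The sketch below computes the rotation to apply to an element so that it faces outwards from the centre of the table; the coordinate and angle conventions are assumptions and would need to be adapted to whatever framework renders the tabletop interface.

```cpp
// Sketch of the "point outwards from the middle of the table" rule: given an
// element's position relative to the table's centre, compute the rotation that
// makes its top edge face away from the centre.
#include <cmath>
#include <iostream>

constexpr double kPi = 3.14159265358979323846;

// (0, 0) is the centre of the table, +x points right, +y points away from you.
// A rotation of 0 degrees means the element's top edge faces away from you.
double outwardRotationDegrees(double x, double y) {
  // atan2(x, y) measures the direction from the centre to the element,
  // starting at the +y axis and increasing clockwise.
  return std::atan2(x, y) * 180.0 / kPi;
}

int main() {
  std::cout << outwardRotationDegrees(0.0, 1.0) << "\n";   // far side:   0
  std::cout << outwardRotationDegrees(1.0, 0.0) << "\n";   // right edge: 90
  std::cout << outwardRotationDegrees(0.0, -1.0) << "\n";  // near side:  180
  std::cout << outwardRotationDegrees(-1.0, 0.0) << "\n";  // left edge:  -90
}
```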
By spending a day immersed in multi-touch stuff, some interesting design challenges became apparent:
- With tabletop surfaces, stuff is physically closer or further away. Proximity of elements can be unintentionally interpreted as saying something about aspects such as importance, relevance, etc. Designers need to be even more aware of placement than before, and conventions from vertically oriented screens no longer apply: top-of-screen becomes furthest away and therefore least prominent instead of most important.
- With group-based interactions, it becomes tricky to determine who to address and where to address him or her. Sometimes the system should address the group as a whole. When 5 people are standing around a table, text-based interfaces become problematic since what is legible from one end of the table is unintelligible from the other. New conventions need to be developed for this as well. Alexander and I philosophized about placing text along circles and animating them so that they circulate around the table, for instance.
- Besides these, many other interface challenges present themselves. One crucial piece of information for solving many of them is knowing where people are located around the table. This issue can be approached from different angles. By incorporating sensors in the table, detection may be automated and interfaces could be made to adapt automatically. This is the techno-centric angle. I am not convinced this is the way to go, because it diminishes people’s control over the experience. I would prefer to make the interface itself adjustable in natural ways, so that people can mold the representation to suit their context. With situated technologies like this, auto-magical adaptation is an “AI-hard” problem, and the price of failure is a severely degraded user experience from which people cannot recover because the system won’t let them.
All in all the workshop was a wonderful day of tinkering with like-minded individuals from radically different backgrounds. As a designer, I think this is one of the best ways to be involved with open source projects. On a day like this, technologists can be exposed to new interaction concepts while they are hacking away. At the same time, designers get that rare opportunity to play around with technology as it is being shaped. Quick-and-dirty sketches like the ones Alexander and I came up with are definitely the way to communicate ideas. The goal is to suggest, not to describe, after all. Technologists should feel free to elaborate and build on what designers come up with, and vice versa. I am curious to see which parts of what we came up with will find their way into future RoomWare projects.