Last year at Hubbub we worked on two projects featuring a conversational user interface. I thought I would share a few notes on how we did the writing for them, because for conversational user interfaces a large part of the design is in the writing.
At the moment, there aren’t really that many tools well suited for doing this. Twine comes to mind but it is really more focused on publishing as opposed to authoring. So while we were working on these projects we just grabbed whatever we were familiar with and felt would get the job done.
I actually think there is an opportunity here. If this conversational UI thing takes off, designers would benefit a lot from better tools to sketch and prototype them. After all, this is the only way to figure out whether a conversational user interface is suitable for a particular project. In the words of Bill Buxton:
“Everything is best for something and worst for something else.”
Okay, so below are my notes. The two projects are KOKORO (a codename) and Free Birds. We have yet to publish extensively on either, so a quick description is in order.
For KOKORO we used Gingko to write the conversation branches. This is good enough for a prototype but it becomes unwieldy at scale. And anyway you don’t want to be limited to a tree structure. You want to at least be able to loop back to a parent branch, something that isn’t supported by Gingko. And maybe you don’t want to use the branching pattern at all.
Free Birds’ story has a very linear structure. So in this case we just wrote our conversations in Quip, with some basic formatting rules, not unlike a screenplay.
In Free Birds player choices ‘colour’ the events that come immediately after, but the path stays the same.
This approach was inspired by the Walking Dead games. Those are super clever at giving players a sense of agency without the need for sprawling story trees. I remember seeing the creators present this strategy at PRACTICE and something clicked for me. The important point is, choices don’t have to branch out to different directions to feel meaningful.
KOKORO’s choices did have to lead to different paths, so we had to build a tree structure. But we also kept track of things a user says, which allows the app to “learn” about the user. Subsequent segments of the conversation are adapted based on this learning, which is more flexible and scales better. A section of a conversation has various states between which we switch, depending on what a user has said in the past.
We did something similar in Free Birds but used it to a far more limited degree, really just to once again colour certain pieces of dialogue. This is already enough to give a player a sense of agency.
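To make the pattern concrete, here is a toy example of the ‘remember what the user said’ idea. It is written as a Processing/Java snippet for consistency with the other sketches on this blog, not the code we actually used on KOKORO or Free Birds, and the fact names are made up.

```processing
// Toy sketch of the "remember what the user said" pattern.
// Key names ("petName") and lines are illustrative only.
import java.util.HashMap;

HashMap<String, String> facts = new HashMap<String, String>();

// Later segments of the conversation switch between states
// based on what was stored earlier.
String greetingLine() {
  if (facts.containsKey("petName")) {
    return "How is " + facts.get("petName") + " doing today?";
  }
  return "Do you have any pets?";
}

void setup() {
  println(greetingLine());        // "Do you have any pets?"
  facts.put("petName", "Miso");   // remember something the user told us
  println(greetingLine());        // "How is Miso doing today?"
}
```

The same lookup can just as well colour a line of dialogue instead of replacing it outright, which is essentially what we did in Free Birds.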
As you can see, it’s all far from rocket surgery but you can get surprisingly good results just by sticking to these simple patterns. If I were to investigate more advanced strategies I would look into NLP for input and procedural generation for output. Who knows, maybe I will get to work on a project involving those things some time in the future.
I am preparing two classes at the moment. One is an introduction to user experience design, the other to user interface design. I did not come up with this division; it was part of the assignment. I thought it was odd at first. I wasn’t sure where one discipline ends and the other begins. I still am not sure. But I made a pragmatic decision to have the UX class focus on the high-level process of designing (software) products, and the UI class focus on the visual aspects of a product’s interface. The UI class deals with a product’s surface, form, and to some extent also its behaviour, but on a micro level, whereas the UX class focuses on behaviour on the macro level. Simply speaking: the UX class is about behaviour across screens, the UI class is about behaviour within screens.
The solution is workable, but I am still not entirely comfortable with it. I am not comfortable with the idea of being able to practice UX without ‘touching the surface’, so to speak, and it seems my two classes are advocating this. Also, I am pretty sure this is everyday reality for many UX practitioners. Notice I say ‘practitioner’, because I am not sure ‘designer’ is the right term in these cases. To be honest, I do not think you can practice design without doing sketching and prototyping of some sort. (See Bill Buxton’s ‘Sketching User Experiences’ for an expanded argument on why this is.) And when it comes to designing software products, this means touching the surface, the form.
Again, the reality is, ‘UX designer’ and ‘UI designer’ are common terms now. Certainly here in Singapore people know they need both to make good products. Some practitioners say they do both, others one or the other. The latter appears to be the most common and expected case. (By the way, in Singapore no-one I’ve met talks about interaction design.)
My concern is that by encouraging the practice of doing UX design without touching the surface of a product, we get shitty designs. In a process where UX and UI are seen as separate things, the risk is that one comes before the other. The UX designer draws the wireframes, the UI designer gets to turn them into pretty pictures, with no back-and-forth between the two. An iterative process can mitigate some of the damage such an artificial division of labour produces, but I think we still start out on the wrong foot. A better practice might entail including visual considerations from the very beginning of the design process (as we are sketching).
Two things I came across as I was preparing these classes lend support to this idea. Both resulted from a call I put out for resources on user interface design. I asked for books about visual aspects, but I got a lot more.
In ‘Magic Ink’ Bret Victor writes about how the design of information software is hugely indebted to graphic design, and more specifically to information design in the tradition of Tufte. (He also mentions industrial design as an equally big progenitor of interaction design, but for software that is mainly about manipulation, not information.) The article is long, but the start of it is actually a pretty good, if unorthodox, general introduction to interaction design. For software that is about learning through looking at information, Victor says interaction should be a last resort. That leaves us with a task that is 80% (if not more) visual design. Touching the surface. Which makes me think you might as well get to it as quickly as possible and start sketching and prototyping aimed not just at structure and behaviour but also at form. (Hat tip to Pieter Diepenmaat for this one.)
In ‘Jumping to the End’ Matt Jones rambles entertainingly about design fiction. He argues for paying attention to details, and says a lot of the design he practices is about ‘signature moments’, a.k.a. micro-interactions. So yeah, again, I can’t imagine designing these effectively without doing sketching and prototyping of the sort that includes the visual. In fact Matt more or less says this at one point, when he mentions that his team’s deliverables at Google are almost all visual: high-fidelity mockups, animations, videos, and so on. These then become the starting points for further development. (Hat tip to Alexander Zeh for this one.)
In summary, I think distinguishing UX design from UI design is nonsense, because you cannot practice design without sketching and prototyping, and you cannot sketch and prototype a software product without touching its surface. Instead of taking visual design for granted, or talking about it like it is some innate talent, some kind of magical skill some people are born with and others aren’t, user experience practitioners should consider being less enamoured with acquiring more skills from business, marketing and engineering, and instead practice the skills that define the fields user experience design is indebted to the most: graphic design and industrial design. In other words, you can’t do user experience design without touching the surface.
Catching up with this slightly neglected blog (it’s been six weeks since the last proper post), I’d like to start by telling you about a small thing I helped out with last week. Peter Boersma1 asked me to help out with one of his UX Cocktail Hours. He was inspired by a recent IxDA Studio event where, instead of just chatting and drinking, designers actually made stuff. (Gasp!) Peter wanted to do a workshop where attendees collaborated on sketching a solution to a given design problem.
Part of my contribution to the evening was a short presentation on the theory and practice of sketching. On the theory side, I referenced Bill Buxton’s list of qualities that define what a sketch is2, and emphasized that this means a sketch can be done in any material, not necessarily pencil and paper. Furthermore, I discussed why sketching works, using part of an article on embodied interaction3. The main point there, as far as I am concerned, is that when sketching, as designers we have the benefit of ‘backtalk’ from our materials, which can provide us with new insights. I wrapped up the presentation with a case study of a project I did a while back with the Amsterdam-based agency Info.nl4 for a social web start-up aimed at independent professionals. In that project I went quite far in using sketches not only to develop the design, but also to collaboratively construct it with the client, technologists and others.
The whole thing was recorded; you can find a video of the talk at Vimeo (thanks to Iskander and Alper). I also uploaded the slides to SlideShare (sans notes).
The second, and most interesting, part of the evening was the workshop itself. This was set up as follows: Peter and I had prepared a fictional case concerning peer-to-peer energy. We used the Dutch company Qurrent as an example, and asked the participants to conceptualise a way to encourage use of Qurrent’s product range. The aim was to have people be more energy efficient, and share surplus energy they had generated with the Qurrent community. The participants split up into teams of around ten people each, and went to work. We gave them around one hour to design a solution, using only pen and paper. Afterwards, they presented the outcome of their work to each other. For each team, we asked one participant to critique the work by mentioning one thing he or she liked, and one thing that could be improved. The team was then given a chance to reply. We also asked each team to briefly reflect on their working process. At the end of the evening everyone was given a chance to vote for their favourite design. The winner received a prize.5
Wrapping up, I think what I liked most about the workshop was seeing the many different ways the teams approached the problem (many of the participants did not know each other beforehand). Group dynamics varied hugely. I think it was valuable to have each team share their experiences on this front with each other. One thing that I think we could improve was the case itself; next time I would like to provide participants with a more focused, more richly detailed briefing for them to sink their teeth into. That might result in an assignment that is more about structure and behaviour (or even interface) and less about concepts and values. It would be good to see how sketching functions in such a context.
the Netherlands’ tallest IA and one of several famous Peters who work in UX [↩]
I think it’s interesting to note that the winner had a remarkable concept, but in my opinion was not the best example of the power of sketching. Apparently the audience valued product over process. [↩]
Subscribers to my Flickr stream have probably noticed a number of images of some kind of map flowing past lately. They were the result of me tracking my progress on a pet project. I have more or less finished work on it this week, so I thought I’d detail what I did over here.
Background
Following my Twitter dataviz sketches, I thought I’d take another stab at prototyping with Processing. On the one hand I wanted to increase my familiarity with the environment. On the other, I continued to be fascinated with data-visualization, so I wanted to do another design exercise in this domain. I was particularly interested in creating displays that assist in decision making and present data in a way that allows people to ‘play’ with it — explore it and learn from it.
The seed for this thing was planted when I saw Stamen’s work on the mySociety travel-time maps. I thought the idea of visually overlaying two datasets and allowing the intersection to be manipulated by people was simple but powerful. But, at that time, I saw no way to ‘easily’ try my hand at something similar. I had no ready access to any potentially interesting data, and my scraping skills are limited at best.
Luckily, I was not the only one whose curiosity was piqued. After seeing Ben Cerveny demoing the same maps at The Web and Beyond 2008, Alper wondered how hard it would be to create something similar for the Netherlands. He presented a way to do it with freely available tools and data (to an extent) in a workshop at a local unconference.1
I did not attend the event, but after seeing his blog post, I sent him an email and asked if he was willing to part with the data he had collected from the Dutch public transport travel planning site 9292. Alper being the nice guy he is, he soon emailed me a JSON file containing the data.
So that’s the background. I had an example, I had some data, and I had a little experience with making things in Processing.
JSON
The first step was to read the data in the JSON file from Processing. I followed the instructions on how to get the JSON library into Processing from Ben Fry’s book (pages 315–316). On the Processing boards, a cursory search unearthed some code examples. After a little fiddling, I got it to work and could print the data to Processing’s console.
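For what it’s worth, here is a minimal sketch of that step as you would write it today, using the JSON functions that now ship with Processing (at the time this went through the org.json library from Visualizing Data). The file name and field names are made up; I no longer have the original structure at hand.

```processing
// Minimal sketch of reading travel-time data from a JSON file and
// printing it to the console. "traveltimes.json" and the field names
// ("codes", "code", "minutes") are placeholders, not the real structure.
void setup() {
  JSONObject json = loadJSONObject("traveltimes.json");
  JSONArray codes = json.getJSONArray("codes");
  for (int i = 0; i < codes.size(); i++) {
    JSONObject pc = codes.getJSONObject(i);
    println(pc.getString("code") + ": " + pc.getFloat("minutes") + " min");
  }
}
```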
Plotting
Next up was to start visualizing it. I used the examples of scatterplot maps in Visualizing Data as a starting point, and plugged in the JSON data. Pretty soon, I had a nice plot of the postal codes that actually resembled the Netherlands.
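In essence this step is two map() calls: longitude to x, latitude to y (flipped, so north points up). A minimal sketch of the idea, with a rough bounding box for the Netherlands and a single hard-coded point standing in for the real data.

```processing
// Sketch of the scatterplot step: map lon/lat to screen coordinates.
// The bounding-box values are rough numbers for the Netherlands.
float minLon = 3.3, maxLon = 7.3, minLat = 50.7, maxLat = 53.6;

void setup() {
  size(400, 500);
  background(255);
  noStroke();
  fill(0);
  // in the real sketch these came from the JSON data; one example point here
  float lon = 4.9, lat = 52.37;   // roughly Amsterdam
  float x = map(lon, minLon, maxLon, 20, width - 20);
  float y = map(lat, minLat, maxLat, height - 20, 20); // flipped: north is up
  ellipse(x, y, 3, 3);
}
```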
Coloring
From there, it was rather easy to show each postal code’s travel time.2 I simply mapped travel times to a hue in the HSB spectrum. The result nicely shows colored bands of travel-time regions and also allows you to pick out some interesting outliers (such as Groningen in the north).
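The mapping itself is a single map() call once the sketch is switched to HSB colour mode. A small illustration; the particular hue range here (green for short trips, red for long ones) is my choice for this example, not necessarily the one I used then.

```processing
// Sketch of the coloring step: map a travel time in minutes to a hue.
color travelTimeColor(float minutes, float minTime, float maxTime) {
  float hue = map(minutes, minTime, maxTime, 140, 0); // green -> red
  return color(hue, 80, 90);
}

void setup() {
  size(300, 60);
  colorMode(HSB, 360, 100, 100);
  // draw a quick swatch strip running from 10 to 200 minutes
  for (int i = 0; i < width; i++) {
    stroke(travelTimeColor(map(i, 0, width, 10, 200), 10, 200));
    line(i, 0, i, height);
  }
}
```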
Selecting
At this point, I wanted to be able to select travel-time ranges and hide postal codes outside of that range. Initially, I used the keyboard for input. This was OK for this stage of the project, but of course it would need to be replaced with something more intuitive later on. In any case, I could highlight selected points and dim others, which increased the display’s explorability considerably.
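In sketch form, that first keyboard-driven selection amounted to little more than a lower and upper bound plus a check per point. Something along these lines, with the key bindings and step size picked arbitrarily for illustration.

```processing
// Sketch of the first selection mechanism: adjust the travel-time range
// with the arrow keys and dim everything outside it.
float lower = 0, upper = 120;   // selected range in minutes

boolean inRange(float minutes) {
  return minutes >= lower && minutes <= upper;
}

void draw() {
  background(255);
  // for each postal code: draw it fully colored if inRange(minutes),
  // otherwise draw it in a dimmed gray
}

void keyPressed() {
  if (key == CODED) {
    if (keyCode == UP)    upper += 5;
    if (keyCode == DOWN)  upper -= 5;
    if (keyCode == RIGHT) lower += 5;
    if (keyCode == LEFT)  lower -= 5;
  }
}
```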
Coloring, again
The HSB spectrum is a quick and easy way of getting access to a full range of colors. It served me well in my Twitter visualizations. However, in this case it left something to be desired, aesthetically speaking. Via Tom Carden I found the wonderful cpt-city, which catalogues gradients for cartography and the like. Initially I struggled with ways to get these colors into Processing, but then it turned out you can easily read out the colors of pixels from images. This allowed me to cycle through many palettes just by adding the files to my Processing sketch. I discovered that a palette with a clear division in the middle worked best, because it provides you with an extra reference point besides the beginning and end.
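Reading a palette out of an image is essentially a single PImage.get() call. A minimal sketch of the idea, with ‘palette.png’ standing in for whichever cpt-city gradient strip you drop into the sketch’s data folder.

```processing
// Reading a cpt-city style palette straight from an image: load the
// gradient as a PImage and pick the pixel that corresponds to the value.
// "palette.png" is a placeholder name for a horizontal gradient strip.
PImage palette;

void setup() {
  size(300, 60);
  palette = loadImage("palette.png");
}

color paletteColor(float minutes, float minTime, float maxTime) {
  int x = int(map(minutes, minTime, maxTime, 0, palette.width - 1));
  return palette.get(constrain(x, 0, palette.width - 1), 0);
}

void draw() {
  // redraw the palette as a strip, as a quick sanity check
  for (int i = 0; i < width; i++) {
    stroke(paletteColor(map(i, 0, width, 0, 200), 0, 200));
    line(i, 0, i, height);
  }
}
```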
Selecting, again
I next turned to the interaction bits. I knew I wanted a so-called dual slider that would allow people to select the upper and lower limit of travel time. In the Processing book, there is code for plenty of interface widgets, but sadly no dual slider. I looked around on the Processing board and could find none either, to my surprise. Even in the UI libraries (such as controlP5 and Interfascia) I could not locate one.
So I decided to lower the bar and first include two horizontal sliders, one for the upper and one for the lower limit. These I made using the code on pages 448–452 of the Processing book. Not perfect, but an improvement over the keyboard controls.
Selecting, yet again
Next, I decided I’d see if I could modify the code of the horizontal scrollbar so that I would end up with a dual slider. After some messing about (which did increase my understanding of the original code considerably) I managed to get it to work. This was an unexpected success. I now had a decent dual slider.
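For reference, here is roughly what a minimal dual slider in Processing can look like. This is a reconstruction written for these notes, not the code I derived from the book’s scrollbar, and the layout numbers are arbitrary.

```processing
// A stripped-down dual slider: two draggable handles on one track,
// returning a lower and an upper value.
float trackX = 40, trackY = 60, trackW = 320;
float loPos = 0.2, hiPos = 0.8;   // handle positions, 0..1
int dragging = 0;                 // 0 = none, 1 = low handle, 2 = high handle

void setup() {
  size(400, 120);
}

void draw() {
  background(240);
  stroke(120);
  line(trackX, trackY, trackX + trackW, trackY);
  noStroke();
  fill(dragging == 1 ? color(200, 0, 0) : color(60));
  ellipse(trackX + loPos * trackW, trackY, 14, 14);
  fill(dragging == 2 ? color(200, 0, 0) : color(60));
  ellipse(trackX + hiPos * trackW, trackY, 14, 14);
  fill(0);
  text(int(lowValue(0, 240)) + " - " + int(highValue(0, 240)) + " min",
       trackX, trackY + 30);
}

float lowValue(float minVal, float maxVal)  { return lerp(minVal, maxVal, loPos); }
float highValue(float minVal, float maxVal) { return lerp(minVal, maxVal, hiPos); }

void mousePressed() {
  // grab whichever handle is closest to the mouse
  float loX = trackX + loPos * trackW;
  float hiX = trackX + hiPos * trackW;
  dragging = (abs(mouseX - loX) < abs(mouseX - hiX)) ? 1 : 2;
}

void mouseDragged() {
  float p = constrain((mouseX - trackX) / trackW, 0, 1);
  if (dragging == 1) loPos = min(p, hiPos);
  if (dragging == 2) hiPos = max(p, loPos);
}

void mouseReleased() {
  dragging = 0;
}
```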
Exploring
So far there was no way of telling which point corresponded to which postal code. So, I added a rollover that displayed the postal code’s name and travel time. At this point it became clear the data wasn’t perfect — some postal codes were erroneously geocoded by GeoNames. For instance, code 9843 (which is Grijpskerk, 199 minutes to the Dam) was placed on the map as Amsterdam Noord-Oost!
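The rollover itself is a simple nearest-point check against the mouse position. A stripped-down version with a couple of hard-coded example points, where the real sketch of course looped over the full postal-code set.

```processing
// Sketch of the rollover: label the point under the mouse with its
// postal code and travel time. Data here is two made-up example points.
String[] codes   = { "1012", "9843" };
float[]  xs      = { 120, 260 };
float[]  ys      = { 180, 60 };
float[]  minutes = { 12, 199 };

void setup() {
  size(400, 300);
}

void draw() {
  background(255);
  noStroke();
  fill(60);
  for (int i = 0; i < codes.length; i++) {
    ellipse(xs[i], ys[i], 6, 6);
    if (dist(mouseX, mouseY, xs[i], ys[i]) < 5) {
      fill(0);
      text(codes[i] + " - " + int(minutes[i]) + " min", mouseX + 8, mouseY - 8);
      fill(60);
    }
  }
}
```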
Adding more data
Around this point I visited Alper in Delft and we discussed adding a second dataset. Although housing prices à la mySociety would have been interesting, we decided to take a different route and add a second travel-time set for cars.3 My first step in integrating this was to simply generate a map each for the public transport and car travel data and manually juxtapose them. What I liked about this was that even though you know intuitively that traveling by car is faster, the two maps next to each other provide a dramatic visual confirmation of this piece of knowledge.
Representing
Moving ahead with the extra data, I started to struggle with how to represent both travel times. My first effort was to draw two sets of dots on top of each other (one for car travel times and one for public transport) and color each accordingly. For each set I introduced a separate slider. I wasn’t very satisfied with the result; it did not do much to help you understand what was going on.
Showing differences
After discussions with Alper and several other people, I decided it would make more sense to show the difference between travel times. So I calculated the percentage difference between public transport and car travel time for each postal code. This value I mapped to a color. Here, a simple gradient worked better than the palettes used earlier for travel times.
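The measure itself is simple: (public transport time − car time) / car time × 100, so 60 minutes by public transport against 40 by car works out to 50% slower. A small sketch of that calculation mapped onto a two-colour gradient with lerpColor(); the clamp to ±100% and the specific colours are my own simplifications for this example.

```processing
// Difference measure: how much slower (or faster) public transport is
// than the car, as a percentage, mapped onto a simple gradient.
color differenceColor(float ovMinutes, float carMinutes) {
  float diff = (ovMinutes - carMinutes) / carMinutes * 100.0; // % slower by public transport
  float t = constrain(map(diff, -100, 100, 0, 1), 0, 1);
  return lerpColor(color(0, 100, 255), color(255, 60, 0), t);  // blue = faster, red = slower
}
```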
I also discarded the idea of having two dual sliders and simply went with one travel time selector. Although more user-friendly, it created a new problem: for some points both travel times would fall within the selected range, and for others one or the other. So I needed an extra visual dimension to show this. This turned out to be the greatest challenge.
After trying many approaches, I eventually settled on using the shape of the point to show which travel times fall within the range. A small dot means that only the public transport travel time is within the range, a donut means only the car travel time is selected, and a big dot represents selection of both.
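In code, the encoding is just a branch on which of the two times falls inside the selected range. A sketch of the drawing routine, with sizes picked for illustration.

```processing
// Shape encoding: big dot = both travel times in range,
// small dot = only public transport, donut = only car.
void drawPoint(float x, float y, boolean ovInRange, boolean carInRange) {
  if (ovInRange && carInRange) {
    noStroke();
    fill(60);
    ellipse(x, y, 8, 8);      // big dot: both selected
  } else if (ovInRange) {
    noStroke();
    fill(60);
    ellipse(x, y, 4, 4);      // small dot: public transport only
  } else if (carInRange) {
    noFill();
    stroke(60);
    ellipse(x, y, 8, 8);      // donut: car only
  }
  // neither in range: draw nothing (or a dimmed point)
}
```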
Final tweaks
Around this point I felt it was time to wrap up. I had learnt about all I could from the exercise, and any extra time spent on the project would result in marginal improvements at best. I added a legend for both the shapes and the colors, improved the legibility of the rollover, increased the visual affordance of the slider, and that was it.
Thoughts
It is becoming apparent to me that the act of building displays like this is playful in its own way. Through sketching in code, you can have something like a conversation with the data and get a sense of what’s there. Perhaps the end result is merely a byproduct of this process?
I’m amazed at how far a novice programmer like myself, with a dramatic lack of affinity for anything related to mathematics or physics, can get by simply modifying, augmenting and combining code that is already out there. I have no ambition whatsoever of becoming a professional developer of production-quality code. But building a collection of bits and pieces of code that can do useful and interesting things seems like a good strategy for any designer. I am learning to trust my innate reluctance to code stuff from scratch.
Also, isn’t it cool that it is becoming increasingly feasible for regular citizens to start analyzing data that is — or at least should be — publicly available? Government still has a long way to go. Why do we need to go through the painstaking process of scraping this data from sources such as 9292, which for all intents and purposes is a public service?4
I will probably make the final prototype available online at some point in the future. For now, if you have any questions or comments I would love to hear them here, or via email.
Update: Alper has released a JSON file containing all the data I used to make this. Go on and grab it, and make some displays of your own!
And another update: I’ve decided to make this application available for download, including source files.
Sketching is the defining activity of design, writes Buxton, and I tend to agree. The genius of his book is that he shows sketching can take on many forms. It is not limited to working with pencils and paper. You can sketch in 3D using wood or clay. You can sketch in time using video, etc. Buxton does not include many examples of sketching in code, though.1 Programming in any language tends to be a hard-earned skill, he writes, and once you have achieved sufficient mastery of it, you tend to try to solve all problems with this one tool. Good designers can draw on a broad range of sketching techniques and pick the right one for a given situation. This might include programming, but then it would need to conform to Buxton’s defining characteristics of sketching: quick, inexpensive, disposable and plentiful, offering minimal detail, and suggesting and exploring rather than confirming.
I have been spending some time broadening my sketching repertoire as a designer. Before I started interaction design I was mostly into visual arts (drawing, painting, comics) so I am quite comfortable sketching in 2D, using storyboards, etc.2 Sketching in code though, has always been a weak spot. I have started to remedy this by looking into Processing.
As an exercise I took some data from Twitter — one data set was the 20 most recent tweets and the other my friends list — and decided to see how quickly I could create a few different visualizations of that data. The end results were:
one: a timeline that spatially plots the latest tweets from my friends — showing density at certain points in time; or how ‘noisy’ it is on my Twitter stream,
two: an ordering of friends based on the percentage of their tweets that take up my timeline — who’s the loudest of my friends?,
three: a graph of my friends list, with number of friends and followers on the axes and their total number of tweets mapped to the size of each point.
The aim was not to come up with groundbreaking solutions, or finished applications.3 The goal was to exercise this idea of sketching in code and use it to get a feel for a ‘complex’ data set, iterating on many different ways to show the data before committing to one solution. In a real-world project I could see myself as a designer doing this and then collaborating with a ‘proper’ programmer to develop the final solution (which would most likely be interactive). I would choose different sketching techniques to design the interactive aspects of a data-visualization. For now I am content with Processing sketches that simply output a static image.
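To give a flavour of what such a sketch amounts to, here is a stripped-down version of the first one: tweets plotted on a horizontal timeline so that busy moments show up as dense clumps. The timestamps are faked here; in the actual sketch they came from the Twitter API via Twitter4J.

```processing
// Minimal timeline sketch: each tweet is a translucent vertical line,
// so overlaps read as density. Timestamps below are randomly generated
// stand-ins for real tweet times.
long[] tweetTimes;   // timestamps in milliseconds

void setup() {
  size(600, 100);
  background(255);
  stroke(0, 60);     // translucent so clusters get darker
  long now = System.currentTimeMillis();
  long earliest = now - 6 * 60 * 60 * 1000;   // six hours ago
  tweetTimes = new long[20];
  for (int i = 0; i < tweetTimes.length; i++) {
    tweetTimes[i] = now - (long) random(6 * 60 * 60 * 1000);
  }
  for (int i = 0; i < tweetTimes.length; i++) {
    float x = map(tweetTimes[i], earliest, now, 20, width - 20);
    line(x, 20, x, height - 20);
  }
}
```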
Tools & resources used were:
Processing — a free and open source programming environment for visual folks
The Twitter4J library for working with the Twitter API
If as a designer you are confronted with a project that involves making a large amount of data understandable, sketching in code can help. You can use it to ‘talk’ to the data, and get a sense of its ‘shape’.
There is one involving Phidgets and Max/MSP, a visual programming solution for physical computing. [↩]
That’s what I always say when I’m playing games, too. [↩]
I really liked Bill Buxton’s book Sketching User Experiences. I like it because Buxton defends design as a legitimate profession separate from other disciplines—such as engineering—while at the same time showing that designers (no matter how brilliant) can only succeed in the right ecosystem. I also like the fact that he identifies sketching (in its many forms) as a defining activity of the design profession. The many examples he shows are very inspiring.
One in particular stood out for me: the project Sketch-A-Move, by Anab Jain and Louise Klinker, done in 2004 at the RCA in London. The image above is taken from the video they created to illustrate their concept. It’s about cars auto-magically driving along trajectories that you draw on their roof. You can watch the video over at the book’s companion website. It’s a very good example of visualizing an interactive product in a very compelling way without actually building it. This was all faked; if you want to find out how, buy the book.2
The great thing about the video is not only does it illustrate how the concept works, it also gives you a sense of what the experience of using it would be like. As Buxton writes:3
“You see, toys are not about toys. Toys are about play and the experience of fun that they help foster. And that is what this video really shows. That, and the power of video to go beyond simply documenting a concept to communicating something about experience in a very visceral way.”
Not only does it communicate the fun you would have playing with it, I think this way of sketching actually helped the designers get a sense themselves of whether what they had come up with was fun. You can tell they are actually playing, being surprised by unexpected outcomes, etc.
The role of play in design is discussed by Buxton as well, although he admits he needed to be prompted to do so by a friend of his: Alex Manu, a teacher at OCAD in Toronto, who writes in an email to Buxton:4
“Without play imagination dies.”
“Challenges to imagination are the keys to creativity. The skill of retrieving imagination resides in the mastery of play. The ecology of play is the ecology of the possible. Possibility incubates creativity.”
Which Buxton rephrases in one of his own personal mantras:5
“These things are far too important to take seriously.”
All of which has made me realize that if I’m not having some sort of fun while designing, I’m doing something wrong. It might be worth considering switching from one sketching technique to another. It might help me get a different perspective on the problem, and yield new possible solutions. Buxton’s book is a treasure trove of sketching techniques. There is no excuse for being bored while designing anymore.
I think it was around half a year ago that I wrote “UX designers should get into everyware”. Back then I did not expect to be part of a ubicomp project anytime soon. But here I am now, writing about work I did in the area of multi-touch interfaces.
Background
The people at InUse (Sweden’s premier interaction design consultancy firm) asked me to assist them with visualising potential uses of multi-touch technology in the context of a gated community. That’s right—an actual real-world physical real-estate development project. How cool is that?
This residential community is aimed at well-to-do seniors. As with most gated communities, it offers them convenience, security and prestige. You might shudder at the thought of living in one of these places (I know I have my reservations) but there’s not much use in judging people wanting to do so. Planned amenities include sports facilities, fine dining, onsite medical care, a cinema and on and on…
Social capital
One of the known issues with these ‘communities’ is that there’s not much evidence of social capital being higher there than in any regular neighbourhood. In fact, some have argued that the global trend of gated communities is detrimental to the build-up of social capital in their surroundings. They throw up physical barriers that prevent the free interaction of people. These are some of the things I tried to address: to see if we could support the emergence of community inside the residence using social tools, while at the same time counteracting the physical barriers to the outside world with “virtual inroads” that allow for free interaction between residents and people in the periphery.
Being in the world
Another concern I tried to address is the different ways multi-touch interfaces can play a role in the lives of people. Recently Matt Jones addressed this in a post on the iPhone and Nokia’s upcoming multi-touch phones. In a community like the one I was designing for, the worst thing I could do is make every instance of multi-touch technology an attention-grabbing presence demanding full immersion from its user. In many cases ‘my’ users would be better served with them behaving in an unobtrusive way, allowing almost unconscious use. In other words: I tried to balance being in the world with being in the screen—applying each paradigm based on how appropriate it was given the user’s context. (After all, sometimes people want or even need to be immersed.)
Process
InUse had already prepared several personas representative of the future residents of the community. We went through them together and examined each for scenarios that would make good candidates for storyboarding. We wanted to come up with a range of scenarios that not only showed how these personas could be supported with multi-touch interfaces, but also illustrated the different spaces the interactions could take place in (private, semi-private and public) and the scales at which the technology can operate (from small key-like tokens to full wall-screens).
I drafted each scenario as a textual outline and sketched the potential storyboards at thumbnail size. We went over those in a second workshop and refined them, making adjustments to better cover the concerns outlined above as well as to improve clarity. We wanted to end up with a set of storyboards that could be used in a presentation for the client (the real-estate development firm), so we needed to balance user goals with business objectives. To that end we thought about and included examples of API-like integration of the platform with service providers in the periphery of the community. We also tried to create self-service experiences that would feel like being waited on by a personal butler.
Outcome
I ended up drawing three scenarios of around 9 panels each, digitising and cleaning them up on my Mac. Each scenario introduces a persona, the physical context of the interaction and the persona’s motivation that drives him to engage with the technology. The interactions visualised are a mix of gestures and engagements with multi-touch screens of different sizes. Usually the persona is supported in some way by a social dimension—fostering serendipity and emergence of real relations.
All in all I have to say I am pretty pleased with the result of this short but sweet engagement. Collaboration with the people of InUse was smooth (as was expected, since we are very much the same kind of animal) and there will be follow-up workshops with the client. It remains to be seen how much of this multi-touch stuff will find its way into the final gated community. That as always will depend on what makes business sense.
In any case it was a great opportunity for me to immerse myself fully in the interrelated topics of multi-touch, gesture, urbanism and sociality. And finally, it gave me the perfect excuse to sit down and do lots and lots of drawings.
Last Sunday I sat down and coded a prototype of a casual game in Mobile Processing. I got the idea for it the evening before: you’re a bee who needs to collect as much honey as possible in his hive while at the same time keeping a flower-bed blooming by pollinating… Play it and let me know what your high score is in the comments!
Thinking and making
I’ve been looking for an excuse to get some experience with Processing (particularly the variant suitable for developing mobile stuff) for a while. I also felt I needed to get back into the making part of the field I’ve been thinking about so much lately: Game Design. I agree with Saffer, Webb and others — making is an important part of design practice; it cannot be replaced by lots of thinking. The things learnt from engaging with the actual stuff things are made of (which in the case of digital games is code) aren’t gained in any other way and are very valuable.
Get the game
I’ve uploaded the first version of the game here. You can play it in the emulator in your browser or if your phone runs Java midlets, download the file and play it like you’re supposed to: While out and about. The source code is provided as well, if you feel like looking at it.1
How to play
You’re the yellow oval. The orange triangle in the top left corner is your hive. Green squares are grass, brown squares are seeds, red squares are flowers and pink squares are pollinated flowers. The field is updated in columns from left to right (indicated by the yellow marker at the bottom). A seed will turn into a flower (in rare cases a pollinated flower). A flower will die; a pollinated flower will die and spread seeds to the grass around it. Move your bee with the directional keys and use the centre key to grab nectar from a flower. You can carry a maximum of 100 nectar. Drop your nectar off at the hive (again using the centre key) to up your score. When you first grab nectar from a pollinated flower and subsequently from a normal flower, the latter is pollinated. Try to keep the flower-bed in bloom while at the same time racking up a high score!
You’ll get 10 nectar from a flower (in bloom or not). Pollinating a flower costs 5 nectar. If you try to take nectar more than once from the same flower, you’ll lose 10 nectar.2
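For the curious, the field update described above boils down to stepping one column of a small grid per frame. The sketch below is a toy reconstruction in desktop Processing rather than the Mobile Processing source linked above, and the probabilities are guesses, not the values in the actual game.

```processing
// Toy reconstruction of the column-wise field update: one column is
// stepped per frame, left to right, then it wraps around.
int GRASS = 0, SEED = 1, FLOWER = 2, POLLINATED = 3;
int cols = 12, rows = 8;
int[][] field = new int[cols][rows];
int currentCol = 0;

void setup() {
  size(240, 160);
  frameRate(4);
  for (int i = 0; i < 10; i++) {
    field[int(random(cols))][int(random(rows))] = SEED;  // a few starting seeds
  }
}

void updateColumn(int c) {
  for (int r = 0; r < rows; r++) {
    switch (field[c][r]) {
      case 1: // SEED: usually becomes a flower, rarely a pollinated one
        field[c][r] = (random(1) < 0.1) ? POLLINATED : FLOWER;
        break;
      case 2: // FLOWER: simply dies
        field[c][r] = GRASS;
        break;
      case 3: // POLLINATED: dies, but spreads seeds to nearby grass first
        field[c][r] = GRASS;
        scatterSeeds(c, r);
        break;
    }
  }
}

void scatterSeeds(int c, int r) {
  for (int dc = -1; dc <= 1; dc++) {
    for (int dr = -1; dr <= 1; dr++) {
      int nc = c + dc, nr = r + dr;
      if (nc >= 0 && nc < cols && nr >= 0 && nr < rows
          && field[nc][nr] == GRASS && random(1) < 0.5) {
        field[nc][nr] = SEED;
      }
    }
  }
}

void draw() {
  updateColumn(currentCol);
  currentCol = (currentCol + 1) % cols;   // wrap back to the left edge
  noStroke();
  for (int c = 0; c < cols; c++) {
    for (int r = 0; r < rows; r++) {
      if (field[c][r] == GRASS)      fill(0, 160, 0);
      if (field[c][r] == SEED)       fill(140, 100, 40);
      if (field[c][r] == FLOWER)     fill(220, 0, 0);
      if (field[c][r] == POLLINATED) fill(255, 150, 200);
      rect(c * 20, r * 20, 20, 20);
    }
  }
}
```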
Improvements
Stuff not in here that I might put into a next version (whenever I get around to it):
Animation — I need to get my feet wet with some scripted animation. Thing is I’ve always sucked at this. For now it’s all tile-based stuff.
Better feedback — For instance show the points you earn near the bee and the hive. I think that’ll make the game a lot easier to understand and therefore more fun.
Menus, pause, game over — It’s a prototype, so you get dumped into the action right away. (The game starts on the first key you press.) And there’s no actual game over message, the field just turns green and you’re left to wonder what to do.
Balance — I’m not sure if the game as it stands is balanced right; I will need to play it a lot to figure that out. Also, there’s probably a dominant strategy that’ll let you rack up points easily.
The aim was to create a relatively casual game experience that will almost allow you to zone out while playing. I think it is far too twitchy now, so perhaps I really should sit down and do a second version sometime soon.
Mobile Processing
I enjoy working with Mobile Processing. I like the way it allows you to program in a very naive way, but also, if you like, to structure things in a more sophisticated fashion. It really does allow you to sketch in code, which is exactly what I need. The emphasis on just code also prevents me from fiddling around with animations, graphics and so on (like I would in Flash, for instance). Perhaps the only thing that would be nice is an editor that is a bit more full-featured.3 Perhaps I should grab an external editor next time?
Feedback
If you played the game and liked it (or thought it was too hard, boring or whatever) I’d love to get your feedback in the comments. Anyone else out there prototyping games in Processing? Or using it to teach game design? I’d be very interested to hear about it.
Not that it’s particularly good, I’m an amateur coder at best. [↩]
I’m not sure this is the right kind of negative reinforcement. [↩]
The automatic code formatting refused to work for me, requiring me to spend a bit too much effort on formatting by hand. [↩]