Designing Learning Experiences in a Post-ChatGPT World

Transcript of a talk delivered at LXDCON’25 on June 12, 2025.

My name is Kars. I am a postdoc at TU Delft. I research contestable AI—how to use design to ensure AI systems remain subject to societal control. I teach the responsible design of AI systems. In a previous life, I was a practicing designer of digital products and services. I will talk about designing learning experiences in a post-ChatGPT world.

Let’s start at this date.

This is when OpenAI released an early demo of ChatGPT. The chatbot quickly went viral on social media. Users shared examples of what it could do. Stories and samples included everything from travel planning to writing fables to coding computer programs. Within five days, the chatbot had attracted over one million users.

Fast forward to today, 2 years, 6 months, and 13 days later, and we’ve seen a massive impact across domains, including on education.

For example, the article on the left talks about how AI cheating has become pervasive in higher education. It is fundamentally undermining the educational process itself. Students are using ChatGPT for nearly every assignment while educators struggle with ineffective detection methods and question whether traditional academic work has lost all meaning.

The one on the right talks about how students are accusing professors of being hypocritical. Teachers are using AI tools for things like course materials and grading while telling students they cannot use them.

What we’re looking at is a situation where academic integrity was already in question; on top of that, both students and faculty are quickly adopting AI, and institutions aren’t really ready for it.

These transformations in higher education give me pause. What should we change about how we design learning experiences given this new reality?

So, just to clarify, when I mention “AI” in this talk, I’m specifically referring to generative AI, or GenAI, and even more specifically, to chatbots that are powered by large language models, like ChatGPT.

Throughout this talk I will use this example of a learning experience that makes use of GenAI. Sharad Goel, Professor at Harvard Kennedy School, developed an AI Slackbot named “StatGPT” that aims to enhance student learning through interactive engagement.

It was tested in a statistics course with positive feedback from students, who described it as supportive and easily accessible, available anytime. There are plans to implement StatGPT in various other courses. Its developers say it assists in active problem-solving and consider it an example of how AI can facilitate learning rather than replace it.

The debate around GenAI and learning has become polarized. I see the challenge as trying to find a balance. On one side, there’s complete skepticism about AI, and on the other, there’s this blind acceptance of it. What I propose is that we need an approach I call Conscious Adaptation: moving forward with full awareness of what’s being transformed.

To build the case for this approach, I will be looking at two common positions in the debates around AI and education. I’ll be focusing on four pieces of writing.

Two of them are by Ethan Mollick, from his blog. He’s a professor at the University of Pennsylvania specializing in innovation and entrepreneurship, known for his work on the potential of AI to transform different fields.

The other two pieces are by Ian Bogost, published at The Atlantic. He’s a media studies scholar, author, and game designer who teaches at Washington University. He’s known for his sobering, realist critiques of the impact of technology on society.

These, to me, exemplify two strands of the debate around AI in education.

Ethan Mollick’s position, in essence, is that AI in education is an inevitable transformation that educators must embrace and redesign around, not fight.

You could say Mollick is an optimist. But he is also really clear-eyed about how much disruption is going on. He even refers to it as the “Homework Apocalypse.” He talks about some serious issues: there are failures in detection, students are not learning as well (with exam performance dropping by about 17%), and there are a lot of misunderstandings about AI on both sides—students and faculty.

But his perspective is more about adapting to a tough situation. He’s always focused on solutions, constantly asking, “What can we do about this?” He believes that with thoughtful human efforts, we can really influence the outcomes positively.

On the other hand, Ian Bogost’s view is that AI has created an unsolvable crisis that’s fundamentally breaking traditional education and leaving teachers demoralized.

Bogost, I would describe as a realist. He accepts the inevitability of AI, noting that the “arms race will continue” and that technology will often outpace official policies. He also highlights the negative impact on faculty morale, the dependency of students, and the chaos in institutions.

He’s not suggesting that we should ban AI or go back to a time before it existed. He sees AI as something that might be the final blow to a profession that’s already struggling with deeper issues. At the same time, he emphasizes the need for human agency by calling out the lack of reflection and action from institutions.

So, they both observe the same reality, but they look at it differently. Mollick sees it as an engineering challenge—one that’s complicated but can be tackled with smart design. On the other hand, Bogost views it as a social issue that uncovers deeper problems that can’t just be fixed with technology.

Mollick thinks it’s possible to rebuild after a sort of collapse, while Bogost questions if the institutions that are supposed to do that rebuilding are really fit for the job.

Back to Harvard’s StatGPT for a moment: Mollick would likely celebrate it as an example of co-intelligence. Bogost would likely ask what the bot’s rollout comes at the expense of, or what deeper problems its deployment unveils.

Getting past the conflict between these two views isn’t just about figuring out the best technical methods or the right order of solutions. The real challenge lies in our ability as institutions to make real changes, and we need to be careful that focusing on solutions doesn’t distract us from the important discussions we need to have.

I see three strategies that work together to create an approach that addresses the conflict between these two perspectives in a way that I believe will be more effective.

First, institutional realism is about designing interventions assuming institutions will resist change, capture innovations, or abandon initiatives. Given this, we could focus on individual teacher practices, learner-level tools, and changes that don’t require systemic transformation. We could treat every implementation as a diagnostic probe revealing actual (vs. stated) institutional capacity.

Second, loss-conscious innovation is about explicitly identifying, before implementing AI-enhanced practices, what human learning processes, relationships, or skills are being replaced. We could develop metrics that track preservation alongside progress. We could build “conservation” components into new approaches to protect irreplaceable educational values.

Third, and finally, we should recognize that Mollick-style solution-building and Bogost-style critical analysis serve different but essential roles. Practitioners need actionable guidance, while the broader field needs diagnostic consciousness. We should avoid a false synthesis and instead maintain both approaches as distinct intellectual work, each informing the other.

In short, striking a balance may not be the main focus; it’s more about taking practical actions while considering the overall context. Progress is important, but it’s also worth reflecting on what gets left behind. Conscious adaptation.

So, applying these strategies to Harvard’s chatbot, we could ask: (1) How can we create a feedback loop between an intervention like this and the things it uncovers about institutional limits, so that those can be addressed in the appropriate place? (2) How can we measure what value this bot adds for students and for teachers? What is it replacing, what is it adding, what is it making room for? (3) What critique of learning at Harvard is implied by this intervention?

What does all of this mean, finally, for LXD? This is an LXD conference, so I don’t need to spend a lot of time explaining what it is. But let’s just use this basic definition as a starting point. It’s about experiences, it’s about centering the learner, it’s about achieving learning outcomes, etc.

Comparing my conscious adaptation approach to what typifies LXD, I can see a number of alignments.

Both LXD and Conscious Adaptation prioritize authentic human engagement over efficiency: LXD through human-centered design, Conscious Adaptation through protecting meaningful intellectual effort from AI displacement.

LXD’s focus on holistic learning journeys aligns with both Mollick’s “effort is the point” and Bogost’s concern that AI shortcuts undermine the educational value embedded in struggle and synthesis.

LXD’s experimental, prototype-driven approach mirrors my “diagnostic pragmatism”—both treat interventions as learning opportunities that reveal what actually works rather than pursuing idealized solutions.

So, going back one final time to Harvard’s bot, an LXD practice aligned in this way would lead us to ask: (1) Is this leveraging GenAI to protect and promote genuine intellectual effort? (2) Are teachers and learners meaningfully engaged in the ongoing development of this technology? (3) Is this prototype properly embedded, so that its potential to create learning for the organization can be realized?

So, where does this leave us as learning experience designers? I see three practical imperatives for Conscious Adaptation.

First, we need to protect meaningful human effort while leveraging AI’s strengths. Remember that “the effort is the point” in learning. Rather than asking “can AI do this?”, we should ask “should it?” Harvard’s bot works because it scaffolds thinking rather than replacing it. We should use AI for feedback and iteration while preserving human work for synthesis and struggle.

Second, we must design for real institutions, not ideal ones. Institutions resist change, capture innovations, and abandon initiatives. We need to design assuming limited budgets, overworked staff, and competing priorities. Every implementation becomes a diagnostic probe that reveals what resistance actually tells us about institutional capacity.

Third, we have to recognize the limits of design. AI exposes deeper structural problems like grade obsession, teacher burnout, and test-driven curricula. You can’t design your way out of systemic issues, and sometimes the best move is recognizing when the problem isn’t experiential at all.

This is Conscious Adaptation—moving forward with eyes wide open.

Thanks.

Postdoc update – July 2025

I am over one year into my postdoc at TU Delft. Where did the time go? By way of an annual report, here’s a rundown of my most notable outputs and activities since the previous update from June 2024. And also, some notes on what I am up to now.

Happenings

Participatory AI and ML Engineering: On 13 February 2024 at a Human Values for Smarter Cities meeting and on 11 June 2024 at a Cities Coalition for Digital Rights meeting, I presented a talk on participatory AI and ML engineering (blogged here). This has since evolved into a study I am currently running with the working title “Vision Model Macroscope.” We are designing, building, and evaluating an interface that allows municipal workers to understand and debate value-laden technical decisions made by machine learning engineers in the construction of camera vehicles. For the design, I am collaborating with CLEVER°FRANKE. The study is part of the Human Values for Smarter Cities project headed up by the Civic Interaction Design group at AUAS.

Envisioning Contestability Loops: My article “Envisioning Contestability Loops: Evaluating the Agonistic Arena as a Generative Metaphor for Public AI” (with Ianus Keller, Mireia Yurrita Semperena, Denis Bulygin, Gerd Kortuem, and Neelke Doorn) was published in She Ji on 17 June 2024. (I had already published the infographic “Contestability Loops for Public AI,” which the article revolves around, on 17 April 2024.) Later in the year, on 5 September 2024, I ran the workshop that the study builds on as a ThingsCon Salon. And on 27 September 2024, I presented the article at Lawtomation Days in Madrid, Spain, as part of the panel “Methods in law and technology research: inter- and cross-disciplinary challenges and opportunities,” chaired by Kostina Prifti (slides). (Also, John Thackara said nice things about the article online.)

Contestability Loops for Public AI infographic
Envisioning Contestability Loops workshop at ThingsCon Salon in progress.

Democratizing AI Through Continuous Adaptability: I presented on “Democratizing AI Through Continuous Adaptability: The Role of DevOps” at the TILTing Perspectives 2024 panel “The mutual shaping of democratic practices & AI,” which was chaired and moderated by Merel Noorman on 14 July 2024. I later reprised this talk at NWO ICT.OPEN on 16 April 2025 as part of the track “Human-Computer Interaction and Societal Impact in the Netherlands,” chaired by Armağan Karahanoğlu and Max Birk (PDF of slides).

From Stem to Stern: I was part of the organizing team of the CSCW 2024 workshop “From Stem to Stern: Contestability Along AI Value Chains,” which took place as a hybrid one-day session on 9 November 2024. I blogged a summary and some takeaways of the workshop here. Shoutout to Agathe Balayn and Yulu Pi for leading this endeavor.

Contestable AI Talks: I was invited to speak on my PhD research at various meetings and events organized by studios, agencies, consultancies, schools, and public sector organizations. On 3 September 2024, at the data design agency CLEVER°FRANKE (slides). On 10 January 2025, at the University of Utrecht Computational Sociology group. On 19 February 2025, at digital ethics consultancy The Green Land (slides). On 6 March 2025, at Communication and Multimedia Design Amsterdam (slides). And on 17 March 2025, at the Advisory Board on Open Government and Information Management.

Designing Responsible AI: Over the course of 2024, Sara Colombo, Francesca Mauri, and I developed and taught for the first time a new Integrated Product Design master’s elective, “Designing Responsible AI” (course description). Later, on 28 March 2025, I was invited by my colleagues Alessandro Bozzon and Carlo van der Valk to give a single-morning interactive lecture on part of the same content in the course AI Products and Services (slides).

Books that represent the range of theory covered in the course “Designing Responsible AI.”

Stop the Cuts: On 2 July 2024, a far-right government was sworn in in the Netherlands (it has since fallen). They intended to cut funding to education by €2 billion. A coalition of researchers, teachers, students, and others organized to protest and strike in response. I was present at several of these actions: The alternative opening of the academic year in Utrecht on 2 September 2024. Local walkouts on 14 November 2024 (I participated in Utrecht). Mass demonstration in The Hague on 25 November 2024. Local actions on 11 December 2024 (I participated in Delft). And finally, for now at least, on 24 April 2025, at the Delft edition of the nationwide relay strike. If you read this, work in academia, and want to act, join a union (I am a member of the AOb), and sign up for the WOinActie newsletter.

End of the march during the 24 April 2025 strike in Delft.

Panels: Over the past months, I was a panelist at several events. On 22 October 2024, at the Design & AI Symposium as part of the panel “Evolving Perspectives on AI and Design,” together with Iohanna Nicenboim and Jesse Benjamin, moderated by Mathias Funk (blog post). On 13 December 2024 at TH/NGS as part of the panel “Rethink Design: Book Launch and Panel Discussion on Designing With AI” chaired by Roy Bendor (video). On 12 March 2025, at the panel “Inclusive AI: Approaches to Digital Inclusion,” chaired by Nazli Cila and Taylor Stone.

Slide I used during my panel contribution at the Design & AI symposium.

Design for Human Autonomy: I was part of several activities organized by the Delft Design for Values institute related to their annual theme of autonomy (led by Michael Klenk). I was a panelist on 15 October 2024 during the kick-off event (blog post). I wrote the section on designing AI for autonomy for the white paper edited by Udo Pesch (preprint). And during the closing symposium, master’s graduation student Ameya Sawant, whom I am coaching (with Fernando Secomandi acting as chair), was honored as a finalist in the thesis competition.

Master Graduation Students: Four master’s students whom I coached during their thesis projects graduated. Between them, their projects explored technology’s role in society through AI-mediated civic engagement, generative AI implementation in public services, experimental approaches to AI trustworthiness, and urban environmental sensing: Nina te Groen (with Achilleas Psyilidis as chair), Romée Postma (with Roy Bendor), Eline Oei (with Giulia Calabretta), and Jim Blom (with Tomasz Jaskiewicz).

Architecting for Contestability: On 22 November 2024, I ran a single-day workshop about contestability for government-employed ICT architects participating in the Digital Design & Architecture course offered by the University of Twente, on invitation from Marijn Janssen (slides).

Qualitative Design Research: On 17 December 2024, I delivered a lecture on qualitative design research for the course Empirical Design Research, on invitation from my colleague Himanshu Verma (slides). Later, on 22 April 2025, I delivered a follow-up in the form of a lecture on reflexive thematic analysis for the course Product Futures Studio, coordinated by Holly McQuillan (slides).

Democratic Generative Things: On 6 June 2025 I joined the ThingsCon unconference to discuss my contribution to the RIOT report, “Embodied AI and collective power: Designing democratic generative things” (preprint). The report was edited by Andrea Krajewski and Iskander Smit.

Me, holding forth during the ThingsCon RIOT unconference.

Learning Experience Design: I delivered the closing invited talk at LXDCON on 12 June 2025, reflecting on the impact of GenAI on the fields of education and design for learning (slides). Many thanks to Niels Floor for the invitation.

People’s Compute: I published a preprint of my position paper “People’s Compute: Design and the Politics of AI Infrastructures” over at OSF on 14 April 2025. I emailed it to peers and received over a dozen encouraging responses. It was also somehow picked up by Evgeny Morozov’s The Syllabus with some nice commentary attached.

On deck

So what am I up to at the moment? Keeping nice and busy.

  • I am co-authoring several articles, papers, and book chapters on topics including workplace automation, AI transparency, contestability in engineering, AI design and regulation, computational argumentation, explainable and participatory AI, and AI infrastructure politics. I do hope at least some of these will see the light of day in the coming months.
  • I am preparing a personal grant application that builds on the vision laid out in People’s Compute.
  • I will be delivering an invited talk at Enterprise UX on 21 November 2025.
  • I am acting as a scientific advisor to a center that is currently being established, which focuses on increasing digital autonomy within Dutch government institutions.
  • I will be co-teaching Designing Responsible AI again in Q1 of the next academic year.
  • I’ll serve as an associate chair on the CHI 2026 design subcommittee.
  • And I have signed up to begin our university’s teaching qualification certification.

Whew. That’s it. Thanks for reading (skimming?) if you’ve made it all the way to the end. I will try to circle back and do another update, maybe a little sooner than this one, say in six months’ time.

‘Machine Learning for Designers’ workshop

On Wednesday, Péter Kun, Holly Robbins, and I taught a one-day workshop on machine learning at Delft University of Technology. We had about thirty master’s students from the industrial design engineering faculty. The aim was to get them acquainted with the technology through hands-on tinkering, with the Wekinator as the central teaching tool.

Photo credits: Holly Robbins
Photo credits: Holly Robbins

Background

The reasoning behind this workshop is twofold.

On the one hand, I expect designers will find themselves working on projects involving machine learning more and more often. The technology has certain properties that differ from traditional software. Most importantly, machine learning is probabilistic instead of deterministic. It is important that designers understand this, because otherwise they are likely to make bad decisions about its application.

On the other hand, I have a strong sense that machine learning can play a role in the augmentation of the design process itself. So-called intelligent design tools could make designers more efficient and effective. They could also enable the creation of designs that would otherwise be impossible or very hard to achieve.

The workshop explored both ideas.

Photo credits: Holly Robbins
Photo credits: Holly Robbins

Format

The structure was roughly as follows:

In the morning we started out by providing a very broad introduction to the technology. We talked about the very basic premise of (supervised) learning: providing examples of inputs and desired outputs, and training a model based on those examples. To make these concepts tangible, we then introduced the Wekinator and walked the students through getting it up and running using basic examples from the website. The final step was to invite them to explore alternative inputs and outputs (such as game controllers and Arduino boards).
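An aside for readers who want that premise pinned down in code: it fits in a few lines of scikit-learn. The sketch below is illustrative only, with made-up numbers; it shows the idea of supervised learning, not what the Wekinator runs internally.

```python
# Supervised learning in miniature: example inputs paired with desired
# outputs, a model trained on those examples, predictions afterwards.
# Illustrative sketch only -- not the Wekinator's internals.
from sklearn.neighbors import KNeighborsClassifier

# Example inputs (say, two normalized sensor readings each) and the
# desired output class for each of them.
inputs = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]]
outputs = [0, 0, 1, 1]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(inputs, outputs)  # "training" is learning from the examples

# A new, unseen input is mapped to an output by the trained model.
print(model.predict([[0.15, 0.85]]))  # -> [0]
```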

In the afternoon we provided a design brief, asking the students to prototype a data-enabled object with the set of tools they had acquired in the morning. We assisted with technical hurdles where necessary (of which there were more than a few) and closed out the day with demos and a group discussion reflecting on their experiences with the technology.

Photo credits: Holly Robbins
Photo credits: Holly Robbins

Results

As I tweeted on the way home that evening, the results were… interesting.

Not all groups managed to put something together in the admittedly short amount of time they were provided with. They were most often stymied by getting an Arduino to talk to the Wekinator. Max was often picked as a go-between because the Wekinator receives OSC messages over UDP, whereas the quickest way to get an Arduino to talk to a computer is over serial. But Max in my experience is a fickle beast and would more than once crap out on us.
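The go-between job we used Max for can also be done with a small script. Below is a minimal sketch of such a serial-to-OSC bridge, assuming the pyserial and python-osc packages, an Arduino that prints one comma-separated line of sensor readings per loop, and the Wekinator’s default input port (6448) and message address (/wek/inputs); the serial port name is an example and will differ per machine. This is not the workshop’s actual setup, just one way to cut Max out of the loop.

```python
# Minimal serial-to-OSC bridge: read comma-separated sensor values from
# an Arduino over serial and forward them to the Wekinator as OSC over
# UDP. A sketch under stated assumptions, not the workshop's tooling.
import serial
from pythonosc.udp_client import SimpleUDPClient

ser = serial.Serial("/dev/ttyACM0", 9600)    # example port; adjust per machine
client = SimpleUDPClient("127.0.0.1", 6448)  # Wekinator's default input port

while True:
    line = ser.readline().decode("ascii", errors="ignore").strip()
    if not line:
        continue
    try:
        values = [float(v) for v in line.split(",")]
    except ValueError:
        continue  # skip malformed lines
    client.send_message("/wek/inputs", values)  # Wekinator's default address
```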

The groups that did build something mainly assembled prototypes from the examples on hand. Which is fine, but since we were mainly working with the examples from the Wekinator website, they tended towards the interactive instrument side of things. We were hoping for explorations of IoT product concepts. Those required more hand-rolling, which was only achievable for the students at the higher end of the technical expertise spectrum (and the more tenacious ones).

The discussion yielded some interesting insights into mental models of the technology and how they are affected by hands-on experience. A comment I heard more than once was: why is this considered learning at all? The Wekinator was not perceived to be learning anything. When we challenged this by reiterating the underlying principles, it became clear that the black-box nature of the Wekinator hampers appreciation of some of the very real achievements of the technology. It seems (for our students at least) machine learning is stuck in a grey area between too-high expectations and too-low recognition of its capabilities.

Next steps

These results, and others, point towards some obvious improvements which can be made to the workshop format, and to teaching design students about machine learning more broadly.

  1. We can improve the toolset so that some of the heavy lifting involved with getting the various parts to talk to each other is made easier and more reliable.
  2. We can build examples that are geared towards the practice of designing IoT products and are ready for adaptation and hacking.
  3. And finally, and probably most challengingly, we can make the workings of machine learning more transparent so that it becomes easier to develop a feel for its capabilities and shortcomings.

We do intend to improve and teach the workshop again. If you’re interested in hosting one (either in an educational or professional context) let me know. And stay tuned for updates on this and other efforts to get designers to work in a hands-on manner with machine learning.

Special thanks to the brilliant Ianus Keller for connecting me to Péter and for allowing us to pilot this crazy idea at IDE Academy.

References

Sources used during preparation and running of the workshop:

  • The Wekinator – the UI is infuriatingly poor but when it comes to getting started with machine learning this tool is unmatched.
  • Arduino – I have become particularly fond of the MKR1000 board. Add a lithium-polymer battery and you have everything you need to prototype IoT products.
  • OSC for Arduino – CNMAT’s implementation of the open sound control (OSC) encoding. Key puzzle piece for getting the above two tools talking to each other.
  • Machine Learning for Designers – my preferred introduction to the technology from a designerly perspective.
  • A Visual Introduction to Machine Learning – a very accessible visual explanation of the basic underpinnings of computers applying statistical learning.
  • Remote Control Theremin – an example project I prepared for the workshop demoing how to have the Wekinator talk to an Arduino MKR1000 with OSC over UDP.

Week 169

Fiona Raby once told me that the majority of her work with students at the RCA was about psychology. After a week like this, I can see where she’s coming from. Without going into too much detail, I had my work cut out for me with a new group of students I will be working with on a design research project at the HKU. After a first meeting with the team and a kick-off with the client the next day, it became clear I was dealing with a group with some serious motivational issues. The trick was to figure out where it all was coming from. To do this, it was vital to try and see things as they really are, instead of as they were presented to me by the group. After several additional sessions (messing with my schedule, but that comes with the territory) I had it more or less figured out and formulated a plan to deal with it. Psychology.

In between all that craziness my week consisted of:

  • Working with my two new interns at Hubbub. We reflected on their experiences at the Natural Networking Festival and presented a post-mortem of the first game to Thieu after attending one of the Learning Lab meetups.
  • Sketching out additions to the PLAY Pilots website necessary to support the Zesbaans installation for the Netherlands Film Festival. These will launch next week in time for the installation’s unveiling on Thursday.
  • Presenting my preliminary list of interactive works suitable for next year’s Tweetakt festival. This is my first time curating an event other than This happened. I am keen to mash up playful interaction design with the fringes of game design and it seems Tweetakt are up for it too. Happy days.
  • Another full day of work on Maguro. Best part of which was a few quiet hours to bang out a first playable paper prototype of the game. Convergence is a bitch but always rewarding when it happens.
  • Today, I hung out at BUROPONY and took care of a few odds and ends for their website. In return work has started on a last bit of Hubbub corporate identity: a design for the box to hold our business-slash-collectible playing cards.

And with that I am signing off. A train is taking me from Rotterdam to Utrecht; perhaps I will be in time to catch the tail end of Friday drinks at the Dutch Game Garden. Never a dull moment there.

“Stay hungry, stay foolish”

I graduated from the Utrecht School of the Arts in 2002. Now, less than seven years later, I am mentoring a group of five students who will be doing the same come September this year. I took a photo of them today, here it is:

Bright young bunch

From left to right, here’s who they are and what they’re up to:

  • Christiaan is tech lead on Hollandia, an action adventure game inspired by Dutch folklore. His research looks at ways to close the gap between creatives and technologists in small teams, using agile techniques.
  • Kjell is designing a series of experimental games using voice as their only input. He’s researching what game mechanics work best with voice control.
  • Maxine is game designer on the aforementioned Hollandia game. Her research looks at the translation of the play experience of physical toys to digital games. (In Hollandia, you’ll be using a Wiimote to control the spinning top used by the heroine.)
  • Paul is building a physics-based platform puzzle game for two players. His research looks at the design of meaningful collaborative play.
  • Eva is making a space simulation game with realistic physics and complex controls. She’s researching what kinds of fun are elicited by such games.

Practically speaking, mentoring these guys means that I see them once a week for a 15-minute session. In this we discuss the past week’s progress and their plans for the next. They’ve set their own briefs, and are expected to be highly self-reliant. My task consists of making sure they stay on track and their work is relevant, both from an educational and a professional perspective. It’s challenging work, but a lot of fun. It forces me to make explicit the stuff I’ve picked up professionally. It’s also a lot about developing a sense for where each student individually can improve and encouraging them to challenge themselves in those areas.

I’m looking forward to seeing what they’ll deliver come September, when it’s their turn to graduate, and go out to conquer the world.

What I’ve been up to lately

You might be wondering what’s been going on at the Leapfrog studio lately, since I haven’t really posted anything substantial here in a while. Quite a lot has happened — and I’ll hopefully get back into posting longer articles soon — but for now, here’s a list of more or less interesting things I have been doing:

This happened – Utrecht

We had our first This happened – Utrecht on November 3. I think we succeeded in creating an event that really looks at the craft of interaction design. I’m happy to say we’re planning to do three events next year — all at Theater Kikker in Utrecht — and we’ve got lots of cool speakers in mind. If you want to make sure you won’t miss them, subscribe to our newsletter (in Dutch).1

Teaching

My students are nearing the end of their project. They’ve been hard at work creating concepts for mobile social games with a musical component; they came up with 20 in total. Now they’re prototyping two of them, and I must say it’s looking good. They’ll have to present the games to the project’s commissioner — a major mobile phone manufacturer — somewhere around the beginning of January 2009. I hope to be able to share some of the results here afterwards.

Office space

Since December 1 I am a resident of the Dutch Game Garden’s Business Club. That means I now have a nice office smack in the centre of Utrecht. The building is home to lots of wonderful games companies, some of them, like me, operating on the fringes — FourceLabs and Monobanda, for example. If you’re curious and would like to drop by for a tour, a coffee, and some conversation, let me know.

Brainstorm

I was invited to help compose one of the cases for the ‘Grote Amsterdamse Waterbrainwave’, a one-day brainstorm in which 45 students from various institutions were asked to come up with water-related innovations that would make the Netherlands a significant global player once again. It was organised by the Port of Amsterdam, Waternet and Verleden van Nederland2. I also attended the day itself as an outside expert on games and the creative industry in general. Read a report of the event at FD.nl (in Dutch).

Book

Dan Saffer’s book Designing Gestural Interfaces has been published by O’Reilly and is now available. Turn to page 109 and you’ll find a storyboard by yours truly used for illustration purposes. That’s the first time any work of mine is featured in print, so naturally I’m quite proud. I have yet to receive my copy, but got a sneak peek this weekend and I must say it looks promising. If you’re a designer needing to get up to speed with multi-touch, physical computing and such, this should be a good place to start.

That’s about it for now. There’s a lot of exciting stuff in the works, the outcomes of which I will hopefully be able to share with you in 2009.

  1. The creators of This happened in London have been nominated for a best of the year award by the Design Museum, by the way. Well-deserved, I would say! []
  2. A cross-media campaign aimed at increasing awareness of Dutch national history. []

Teaching design for mobile social play

Last week, the group project I am coaching at the Utrecht School of the Arts kicked off. The project is part of the school’s master of arts program. The group consists of ten students with very different backgrounds, ranging from game design & development to audio design, as well as arts management, media studies, and more. Their assignment is to come up with a number of concepts for games that incorporate mobile phones, social interactions, audio and the web. Nokia Research Center has commissioned the project, and Jussi Holopainen, game design researcher and co-author of Patterns in Game Design, is the client. In the project brief there is a strong emphasis on sketching and prototyping, and disciplined documentation of the design process. The students are working full time on the project and it will run for around 4 months.

I am very happy with the opportunity to coach this group. It’s a new challenge for me as a teacher – moving away from teaching theory and into the area of facilitation. I am also looking forward to seeing what the students will come up with, of course, as the domain they are working in overlaps hugely with my interests. So far, working with Jussi has proven to be very inspirational, so I am getting something out of it as a designer too.

On presentations

One of the most enjoyable things about attending conferences is seeing a lot of people presenting in various ways. A while ago I challenged my own presenting skills by doing a Pecha Kucha. Today, I attended a class (part of a didactics course) on giving lectures. Two prominent lecturers (Giep Hagoort and Jeroen van Mastrigt) from the Utrecht School of the Arts gave us a taste of their own unique presentation formats and the ways they prepared for a talk.

This triggered some things in my head: resources I’d seen before on the web that could be helpful to the people attending the class. A lot of them didn’t seem to be too familiar with these, so I’ve decided to collect them here. Maybe they’ll come in handy to those who pass by: