Designing Learning Experiences in a Post-ChatGPT World

Transcript of a talk delivered at LXDCON’25 on June 12, 2025.

My name is Kars. I am a postdoc at TU Delft. I research contestable AI—how to use design to ensure AI systems remain subject to societal control. I teach the responsible design of AI systems. In a previous life, I was a practicing designer of digital products and services. I will talk about designing learning experiences in a post-ChatGPT world.

Let’s start at this date.

This is when OpenAI released an early demo of ChatGPT. The chatbot quickly went viral on social media. Users shared examples of what it could do. Stories and samples included everything from travel planning to writing fables to coding computer programs. Within five days, the chatbot had attracted over one million users.

Fast forward to today, 2 years, 6 months, and 14 days later, we’ve seen a massive impact across domains, including on education.

For example, the article on the left talks about how AI cheating has become pervasive in higher education. It is fundamentally undermining the educational process itself. Students are using ChatGPT for nearly every assignment while educators struggle with ineffective detection methods and question whether traditional academic work has lost all meaning.

The one on the right talks about how students are accusing professors of being hypocritical. Teachers are using AI tools for things like course materials and grading while telling students they cannot use them.

What we’re looking at is a situation where academic integrity was already in question; on top of that, both students and faculty are quickly adopting AI, and institutions aren’t really ready for it.

These transformations in higher education give me pause. What should we change about how we design learning experiences given this new reality?

So, just to clarify, when I mention “AI” in this talk, I’m specifically referring to generative AI, or GenAI, and even more specifically, to chatbots that are powered by large language models, like ChatGPT.

Throughout this talk I will use this example of a learning experience that makes use of GenAI. Sharad Goel, Professor at Harvard Kennedy School, developed an AI Slackbot named “StatGPT” that aims to enhance student learning through interactive engagement.

It was tested in a statistics course with positive feedback from students. They described it as supportive and easily accessible, available anytime for student use. There are plans to implement StatGPT in various other courses. They say it assists in active problem-solving and consider it an example of how AI can facilitate learning, rather than replace it.

The debate around GenAI and learning has become polarized. I see the challenge as trying to find a balance. On one side, there’s complete skepticism about AI, and on the other, there’s this blind acceptance of it. What I propose is that we need an approach I call Conscious Adaptation: moving forward with full awareness of what’s being transformed.

To build the case for this approach, I will be looking at two common positions in the debates around AI and education. I’ll be focusing on four pieces of writing.

Two of them are by Ethan Mollick, from his blog. He’s a professor at the University of Pennsylvania specializing in innovation and entrepreneurship, known for his work on the potential of AI to transform different fields.

The other two pieces are by Ian Bogost, published at The Atlantic. He’s a media studies scholar, author, and game designer who teaches at Washington University. He’s known for his sobering, realist critiques of the impact of technology on society.

These, to me, exemplify two strands of the debate around AI in education.

Ethan Mollick’s position, in essence, is that AI in education is an inevitable transformation that educators must embrace and redesign around, not fight.

You could say Mollick is an optimist. But he is also really clear-eyed about how much disruption is going on. He even refers to it as the “Homework Apocalypse.” He talks about some serious issues: there are failures in detection, students are not learning as well (with exam performance dropping by about 17%), and there are a lot of misunderstandings about AI on both sides—students and faculty.

But his perspective is more about adapting to a tough situation. He’s always focused on solutions, constantly asking, “What can we do about this?” He believes that with thoughtful human efforts, we can really influence the outcomes positively.

On the other hand, Ian Bogost’s view is that AI has created an unsolvable crisis that’s fundamentally breaking traditional education and leaving teachers demoralized.

Bogost, I would describe as a realist. He accepts the inevitability of AI, noting that the “arms race will continue” and that technology will often outpace official policies. He also highlights the negative impact on faculty morale, the dependency of students, and the chaos in institutions.

He’s not suggesting that we should ban AI or go back to a time before it existed. He sees AI as something that might be the final blow to a profession that’s already struggling with deeper issues. At the same time, he emphasizes the need for human agency by calling out the lack of reflection and action from institutions.

So, they both observe the same reality, but they look at it differently. Mollick sees it as an engineering challenge—one that’s complicated but can be tackled with smart design. On the other hand, Bogost views it as a social issue that uncovers deeper problems that can’t just be fixed with technology.

Mollick thinks it’s possible to rebuild after a sort of collapse, while Bogost questions if the institutions that are supposed to do that rebuilding are really fit for the job.

Applied to Harvard’s StatGPT, Mollick would likely celebrate it as an example of co-intelligence. Bogost would likely ask what the rollout of the bot comes at the expense of, or what deeper problems its deployment unveils.

Getting past the conflict between these two views isn’t just about figuring out the best technical methods or the right order of solutions. The real challenge lies in our ability as institutions to make real changes, and we need to be careful that focusing on solutions doesn’t distract us from the important discussions we need to have.

I see three strategies that work together to create an approach that addresses the conflict between these two perspectives in a way that I believe will be more effective.

First, institutional realism is about designing interventions assuming institutions will resist change, capture innovations, or abandon initiatives. Given this, we could focus on individual teacher practices, learner-level tools, and changes that don’t require systemic transformation. We could treat every implementation as a diagnostic probe revealing actual (vs. stated) institutional capacity.

Second, loss-conscious innovation means explicitly identifying, before implementing AI-enhanced practices, what human learning processes, relationships, or skills are being replaced. We could develop metrics that track preservation alongside progress. We could build “conservation” components into new approaches to protect irreplaceable educational values.

Third, and finally, we should recognize that Mollick-style solution-building and Bogost-style critical analysis serve different but essential roles. Practitioners need actionable guidance, while the broader field needs diagnostic consciousness. We should avoid a false synthesis and instead maintain both approaches as distinct kinds of intellectual work that inform each other.

In short, striking a balance may not be the main focus; it’s more about taking practical actions while considering the overall context. Progress is important, but it’s also worth reflecting on what gets left behind. Conscious adaptation.

So, applying these strategies to Harvard’s chatbot, we could ask: (1) How can we create a feedback loop between an intervention like this and the things it uncovers about institutional limits, so that those can be addressed in the appropriate place? (2) How can we measure what value this bot adds for students and for teachers? What is it replacing, what is it adding, what is it making room for? (3) What critique of learning at Harvard is implied by this intervention?

What does all of this mean, finally, for LXD? This is an LXD conference, so I don’t need to spend a lot of time explaining what it is. But let’s just use this basic definition as a starting point. It’s about experiences, it’s about centering the learner, it’s about achieving learning outcomes, etc.

Comparing my conscious adaptation approach to what typifies LXD, I can see a number of alignments.

Both LXD and Conscious Adaptation prioritize authentic human engagement over efficiency: LXD through human-centered design, Conscious Adaptation through protecting meaningful intellectual effort from AI displacement.

LXD’s focus on holistic learning journeys aligns with both Mollick’s “effort is the point” and Bogost’s concern that AI shortcuts undermine the educational value embedded in struggle and synthesis.

LXD’s experimental, prototype-driven approach mirrors my “diagnostic pragmatism”—both treat interventions as learning opportunities that reveal what actually works rather than pursuing idealized solutions.

So, going back one final time to Harvard’s bot, an LXD practice aligned in this way would lead us to ask: (1) Is this leveraging GenAI to protect and promote genuine intellectual effort? (2) Are teachers and learners meaningfully engaged in the ongoing development of this technology? (3) Is this prototype properly embedded, so that its potential to create learning for the organization can be realized?

So, where does this leave us as learning experience designers? I see three practical imperatives for Conscious Adaptation.

First, we need to protect meaningful human effort while leveraging AI’s strengths. Remember that “the effort is the point” in learning. Rather than asking “can AI do this?”, we should ask “should it?” Harvard’s bot works because it scaffolds thinking rather than replacing it. We should use AI for feedback and iteration while preserving human work for synthesis and struggle.

Second, we must design for real institutions, not ideal ones. Institutions resist change, capture innovations, and abandon initiatives. We need to design assuming limited budgets, overworked staff, and competing priorities. Every implementation becomes a diagnostic probe that reveals what resistance actually tells us about institutional capacity.

Third, we have to recognize the limits of design. AI exposes deeper structural problems like grade obsession, teacher burnout, and test-driven curricula. You can’t design your way out of systemic issues, and sometimes the best move is recognizing when the problem isn’t experiential at all.

This is Conscious Adaptation—moving forward with eyes wide open.

Thanks.

On the design and regulation of technology

The following is a section from a manuscript in press on the similarities and differences in approaches to explainable and contestable AI in design and law (Schmude et al., 2025). It ended up on the cutting room floor, but it is the kind of thing I find handy to refer back to, so I chose to share it here.

The responsible design of AI, including practices that seek to make AI systems more explainable and contestable, must somehow relate to legislation and regulations. Writing about responsible research and innovation (RRI) more broadly, Stilgoe et al. (2013) assert that RRI, which we would say includes design, must be embedded in regulation. But does it really make sense to think of the relationship between design and regulation in this way? Understood abstractly, there are in fact at least four ways in which we can think about the relationship between the design and regulation of technology (Figure 1).

Figure 1: We see four possible ways that the relationship between (D) the design and (R) regulation of technology can be conceptualized: (1) design and regulation are independent spheres, (2) design and regulation partially overlap, (3) design is embedded inside of regulation, or (4) regulation is embedded inside design. In all cases, we assume an interactional relation between the two spheres.

To establish the relationship between design and regulation, we first need to establish how we should think about regulation, and related concepts such as governance and policymaking more generally. One straightforward definition would be that regulation entails formal rules and enforcement mechanisms that constrain behavior. These are backed by authority—typically state authority, but increasingly also non-state actors. Regulation and governance are interactive and mutually constitutive. Regulation is one mechanism within governance systems. Governance frameworks establish contexts for regulation. Policymaking connects politics to governance by translating political contestation into actionable frameworks. Politics, then, influences all these domains: policymaking, governance, and regulation. And they, in turn, operate within and reciprocally shape society writ large. See Table 1 for working definitions of ‘regulation’ and associated concepts.

  • Regulation: Formal rules and enforcement mechanisms that constrain behavior, typically state-backed but increasingly emerging from non-state actors (industry self-regulation, transnational regulatory bodies).
  • Governance: Broader arrangements for coordinating social action across public, private, and civil society spheres through both formal and informal mechanisms.
  • Policymaking: Process of formulating courses of action to address public problems.
  • Politics: Contestation of power, interests, and values that shapes governance arrangements.
  • Society: Broader context of social relations, norms, and institutions.
Table 1: Working definitions of ‘regulation’ and associated concepts.

What about design? Scholars of regulation have adopted the notion of ‘regulation by design’ (RBD) to refer to the inscribing of rules into the world through the creation and implementation of technological artifacts. Prifti (2024) identifies two prevailing approaches to RBD: The essentialist view treats RBD as policy enactments, or “rules for design.” By contrast, the functionalist view treats design as a mere instrument, or “ruling by design.” We agree with Prifti when he states that both approaches are limited. Essentialism neglects the complexity of regulatory environments, while functionalism neglects the autonomy and complexity of design as a practice.

Prifti proposes a pragmatist reconstruction that views regulation as a rule-making activity (“regulativity”) performed through social practices including design (the “rule of design”). Design is conceptualized as a contextual, situated social practice that performs changes in the environment, rather than just a tool or set of rules. Unlike the law, markets, or social norms, which rely on incentives and sanctions, design can simply disable the possibility of non-compliance, making it a uniquely powerful form of regulation. The pragmatist approach distinguishes between regulation and governance, with governance being a meta-regulative activity that steers how other practices (like design) regulate. This reconceptualization helps address legitimacy concerns by allowing for greater accountability for design practices that might bypass democratic processes.

Returning to the opening question then, out of the four basic ways in which the relationship between design and regulation can be drawn (Figure 1), if we were to adopt Prifti’s pragmatist view, Type 3 would most accurately capture the relationship, with design being one of a variety of more specific ways in which regulation (understood as regulativity) actually makes changes in the world. These other forms of regulatory practice are not depicted in the figure. This seems to align with Stilgoe et al.’s aforementioned position that responsible design must be embedded within regulation. There is, however, a slight nuance to our position: design is always a form of regulation, regardless of any active work on the part of designers to ‘embed’ their work inside regulatory practices. Stilgoe et al.’s admonition can be better understood as a normative claim: responsible designers would do well to understand and align their design work with extant laws and regulations. Furthermore, following Prifti, design is beholden to governance and must be reflexively aware of how governance steers its practices (cf. Figure 2).

Figure 2: Conceptual model of the relationship between design, ‘classical’ regulation (i.e., law-making), and governance. Both design and law-making are forms of regulation (i.e., ‘regulativity’). Governance steers how design and law-making regulate, and design and law-making are both accountable to (and reflexively aware of) governance.

Bibliography

  • Prifti, K. (2024). The theory of ‘Regulation By Design’: Towards a pragmatist reconstruction. Technology and Regulation, 2024, 152–166. https://doi.org/10/g9dr24
  • Stilgoe, J., Owen, R., & Macnaghten, P. (2013). Developing a framework for responsible innovation. Research Policy, 42(9), 1568–1580. https://doi.org/10/f5gv8h

On how to think about large language models

How should we think about large language models (LLMs)? People commonly think and talk about them in terms of human intelligence. To the extent this metaphor does not accurately reflect the properties of the technology, this may lead to misguided diagnoses and prescriptions. It seems to me an LLM is not like a human or a human brain in so many ways. One crucial distinction for me is that LLMs lack individuality and subjectivity.

What are organisms that similarly lack these qualities? Coral polyps and Portuguese man o’ war come to mind, or slime mold colonies. Or maybe a single bacterium, like an E. coli. Each is essentially identical to its clones, responds automatically to chemical gradients (bringing to mind how LLMs respond to prompts), and doesn’t accumulate unique experiences in any meaningful way.

Considering all these examples, the meme about LLMs being like a shoggoth (an amorphous blob-like monster originating from the speculative fiction of Howard Phillips Lovecraft) is surprisingly accurate. The trouble with these metaphors, though, is that it is about as hard to reason about such organisms as it is to reason about LLMs, so using them as metaphors for thinking about LLMs won’t work. A shoggoth is even less helpful, because the reference will only be familiar to those who know their H.P. Lovecraft.

So perhaps we should abandon metaphorical thinking and think historically instead. LLMs are a new language technology. As with previous technologies, such as the printing press, when they are introduced, our relationship to language changes. How does this change occur?

I think the change is dialectical. First, we have a relationship to language that we recognize as our own. Then, a new technology destabilizes this relationship, alienating us from the language practice. We no longer see our own hand in it. And we experience a lack of control over language practice. Finally, we reappropriate this language use in our practices. In this process of reappropriation, language practice as a whole is transformed. And the cycle begins again.

For an example of this dialectical transformation of language practice under the influence of new technology, we can take Eisenstein’s classic account of the history of the printing press (1980). Following its introduction, many things changed about how we relate to language. Our engagement with language shifted from a primarily oral one to a visual and deliberative one. Libraries became more abundantly stocked, leading to the practice of categorization and classification of works. Preservation and analysis of stable texts became a possibility. The solitary reading experience gained prominence, producing a more private and personal relationship between readers and texts. Concern about information overload first reared its head.

All of these things were once new and alien to humans. Now we consider them part of the natural order of things. They weren’t predetermined by the technology, they emerged through this active tug of war between groups in society about what the technology would be used for, mediated by the affordances of the technology itself.

In concrete material terms, what does an LLM consist of? An LLM is just numerical values stored in computer memory. It is a neural network architecture consisting of billions of parameters in weights and biases, organized in matrices. The storage is distributed across multiple devices. System software loads these parameters and enables the calculation of inferences. This all runs in physical data centers housing computing infrastructure, power, cooling, and networking infrastructure. Whenever people start talking about LLMs having agency or being able to reason, I remind myself of these basic facts.
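To make the point concrete, here is a deliberately tiny, made-up sketch in Python: a handful of invented matrices and a single matrix multiplication standing in for “parameters” and “inference.” It is not a real architecture or any actual model, just an illustration of the claim that, at bottom, this is arrays of numbers plus arithmetic.

```python
# A toy stand-in for the claim above: an "LLM" is matrices of numbers plus
# matrix arithmetic. This is an invented, single-layer example over a
# five-token vocabulary, not a real architecture.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat"]

# "Parameters": just matrices of floats, the kind of thing stored by the billions.
embeddings = rng.normal(size=(5, 8))      # one 8-dimensional vector per token
output_weights = rng.normal(size=(8, 5))  # maps a vector back to vocabulary scores

def next_token_scores(token_id: int) -> np.ndarray:
    """Inference is nothing more exotic than looking up and multiplying matrices."""
    hidden = embeddings[token_id]   # look up the token's vector
    return hidden @ output_weights  # scores over the vocabulary

scores = next_token_scores(vocab.index("cat"))
print(vocab[int(np.argmax(scores))])  # the "prediction" of this toy model
```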

A printing press, although a cleverly designed, engineered, and manufactured device, is similarly banal when you break it down to its essential components. Still, the ultimate changes to how we relate to language have been profound. From these first few years of living with LLMs, I think it is not unreasonable to think they will cause similar upheavals. What is important for me is to recognize how we become alienated from language, and to see ourselves as having agency in reappropriating LLM-mediated language practice as our own.

Engineering Ethics Reading List

I recently followed an excellent three-day course on engineering ethics. It was offered by the TU Delft graduate school and taught by Behnam Taibi with guest lectures from several of our faculty.

I found it particularly helpful to get some suggestions for further reading that represent some of the foundational ideas in the field. I figured it would be useful to others as well to have a pointer to them.

So here they are. I’ve quickly gutted these for their meaning. The one by Van de Poel I did read entirely and can highly recommend for anyone who’s doing design of emerging technologies and wants to escape from the informed consent conundrum.

I intend to dig into the Doorn one, not just because she’s one of my promoters but also because resilience is a concept closely related to my own interests. I’ll also get into the Floridi one in detail, but I already found the concept of information quality and the care ethics perspective on the problem of information abundance and attention scarcity immediately applicable to interaction design.

  1. Stilgoe, Jack, Richard Owen, and Phil Macnaghten. “Developing a framework for responsible innovation.” Research Policy 42.9 (2013): 1568-1580.
  2. Van den Hoven, Jeroen. “Value sensitive design and responsible innovation.” Responsible innovation (2013): 75-83.
  3. Hansson, Sven Ove. “Ethical criteria of risk acceptance.” Erkenntnis 59.3 (2003): 291-309.
  4. Van de Poel, Ibo. “An ethical framework for evaluating experimental technology.” Science and Engineering Ethics 22.3 (2016): 667-686.
  5. Hansson, Sven Ove. “Philosophical problems in cost–benefit analysis.” Economics & Philosophy 23.2 (2007): 163-183.
  6. Floridi, Luciano. “Big Data and information quality.” The philosophy of information quality. Springer, Cham, 2014. 303-315.
  7. Doorn, Neelke, Paolo Gardoni, and Colleen Murphy. “A multidisciplinary definition and evaluation of resilience: The role of social justice in defining resilience.” Sustainable and Resilient Infrastructure (2018): 1-12.

We also got a draft of the intro chapter to a book on engineering and ethics that Behnam is writing. That looks very promising as well but I can’t share yet for obvious reasons.

‘Unboxing’ at Behavior Design Amsterdam #16

Below is a write-up of the talk I gave at the Behavior Design Amsterdam #16 meetup on Thursday, February 15, 2018.

‘Pandora’ by John William Waterhouse (1896)

I’d like to talk about the future of our design practice and what I think we should focus our attention on. It is all related to this idea of complexity and opening up black boxes. We’re going to take the scenic route, though. So bear with me.

Software Design

Two years ago I spent about half a year in Singapore.

While there I worked as product strategist and designer at a startup called ARTO, an art recommendation service. It shows you a random sample of artworks, you tell it which ones you like, and it will then start recommending pieces it thinks you like. In case you were wondering: yes, swiping left and right was involved.

We had this interesting problem of ingesting art from many different sources (mostly online galleries) with metadata of wildly varying levels of quality. So, using metadata to figure out which art to show was a bit of a non-starter. It should come as no surprise then, that we started looking into machine learning—image processing in particular.

And so I found myself working with my engineering colleagues on an art recommendation stream which was driven at least in part by machine learning. And I quickly realised we had a problem. In terms of how we worked together on this part of the product, it felt like we had taken a bunch of steps back in time. Back to a way of collaborating that was less integrated and less responsive.

That’s because we have all these nice tools and techniques for designing traditional software products. But software is deterministic. Machine learning is fundamentally different in nature: it is probabilistic.

It was hard for me to take the lead in the design of this part of the product for two reasons. First of all, it was challenging to get a first-hand feel of the machine learning feature before it was implemented.

And second of all, it was hard for me to communicate or visualise the intended behaviour of the machine learning feature to the rest of the team.

So when I came back to the Netherlands I decided to dig into this problem of design for machine learning. Turns out I opened up quite the can of worms for myself. But that’s okay.

There are two reasons I care about this:

The first is that I think we need more design-led innovation in the machine learning space. At the moment it is engineering-dominated, which doesn’t necessarily lead to useful outcomes. But if you want to take the lead in the design of machine learning applications, you need a firm handle on the nature of the technology.

The second reason why I think we need to educate ourselves as designers on the nature of machine learning is that we need to take responsibility for the impact the technology has on the lives of people. There is a lot of talk about ethics in the design industry at the moment. Which I consider a positive sign. But I also see a reluctance to really grapple with what ethics is and what the relationship between technology and society is. We seem to want easy answers, which is understandable because we are all very busy people. But having spent some time digging into this stuff myself I am here to tell you: There are no easy answers. That isn’t a bug, it’s a feature. And we should embrace it.

Machine Learning

At the end of 2016 I attended ThingsCon here in Amsterdam and I was introduced by Ianus Keller to TU Delft PhD researcher Péter Kun. It turns out we were both interested in machine learning. So with encouragement from Ianus we decided to put together a workshop that would enable industrial design master students to tangle with it in a hands-on manner.

About a year later, this has grown into a thing we call Prototyping the Useless Butler. During the workshop, you use machine learning algorithms to train a model that takes inputs from a network-connected Arduino’s sensors and drives that same Arduino’s actuators. In effect, you can create interactive behaviour without writing a single line of code. And you get a first-hand feel for how common applications of machine learning work: things like regression, classification, and dynamic time warping.

The thing that makes this workshop tick is an open source software application called Wekinator, which was created by Rebecca Fiebrink. It was originally aimed at performing artists so that they could build interactive instruments without writing code. But it takes inputs from anything and sends outputs to anything. So we appropriated it towards our own ends.
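Wekinator does all of this interactively, without participants writing any code. Purely to make the underlying idea concrete, here is a minimal, invented sketch in Python, using scikit-learn rather than the actual Wekinator and Arduino setup: record a few example sensor readings with the behaviour you want, train a classifier on them, and let the model decide what the actuator should do.

```python
# A minimal sketch of the workshop's idea: map raw sensor readings to actuator
# states by training on examples instead of writing explicit rules.
# This is NOT the actual Wekinator/Arduino setup; the sensor values and the
# use of scikit-learn are invented purely for illustration.
from sklearn.neighbors import KNeighborsClassifier

# Example recordings: [light level, distance] -> desired LED state (0 = off, 1 = on)
X_train = [
    [0.9, 0.1],  # bright, object close
    [0.8, 0.9],  # bright, nothing near
    [0.1, 0.2],  # dark, object close
    [0.2, 0.8],  # dark, nothing near
]
y_train = [0, 0, 1, 0]  # only turn the LED on when it is dark and something is close

model = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)

# A new reading arrives from the (hypothetical) Arduino...
print(model.predict([[0.15, 0.25]]))  # -> [1], so drive the actuator
```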

You can find everything related to Useless Butler on this GitHub repo.

The thinking behind this workshop is that for us designers to be able to think creatively about applications of machine learning, we need a granular understanding of the nature of the technology. The thing with designers is, we can’t really learn about such things from books. A lot of design knowledge is tacit, it emerges from our physical engagement with the world. This is why things like sketching and prototyping are such essential parts of our way of working. And so with useless butler we aim to create an environment in which you as a designer can gain tacit knowledge about the workings of machine learning.

Simply put, for a lot of us, machine learning is a black box. With Useless Butler, we open the black box a bit and let you peer inside. This should improve the odds of design-led innovation happening in the machine learning space. And it should also help with ethics. But it’s definitely not enough. Knowledge about the technology isn’t the only issue here. There are more black boxes to open.

Values

Which brings me back to that other black box: ethics. Like I already mentioned, there is a lot of talk in the tech industry about how we should “be more ethical”. But things are often reduced to this notion that designers should do no harm. As if ethics is a problem to be fixed instead of a thing to be practiced.

So I started to talk about this to people I know in academia and more than once this thing called Value Sensitive Design was mentioned. It should be no surprise to anyone that scholars have been chewing on this stuff for quite a while. One of the earliest references I came across, an essay by Batya Friedman in Interactions is from 1996! This is a lesson to all of us I think. Pay more attention to what the academics are talking about.

So, at the end of last year I dove into this topic. Our host Iskander Smit, Rob Maijers, and I coordinate a grassroots community for tech workers called Tech Solidarity NL. We want to build technology that serves the needs of the many, not the few. Value Sensitive Design seemed like a good thing to dig into and so we did.

I’m not going to dive into the details here. There’s a report on the Tech Solidarity NL website if you’re interested. But I will highlight a few things that value sensitive design asks us to consider that I think help us unpack what it means to practice ethical design.

First of all, values. Here’s how it is commonly defined in the literature:

“A value refers to what a person or group of people consider important in life.”

I like it because it’s common sense, right? But it also makes clear that there can never be one monolithic definition of what ‘good’ is in all cases. As we designers like to say: “it depends” and when it comes to values things are no different.

“Person or group” implies there can be various stakeholders. Value sensitive design distinguishes between direct and indirect stakeholders. The former have direct contact with the technology, the latter don’t but are affected by it nonetheless. Value sensitive design means taking both into account. So this blows up the conventional notion of a single user to design for.

Various stakeholder groups can have competing values and so to design for them means to arrive at some sort of trade-off between values. This is a crucial point. There is no such thing as a perfect or objectively best solution to ethical conundrums. Not in the design of technology and not anywhere else.

Value sensitive design encourages you to map stakeholders and their values. These will be different for every design project. Another approach is to use lists like the one pictured here as an analytical tool to think about how a design impacts various values.
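To make that mapping a little more concrete, here is a small, invented sketch of what a stakeholder and value map might look like as plain data. The stakeholders and values are made up for illustration; in practice you would elicit them for each project, as described above.

```python
# An invented stakeholder/value map, distinguishing direct from indirect
# stakeholders as value sensitive design suggests. Purely illustrative.
stakeholder_values = {
    "direct": {
        "riders of a ride-hailing app": ["affordability", "safety", "convenience"],
        "drivers": ["fair pay", "autonomy", "safety"],
    },
    "indirect": {
        "other road users": ["safety", "clean air"],
        "residents near pick-up hotspots": ["quiet", "accessible streets"],
    },
}

# Even this toy map makes the trade-off point visible: one design decision
# touches different values for different groups, so there is no single
# objectively best solution.
for kind, groups in stakeholder_values.items():
    for group, values in groups.items():
        print(f"{kind:8} | {group}: {', '.join(values)}")
```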

Furthermore, during your design process you might not only think about the short-term impact of a technology, but also think about how it will affect things in the long run.

And similarly, you might think about the effects of a technology not only when a few people are using it, but also when it becomes wildly successful and everybody uses it.

There are tools out there that can help you think through these things. But so far much of the work in this area is happening on the academic side. I think there is an opportunity for us to create tools and case studies that will help us educate ourselves on this stuff.

There’s a lot more to say on this but I’m going to stop here. The point is, as with the nature of the technologies we work with, it helps to dig deeper into the nature of the relationship between technology and society. Yes, it complicates things. But that is exactly the point.

Privileging simple and scalable solutions over those adapted to local needs is socially, economically and ecologically unsustainable. So I hope you will join me in embracing complexity.

High-skill robots, low-skill workers

Some notes on what I think I understand about technology and inequality.

Let’s start with an obvious big question: is technology destroying jobs faster than they can be replaced? In the long term, the evidence isn’t strong. Humans always appear to invent new things to do. There is no reason this time around should be any different.

But in the short term technology has contributed to an evaporation of mid-skilled jobs. Parts of these jobs are automated entirely, parts can be done by fewer people because of higher productivity gained from tech.

While productivity continues to grow, jobs are lagging behind. The year 2000 appears to have been a turning point. “Something” happened around that time. But no-one knows exactly what.

My hunch is that we’ve seen an emergence of a new class of pseudo-monopolies. Oligopolies. And this is compounded by a ‘winner takes all’ dynamic that technology seems to produce.

Others have pointed to globalisation but although this might be a contributing factor, the evidence does not support the idea that it is the major cause.

So what are we left with?

Historically, looking at previous technological upsets, it appears education makes a big difference. People negatively affected by technological progress should have access to good education so that they have options. In the US the access to high quality education is not equally divided.

Apparently family income is associated with educational achievement. So if your family is rich, you are more likely to become a high skilled individual. And high skilled individuals are privileged by the tech economy.

And if Piketty is right, we are approaching a reality in which money made from wealth rises faster than wages. So there is a feedback loop in place which only exacerbates the situation.

One more bullet: if you think trickle-down economics will help, that simply increasing the size of the pie is enough, you might be mistaken. It appears social mobility is helped more by decreasing inequality in the distribution of income growth.

So some preliminary conclusions: a progressive tax on wealth alone won’t solve the issue. The education system will require reform, too.

I think this is the central irony of the whole situation: we are working hard to teach machines how to learn. But we are neglecting to improve how people learn.

Waiting for the smart city

Nowadays when we talk about the smart city we don’t necessarily talk about smartness or cities.

I feel like when the term is used it often obscures more than it reveals.

Here are a few reasons why.

To begin with, the term suggests something that is yet to arrive. Some kind of tech-enabled utopia. But actually, current day cities are already smart to a greater or lesser degree depending on where and how you look.

This is important because too often we postpone action as we wait for the smart city to arrive. We don’t have to wait. We can act to improve things right now.

Furthermore, ‘smart city’ suggests something monolithic that can be designed as a whole. But a smart city, like any city, is a huge mess of interconnected things. It resists top-down design.

History is littered with failed attempts at authoritarian high-modernist city design. Just stop it.

Smartness should not be an end but a means.

I read ‘smart’ as a shorthand for ‘technologically augmented’. A smart city is a city eaten by software. All cities are being eaten (or have been eaten) by software to a greater or lesser extent. Uber and Airbnb are obvious examples. Smaller more subtle ones abound.

The question is, smart to what end? Efficiency? Legibility? Controllability? Anti-fragility? Playability? Liveability? Sustainability? The answer depends on your outlook.

These are ways in which the smart city label obscures. It obscures agency. It obscures networks. It obscures intent.

I’m not saying don’t ever use it. But in many cases you can get by without it. You can talk about specific parts that make up the whole of a city, specific technologies and specific aims.


Postscript 1

We can do the same exercise with the ‘city’ part of the meme.

The same process that is making cities smart (software eating the world) is also making everything else smart. Smart towns. Smart countrysides. The ends are different. The networks are different. The processes play out in different ways.

It’s okay to think about cities but don’t think they have a monopoly on ‘disruption’.

Postscript 2

Some of this was inspired by clever things I heard Sebastian Quack say at Playful Design for Smart Cities and Usman Haque at ThingsCon Amsterdam.

Artificial intelligence as partner

Some notes on artificial intelligence, technology as partner and related user interface design challenges. Mostly notes to self, not sure I am adding much to the debate. Just summarising what I think is important to think about more. Warning: Dense with links.

Matt Jones writes about how artificial intelligence does not have to be a slave, but can also be a partner.

I’m personally much more interested in machine intelligence as human augmentation rather than the oft-hyped AI assistant as a separate embodiment.

I would add a third possibility, which is AI as master. A common fear we humans have, and one that I think is only growing as things like AlphaGo and new Boston Dynamics robots keep happening.

I have had a tweet pinned to my timeline for a while now, which is a quote from Play Matters.

“technology is not a servant or a master but a source of expression, a way of being”

So this idea actually does not just apply to AI but to tech in general. Of course, as tech gets smarter and more independent from humans, the idea of a ‘third way’ only grows in importance.

More tweeting. A while back, shortly after AlphaGo’s victory, James tweeted:

On the one hand, we must insist, as Kasparov did, on Advanced Go, and then Advanced Everything Else https://en.wikipedia.org/wiki/Advanced_Chess

Advanced Chess is a clear example of humans and AI partnering. And it is also an example of technology as a source of expression and a way of being.

Also, in a WIRED article on AlphaGo, someone who had played the AI repeatedly says his game has improved tremendously.

So that is the promise: Artificially intelligent systems which work together with humans for mutual benefit.

Now of course these AIs don’t just arrive into the world fully formed. They are created by humans with particular goals in mind. So there is a design component there. We can design them to be partners but we can also design them to be masters or slaves.

As an aside: Maybe AIs that make use of deep learning are particularly well suited to this partner model? I do not know enough about it to say for sure. But I was struck by this piece on why Google ditched Boston Dynamics. There apparently is a significant difference between holistic and reductionist approaches, deep learning being holistic. I imagine reductionist AI might be more dependent on humans. But this is just wild speculation. I don’t know if there is anything there.

This insistence of James on “advanced everything else” is a world view. A politics. To allow ourselves to be increasingly entangled with these systems, to not be afraid of them. Because if we are afraid, we either want to subjugate them or they will subjugate us. It is also about not obscuring the systems we are part of. This is a sentiment also expressed by James in the same series of tweets I quoted from earlier:

These emergences are also the best model we have ever built for describing the true state of the world as it always already exists.

And there is overlap here with ideas expressed by Kevin in ‘Design as Participation’:

[W]e are no longer just using computers. We are using computers to use the world. The obscured and complex code and engineering now engages with people, resources, civics, communities and ecosystems. Should designers continue to privilege users above all others in the system? What would it mean to design for participants instead? For all the participants?

AI partners might help us to better see the systems the world is made up of and engage with them more deeply. This hope is expressed by Matt Webb, too:

with the re-emergence of artificial intelligence (only this time with a buddy-style user interface that actually works), this question of “doing something for me” vs “allowing me to do even more” is going to get even more pronounced. Both are effective, but the first sucks… or at least, it sucks according to my own personal politics, because I regard individual alienation from society and complex systems as one of the huge threats in the 21st century.

I am reminded of the mixed-initiative systems being researched in the area of procedural content generation for games. I wrote about these a while back on the Hubbub blog. Such systems are partners of designers. They give something like super powers. Now imagine such powers applied to other problems. Quite exciting.

Actually, in the aforementioned article I distinguish between tools for making things and tools for inspecting possibility spaces. In the first case designers manipulate more abstract representations of the intended outcome and the system generates the actual output. In the second case the system visualises the range of possible outcomes given a particular configuration of the abstract representation. These two are best paired.
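To illustrate the distinction, here is a small, invented sketch (a toy corridor generator, not any real mixed-initiative system): one function turns an abstract representation into a concrete output, the other samples the possibility space that representation allows.

```python
# A toy illustration of the two kinds of tools described above:
# (1) generate a concrete output from an abstract representation, and
# (2) inspect the possibility space that representation allows.
# No real PCG system is implied; this is purely a made-up example.
import random

def generate_level(length: int, branchiness: float, seed: int) -> str:
    """Tool 1: the designer sets abstract knobs, the system makes the output."""
    rng = random.Random(seed)
    tiles = ["#" if rng.random() < branchiness else "." for _ in range(length)]
    return "".join(tiles)

def inspect_space(length: int, branchiness: float, samples: int = 5) -> list[str]:
    """Tool 2: show the range of outcomes a given configuration can produce."""
    return [generate_level(length, branchiness, seed) for seed in range(samples)]

for level in inspect_space(length=20, branchiness=0.3):
    print(level)
```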

From a design perspective, a lot remains to be figured out. If I look at those mixed-initiative tools I am struck by how poorly they communicate what the AI is doing and what its capabilities are. There is a huge user interface design challenge there.

For stuff focused on getting information, a conversational UI seems to be the current local optimum for working with an AI. But for tools for creativity, to use the two-way split proposed by Victor, different UIs will be required.

What shape will they take? What visual language do we need to express the particular properties of artificial intelligence? What approaches can we take in addition to personifying AI as bots or characters? I don’t know and I can hardly think of any good examples that point towards promising approaches. Lots to be done.

My plans for 2016

Long story short: my plan is to make plans.

Hubbub has gone into hibernation. After more than six years of leading a boutique playful design agency I am returning to freelance life. At least for the short term.

I will use the flexibility afforded by this freeing up of time to take stock of where I have come from and where I am headed. ‘Orientation is the Schwerpunkt,’ as Boyd says. I have definitely cycled back through my meta-OODA-loop and am firmly back in the second O.

To make things more interesting I have exchanged the Netherlands for Singapore. I will be here until August. It is going to be fun to explore the things this city has to offer. I am curious what the technology and design scene is like when seen up close. So I hope to do some work locally.

I will take on short commitments. Let’s say no longer than two to three months. Anything goes really, but I am particularly interested in work related to creativity and learning. I am also keen on getting back into teaching.

So if you are in Singapore, work in technology or design, and want to have a cup of coffee, drop me a line.

Happy 2016!

Nobody does thoroughly argued presentations quite like Sebastian. This is good stuff on ethics and design.

I decided to share some thoughts it sparked via Twitter and ended up ranting a bit:

I recently talked about ethics to a bunch of “behavior designers” and found myself concluding that any designed system that does not allow for user appropriation is fundamentally unethical because as you rightly point out what is the good life is a personal matter. Imposing it is an inherently violent act. A lot of design is a form of technologically mediated violence. Getting people to do your bidding, however well intended. Which given my own vocation and work in the past is a kind of troubling thought to arrive at… Help?

Sebastian makes his best point on slides 113-114. Ethical design isn’t about doing the least harm, but about doing the most good. And, to come back to my Twitter rant, for me the ultimate good is for others to be free. Hence non-prescriptive design.

(via Designing the Good Life: Ethics and User Experience Design)