Designing Learning Experiences in a Post-ChatGPT World

Transcript of a talk delivered at LXDCON’25 on June 12, 2025.

My name is Kars. I am a postdoc at TU Delft. I research contestable AI—how to use design to ensure AI systems remain subject to societal control. I teach the responsible design of AI systems. In a previous life, I was a practicing designer of digital products and services. I will talk about designing learning experiences in a post-ChatGPT world.

Let’s start at this date.

This is when OpenAI released an early demo of ChatGPT. The chatbot quickly went viral on social media. Users shared examples of what it could do. Stories and samples included everything from travel planning to writing fables to coding computer programs. Within five days, the chatbot had attracted over one million users.

Fast forward to today, 2 years, 6 months, and 13 days later, and we’ve seen a massive impact across domains, including on education.

For example, the article on the left talks about how AI cheating has become pervasive in higher education. It is fundamentally undermining the educational process itself. Students are using ChatGPT for nearly every assignment while educators struggle with ineffective detection methods and question whether traditional academic work has lost all meaning.

The one on the right talks about how students are accusing professors of being hypocritical. Teachers are using AI tools for things like course materials and grading while telling students they cannot use them.

What we’re looking at is a situation where academic integrity was already in question. On top of that, both students and faculty are quickly adopting AI, and institutions aren’t really ready for it.

These transformations in higher education give me pause. What should we change about how we design learning experiences given this new reality?

So, just to clarify, when I mention “AI” in this talk, I’m specifically referring to generative AI, or GenAI, and even more specifically, to chatbots that are powered by large language models, like ChatGPT.

Throughout this talk I will use this example of a learning experience that makes use of GenAI. Sharad Goel, Professor at Harvard Kennedy School, developed an AI Slackbot named “StatGPT” that aims to enhance student learning through interactive engagement.

It was tested in a statistics course with positive feedback from students, who described it as supportive and easily accessible, available anytime. There are plans to implement StatGPT in various other courses. Its creators say it assists in active problem-solving, and they consider it an example of how AI can facilitate learning rather than replace it.

The debate around GenAI and learning has become polarized. I see the challenge as trying to find a balance. On one side, there’s complete skepticism about AI, and on the other, there’s this blind acceptance of it. What I propose is that we need an approach I call Conscious Adaptation: moving forward with full awareness of what’s being transformed.

To build the case for this approach, I will be looking at two common positions in the debates around AI and education. I’ll be focusing on four pieces of writing.

Two of them are by Ethan Mollick, from his blog. He’s a professor at the University of Pennsylvania specializing in innovation and entrepreneurship, known for his work on the potential of AI to transform different fields.

The other two pieces are by Ian Bogost, published at The Atlantic. He’s a media studies scholar, author, and game designer who teaches at Washington University. He’s known for his sobering, realist critiques of the impact of technology on society.

These, to me, exemplify two strands of the debate around AI in education.

Ethan Mollick’s position, in essence, is that AI in education is an inevitable transformation that educators must embrace and redesign around, not fight.

You could say Mollick is an optimist. But he is also really clear-eyed about how much disruption is going on. He even refers to it as the “Homework Apocalypse.” He talks about some serious issues: there are failures in detection, students are not learning as well (with exam performance dropping by about 17%), and there are a lot of misunderstandings about AI on both sides—students and faculty.

But his perspective is more about adapting to a tough situation. He’s always focused on solutions, constantly asking, “What can we do about this?” He believes that with thoughtful human efforts, we can really influence the outcomes positively.

On the other hand, Ian Bogost’s view is that AI has created an unsolvable crisis that’s fundamentally breaking traditional education and leaving teachers demoralized.

Bogost, I would describe as a realist. He accepts the inevitability of AI, noting that the “arms race will continue” and that technology will often outpace official policies. He also highlights the negative impact on faculty morale, the dependency of students, and the chaos in institutions.

He’s not suggesting that we should ban AI or go back to a time before it existed. He sees AI as something that might be the final blow to a profession that’s already struggling with deeper issues. At the same time, he emphasizes the need for human agency by calling out the lack of reflection and action from institutions.

So, they both observe the same reality, but they look at it differently. Mollick sees it as an engineering challenge—one that’s complicated but can be tackled with smart design. On the other hand, Bogost views it as a social issue that uncovers deeper problems that can’t just be fixed with technology.

Mollick thinks it’s possible to rebuild after a sort of collapse, while Bogost questions if the institutions that are supposed to do that rebuilding are really fit for the job.

Applied to Harvard’s StatGPT, Mollick would likely celebrate it as an example of co-intelligence. Bogost would likely ask what the bot’s rollout comes at the expense of, or what deeper problems its deployment unveils.

Getting past the conflict between these two views isn’t just about figuring out the best technical methods or the right order of solutions. The real challenge lies in our ability as institutions to make real changes, and we need to be careful that focusing on solutions doesn’t distract us from the important discussions we need to have.

I see three strategies that work together to create an approach that addresses the conflict between these two perspectives in a way that I believe will be more effective.

First, institutional realism is about designing interventions assuming institutions will resist change, capture innovations, or abandon initiatives. Given this, we could focus on individual teacher practices, learner-level tools, and changes that don’t require systemic transformation. We could treat every implementation as a diagnostic probe revealing actual (vs. stated) institutional capacity.

Second, loss-conscious innovation means that, before implementing AI-enhanced practices, we explicitly identify which human learning processes, relationships, or skills are being replaced. We could develop metrics that track preservation alongside progress. We could build “conservation” components into new approaches to protect irreplaceable educational values.

Third, and finally, we should recognize that Mollick-style solution-building and Bogost-style critical analysis serve different but essential roles. Practitioners need actionable guidance, while the broader field needs diagnostic consciousness. We should avoid a false synthesis and instead maintain both approaches as distinct strands of intellectual work that inform each other.

In short, striking a balance may not be the main focus; it’s more about taking practical actions while considering the overall context. Progress is important, but it’s also worth reflecting on what gets left behind. Conscious adaptation.

So, applying these strategies to Harvard’s chatbot, we could ask: (1) How can we create a feedback loop between an intervention like this and the things it uncovers about institutional limits, so that those can be addressed in the appropriate place? (2) How can we measure what value this bot adds for students and for teachers? What is it replacing, what is it adding, what is it making room for? (3) What critique of learning at Harvard is implied by this intervention?

What does all of this mean, finally, for LXD? This is an LXD conference, so I don’t need to spend a lot of time explaining what it is. But let’s just use this basic definition as a starting point. It’s about experiences, it’s about centering the learner, it’s about achieving learning outcomes, etc.

Comparing my conscious adaptation approach to what typifies LXD, I can see a number of alignments.

Both LXD and Conscious Adaptation prioritize authentic human engagement over efficiency. LXD through human-centered design, conscious adaptation through protecting meaningful intellectual effort from AI displacement.

LXD’s focus on holistic learning journeys aligns with both Mollick’s “effort is the point” and Bogost’s concern that AI shortcuts undermine the educational value embedded in struggle and synthesis.

LXD’s experimental, prototype-driven approach mirrors my “diagnostic pragmatism”—both treat interventions as learning opportunities that reveal what actually works rather than pursuing idealized solutions.

So, going back one final time to Harvard’s bot, an LXD practice aligned in this way would lead us to ask: (1) Is this leveraging GenAI to protect and promote genuine intellectual effort? (2) Are teachers and learners meaningfully engaged in the ongoing development of this technology? (3) Is this prototype properly embedded, so that its potential to create learning for the organization can be realized?

So, where does this leave us as learning experience designers? I see three practical imperatives for Conscious Adaptation.

First, we need to protect meaningful human effort while leveraging AI’s strengths. Remember that “the effort is the point” in learning. Rather than asking “can AI do this?”, we should ask “should it?” Harvard’s bot works because it scaffolds thinking rather than replacing it. We should use AI for feedback and iteration while preserving human work for synthesis and struggle.

Second, we must design for real institutions, not ideal ones. Institutions resist change, capture innovations, and abandon initiatives. We need to design assuming limited budgets, overworked staff, and competing priorities. Every implementation becomes a diagnostic probe that reveals what resistance actually tells us about institutional capacity.

Third, we have to recognize the limits of design. AI exposes deeper structural problems like grade obsession, teacher burnout, and test-driven curricula. You can’t design your way out of systemic issues, and sometimes the best move is recognizing when the problem isn’t experiential at all.

This is Conscious Adaptation—moving forward with eyes wide open.

Thanks.

On the design and regulation of technology

The following is a section from a manuscript in press on the similarities and differences in approaches to explainable and contestable AI in design and law (Schmude et al., 2025). It ended up on the cutting room floor, but it is the kind of thing I find handy to refer back to, so I chose to share it here.

The responsible design of AI, including practices that seek to make AI systems more explainable and contestable, must somehow relate to legislation and regulations. Writing about responsible research and innovation (RRI) more broadly, Stilgoe et al. (2013) assert that RRI, which we would say includes design, must be embedded in regulation. But does it really make sense to think of the relationship between design and regulation in this way? Understood abstractly, there are in fact at least four ways in which we can think about the relationship between the design and regulation of technology (Figure 1).

Figure 1: We see four possible ways that the relationship between (D) the design and (R) regulation of technology can be conceptualized: (1) design and regulation are independent spheres, (2) design and regulation partially overlap, (3) design is embedded inside of regulation, or (4) regulation is embedded inside design. In all cases, we assume an interactional relation between the two spheres.

To establish the relationship between design and regulation, we first need to establish how we should think about regulation, and related concepts such as governance and policymaking more generally. One straightforward definition would be that regulation entails formal rules and enforcement mechanisms that constrain behavior. These are backed by authority—typically state authority, but increasingly also the authority of non-state actors. Regulation and governance are interactive and mutually constitutive. Regulation is one mechanism within governance systems. Governance frameworks establish contexts for regulation. Policymaking connects politics to governance by translating political contestation into actionable frameworks. Politics, then, influences all these domains: policymaking, governance, and regulation. And they, in turn, operate within and reciprocally shape society writ large. See Table 1 for working definitions of ‘regulation’ and associated concepts.

  • Regulation: Formal rules and enforcement mechanisms that constrain behavior, typically state-backed but increasingly emerging from non-state actors (industry self-regulation, transnational regulatory bodies).
  • Governance: Broader arrangements for coordinating social action across public, private, and civil society spheres through both formal and informal mechanisms.
  • Policymaking: The process of formulating courses of action to address public problems.
  • Politics: The contestation of power, interests, and values that shapes governance arrangements.
  • Society: The broader context of social relations, norms, and institutions.
Table 1: Working definitions of ‘regulation’ and associated concepts.

What about design? Scholars of regulation have adopted the notion of ‘regulation by design’ (RBD) to refer to the inscribing of rules into the world through the creation and implementation of technological artifacts. Prifti (2024) identifies two prevailing approaches to RBD: The essentialist view treats RBD as policy enactments, or “rules for design.” By contrast, the functionalist view treats design as a mere instrument, or “ruling by design.” We agree with Prifti when he states that both approaches are limited. Essentialism neglects the complexity of regulatory environments, while functionalism neglects the autonomy and complexity of design as a practice.

Prifti proposes a pragmatist reconstruction that views regulation as a rule-making activity (“regulativity”) performed through social practices including design (the “rule of design”). Design is conceptualized as a contextual, situated social practice that performs changes in the environment, rather than just a tool or set of rules. Unlike the law, markets, or social norms, which rely on incentives and sanctions, design can simply disable the possibility of non-compliance, making it a uniquely powerful form of regulation. The pragmatist approach distinguishes between regulation and governance, with governance being a meta-regulative activity that steers how other practices (like design) regulate. This reconceptualization helps address legitimacy concerns by allowing for greater accountability for design practices that might bypass democratic processes.

Returning to the opening question, then: of the four basic ways in which the relationship between design and regulation can be drawn (Figure 1), if we adopt Prifti’s pragmatist view, Type 3 most accurately captures the relationship, with design being one of a variety of more specific practices through which regulation (understood as regulativity) actually makes changes in the world. (These other forms of regulatory practice are not depicted in the figure.) This aligns with Stilgoe et al.’s aforementioned position that responsible design must be embedded within regulation, although there is a slight nuance to our position: design is always a form of regulation, regardless of any active work on the part of designers to ‘embed’ their work inside regulatory practices. Stilgoe et al.’s admonition is better understood as a normative claim: responsible designers would do well to understand and align their design work with extant laws and regulations. Furthermore, following Prifti, design is beholden to governance and must be reflexively aware of how governance steers its practices (cf. Figure 2).

Figure 2: Conceptual model of the relationship between design, ‘classical’ regulation (i.e., law-making), and governance. Both design and law-making are forms of regulation (i.e., ‘regulativity’). Governance steers how design and law-making regulate, and design and law-making are both accountable to (and reflexively aware of) governance.

Bibliography

  • Prifti, K. (2024). The theory of ‘Regulation By Design’: Towards a pragmatist reconstruction. Technology and Regulation, 2024, 152–166. https://doi.org/10/g9dr24
  • Stilgoe, J., Owen, R., & Macnaghten, P. (2013). Developing a framework for responsible innovation. Research Policy, 42(9), 1568–1580. https://doi.org/10/f5gv8h