Participatory AI and ML engineering

In the first half of this year, I’ve presented several versions of a brief talk on participatory AI. I figured I would post an amalgam of these to the blog for future reference. (Previously, on the blog, I posted a brief lit review on the same topic; this talk builds on that.)

So, to start, the main point of this talk is that many participatory approaches to AI don’t engage deeply with the specifics of the technology. One such specific is the translation work engineers do to make a problem “learnable” by a machine (Kang, 2023). From this perspective, the main questions become: How does translation happen in our specific projects? Should citizens be involved in this translation work? And if so, how do we achieve that?

Before we dig into the state of participatory AI, let’s begin by clarifying why we might want to enable participation in the first place. A common motivation is a lack of democratic control over AI systems. (This is particularly concerning when AI systems are used for government policy execution. These are the systems I mostly look at in my own research.) And so the response is to bring the people into the development process, and to let them co-decide matters.

In these cases, participation can be understood as an enabler of democratic agency, i.e., a way for subjects to legitimate the use of AI systems (cf. Peter, 2020 in Rubel et al., 2021). Peter distinguishes two pathways: a normative one and a democratic one. Participation can be seen as an example of the democratic pathway to legitimation. A crucial detail Peter mentions here, which is often overlooked in participatory AI literature, is that normative constraints must limit the democratic pathway to avoid arbitrariness.

So, what is the state of participatory AI research and practice? I will look at each in turn next.

As mentioned, I previously posted on the state of participatory AI research, so I won’t repeat that in full here. (For the record, I reviewed Birhane et al. (2022), Bratteteig & Verne (2018), Delgado et al. (2023), Ehsan & Riedl (2020), Feffer et al. (2023), Gerdes (2022), Groves et al. (2023), Robertson et al. (2023), Sloane et al. (2020), and Zytko et al. (2022).) Elements that jump out include:

  • Superficial and unrepresentative involvement.
  • Piecemeal approaches that have minimal impact on decision-making.
  • Participants with a consultative role rather than that of active decision-makers.
  • A lack of bridge-builders between stakeholder perspectives.
  • Participation washing and exploitative community involvement.
  • Struggles with the dynamic nature of technology over time.
  • Discrepancies between the time scales on which users evaluate design ideas and the pace at which systems are developed.
  • A demand for participation that enhances community knowledge and actually empowers communities.

Taking a step back, if I were to evaluate the state of the scientific literature on participatory AI, it strikes me that many of these issues are not new to AI; they have been present in participatory design more broadly for some time already. The issues that do seem more particular to AI include system dynamism, the mismatch between the time scales of participation and of development, and the knowledge gaps between the various actors in participatory processes (and, relatedly, the lack of bridge-builders).

So, what about practice? Let’s look at two reports that I feel are a good representation of the broader field: Framework for Meaningful Stakeholder Involvement by ECNL & SocietyInside, and Democratizing AI: Principles for Meaningful Public Participation by Data & Society.

Framework for Meaningful Stakeholder Involvement is aimed at businesses, organizations, and institutions that use AI. It focuses on human rights, ethical assessment, and compliance. It aims to be a tool for planning, delivering, and evaluating stakeholder engagement effectively, emphasizing three core elements: Shared Purpose, Trustworthy Process, and Visible Impact.

Democratizing AI frames public participation in AI development as a way to add legitimacy and accountability and to help prevent harmful impacts. It outlines risks associated with AI, including biased outcomes, opaque decision-making processes, and designers lacking real-world impact awareness. Causes for ineffective participation include unidirectional communication, socioeconomic barriers, superficial engagement, and ineffective third-party involvement. The report uses environmental law as a reference point and offers eight guidelines for meaningful public participation in AI.

Taking stock of these reports, we can say that the building blocks for the overall process are available to those seriously looking. The challenges facing participatory AI are, on the one hand, economic and political; on the other, they relate to the specifics of the technology. For the remainder of this piece, let’s dig into the latter a bit more.

Let’s focus on translation work done by engineers during model development.

For this, I build on work by Kang (2023), which focuses on the qualitative analysis of how phenomena are translated into ML-compatible forms, paying specific attention to the ontological translations that occur in making a problem learnable. Translation in ML means transforming complex qualitative phenomena into quantifiable and computable forms. Multifaceted problems are converted into a “usable quantitative reference” or “ground truth.” This translation is not a mere representation of reality but a reformulation of a problem into mathematical terms, making it understandable and processable by ML algorithms. This transformation involves a significant amount of “ontological dissonance,” as it mediates and often simplifies the complexity of real-world phenomena into a taxonomy or set of classes for ML prediction. The process of translating is based on assumptions and standards that may alter the nature of the ML task and introduce new social and technical problems.
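To make this concrete, here is a minimal, hypothetical sketch in Python. None of it comes from Kang (2023): the phenomenon (“neighborhood livability”), the features, the class taxonomy, and the thresholds are all invented for illustration. The point is simply to show where acts of translation hide in ordinary model-development code.

```python
# Hypothetical example: translating "neighborhood livability" into a
# learnable classification task. Each step below is a translation
# decision, not a neutral fact about the world.

# 1. Selection: of everything "livability" could mean, we keep only
#    what happens to be measurable in available administrative data.
FEATURES = [
    "noise_complaints_per_1k_residents",
    "green_space_m2_per_capita",
    "avg_commute_minutes",
]

# 2. Taxonomy: a continuous, contested phenomenon is collapsed into a
#    small set of classes so a classifier can predict it.
CLASSES = ["low", "medium", "high"]

def to_ground_truth(survey_score: float) -> str:
    """Collapse a resident survey score (0-10) into a class label.

    The cut-off points are assumptions. Moving a threshold from 4.0
    to 5.0 changes which neighborhoods the model learns to call
    "low", which is exactly the kind of value-laden choice that
    could be put to citizens.
    """
    if survey_score < 4.0:
        return "low"
    if survey_score < 7.0:
        return "medium"
    return "high"
```

Nothing here is exotic; the translation usually stays implicit in code like this, which is part of what makes it hard to contest.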

So what? I propose we can use the notion of translation as a frame for ML engineering. Understanding ML model engineering as translation is a potentially useful way to analyze what happens at each step of the process: What gets selected for translation, how the translation is performed, and what the resulting translation consists of.

So, if we seek to make participatory AI engage more with the technical particularities of ML, we could begin by identifying translations that have happened or might happen in our projects. We could then ask to what extent these acts of translation are value-laden. For those that are, we could think about how to communicate them to a lay audience. A particular challenge I expect we will face is determining the meaningful level of abstraction for citizen participation during AI development. We should also ask what the appropriate ‘vehicle’ for citizen participation will be. And we should seek to move beyond small-scale, one-off, often unrepresentative forms of direct participation.
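One possible way to operationalize this, sketched below purely under my own assumptions (the record structure and its fields are mine, not something proposed in the literature), is to keep an explicit log of each act of translation in a project, so the value-laden ones can be surfaced and explained to participants.

```python
from dataclasses import dataclass

@dataclass
class TranslationRecord:
    """Hypothetical record of one act of translation in an ML project."""
    phenomenon: str         # the real-world thing being translated
    proxy: str              # the quantitative stand-in chosen for it
    taxonomy: list[str]     # the classes or scale imposed on it
    assumptions: list[str]  # simplifications and cut-offs made on the way
    value_laden: bool       # does the choice encode contestable values?
    lay_summary: str        # plain-language explanation for participants

# Example entry, continuing the livability sketch above:
livability = TranslationRecord(
    phenomenon="neighborhood livability",
    proxy="resident survey score (0-10)",
    taxonomy=["low", "medium", "high"],
    assumptions=["class thresholds at 4.0 and 7.0"],
    value_laden=True,
    lay_summary="We grade each neighborhood low, medium or high "
                "based on a resident survey.",
)
```

A register like this does not by itself answer the question of the right level of abstraction, but it at least makes the candidate discussion points explicit.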

Bibliography

  • Birhane, A., Isaac, W., Prabhakaran, V., Diaz, M., Elish, M. C., Gabriel, I., & Mohamed, S. (2022). Power to the People? Opportunities and Challenges for Participatory AI. Equity and Access in Algorithms, Mechanisms, and Optimization, 1–8. https://doi.org/10/grnj99
  • Bratteteig, T., & Verne, G. (2018). Does AI make PD obsolete?: Exploring challenges from artificial intelligence to participatory design. Proceedings of the 15th Participatory Design Conference: Short Papers, Situated Actions, Workshops and Tutorial – Volume 2, 1–5. https://doi.org/10/ghsn84
  • Delgado, F., Yang, S., Madaio, M., & Yang, Q. (2023). The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice. Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, 1–23. https://doi.org/10/gs8kvm
  • Ehsan, U., & Riedl, M. O. (2020). Human-Centered Explainable AI: Towards a Reflective Sociotechnical Approach. In C. Stephanidis, M. Kurosu, H. Degen, & L. Reinerman-Jones (Eds.), HCI International 2020—Late Breaking Papers: Multimodality and Intelligence (pp. 449–466). Springer International Publishing. https://doi.org/10/gskmgf
  • Feffer, M., Skirpan, M., Lipton, Z., & Heidari, H. (2023). From Preference Elicitation to Participatory ML: A Critical Survey & Guidelines for Future Research. Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 38–48. https://doi.org/10/gs8kvx
  • Gerdes, A. (2022). A participatory data-centric approach to AI Ethics by Design. Applied Artificial Intelligence, 36(1), 2009222. https://doi.org/10/gs8kt4
  • Groves, L., Peppin, A., Strait, A., & Brennan, J. (2023). Going public: The role of public participation approaches in commercial AI labs. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 1162–1173. https://doi.org/10/gs8kvs
  • Kang, E. B. (2023). Ground truth tracings (GTT): On the epistemic limits of machine learning. Big Data & Society, 10(1), 1–12. https://doi.org/10/gtfgvx
  • Peter, F. (2020). The Grounds of Political Legitimacy. Journal of the American Philosophical Association, 6(3), 372–390. https://doi.org/10/grqfhn
  • Robertson, S., Nguyen, T., Hu, C., Albiston, C., Nikzad, A., & Salehi, N. (2023). Expressiveness, Cost, and Collectivism: How the Design of Preference Languages Shapes Participation in Algorithmic Decision-Making. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1–16. https://doi.org/10/gr6q2t
  • Rubel, A., Castro, C., & Pham, A. K. (2021). Algorithms and autonomy: The ethics of automated decision systems. Cambridge University Press.
  • Sloane, M., Moss, E., Awomolo, O., & Forlano, L. (2020). Participation is not a Design Fix for Machine Learning. arXiv:2007.02423 [cs]. http://arxiv.org/abs/2007.02423
  • Zytko, D., Wisniewski, P. J., Guha, S., Baumer, E. P. S., & Lee, M. K. (2022). Participatory Design of AI Systems: Opportunities and Challenges Across Diverse Users, Relationships, and Application Domains. Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems, 1–4. https://doi.org/10/gs8kv6

Participatory AI literature review

I’ve been thinking a lot about civic participation in machine learning systems development, in particular about involving non-experts in the potentially value-laden translation work engineers perform when they turn specifications into models. Below is a summary of a selection of literature I found on the topic, which may serve as a jumping-off point for future research.

Abstract

The literature on participatory artificial intelligence (AI) reveals a complex landscape marked by challenges and evolving methodologies. Feffer et al. (2023) critique the reduction of participation to computational mechanisms that only approximate narrow moral values. They also note that engagements with stakeholders are often superficial and unrepresentative. Groves et al. (2023) identify significant barriers in commercial AI labs, including high costs, fragmented approaches, exploitation concerns, lack of transparency, and contextual complexities. These barriers lead to a piecemeal approach to participation with minimal impact on decision-making in AI labs. Delgado et al. (2023) observe that participatory AI involves stakeholders mostly in a consultative role without integrating them as active decision-makers throughout the AI design lifecycle.

Gerdes (2022) proposes a data-centric approach to AI ethics and underscores the need for interdisciplinary bridge builders to reconcile different stakeholder perspectives. Robertson et al. (2023) explore participatory algorithm design, emphasizing the need for preference languages that balance expressiveness, cost, and collectivism. Sloane et al. (2020) caution against “participation washing” and the potential for exploitative community involvement. Bratteteig & Verne (2018) highlight AI’s challenges to traditional participatory design (PD) methods, including unpredictable technological changes and a lack of user-oriented evaluation. Birhane et al. (2022) call for a clearer understanding of meaningful participation, advocating a shift towards vibrant, continuous engagement that enhances community knowledge and empowerment. The literature suggests a pressing need for more effective, inclusive, and empowering participatory approaches in AI development.

Bibliography

  1. Birhane, A., Isaac, W., Prabhakaran, V., Diaz, M., Elish, M. C., Gabriel, I., & Mohamed, S. (2022). Power to the People? Opportunities and Challenges for Participatory AI. Equity and Access in Algorithms, Mechanisms, and Optimization, 1–8. https://doi.org/10/grnj99
  2. Bratteteig, T., & Verne, G. (2018). Does AI make PD obsolete?: Exploring challenges from artificial intelligence to participatory design. Proceedings of the 15th Participatory Design Conference: Short Papers, Situated Actions, Workshops and Tutorial – Volume 2, 1–5. https://doi.org/10/ghsn84
  3. Delgado, F., Yang, S., Madaio, M., & Yang, Q. (2023). The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice. Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, 1–23. https://doi.org/10/gs8kvm
  4. Ehsan, U., & Riedl, M. O. (2020). Human-Centered Explainable AI: Towards a Reflective Sociotechnical Approach. In C. Stephanidis, M. Kurosu, H. Degen, & L. Reinerman-Jones (Eds.), HCI International 2020—Late Breaking Papers: Multimodality and Intelligence (pp. 449–466). Springer International Publishing. https://doi.org/10/gskmgf
  5. Feffer, M., Skirpan, M., Lipton, Z., & Heidari, H. (2023). From Preference Elicitation to Participatory ML: A Critical Survey & Guidelines for Future Research. Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 38–48. https://doi.org/10/gs8kvx
  6. Gerdes, A. (2022). A participatory data-centric approach to AI Ethics by Design. Applied Artificial Intelligence, 36(1), 2009222. https://doi.org/10/gs8kt4
  7. Groves, L., Peppin, A., Strait, A., & Brennan, J. (2023). Going public: The role of public participation approaches in commercial AI labs. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 1162–1173. https://doi.org/10/gs8kvs
  8. Robertson, S., Nguyen, T., Hu, C., Albiston, C., Nikzad, A., & Salehi, N. (2023). Expressiveness, Cost, and Collectivism: How the Design of Preference Languages Shapes Participation in Algorithmic Decision-Making. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1–16. https://doi.org/10/gr6q2t
  9. Sloane, M., Moss, E., Awomolo, O., & Forlano, L. (2020). Participation is not a Design Fix for Machine Learning. arXiv:2007.02423 [cs]. http://arxiv.org/abs/2007.02423
  10. Zytko, D., Wisniewski, P. J., Guha, S., Baumer, E. P. S., & Lee, M. K. (2022). Participatory Design of AI Systems: Opportunities and Challenges Across Diverse Users, Relationships, and Application Domains. Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems, 1–4. https://doi.org/10/gs8kv6

‘Playful Design for Workplace Change Management’ at PLAYTrack conference 2017 in Aarhus

Laser defender collab at FUSE

At the end of last year I was invited to speak at the PLAYTrack conference in Aarhus about the workplace change management games made by Hubbub. It turned out to be a great opportunity to reconnect with the play research community.

I was very much impressed by the program assembled by the organisers. People came from a wide range of disciplines and, crucially, there was ample time to discuss and reflect on the materials presented. As I tweeted afterwards, this is a thing that most conference organisers get wrong.

I was particularly inspired by the work of Benjamin Mardell and Mara Krechevsky at Harvard’s Project Zero; their Making Learning Visible looks like a great resource for anyone who teaches. Then there was Reed Stevens from Northwestern University, whose project FUSE is one of the most solid examples of playful learning for STEAM I’ve seen thus far. I was also fascinated by Ciara Laverty’s work at PEDAL on observing parent-child play. Miguel Sicart delivered another great provocation on the dark side of playful design. And finally, I was delighted to hear about and experience for myself some of Amos Blanton’s work at the LEGO Foundation. I should also call out Ben Fincham’s many provocative contributions from the audience.

The abstract for my talk is below, which covers most of what I talked about. I tried to give people a good sense of:

  • what the games consisted of,
  • what we were aiming to achieve,
  • how both the fiction and the player activities supported these goals,
  • how we made learning outcomes visible to our players and clients,
  • and finally how we went about designing and developing these games.

Both projects have solid write-ups over at the Hubbub website, so I’ll just point to those here: Code 4 and Ripple Effect.

In the final section of the talk I spent a bit of time reflecting on how I would approach projects like this today. After all, it has been seven years since we made Code 4, and four years since Ripple Effect. That’s ages ago, and my perspective has definitely changed since we made these.

Participatory design

First of all, I would get even more serious about co-designing with players at every step. I would recruit representatives of players and invest them with real influence. In the projects we did, the primary vehicle for player influence was through playtesting. But this is necessarily limited. I also won’t pretend this is at all easy to do in a commercial context.

But these games are ultimately about improving worker productivity. So how do we make it so that workers share in the real-world profits yielded by a successful culture change?

I know participatory design exists, but in my experience it is not a common approach in the industry. Why?

Value sensitive design

On a related note, I would get more serious about which values the system supports, whose interests they serve, and where they come from. Early field research and workshops with the audience do surface some values, but values from customer representatives tend to dominate. Again, the commercial context we work in is a potential challenge.

I know of value sensitive design, but as with participatory design, it has yet to catch on in a big way in the industry. So again, why is that?

Disintermediation

One thing I continue to be interested in is to reduce the complexity of a game system’s physical affordances (which includes its code), and to push even more of the substance of the game into those social allowances that make up the non-material aspects of the game. This allows for spontaneous renegotiation of the game by the players. This is disintermediation as a strategy. David Kanaga’s take on games as toys remains hugely inspirational in this regard, as does Bernard De Koven’s book The Well Played Game.

Gamefulness versus playfulness

Code 4 had more focus on satisfying the need for autonomy. Ripple Effect had more focus on competence, or in any case, it had less emphasis on autonomy. There was less room for ‘play’ around the core digital game. It seems to me that mastering a subjective simulation of a subject is not necessarily what a workplace game for culture change should be aiming for. So, less gameful design, more playful design.

Adaptation

Finally, the agency model does not enable us to stick around for the long haul. But workplace games might be better suited to a setup where things aren’t thought of as a one-off project but as an ongoing process.

In How Buildings Learn, Stewart Brand talks about how architects should revisit buildings they’ve designed after they are built, to learn how people are actually using them. He also talks about how good buildings are ones their inhabitants can adapt to their needs. What does that look like in the context of a game for workplace culture change?


Playful Design for Workplace Change Management

Code 4 (2011, commissioned by the Tax Administration of the Netherlands) and Ripple Effect (2013, commissioned by Royal Dutch Shell) are both games for workplace change management designed and developed by Hubbub, a boutique playful design agency which operated from Utrecht, The Netherlands and Berlin, Germany between 2009 and 2015. These games are examples of how a goal-oriented serious game can be used to encourage playful appropriation of workplace infrastructure and social norms, resulting in an open-ended and creative exploration of new and innovative ways of working.

Serious game projects are usually commissioned to solve problems. Solving the problem of cultural change in a straightforward manner means viewing games as a way to persuade workers of a desired future state. Such games typically take videogame form, simulating the desired new way of working as determined by management. To play the game well, players need to master its system; learning, it is assumed, happens by extension.

These games can be enjoyable experiences and an improvement on previous forms of workplace learning, but in our view they decrease the possibility space of potential workplace cultural change. They diminish worker agency, and they waste the creative and innovative potential of involving workers in the invention of an improved workplace culture.

We instead choose to view workplace games as an opportunity to increase the space of possibility. We resist the temptation to bake the desired new way of working into the game’s physical and digital affordances. Instead, we leave how to play well up to the players. Since these games are team-based and collaborative, players need to negotiate their way of working around the game among themselves. In addition, because the games are distributed in time—running over a number of weeks—and are playable at player discretion during the workday, players are given license to appropriate workplace infrastructure and subvert social norms towards in-game ends.

We tried to make learning tangible in various ways. Because the games are at their core web applications to which players log on with individual accounts, we were able to collect data on player behaviour. To guarantee privacy, employers did not have direct access to game databases and only received anonymised reports. We took responsibility for player learning by facilitating coaching sessions in which players could safely reflect on their game experiences. Rounding out these efforts, we conducted surveys to gain insight into the player experience from a more qualitative and subjective perspective.
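As an illustration of the kind of anonymised reporting described above, here is a minimal sketch. This is not Hubbub’s actual pipeline; the event format, the team grouping, and the minimum group size are all assumptions made for the example.

```python
from collections import Counter

# Hypothetical raw events: (player_id, team, action).
events = [
    ("p01", "alpha", "mission_completed"),
    ("p02", "alpha", "hint_requested"),
    ("p03", "beta", "mission_completed"),
]

# Assumed threshold: suppress teams so small that individuals
# could be re-identified from aggregate counts.
MIN_TEAM_SIZE = 5

def anonymised_report(events):
    """Aggregate per-player events into team-level action counts.

    Player identifiers never leave this function, and teams with
    fewer than MIN_TEAM_SIZE distinct players are suppressed.
    """
    players_per_team = {}
    counts = Counter()
    for player, team, action in events:
        players_per_team.setdefault(team, set()).add(player)
        counts[(team, action)] += 1
    return {
        (team, action): n
        for (team, action), n in counts.items()
        if len(players_per_team[team]) >= MIN_TEAM_SIZE
    }
```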

These games offer a model for a reasonably democratic and ethical way of doing game-based workplace change management. However, we would like to see efforts that further democratise their design and development—involving workers at every step. We also worry about how games can be used to create the illusion of worker influence while at the same time software is deployed throughout the workplace to limit their agency.

Our examples may be inspiring but because of these developments we feel we can’t continue this type of work without seriously reconsidering our current processes, technology stacks and business practices—and ultimately whether we should be making games at all.

Game player needs and designing architectures of participation

How do you create a corporate environment in which people share knowledge out of free will?[1] This is a question my good friends of Wemind[2] are working to answer for their clients on a daily basis.[3] We’ve recently decided to collaboratively develop methods useful for the design of a participatory context in the workplace. Our idea is that since knowledge sharing is essentially about people interacting in a context, we’ll apply interaction design methods to the problem. Of course, some methods will be more suited to the problem than others, and all will need to be made specific for them to really work. That’s the challenge.

Naturally I will be looking for inspiration in game design theory. This gives me a good reason to blog about the PENS model. I read about this in an excellent Gamasutra article titled Rethinking Carrots: A New Method For Measuring What Players Find Most Rewarding and Motivating About Your Game. The creators of this model[4] wanted to better understand what fundamentally motivates game players, as well as to come up with a practical play testing model. What they’ve come up with is intriguing: they’ve demonstrated that to offer a fun experience, a game has to satisfy certain basic human psychological needs: competence, autonomy and relatedness.[5]

I urge anyone interested in what makes games work their magic to read this article. It’s really enlightening. The cool thing about this model is that it provides a deeper vocabulary for talking about games.[6] In the article’s conclusion the authors note the same, and point out that by using this vocabulary we can move beyond creating games that are ‘mere’ entertainment. They mention serious games as an obvious area of application; I can think of many more (3C products, for instance). But I plan on applying this understanding of game player needs to the design of architectures of participation. Wish me luck.
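As a toy illustration of how a PENS-style instrument might be used in play testing, the sketch below averages Likert-scale survey items into the three need-satisfaction scores. The actual PENS questionnaire belongs to Immersyve; the items and scoring here are invented stand-ins.

```python
from statistics import mean

# Hypothetical mapping of survey items (1-7 Likert scale) to
# PENS-style subscales; the real instrument's items differ.
SUBSCALES = {
    "competence": ["felt_capable", "challenge_matched_skill"],
    "autonomy": ["chose_own_goals", "options_felt_meaningful"],
    "relatedness": ["felt_connected", "play_felt_social"],
}

def score_needs(responses):
    """Average each subscale's items into one score per need."""
    return {
        need: mean(responses[item] for item in items)
        for need, items in SUBSCALES.items()
    }

# One play tester's (made-up) responses:
print(score_needs({
    "felt_capable": 6, "challenge_matched_skill": 5,
    "chose_own_goals": 3, "options_felt_meaningful": 4,
    "felt_connected": 7, "play_felt_social": 6,
}))
# {'competence': 5.5, 'autonomy': 3.5, 'relatedness': 6.5}
```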

  1. Traditionally, sharing knowledge in large organisations is explicitly rewarded in some way. Arguably, true knowledge can only be shared voluntarily.
  2. Who have been so kind to offer me some free office space, Wi-Fi and coffee since my arrival in Copenhagen.
  3. They are particularly focused on the value of social software in this equation.
  4. Scott Rigby and Richard Ryan of Immersyve.
  5. To nuance this, the degree to which a player expects each need to be satisfied varies from game genre to genre.
  6. Similar to the work of Koster and of Salen & Zimmerman.