Participatory AI and ML engineering

In the first half of this year, I’ve presented several versions of a brief talk on participatory AI. I figured I would post an amalgam of these to the blog for future reference. (Previously, on the blog, I posted a brief lit review on the same topic; this talk builds on that.)

So, to start, the main point of this talk is that many participatory approaches to AI don’t engage deeply with the specifics of the technology. One such specific is the translation work engineers do to make a problem “learnable” by a machine (Kang, 2023). From this perspective, the main questions become: How does translation happen in our specific projects? Should citizens be involved in this translation work? And if so, how do we achieve that?

Before we dig into the state of participatory AI, let’s begin by clarifying why we might want to enable participation in the first place. A common motivation is a lack of democratic control over AI systems. (This is particularly concerning when AI systems are used for government policy execution. These are the systems I mostly look at in my own research.) And so the response is to bring the people into the development process, and to let them co-decide matters.

In these cases, participation can be understood as an enabler of democratic agency, i.e., a way for subjects to legitimate the use of AI systems (cf. Peter, 2020 in Rubel et al., 2021). Peter distinguishes two pathways: a normative one and a democratic one. Participation can be seen as an example of the democratic pathway to legitimation. A crucial detail Peter mentions here, which is often overlooked in participatory AI literature, is that normative constraints must limit the democratic pathway to avoid arbitrariness.

So, what is the state of participatory AI research and practice? I will look at each in turn next.

As mentioned, I previously posted on the state of participatory AI research, so I won’t repeat that in full here. (For the record, I reviewed Birhane et al. (2022), Bratteteig & Verne (2018), Delgado et al. (2023), Ehsan & Riedl (2020), Feffer et al. (2023), Gerdes (2022), Groves et al. (2023), Robertson et al. (2023), Sloane et al. (2020), and Zytko et al. (2022).) Elements that jump out include:

  • Superficial and unrepresentative involvement.
  • Piecemeal approaches that have minimal impact on decision-making.
  • Participants with a consultative role rather than that of active decision-makers.
  • A lack of bridge-builders between stakeholder perspectives.
  • Participation washing and exploitative community involvement.
  • Struggles with the dynamic nature of technology over time.
  • Discrepancies between the time scales for users to evaluate design ideas versus the pace at which systems are developed.
  • A demand for participation that enhances community knowledge and actually empowers communities.

Taking a step back to evaluate the state of the scientific literature on participatory AI, it strikes me that many of these issues are not new to AI; they have been present in participatory design more broadly for some time. Nor are all of them specific to AI. The ones I would single out as distinctive include the issues related to AI system dynamism, the mismatch between the time scales of participation and those of development, and the knowledge gaps between the various actors in participatory processes (and, relatedly, the lack of bridge-builders).

So, what about practice? Let’s look at two reports that I feel are a good representation of the broader field: Framework for Meaningful Stakeholder Involvement by ECNL & SocietyInside, and Democratizing AI: Principles for Meaningful Public Participation by Data & Society.

Framework for Meaningful Stakeholder Involvement is aimed at businesses, organizations, and institutions that use AI. It focuses on human rights, ethical assessment, and compliance. It aims to be a tool for planning, delivering, and evaluating stakeholder engagement effectively, emphasizing three core elements: Shared Purpose, Trustworthy Process, and Visible Impact.

Democratizing AI frames public participation in AI development as a way to add legitimacy and accountability and to help prevent harmful impacts. It outlines risks associated with AI, including biased outcomes, opaque decision-making processes, and designers lacking real-world impact awareness. Causes for ineffective participation include unidirectional communication, socioeconomic barriers, superficial engagement, and ineffective third-party involvement. The report uses environmental law as a reference point and offers eight guidelines for meaningful public participation in AI.

Taking stock of these reports, we can say that the building blocks for the overall process are available to those seriously looking. The challenges facing participatory AI are, on the one hand, economic and political. On the other hand, they are related to the specifics of the technology at hand. For the remainder of this piece, let’s dig into the latter a bit more.

Let’s focus on translation work done by engineers during model development.

For this, I build on work by Kang (2023), which focuses on the qualitative analysis of how phenomena are translated into ML-compatible forms, paying specific attention to the ontological translations that occur in making a problem learnable. Translation in ML means transforming complex qualitative phenomena into quantifiable and computable forms. Multifaceted problems are converted into a “usable quantitative reference” or “ground truth.” This translation is not a mere representation of reality but a reformulation of a problem into mathematical terms, making it understandable and processable by ML algorithms. This transformation involves a significant amount of “ontological dissonance,” as it mediates and often simplifies the complexity of real-world phenomena into a taxonomy or set of classes for ML prediction. The process of translating is based on assumptions and standards that may alter the nature of the ML task and introduce new social and technical problems.
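To make this notion of translation concrete, here is a hypothetical sketch of how nuanced citizen reports might be flattened into a fixed label taxonomy; the reports, classes, and mapping rules are all invented for illustration:

```python
# Hypothetical illustration of "translation": multifaceted reports are
# reduced to a fixed label taxonomy so an ML model can learn from them.

# Free-text citizen reports about road conditions (a qualitative phenomenon).
reports = [
    "Deep hole near the bike lane, dangerous when it rains",
    "Surface is cracked but still drivable",
    "Road markings faded, hard to see at night",
]

# The engineer's taxonomy: everything must become one of these classes.
TAXONOMY = {"pothole", "crack", "other"}

def translate(report: str) -> str:
    """Map a nuanced report onto the taxonomy (lossy by design)."""
    text = report.lower()
    if "hole" in text:
        return "pothole"
    if "crack" in text:
        return "crack"
    # Danger context, visibility issues, urgency, etc. are all discarded.
    return "other"

ground_truth = [translate(r) for r in reports]
print(ground_truth)  # → ['pothole', 'crack', 'other']
```

Note how the “ground truth” produced this way is not a neutral representation: the choice of classes and of mapping rules decides which aspects of the phenomenon survive into the learnable problem.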

So what? I propose we can use the notion of translation as a frame for ML engineering. Understanding ML model engineering as translation is a potentially useful way to analyze what happens at each step of the process: What gets selected for translation, how the translation is performed, and what the resulting translation consists of.

So, if we seek to make participatory AI engage more with the technical particularities of ML, we could begin by identifying translations that have happened or might happen in our projects. We could then ask to what extent these acts of translation are value-laden, and, for those that are, how to communicate them to a lay audience. A particular challenge I expect we will face is determining the meaningful level of abstraction for citizen participation during AI development. We should also ask what the appropriate ‘vehicle’ for citizen participation will be. And we should seek to move beyond small-scale, one-off, often unrepresentative forms of direct participation.

Bibliography

  • Birhane, A., Isaac, W., Prabhakaran, V., Diaz, M., Elish, M. C., Gabriel, I., & Mohamed, S. (2022). Power to the People? Opportunities and Challenges for Participatory AI. Equity and Access in Algorithms, Mechanisms, and Optimization, 1–8. https://doi.org/10/grnj99
  • Bratteteig, T., & Verne, G. (2018). Does AI make PD obsolete?: Exploring challenges from artificial intelligence to participatory design. Proceedings of the 15th Participatory Design Conference: Short Papers, Situated Actions, Workshops and Tutorial – Volume 2, 1–5. https://doi.org/10/ghsn84
  • Delgado, F., Yang, S., Madaio, M., & Yang, Q. (2023). The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice. Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, 1–23. https://doi.org/10/gs8kvm
  • Ehsan, U., & Riedl, M. O. (2020). Human-Centered Explainable AI: Towards a Reflective Sociotechnical Approach. In C. Stephanidis, M. Kurosu, H. Degen, & L. Reinerman-Jones (Eds.), HCI International 2020—Late Breaking Papers: Multimodality and Intelligence (pp. 449–466). Springer International Publishing. https://doi.org/10/gskmgf
  • Feffer, M., Skirpan, M., Lipton, Z., & Heidari, H. (2023). From Preference Elicitation to Participatory ML: A Critical Survey & Guidelines for Future Research. Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 38–48. https://doi.org/10/gs8kvx
  • Gerdes, A. (2022). A participatory data-centric approach to AI Ethics by Design. Applied Artificial Intelligence, 36(1), 2009222. https://doi.org/10/gs8kt4
  • Groves, L., Peppin, A., Strait, A., & Brennan, J. (2023). Going public: The role of public participation approaches in commercial AI labs. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 1162–1173. https://doi.org/10/gs8kvs
  • Kang, E. B. (2023). Ground truth tracings (GTT): On the epistemic limits of machine learning. Big Data & Society, 10(1), 1–12. https://doi.org/10/gtfgvx
  • Peter, F. (2020). The Grounds of Political Legitimacy. Journal of the American Philosophical Association, 6(3), 372–390. https://doi.org/10/grqfhn
  • Robertson, S., Nguyen, T., Hu, C., Albiston, C., Nikzad, A., & Salehi, N. (2023). Expressiveness, Cost, and Collectivism: How the Design of Preference Languages Shapes Participation in Algorithmic Decision-Making. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1–16. https://doi.org/10/gr6q2t
  • Rubel, A., Castro, C., & Pham, A. K. (2021). Algorithms and autonomy: The ethics of automated decision systems. Cambridge University Press.
  • Sloane, M., Moss, E., Awomolo, O., & Forlano, L. (2020). Participation is not a Design Fix for Machine Learning. arXiv:2007.02423 [Cs]. http://arxiv.org/abs/2007.02423
  • Zytko, D., Wisniewski, P. J., Guha, S., Baumer, E. P. S., & Lee, M. K. (2022). Participatory Design of AI Systems: Opportunities and Challenges Across Diverse Users, Relationships, and Application Domains. Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems, 1–4. https://doi.org/10/gs8kv6

Democratizing AI Through Continuous Adaptability: The Role of DevOps

Below are the abstract and slides for my contribution to the TILTing Perspectives 2024 panel “The mutual shaping of democratic practices & AI,” moderated by Merel Noorman.

Slides

Abstract

Contestability

This presentation delves into democratizing artificial intelligence (AI) systems through contestability. Contestability refers to the ability of AI systems to remain open and responsive to disputes throughout their lifecycle. It approaches AI systems as arenas where groups compete for power over designs and outcomes.

Autonomy, democratic agency, legitimation

We identify contestability as a critical system quality for respecting people’s autonomy, including their democratic agency: their ability to legitimate policies, among them policies enacted by AI systems.

For a decision to be legitimate, it must be democratically willed or rely on “normative authority.” The democratic pathway should be constrained by normative bounds to avoid arbitrariness. The appeal to authority should meet the “access constraint,” which ensures citizens can form beliefs about policies with a sufficient degree of agency (Peter, 2020 in Rubel et al., 2021).

Contestability is the quality that ensures mechanisms are in place for subjects to exercise their democratic agency. In the case of an appeal to normative authority, contestability mechanisms are how subjects and their representatives gain access to the information that will enable them to evaluate its justifiability. In this way, contestability satisfies the access constraint. In the case of democratic will, contestability-by-design practices are how system development is democratized. The autonomy account of legitimation adds the normative constraints that should bind this democratic pathway.

Himmelreich (2022) similarly argues that only a “thick” conception of democracy will address some of the current shortcomings of AI development. This is a pathway that not only allows for participation but also includes deliberation over justifications.

The agonistic arena

Elsewhere, we have proposed the Agonistic Arena as a metaphor for thinking about the democratization of AI systems (Alfrink et al., 2024). Contestable AI embodies the generative metaphor of the Arena. This metaphor characterizes public AI as a space where interlocutors embrace conflict as productive. Seen through the lens of the Arena, public AI problems stem from a need for opportunities for adversarial interaction between stakeholders.

This metaphorical framing suggests prescriptions for making the norms and procedures that shape the following more contentious and open to dispute:

  1. AI system design decisions on a global level, and
  2. human-AI system output decisions on a local level (i.e., individual decision outcomes), establishing new dialogical feedback loops between stakeholders that ensure continuous monitoring.

The Arena metaphor encourages a design ethos of revisability and reversibility so that AI systems embody the agonistic ideal of contingency.

Post-deployment malleability, feedback-ladenness

Unlike physical systems, AI technologies exhibit a unique malleability post-deployment.

For example, LLM chatbots optimize their performance based on a variety of feedback sources, from interactions with users to crowd-sourced data work.

Because of this open-endedness, democratic control and oversight in the operations phase of the system’s lifecycle become a particular concern.

This is a concern because while AI systems are dynamic and feedback-laden (Gilbert et al., 2023), many of the existing oversight and control measures are static, one-off exercises that struggle to track systems as they evolve over time.

DevOps

The field of DevOps is pivotal in this context. DevOps focuses on system instrumentation for enhanced monitoring and control in service of continuous improvement. Typically, metrics for DevOps and its machine-learning-specific offshoot, MLOps, emphasize technical performance and business objectives.

However, there is scope to expand these to include matters of public concern. The matters-of-concern perspective shifts the focus to issues such as fairness and discrimination, viewing them as challenges that cannot be resolved through universal methods with absolute certainty. Rather, it highlights how standards are locally negotiated within specific institutional contexts, emphasizing that such standards are never guaranteed (Lampland & Star, 2009; Geiger et al., 2023).

MLOps Metrics

In the context of machine learning systems, technical metrics focus on model accuracy. For example, a financial services company might use the area under the receiver operating characteristic curve (AUC-ROC) to continuously monitor and maintain the performance of its fraud detection model in production.
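As a concrete sketch of what such monitoring computes, the following implements AUC-ROC via its rank-statistic formulation (the probability that a randomly chosen positive case is scored above a randomly chosen negative one); the labels and scores are invented:

```python
def auc_roc(labels, scores):
    """AUC-ROC as a rank statistic: P(score of random positive >
    score of random negative), with ties counted as half."""
    pairs = 0
    higher = 0.0
    for li, si in zip(labels, scores):
        for lj, sj in zip(labels, scores):
            if li == 1 and lj == 0:  # one positive-negative pair
                pairs += 1
                if si > sj:
                    higher += 1
                elif si == sj:
                    higher += 0.5
    return higher / pairs

# Fraud labels (1 = fraud) and model scores for a batch of transactions.
labels = [0, 0, 1, 1, 0, 1]
scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9]

print(round(auc_roc(labels, scores), 3))  # → 0.889
```

In production one would use a library implementation and compute this on rolling windows of recent, labeled transactions, alerting when the value degrades.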

Business metrics focus on cost-benefit analyses. For example, a bank might use a cost-benefit matrix to balance the potential revenue from approving a loan against the risk of default, ensuring that the overall profitability of their loan portfolio is optimized.
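A minimal sketch of such a cost-benefit matrix, with all monetary figures invented, shows how a model’s estimated default probability translates into an expected-value decision:

```python
# Hypothetical cost-benefit matrix for loan approval (all figures invented).
# Keys: (decision, actual outcome) → profit or loss for the bank.
BENEFIT = {
    ("approve", "repays"):   300.0,    # interest earned
    ("approve", "defaults"): -5000.0,  # principal lost
    ("reject",  "repays"):   -300.0,   # opportunity cost
    ("reject",  "defaults"): 0.0,      # loss avoided
}

def expected_value(p_default: float, decision: str) -> float:
    """Expected profit of a decision given the model's default probability."""
    return ((1 - p_default) * BENEFIT[(decision, "repays")]
            + p_default * BENEFIT[(decision, "defaults")])

def decide(p_default: float) -> str:
    """Pick the decision with the higher expected value."""
    return max(("approve", "reject"), key=lambda d: expected_value(p_default, d))

print(decide(0.02))  # → approve (low risk)
print(decide(0.30))  # → reject (high risk)
```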

Drift

These metrics can be monitored over time to detect “drift” between a model and the world. Training sets are static; reality is dynamic and changes over time. Drift occurs when the nature of new input data diverges from the data a model was trained on. A change in performance metrics can be used to alert system operators, who then investigate and decide on a course of action, e.g., retraining the model on updated data. This, in effect, creates a feedback loop between the system in use and its ongoing development.
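One common way to operationalize drift detection is the Population Stability Index (PSI), which compares the distribution of a feature in live data against a training-time reference. The sketch below uses invented data and the conventional rule-of-thumb alert threshold of roughly 0.2:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference sample (training time)
    and a live sample, using equal-width bins over the reference range."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def histogram(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[max(i, 0)] += 1
        # Smooth empty bins to keep the logarithm finite.
        return [(c or 0.5) / len(xs) for c in counts]

    e, a = histogram(expected), histogram(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

reference = [i / 100 for i in range(100)]           # training-time feature values
live_same = [i / 100 for i in range(100)]           # no drift
live_shifted = [0.5 + i / 200 for i in range(100)]  # distribution moved right

print(psi(reference, live_same) < 0.2)     # → True: no alert
print(psi(reference, live_shifted) > 0.2)  # → True: alert operators
```

The same machinery can track model outputs (prediction drift) as well as inputs; the choice of monitored quantities is exactly where public-interest metrics could enter.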

An expansion of these practices in the interest of contestability would require:

  1. setting different metrics,
  2. exposing these metrics to additional audiences, and
  3. establishing feedback loops with the processes that govern models and the systems they are embedded in.

Example 1: Camera Cars

Let’s say a city government uses a camera-equipped vehicle and a computer vision model to detect potholes in public roads. In addition to accuracy and a favorable cost-benefit ratio, citizens, and road users in particular, may care about the time between a detected pothole and its fixing. Or, they may care about the distribution of potholes across the city. Furthermore, when road maintenance appears to be degrading, this should be taken up with department leadership, the responsible alderperson, and council members.
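These public-interest metrics could be computed from ordinary operational records. Below is a minimal sketch; the district names, dates, and record format are all invented for illustration:

```python
from collections import Counter
from datetime import date
from statistics import mean

# Hypothetical pothole records: (district, detected, fixed).
records = [
    ("north", date(2024, 3, 1), date(2024, 3, 8)),
    ("north", date(2024, 3, 2), date(2024, 3, 5)),
    ("south", date(2024, 3, 1), date(2024, 3, 21)),
]

# Public-interest metric 1: mean days between detection and repair.
mean_days_to_fix = mean((fixed - detected).days for _, detected, fixed in records)

# Public-interest metric 2: distribution of detected potholes across districts.
by_district = Counter(district for district, _, _ in records)

print(mean_days_to_fix)   # → 10
print(dict(by_district))  # → {'north': 2, 'south': 1}
```

Published per district over time, numbers like these are what would let residents, and ultimately council members, see whether maintenance is degrading.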

Example 2: EV Charging

Or, let’s say the same city government uses an algorithmic system to optimize public electric vehicle (EV) charging stations for green energy use by adapting charging speeds to expected sun and wind. EV drivers may want to know how much energy has been shifted to greener time windows and how that amount trends over time. Without such visibility into a system’s actual goal achievement, citizens’ ability to legitimate its use suffers. As I have already mentioned, democratic agency, when enacted via the appeal to authority, depends on access to “normative facts” that underpin policies. And finally, professed system functionality must be demonstrated as well (Raji et al., 2022).
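Such a metric is straightforward to derive from charging logs. The sketch below assumes an invented log format and a fixed “green” window; a real system would use an energy-mix forecast instead:

```python
# Hypothetical charging log: (hour_of_day, kWh delivered) — figures invented.
sessions = [(2, 6.0), (9, 4.0), (12, 10.0), (13, 8.0), (22, 5.0)]

# Assume hours 10–15 form the "green" window (high expected solar output).
GREEN_HOURS = range(10, 16)

green_kwh = sum(kwh for hour, kwh in sessions if hour in GREEN_HOURS)
total_kwh = sum(kwh for _, kwh in sessions)

# A metric a city could publish so citizens can judge goal achievement:
green_share = green_kwh / total_kwh
print(f"{green_share:.0%}")  # → 55%
```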

DevOps as sociotechnical leverage point for democratizing AI

These brief examples show that the DevOps approach is a potential sociotechnical leverage point. It offers pathways for democratizing AI system design, development, and operations.

DevOps can be adapted to further contestability. It creates new channels between human and machine actors. One of DevOps’s essential activities is monitoring (Smith, 2020), which presupposes fallibility, a necessary precondition for contestability. Finally, it requires and provides infrastructure for technical flexibility so that recovery from error is low-cost and continuous improvement becomes practically feasible.

The mutual shaping of democratic practices & AI

Zooming out further, let’s reflect on this panel’s overall theme, picking out three elements: legitimation, representation of marginalized groups, and dealing with conflict and contestation after implementation and during use.

Contestability is a lever for demanding justifications from operators, which is a necessary input for legitimation by subjects (Henin & Le Métayer, 2022). Contestability frames different actors’ stances as adversarial positions on a political field rather than “equally valid” perspectives (Scott, 2023). And finally, relations, monitoring, and revisability are all ways to give voice to and enable responsiveness to contestations (Genus & Stirling, 2018).

And again, all of these things can be furthered in the post-deployment phase by adapting the DevOps lens.

Bibliography

  • Alfrink, K., Keller, I., Kortuem, G., & Doorn, N. (2022). Contestable AI by Design: Towards a Framework. Minds and Machines, 33(4), 613–639. https://doi.org/10/gqnjcs
  • Alfrink, K., Keller, I., Yurrita Semperena, M., Bulygin, D., Kortuem, G., & Doorn, N. (2024). Envisioning Contestability Loops: Evaluating the Agonistic Arena as a Generative Metaphor for Public AI. She Ji: The Journal of Design, Economics, and Innovation, 10(1), 53–93. https://doi.org/10/gtzwft
  • Geiger, R. S., Tandon, U., Gakhokidze, A., Song, L., & Irani, L. (2023). Making Algorithms Public: Reimagining Auditing From Matters of Fact to Matters of Concern. International Journal of Communication, 18(0), Article 0.
  • Genus, A., & Stirling, A. (2018). Collingridge and the dilemma of control: Towards responsible and accountable innovation. Research Policy, 47(1), 61–69. https://doi.org/10/gcs7sn
  • Gilbert, T. K., Lambert, N., Dean, S., Zick, T., Snoswell, A., & Mehta, S. (2023). Reward Reports for Reinforcement Learning. Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 84–130. https://doi.org/10/gs9cnh
  • Henin, C., & Le Métayer, D. (2022). Beyond explainability: Justifiability and contestability of algorithmic decision systems. AI & SOCIETY, 37(4), 1397–1410. https://doi.org/10/gmg8pf
  • Himmelreich, J. (2022). Against “Democratizing AI.” AI & SOCIETY. https://doi.org/10/gr95d5
  • Lampland, M., & Star, S. L. (Eds.). (2009). Standards and Their Stories: How Quantifying, Classifying, and Formalizing Practices Shape Everyday Life (1st edition). Cornell University Press.
  • Peter, F. (2020). The Grounds of Political Legitimacy. Journal of the American Philosophical Association, 6(3), 372–390. https://doi.org/10/grqfhn
  • Raji, I. D., Kumar, I. E., Horowitz, A., & Selbst, A. (2022). The Fallacy of AI Functionality. 2022 ACM Conference on Fairness, Accountability, and Transparency, 959–972. https://doi.org/10/gqfvf5
  • Rubel, A., Castro, C., & Pham, A. K. (2021). Algorithms and autonomy: The ethics of automated decision systems. Cambridge University Press.
  • Scott, D. (2023). Diversifying the Deliberative Turn: Toward an Agonistic RRI. Science, Technology, & Human Values, 48(2), 295–318. https://doi.org/10/gpk2pr
  • Smith, J. D. (2020). Operations anti-patterns, DevOps solutions. Manning Publications.
  • Treveil, M. (2020). Introducing MLOps: How to scale machine learning in the enterprise (First edition). O’Reilly.

PhD update – June 2024

I am writing this final PhD update as a freshly minted doctor. On Thursday, May 23, 2024, I successfully defended my thesis, ‘Contestable Artificial Intelligence: Constructive Design Research for Public Artificial Intelligence Systems that are Open and Responsive to Dispute.’

I started the PhD on September 1, 2018 (read the very first update posted on that day here). So, that’s five years, eight months, 23 days from start to finish. It has been quite the journey, and I feel happy and relieved to have completed it. I am proud of the work embodied in the thesis. Most of all, I am thankful for the transformative learning experience, none of which would have been possible without the support of my supervisors Gerd, Ianus, and Neelke.

On the day itself, I was honored to have as my external committee members professors Dignum, Löwgren, van Zoonen, and van de Poel, professor Voûte as the chair, and Joost and Mireia as my paranymphs.

The thesis PDF can be downloaded at the TU Delft repository, and a video of the proceedings is available on YouTube.

Me, with a copy of the thesis, shortly before starting the layperson’s talk. Photo: Roy Borghouts.

Recent events

Reviewing my notes since the last update, below are some more notable things that happened in the past eight months.

  • I ran a short workshop on AI Pedagogy Through A Design Lens, together with Hosana Morales, at the TU Delft spring symposium on AI education. Read the post.
  • A story about my research was published on the TU Delft industrial design engineering website in the run-up to my defense on May 14, 2024. Read the story.
  • I updated and ran the fifth and final iteration of the AI & Society industrial design engineering master elective course from February 28 through April 10, 2024. A previous version is documented here, which I plan to update sometime in the near future.
  • I gave a talk titled Contestable AI: Designing for Human Autonomy at the Amsterdam UX meetup on February 21, 2024. Download the slides.
  • The outcomes of a design sprint on tools for third-party scrutiny, organized by the Responsible Sensing Lab, which took inspiration from my research, were published on December 7, 2023. Read the report.
  • I was interviewed by Mireia Yurrita Semperena for a DCODE podcast episode titled Beyond Values in Algorithmic Design, published November 6, 2023. Listen to the episode.
  • Together with Claudio Sarra and Marco Almada, I hosted an online seminar titled Building Contestable Systems on October 26, 2023. Read the thread.
  • I was a panelist at the Design & AI Symposium 2023 on October 18, 2023.
  • A paper I co-authored titled When ‘Doing Ethics’ Meets Public Procurement of Smart City Technology – an Amsterdam Case Study, was presented by first author Mike de Kreek at IASDR 2023 on October 9-13. Read the paper.

Looking ahead

I will continue at TU Delft as a postdoctoral researcher and will stay focused on design, AI, and politics, but I will try to evolve my research into something that builds on my thesis work but adds a new angle.

The Envisioning Contestability Loops article mentioned in previous updates is now in press with She Ji, which I am very pleased about. It should be published “soon.”

Upcoming appearances include a brief talk on participatory AI at a Cities Coalition for Digital Rights event and a presentation as part of a panel on The Mutual Shaping Of Democratic Practices And AI at TILTing Perspectives 2024.

That’s it for this final PhD update. I will probably continue these posts under a new title. We’ll see.

AI pedagogy through a design lens

At a TU Delft spring symposium on AI education, Hosana and I ran a short workshop titled “AI pedagogy through a design lens.” In it, we identified some of the challenges facing AI teaching, particularly outside of computer science, and explored how design pedagogy, particularly the practices of studios and making, may help to address them. The AI & Society master elective I’ve been developing and teaching over the past five years served as a case study. The session was punctuated by brief brainstorming using an adapted version of the SQUID gamestorming technique. Below are the slides we used.

Participatory AI literature review

I’ve been thinking a lot about civic participation in machine learning systems development, in particular about involving non-experts in the potentially value-laden translation work engineers do when they turn specifications into models. Below is a summary of a selection of literature I found on the topic, which may serve as a jumping-off point for future research.

Abstract

The literature on participatory artificial intelligence (AI) reveals a complex landscape marked by challenges and evolving methodologies. Feffer et al. (2023) critique the reduction of participation to computational mechanisms that only approximate narrow moral values. They also note that engagements with stakeholders are often superficial and unrepresentative. Groves et al. (2023) identify significant barriers in commercial AI labs, including high costs, fragmented approaches, exploitation concerns, lack of transparency, and contextual complexities. These barriers lead to a piecemeal approach to participation with minimal impact on decision-making in AI labs. Delgado et al. (2023) observe that participatory AI involves stakeholders mostly in a consultative role without integrating them as active decision-makers throughout the AI design lifecycle.

Gerdes (2022) proposes a data-centric approach to AI ethics and underscores the need for interdisciplinary bridge builders to reconcile different stakeholder perspectives. Robertson et al. (2023) explore participatory algorithm design, emphasizing the need for preference languages that balance expressiveness, cost, and collectivism. Sloane et al. (2020) caution against “participation washing” and the potential for exploitative community involvement. Bratteteig & Verne (2018) highlight AI’s challenges to traditional participatory design (PD) methods, including unpredictable technological changes and a lack of user-oriented evaluation. Birhane et al. (2022) call for a clearer understanding of meaningful participation, advocating for a shift towards vibrant, continuous engagement that enhances community knowledge and empowerment. The literature suggests a pressing need for more effective, inclusive, and empowering participatory approaches in AI development.

Bibliography

  1. Birhane, A., Isaac, W., Prabhakaran, V., Diaz, M., Elish, M. C., Gabriel, I., & Mohamed, S. (2022). Power to the People? Opportunities and Challenges for Participatory AI. Equity and Access in Algorithms, Mechanisms, and Optimization, 1–8. https://doi.org/10/grnj99
  2. Bratteteig, T., & Verne, G. (2018). Does AI make PD obsolete?: Exploring challenges from artificial intelligence to participatory design. Proceedings of the 15th Participatory Design Conference: Short Papers, Situated Actions, Workshops and Tutorial – Volume 2, 1–5. https://doi.org/10/ghsn84
  3. Delgado, F., Yang, S., Madaio, M., & Yang, Q. (2023). The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice. Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, 1–23. https://doi.org/10/gs8kvm
  4. Ehsan, U., & Riedl, M. O. (2020). Human-Centered Explainable AI: Towards a Reflective Sociotechnical Approach. In C. Stephanidis, M. Kurosu, H. Degen, & L. Reinerman-Jones (Eds.), HCI International 2020—Late Breaking Papers: Multimodality and Intelligence (pp. 449–466). Springer International Publishing. https://doi.org/10/gskmgf
  5. Feffer, M., Skirpan, M., Lipton, Z., & Heidari, H. (2023). From Preference Elicitation to Participatory ML: A Critical Survey & Guidelines for Future Research. Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 38–48. https://doi.org/10/gs8kvx
  6. Gerdes, A. (2022). A participatory data-centric approach to AI Ethics by Design. Applied Artificial Intelligence, 36(1), 2009222. https://doi.org/10/gs8kt4
  7. Groves, L., Peppin, A., Strait, A., & Brennan, J. (2023). Going public: The role of public participation approaches in commercial AI labs. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 1162–1173. https://doi.org/10/gs8kvs
  8. Robertson, S., Nguyen, T., Hu, C., Albiston, C., Nikzad, A., & Salehi, N. (2023). Expressiveness, Cost, and Collectivism: How the Design of Preference Languages Shapes Participation in Algorithmic Decision-Making. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1–16. https://doi.org/10/gr6q2t
  9. Sloane, M., Moss, E., Awomolo, O., & Forlano, L. (2020). Participation is not a Design Fix for Machine Learning. arXiv:2007.02423 [Cs]. http://arxiv.org/abs/2007.02423
  10. Zytko, D., Wisniewski, P. J., Guha, S., Baumer, E. P. S., & Lee, M. K. (2022). Participatory Design of AI Systems: Opportunities and Challenges Across Diverse Users, Relationships, and Application Domains. Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems, 1–4. https://doi.org/10/gs8kv6

PhD update – September 2023

I’m back again with another PhD update. Five years after I started in Delft, we are nearing the finish line on this whole thing. But before we look ahead, let’s review notable events since the previous update in March 2023.

Occurrences

  1. I presented our framework, Contestable AI by Design, at the annual NWO ICT Open conference, which, for the first time, had an entire track dedicated to HCI research in the Netherlands. It was an excellent opportunity to meet fellow researchers from other Dutch institutions. The slides are available as PDF at contestable.ai.
  2. I visited Hamburg to present our paper, Contestable Camera Cars, at CHI 2023. We also received a Best Paper award, which I am, of course, very pleased with. The conference was equal parts inspiring and overwhelming. The best part was meeting, in person, researchers who share my interests.
  3. Also at CHI, I was interviewed about my research by Mike Green for his podcast Understanding Users. You can listen to it here. It was good practice to lay out some of my arguments spontaneously, live.
  4. In June, I joined a panel at a BOLD Cities “talk show” to discuss the design of smart city systems for contestability. It was quite an honor to be on the same panel as Eefje Cuppen, director of the Rathenau Institute. This event was great because we had technological, design, political, and policy perspectives. Several guests argued for the need to reinvigorate representative democracy and give a more prominent role to elected politicians in setting technology policy. A report is available here.
  5. In August, the BRIDE project had its closing event. This is the NWO research project that partially funded my Ph.D. The event was an excellent opportunity to reflect on our work together over the past years. I took the opportunity to revisit the work of Saskia Sassen on cityness and to think through some of the implications of my work on contestability for the field of smart urbanism. The slides are available at contestable.ai.
  6. Finally, last week, a short opinion piece laying out the argument for contestable AI in what I hope is a reasonably accessible manner was published on the TU Delft website.
Photo of Eefje Cuppen and me being interviewed by Inge Janse at the BOLD Cities talk show on June 22, 2023—photo by Tiffany Konings.

Envisioning Contestability Loops

Throughout this, I have been diligently chipping away at my final publication, “Envisioning Contestability Loops: Evaluating the Agonistic Arena as a Generative Metaphor for Public AI.” I had a great time collaborating with Leon de Korte on an infographic of part of my design framework.

We took this infographic on a tour of Dutch interaction design agencies and conducted concept design workshops. I enjoyed returning to practice and sharing the work of the past couple of years with peers in practice. My friends at Eend wrote a nice blog post about it.

The analysis of the outcomes of these workshops forms the basis for the article, in which I explore the degree to which the guiding concept (generative metaphor) behind contestable AI, which I have dubbed the “Agonistic Arena,” is a productive one for design practitioners. Spoilers: It is, but competing metaphors are also at play in the public AI design space.

The manuscript is close to completion. As usual, putting something like this together is a heavy but gratifying lift. I look forward to sharing the results and the underlying infographic with the broader world.

Are we there yet?

Looking ahead, I will be on a panel alongside the great Julian Bleecker and a host of others at the annual TU Delft Design & AI symposium in October.

But aside from that, I will keep my head down and focus on completing my thesis. The aim is to hand it in by the end of November. So, two more months on the clock. Will I make it? Let’s find out!

PhD update – March 2023

Hello again, and welcome to another update on my Ph.D. research progress. I will briefly run down the things that happened since the last update, what I am currently working on, and some notable events on the horizon.

Recent happenings

CHI 2023 paper

Stills from Contestable Camera Cars concept video.

First off, the big news is that the paper I submitted to CHI 2023 was accepted. This is a big deal for me because HCI is the core field I aim to contribute to, and CHI is its flagship conference.

Here’s the full citation:

Alfrink, K., Keller, I., Doorn, N., & Kortuem, G. (2023). Contestable Camera Cars: A Speculative Design Exploration of Public AI That Is Open and Responsive to Dispute. https://doi.org/10/jwrx

I have had several papers rejected in the past (CHI is notoriously hard to get accepted at), so I feel vindicated. The paper is already available as an arXiv preprint, as is the concept video that forms the core of the study I report on (many thanks to my pal Simon for collaborating on this with me). CHI 2023 happens in late April. I will be riding a train over there to present the paper in person. Very much looking forward to that.

Contestable Camera Cars concept video.

Responsible Sensing Lab anniversary event

I briefly presented my research at the Responsible Sensing Lab anniversary event on February 16. The whole event was quite enjoyable, and I got some encouraging responses to my ideas afterward, which is always nice. The event was recorded in full. My appearance starts around the 1:47:00 mark.

It me. (Credit: Responsible Sensing Lab.)
Video of my contribution. (Pakhuis de Zwijger / Responsible Sensing Lab.)

Tweeting, tooting, blogging

I have been getting back into the habit of tweeting, tooting, and even the occasional spot of blogging on this website again. As the end of my Ph.D. nears, I figured it might be worth it to engage more actively with “the discourse,” as they say. I mostly share stuff I read that is related to my research and that I find interesting. Although, of course, posts related to my twin sons’ music taste and struggles with university bureaucracy always win out in the end. (Yes, I am aware my timing is terrible, seeing as we have finally concluded social media was a bad idea after all.)

Current activities

Envisioning Contestability Loops

At the moment, the majority of my time is taken up by conducting a final study (working title: “Envisioning Contestability Loops”). I am excited about this one because I get to once again collaborate with a professional designer on an artifact, in this case a visual explanation of my framework, and use the result as a research instrument to dig into the strengths and weaknesses of contestability as a generative metaphor for the design of public AI.

Thesis

In parallel, I have begun to put together my thesis. It is paper-based, but of course, the introductory and concluding chapters still require some thought.

The aim is to have both the final article and thesis finished by the end of summer and then begin the arduous process of getting a date for my defense, assembling a committee, etc.

Agonistic Machine Vision Development

In the meantime, I am also mentoring Laura, another brilliant master graduation student. Her project, titled “Agonistic Machine Vision Development,” builds on my previous research. In particular, it takes up one of the challenges I identified in Contestable Camera Cars: the differential in information position between citizens and experts when they collaborate in participatory machine learning sessions. It’s very gratifying to see others do design work that pushes these ideas further.

Upcoming events

So yeah, like I already mentioned, I will be speaking at CHI 2023, which takes place on 23-28 April in Hamburg. The schedule says I am presenting on April 25 as part of the session on “AI Trust, Transparency and Fairness”, which includes some excellent-looking contributions.

And before that, I will be at ICT.OPEN in Utrecht on April 20 to present briefly on the Contestable AI by Design framework as part of the CHI NL track. It should be fun.

…

That’s it for this update. Maybe, by the time the next one rolls around, I will be able to share a date for my defense. But let’s not jinx it.

Tensions in the professional field of design

I liked a passage in a Kees Dorst paper on “academic design” so much, I turned it into a little diagram.

Tensions in the professional field of design. (PDF)

Note that these tensions are independent of each other. The diagram does not imply two “sides” of design. At any given moment, a design activity can be plotted on each axis independently. This is also not an exhaustive list of tensions. Finally, Dorst claims these tensions are irreconcilable.

The original passage:

Contemporary developments in design can be described and understood in much the same way. The professional field that we so easily label ‘design’ is complex, and full of inner contradictions. These inner tensions feed the discussions in the field. To name a few: (1) the objectives of design and the motivation of designers can range from commercial success to the common good. (2) The role and position of the designer can be as an autonomous creator, or as a problem solver in-service to the client. (3) The drive of the designer can be idealistic, or it can be more pragmatic (4) The resulting design can be a ‘thing’, but also immaterial (5) The basis for the process of designing can be intuitive, or based on knowledge and research… Etcetera… The development of the design disciplines can be traced along these lines of tension – with designers in different environments and times changing position relative to these fundamental paradoxes, but never resolving them. Ultimately, the real strength and coherence of design as a field of professions comes from recognizing these contradictions, and the dynamics of the field is a result of continuous experimentation along the rifts defined by them. Rather than a common set of practices and skills that designers might have [Cross, 1990] it is these inner contradictions in design that define its culture, its mentality. Design research should be an active force in these discussions, building bridges between them where possible. Not to resolve them into a monolithic Science of Design, but advancing the discussion in this dynamically shifting set of relations.

Dorst, K. (2016, June 27). Design practice and design research: Finally together? Proceedings of DRS 2016. Design Research Society 50th Anniversary Conference, Brighton, UK. https://www.drs2016.org/212

Citizen participation in “The End of the End of History”

Below are some choice quotes on “citizen participation” from chapter 8 of The End of the End of History, a recommended book on our recent global political history. I feel like many of us in the participatory technology design space are complicit in these practices to some extent. I continue to grapple with alternative models of mass democratic control over technology.

The Center-Left will propose a range of measures designed to promote “civic engagement” or “community participation.”

Citizens’ summits, juries and panels all aim at participation rather than power, at the technocratic incorporation of the people into politics in order to manage away conflict.

Likewise the popularity of deliberative modes of engagement, deliberative stakeholder events or workshops are characteristic tools of technocratic do-gooders as they create the simulacrum of a democratic process in which people are assembled to provide an ostensibly collective solution to a problem, but decisions lack a binding quality or have already been taken in advance.

Though unable to gain traction at a transnational level, the Left may find some success in municipal politics, following the 2010s example of Barcelona.

Sidestepping […] animus toward Big Tech companies, [tech solutionism (Morozov, 2013) and the ideology of ease (Greenfield, 2017)] may come to be applied to non-market activities, such as solving community problems, perhaps at the level of municipal government.

Sovereign, national politics – which neoliberalism was designed to defang – will remain beyond the grasp of the Left. Progressives will prefer instead to operate at the municipal, the everyday or the supranational level – precisely the arena to which neoliberalism sought to displace politics, to where it could do no harm.

Hochuli, A., Hoare, G., & Cunliffe, P. (2021). The End of the End of History. Zero Books.

De opkomst van de meritocratie

Thijs Kleinpaste has written a fine review of Michael Young’s De opkomst van de meritocratie (The Rise of the Meritocracy) in de Nederlandse Boekengids. Below are a few passages I found particularly strong.

Young’s great merit is that he makes clear how innocent principles such as ‘reward according to merit’ can derail completely when deployed within an otherwise unchanged social and economic system. Concretely: giving some a chosen position in a social hierarchy and instructing others to know their place.

The class interest of the meritocracy is more abstract. What matters most is, first of all, to remain a class or caste, so as to keep reaping the benefits of being one. In every modern state, power is exercised – or, put more meritocratically, governing must be done – and if there must be a caste that fulfills this task, let it be that of the most highly credentialed. The meritocracy reproduces itself by passing this idea on to each new cohort that joins its chosen ranks: that it is the right group, called with good reason, to order the world. Not the working class, not unguided democracy, not the swarming of little interest groups – but they themselves. All the material advantages of the meritocracy flow from maintaining that chosen status.

Too often, the assumption seems to be that representation and the serving of interests lie unproblematically in line with each other. To break through that complacency, something more forceful is apparently needed, such as the idea that wherever there are managers and administrators, it must be possible to strike: that wherever power is exercised and instructions are given, those who must follow the instructions can vote with their feet. That conflict is embraced, and not seen as something dangerous to social peace, the ‘economy’, or even democracy. Conflict is no doubt dangerous to the hegemony of the manager and his class of petty dream-kings, and thereby to the sovereignty of the meritocratic order, but that danger is both salutary and necessary. After all, one of the lessons of Young’s book is that you must choose: make a revolution yourself, or wait for one to break out.

Read it for yourself: https://www.nederlandseboekengids.com/20221116-thijs-kleinpaste/