Seven months since the last update. Much better than the gap of three years between the previous two. These past months I feel like I have begun to reap the rewards of the grunt work of the last couple of years. Two papers finally saw the light of day, as well as a course syllabus. Read on for some more details.
Things that happened:
First, a pair of talks. In February I presented on “Contestable AI & Civic Co-Design” as part of a panel chaired by Roy Bendor at Reinventing the City. A PDF of my slides is available on the contestable.ai website, here. In March, I presented at the AiTech Agora. The title of that talk was “Meaningful Human Control Through Contestability by Design”, and the slides are available here.
In February a short interview was published by Bold Cities, a smart city research center I am loosely affiliated with.
Then, in March, came a big moment for me: the publication of my first journal article, in AI & Society. Here’s the abstract and reference. It’s available open access.
The increasing use of artificial intelligence (AI) by public actors has led to a push for more transparency. Previous research has conceptualized AI transparency as knowledge that empowers citizens and experts to make informed choices about the use and governance of AI. Conversely, in this paper, we critically examine if transparency-as-knowledge is an appropriate concept for a public realm where private interests intersect with democratic concerns. We conduct a practice-based design research study in which we prototype and evaluate a transparent smart electric vehicle charge point, and investigate experts’ and citizens’ understanding of AI transparency. We find that citizens experience transparency as burdensome; experts hope transparency ensures acceptance, while citizens are mostly indifferent to AI; and with absent means of control, citizens question transparency’s relevance. The tensions we identify suggest transparency cannot be reduced to a product feature, but should be seen as a mediator of debate between experts and citizens.
Alfrink, Kars, Ianus Keller, Neelke Doorn, and Gerd Kortuem. “Tensions in Transparent Urban AI: Designing a Smart Electric Vehicle Charge Point.” AI & Society, March 31, 2022. https://doi.org/10/gpszwh.
In April, the Responsible Sensing Lab published a report on “Responsible Drones”, to which I contributed a little as a participant in the workshops that led up to it.
The second milestone was the publication of the syllabus for “AI & Society”, the master elective course I developed around my PhD topic. Here’s the course description:
Artificial Intelligence (AI) is increasingly used by a variety of organizations in ways that impact society at scale. This 6 EC master elective course aims to equip students with tools and methods for the responsible design of public AI. During seven weeks students attend a full-day session of lectures and workshops. Students collaborate on a group design project throughout. At the end, students individually deliver a short paper.
The third big milestone was the publication of my second journal article, in Minds and Machines. It is the theoretical cornerstone of my thesis: a provisional framework for designing contestability into AI systems. Abstract and reference follow. This one is also open access.
As the use of AI systems continues to increase, so do concerns over their lack of fairness, legitimacy and accountability. Such harmful automated decision-making can be guarded against by ensuring AI systems are contestable by design: responsive to human intervention throughout the system lifecycle. Contestable AI by design is a small but growing field of research. However, most available knowledge requires a significant amount of translation to be applicable in practice. A proven way of conveying intermediate-level, generative design knowledge is in the form of frameworks. In this article we use qualitative-interpretative methods and visual mapping techniques to extract from the literature sociotechnical features and practices that contribute to contestable AI, and synthesize these into a design framework.
Alfrink, Kars, Ianus Keller, Gerd Kortuem, and Neelke Doorn. “Contestable AI by Design: Towards a Framework.” Minds and Machines, August 13, 2022. https://doi.org/10/gqnjcs.
And finally, while all this was going on, I have been quietly chipping away at a third paper, which applies the contestable AI by design framework to the camera cars used by municipalities. My aim was to create an example of what I mean by contestable AI, and to use that example to interview civil servants about the challenges facing implementation of contestability in the public AI systems they are involved with. I’ve submitted the manuscript, titled “Contestable Camera Cars: A speculative design exploration of public AI that is open and responsive to dispute”, to CHI, and will hear back in early November. Fingers crossed for that one.
So what’s next? Well, I have a little under a year left on my PhD contract, so I should really begin wrapping up. I am considering a final publication but have not settled on a topic yet. Current interests include AI system monitoring and visual methods, among others. Once that final paper is in the can, I will turn my attention to putting the thesis itself together. It is paper-based, so it mostly requires writing an overall introduction and conclusion to bookend the included publications. Should be a piece of cake, right?
And after the PhD? I am not sure yet, but I hope to remain involved in research and teaching, while perhaps also getting back into design practice a bit more. If at all possible, in the domain of public sector applications of AI.
That’s it for this update. I will be back at some point when there is more news to share.
It has been three years since I last wrote an update on my PhD. I guess another post is in order.
My PhD plan was formally green-lit in October 2019. I am now over three years into this thing. There are roughly two more years left on the clock. I update my plans on a rolling basis. By my latest estimation, I should be ready to request a date for my defense in May 2023.
Of course, the pandemic forced me to adjust course. I am lucky enough not to be locked into particular methods or cases that are fundamentally incompatible with our current predicament. But still, I had to change up my methods, and reconsider the sequencing of my planned studies.
The conference paper I mentioned in the previous update, using the MX3D bridge to explore smart cities’ logic of control and cityness, was rejected by DIS. I performed a rewrite, but then came to the conclusion that it was something of a false start. Setbacks like these are all part of the game, of course.
The second paper I wrote uses the Transparent Charging Station to investigate how notions of transparent AI differ between experts and citizens. It was finally accepted late last year and should see publication in AI & Society soon. It is titled “Tensions in Transparent Urban AI: Designing a Smart Electric Vehicle Charge Point”. This piece went through multiple major revisions and was previously rejected by DIS and CHI.
A third paper, “Contestable AI by Design: Towards a Framework”, uses a systematic literature review of AI contestability to construct a preliminary design framework. It is currently under review at a major philosophy of technology journal. Fingers crossed.
And currently, I am working on my fourth publication, tentatively titled “Contestable Camera Cars: A Speculative Design Exploration of Public AI Systems Responsive to Value Change”. It will be based on empirical work that uses speculative design to develop guidelines and examples for the aforementioned design framework, and to investigate civil servants’ views on the pathways towards contestable AI systems in public administration.
Once that one is done, I intend to do one more study, probably looking into monitoring and traceability as potential leverage points for contestability, after which I will turn my attention to completing my thesis.
Aside from my research, in 2021 I was allowed to develop and teach a master elective centered on my PhD topic, titled “AI & Society”. In it, students are equipped with technical knowledge of AI and tools for thinking about AI ethics. They apply these in a design studio project focused on conceptualizing a responsible AI-enabled service that addresses a social issue the city of Amsterdam might conceivably struggle with. Students also write a brief paper reflecting on and critiquing their group design work. You can see me on Vimeo give a brief video introduction for students who are considering the course. I will be running the course again this year, starting at the end of February.
I also mentored a number of brilliant master graduation students: Xueyao Wang (with Jacky Bourgeois as chair), Jooyoung Park and Loes Sloetjes (both with Roy Bendor as chair), and currently Fabian Geiser (with Euiyoung Kim as chair). Working with students is one of the best parts of being in academia.
All of the above would not have been possible without the great support from my supervisory team: Ianus Keller, Neelke Doorn and Gerd Kortuem. I should also give special mention to Thijs Turel at AMS Institute’s Responsible Sensing Lab, where most of my empirical work is situated.
If you want to dig a little deeper into some of this, I recently set up a website for my PhD project over at contestable.ai.
After posting the list of engineering ethics readings, it occurred to me that I also have a really nice collection of readings from a course on research through design taught by Pieter Jan Stappers, which I took earlier this year. I figured others might get some use out of it, and I like having it here for my own reference as well.
The backbone of this course is the chapter on research through design by Stappers and Giaccardi in The Encyclopedia of Human-Computer Interaction, which I highly recommend.
All of the readings below are referenced in that chapter. I’ve read some, quickly gutted others for meaning, and the rest are still on my to-read list. For me personally, the pieces on annotated portfolios and intermediate-level knowledge by Gaver and Löwgren were the most immediately useful and applicable. I’d read the Zimmerman paper earlier, and although it’s pretty concrete in its prescriptions, I did not really latch on to it.
Brandt, Eva, and Thomas Binder. “Experimental design research: genealogy, intervention, argument.” International Association of Societies of Design Research, Hong Kong 10 (2007).
Gaver, Bill, and John Bowers. “Annotated portfolios.” Interactions 19.4 (2012): 40–49.
Gaver, William. “What should we expect from research through design?” Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2012.
Löwgren, Jonas. “Annotated portfolios and other forms of intermediate-level knowledge.” Interactions 20.1 (2013): 30–34.
Stappers, Pieter Jan, F. Sleeswijk Visser, and A. I. Keller. “The role of prototypes and frameworks for structuring explorations by research through design.” The Routledge Companion to Design Research (2014): 163–174.
Stappers, Pieter Jan. “Meta-levels in Design Research.”
Stappers, Pieter Jan. “Prototypes as central vein for knowledge development.” Prototype: Design and craft in the 21st century (2013): 85–97.
Wensveen, Stephan, and Ben Matthews. “Prototypes and prototyping in design research.” The Routledge Companion to Design Research. Taylor & Francis (2015).
Zimmerman, John, Jodi Forlizzi, and Shelley Evenson. “Research through design as a method for interaction design research in HCI.” Proceedings of the SIGCHI conference on Human factors in computing systems. ACM, 2007.
Bonus level: several items related to “muddling through”…
Flach, John M., and Fred Voorhorst. “What matters?: Putting common sense to work.” (2016).
Lindblom, Charles E. “Still Muddling, Not Yet Through.” Public Administration Review 39.6 (1979): 517–26.
Lindblom, Charles E. “The science of muddling through.” Public Administration Review 19.2 (1959): 79–88.
I recently followed an excellent three-day course on engineering ethics. It was offered by the TU Delft graduate school and taught by Behnam Taibi with guest lectures from several of our faculty.
I found it particularly helpful to get some suggestions for further reading that represent some of the foundational ideas in the field. I figured it would be useful to others as well to have a pointer to them.
So here they are. I’ve quickly gutted these for their meaning. The one by Van de Poel I did read entirely and can highly recommend for anyone who’s doing design of emerging technologies and wants to escape from the informed consent conundrum.
I intend to dig into the Doorn one, not just because she’s one of my promoters but also because resilience is a concept closely related to my own interests. I’ll also get into the Floridi one in detail; I found the concept of information quality, and the care ethics perspective on the problem of information abundance and attention scarcity, immediately applicable to interaction design.
Stilgoe, Jack, Richard Owen, and Phil Macnaghten. “Developing a framework for responsible innovation.” Research Policy 42.9 (2013): 1568–1580.
Van den Hoven, Jeroen. “Value sensitive design and responsible innovation.” Responsible innovation (2013): 75–83.
Hansson, Sven Ove. “Ethical criteria of risk acceptance.” Erkenntnis 59.3 (2003): 291–309.
Van de Poel, Ibo. “An ethical framework for evaluating experimental technology.” Science and Engineering Ethics 22.3 (2016): 667–686.
Hansson, Sven Ove. “Philosophical problems in cost–benefit analysis.” Economics & Philosophy 23.2 (2007): 163–183.
Floridi, Luciano. “Big Data and information quality.” The philosophy of information quality. Springer, Cham, 2014. 303–315.
Doorn, Neelke, Paolo Gardoni, and Colleen Murphy. “A multidisciplinary definition and evaluation of resilience: The role of social justice in defining resilience.” Sustainable and Resilient Infrastructure (2018): 1–12.
We also got a draft of the intro chapter to a book on engineering and ethics that Behnam is writing. That looks very promising as well but I can’t share yet for obvious reasons.
Thought I’d post a quick update on my PhD. Since my previous post almost five months have passed. I’ve been developing my plan further, for which you’ll find an updated description below. I’ve also put together my very first conference paper, co-authored with my supervisor Gerd Kortuem. It’s a case study of the MX3D smart bridge for Designing Interactive Systems 2019. We’ll see if it gets accepted. But in any case, writing something has been hugely educational. And once I finally figured out what the hell I was doing, it was sort of fun as well. Still kind of a trip to be paid to do this kind of work. Looking ahead, I am setting goals for this year and the nearer term as well. It’s all very rough still but it will likely involve research through design as a method and maybe object oriented ontology as a theory. All of which will serve to operationalise and evaluate the usefulness of the “contestability” concept in the context of smart city infrastructure. To be continued—and I welcome all your thoughts!
Designing Smart City Infrastructure for Contestability
The use of information technology in cities increasingly subjects citizens to automated data collection, algorithmic decision making and remote control of physical space. Citizens tend to find these systems and their outcomes hard to understand and predict. Moreover, the opacity of smart urban systems precludes full citizenship and obstructs people’s ‘right to the city’.
A commonly proposed solution is to improve citizens’ understanding of systems by making them more open and transparent. For example, the GDPR prescribes people’s right to an explanation of automated decisions they have been subjected to. For another example, the city of Amsterdam offers a publicly accessible register of urban sensors, and is committed to opening up all the data it collects.
However, it is not clear that openness and transparency in and of themselves will yield the desired improvements in the understanding and governing of smart city infrastructures. We would like to suggest that for a system to be perceived as accountable, people must be able to contest its workings—from the data it collects, to the decisions it makes, all the way through to how those decisions are acted on in the world.
The leading research question for this PhD therefore is how to design smart city infrastructure—urban systems augmented with internet-connected sensing, processing and actuating capabilities—for contestability: the extent to which a system supports the ability of those subjected to it to oppose its workings as wrong or mistaken.
Burrell, Jenna. “How the machine ‘thinks’: Understanding opacity in machine learning algorithms.” Big Data & Society 3.1 (2016): 2053951715622512.
Kitchin, Rob, Paolo Cardullo, and Cesare Di Feliciantonio. “Citizenship, Justice and the Right to the Smart City.” (2018).
Abdul, Ashraf, et al. “Trends and trajectories for explainable, accountable and intelligible systems: An HCI research agenda.” Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 2018.
Ananny, Mike, and Kate Crawford. “Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability.” New Media & Society 20.3 (2018): 973–989.
Hirsch, Tad, et al. “Designing contestability: Interaction design, machine learning, and mental health.” Proceedings of the 2017 Conference on Designing Interactive Systems. ACM, 2017.
Goodreads tells me I’ve read 48 books in 2018. I set myself the goal of 36 so it looks like I beat it handily. But included in that count are quite a few roleplaying game books and comics. If I discard those I’m left with 28 titles. Still a decent amount but nothing particularly remarkable. Below are a few lists and some notes to go with them.
Most of the non-fiction sits somewhere at the intersection of design, technology and Left politics. A lot of this reading was driven by my desire to develop some kind of mental framework for the work we were doing with Tech Solidarity NL. More recently—since I started my PhD—I’ve mostly been reading textbooks on research methodology. Hidden from this list are the academic papers I’ve started consuming as part of this new job. I should figure out a way of sharing some of those here or elsewhere as well.
I took a break from technology and indulged in a deep dive into the history of the Thirty Years’ War, with a massive non-fiction treatment as well as a classic picaresque set in the same period. While reading these I was transitioning into my new role as a father of twin boys. Somewhat related was a brief history of the Netherlands, which I’ve started recommending to foreigners who are struggling to understand our idiosyncratic little nation and go beyond superficialities.
Then there’s the fiction, which in the beginning of the year consisted of highbrow weird and historical novels but then ventured into classic fantasy and (utopian) sci-fi territory. Again, mostly because of a justifiable desire for some escapism in the sleep deprived evenings and nights.
Having mentioned the arrival of our boys a few times, it should come as no surprise that I also read a couple of parenting books. These were more than enough for me; to be honest, I think parenting is a thing best learned through practice. Especially if you’re raising two babies at once.
So that’s it. I’ve set myself the modest goal of 24 books for this year because I’m quite sure most of my reading will be papers and such. Here’s to a year of what I expect will be many more late night and early morning reading sessions of escapist weird fiction.
Today is the first official work day of my new doctoral researcher position at Delft University of Technology. After more than two years of laying the ground work, I’m starting out on a new challenge.
I remember sitting outside a Jewel coffee bar in Singapore and going over the various options for whatever would come next after shutting down Hubbub. I knew I wanted to delve into the impact of machine learning and data science on interaction design. And largely through a process of elimination, I felt the best place for me to do so would be inside academia.
Back in the Netherlands, with help from Ianus Keller, I started making inroads at TU Delft, my first choice for this kind of work. I had visited it on and off over the years, coaching students and doing guest lectures. I’d felt at home right away.
There were quite a few twists and turns along the way but now here we are. Starting this month I am a doctoral candidate at Delft University of Technology’s faculty of Industrial Design Engineering.
Below is a first rough abstract of the research. But in the months to come this is likely to change substantially as I start hammering out a proper research plan. I plan to post the occasional update on my work here, so if you’re interested your best bet is probably to do the old RSS thing. There’s social media too, of course. And I might set up a newsletter at some point. We’ll see.
If any of this resonates, do get in touch. I’d love to start a conversation with as many people as possible about this stuff.
Intelligibility and Transparency of Smart Public Infrastructures: A Design Oriented Approach
This PhD will explore how designers, technologists, and citizens can utilize rapid urban manufacturing and IoT technologies to design urban space that expresses its intelligence through the intersection of people, places, activities and technology, not merely through the presence of cutting-edge technology. The key question is how smart public infrastructure, i.e. data-driven and algorithm-rich public infrastructure, can be understood by laypeople.
The design-oriented research will utilize a ‘research through design’ approach to develop a digital experience around the bridge and the surrounding urban space. During this extended design and making process, the PhD student will conduct empirical research to investigate design choices and their implications for (1) new forms of participatory data-informed design processes, (2) the technology-mediated experience of urban space, (3) the emerging relationship between residents and “their” bridge, and (4) new forms of data-informed, citizen-led governance of public space.
At a recent Tech Solidarity NL meetup we dove into Value Sensitive Design. This approach had been on my radar for a while so when we concluded that for our community it would be useful to talk about how to practice ethical design and development of technology, I figured we should check it out.
Below, I have attempted to pull together the most salient points from what is a rather dense twenty-plus-slides deck. I hope it is of some use to those professional designers and developers who are looking for better ways of building technology that serves the interest of the many, not the few.
The departure point is the observation that “there is a need for an overarching theoretical and methodological framework with which to handle the value dimensions of design work.” In other words, something that accounts for what we already know about how to deal with values in design work in terms of theory and concepts, as well as methods and techniques.
This is of course not a new concern. For example, famed cyberneticist Norbert Wiener argued that technology could help make us better human beings, and create a more just society. But for it to do so, he argued, we have to take control of the technology.
We have to reject the “worshiping [of] the new gadgets which are our own creation as if they were our masters.” (Wiener 1953)
We can find many more similar arguments throughout the history of information technology. Recently such concerns have flared up in industry as well as society at large. (Not always for the right reasons in my opinion, but that is something we will set aside for now.)
To address these concerns, Value Sensitive Design was developed. It is “a theoretically grounded approach to the design of technology that accounts for human values in a principled and comprehensive manner throughout the design process.” It has been applied successfully for over 20 years.
But what is a value? In the literature it is defined as “what a person or group of people consider important in life.” I like this definition because it is easy to grasp but also underlines the slippery nature of values. Some things to keep in mind when talking about values:
In a narrow sense, the word “value” refers simply to the economic worth of an object. This is not the meaning employed by Value Sensitive Design.
Values should not be conflated with facts (the “fact/value distinction”) especially insofar as facts do not logically entail value.
“Is” does not imply “ought” (the naturalistic fallacy).
Values cannot be motivated only by an empirical account of the external world, but depend substantively on the interests and desires of human beings within a cultural milieu. (So contrary to what some right-wingers like to say: “Facts do care about your feelings.”)
Let’s dig into the way this all works. “Value Sensitive Design is an iterative methodology that integrates conceptual, empirical, and technical investigations.” So it distinguishes between three types of activities (“investigations”) and it prescribes cycling through these activities multiple times. Below are listed questions and notes that are relevant to each type of investigation. But in brief, this is how I understand them:
Defining the specific values at play in a project;
Observing, measuring, and documenting people’s behaviour and the context of use;
Analysing the ways in which a particular technology supports or hinders particular values.
Who are the direct and indirect stakeholders affected by the design at hand?
How are both classes of stakeholders affected?
What values are implicated?
How should we engage in trade-offs among competing values in the design, implementation, and use of information systems (e.g., autonomy vs. security, or anonymity vs. trust)?
Should moral values (e.g., a right to privacy) have greater weight than, or even trump, non-moral values (e.g., aesthetic preferences)?
How do stakeholders apprehend individual values in the interactive context?
How do they prioritise competing values in design trade-offs?
How do they prioritise individual values and usability considerations?
Are there differences between espoused practice (what people say) compared with actual practice (what people do)?
And, specifically focusing on organisations:
What are organisations’ motivations, methods of training and dissemination, reward structures, and economic incentives?
Not a list of questions here, but some notes:
Value Sensitive Design takes the position that technologies in general, and information and computer technologies in particular, have properties that make them more or less suitable for certain activities. A given technology more readily supports certain values while rendering other activities and values more difficult to realise.
Technical investigations involve the proactive design of systems to support values identified in the conceptual investigation.
Technical investigations focus on the technology itself. Empirical investigations focus on the individuals, groups, or larger social systems that configure, use, or are otherwise affected by the technology.
Value Sensitive Design enlarges the arena in which values arise to include not only the workplace, but also education, the home, commerce, online communities, and public life.
Value Sensitive Design contributes a unique methodology that employs conceptual, empirical, and technical investigations, applied iteratively and integratively.
Value Sensitive Design enlarges the scope of human values beyond those of cooperation (CSCW) and participation and democracy (Participatory Design) to include all values, especially those with moral import.
Value Sensitive Design distinguishes between usability and human values with ethical import.
Value Sensitive Design identifies and takes seriously two classes of stakeholders: direct and indirect.
Value Sensitive Design is an interactional theory.
Value Sensitive Design builds from the psychological proposition that certain values are universally held, although how such values play out in a particular culture at a particular point in time can vary considerably.
[ad 4] “By moral, we refer to issues that pertain to fairness, justice, human welfare and virtue, […] Value Sensitive Design also accounts for conventions (e.g., standardisation of protocols) and personal values”
[ad 5] “Usability refers to characteristics of a system that make it work in a functional sense, […] not all highly usable systems support ethical values”
[ad 6] “Often, indirect stakeholders are ignored in the design process.”
[ad 7] “values are viewed neither as inscribed into technology (an endogenous theory), nor as simply transmitted by social forces (an exogenous theory). […] the interactional position holds that while the features or properties that people design into technologies more readily support certain values and hinder others, the technology’s actual use depends on the goals of the people interacting with it. […] through human interaction, technology itself changes over time.”
[ad 8] “the more concretely (act-based) one conceptualises a value, the more one will be led to recognising cultural variation; conversely, the more abstractly one conceptualises a value, the more one will be led to recognising universals”
Value Sensitive Design doesn’t prescribe a particular process, which is fine by me, because I believe strongly in tailoring your process to the particular project at hand. Part of being a thoughtful designer is designing a project’s process as well. However, some guidance is offered for how to proceed in most cases. Here’s a list, plus some notes.
Start with a value, technology, or context of use
Identify direct and indirect stakeholders
Identify benefits and harms for each stakeholder group
Map benefits and harms onto corresponding values
Conduct a conceptual investigation of key values
Identify potential value conflicts
Integrate value considerations into one’s organisational structure
[ad 1] “We suggest starting with the aspect that is most central to your work and interests.”
[ad 2] “direct stakeholders are those individuals who interact directly with the technology or with the technology’s output. Indirect stakeholders are those individuals who are also impacted by the system, though they never interact directly with it. […] Within each of these two overarching categories of stakeholders, there may be several subgroups. […] A single individual may be a member of more than one stakeholder group or subgroup. […] An organisational power structure is often orthogonal to the distinction between direct and indirect stakeholders.”
[ad 3] “one rule of thumb in the conceptual investigation is to give priority to indirect stakeholders who are strongly affected, or to large groups that are somewhat affected […] Attend to issues of technical, cognitive, and physical competency. […] personas have a tendency to lead to stereotypes because they require a list of “socially coherent” attributes to be associated with the “imagined individual.” […] we have deviated from the typical use of personas that maps a single persona onto a single user group, to allow for a single persona to map onto multiple stakeholder groups”
[ad 4] “In some cases, the corresponding values will be obvious, but not always.”
[ad 5] “the philosophical ontological literature can help provide criteria for what a value is, and thereby how to assess it empirically.”
[ad 6] “value conflicts should usually not be conceived of as “either/or” situations, but as constraints on the design space.”
[ad 7] “In the real world, of course, human values (especially those with ethical import) may collide with economic objectives, power, and other factors. However, even in such situations, Value Sensitive Design should be able to make positive contributions, by showing alternate designs that better support enduring human values.”
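To make the shape of steps 2 through 4 concrete, here is a loose sketch in Python of how one might record stakeholder groups, their benefits and harms, and the values those map onto. All names and mappings are my own invented illustration (riffing on a charge point example); nothing in Value Sensitive Design prescribes this representation.

```python
from dataclasses import dataclass, field

# Sketch of VSD steps 2-4: identify direct and indirect stakeholders,
# list benefits and harms per group, then map those onto values.

@dataclass
class Stakeholder:
    name: str
    direct: bool  # does this group interact with the system itself?
    benefits: list = field(default_factory=list)
    harms: list = field(default_factory=list)

# Map each benefit/harm description onto the value(s) it implicates (step 4).
# These pairings are illustrative only.
VALUE_MAP = {
    "sees own energy data": ["transparency", "autonomy"],
    "cheaper off-peak charging": ["human welfare"],
    "usage patterns logged": ["privacy"],
}

def implicated_values(stakeholder):
    """Collect the values implicated for one stakeholder group."""
    values = set()
    for item in stakeholder.benefits + stakeholder.harms:
        values.update(VALUE_MAP.get(item, []))
    return sorted(values)

driver = Stakeholder("EV driver", direct=True,
                     benefits=["sees own energy data", "cheaper off-peak charging"],
                     harms=["usage patterns logged"])
resident = Stakeholder("nearby resident", direct=False,
                       harms=["usage patterns logged"])

print(implicated_values(driver))    # → ['autonomy', 'human welfare', 'privacy', 'transparency']
print(implicated_values(resident))  # → ['privacy']
```

Note how the indirect stakeholder surfaces a value (privacy) without ever touching the system, which is exactly the kind of thing step 2 is meant to catch.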
This table is a useful heuristic tool for values that might be considered. The authors note that it is not intended as a complete list of human values that might be implicated. Another, more elaborate, tool of a similar sort is the Envisioning Cards.
For the ethics nerds, it may be interesting to note that most of the values in this table hinge on deontological and consequentialist moral orientations. In addition, the authors have chosen several other values related to system design.
When doing the empirical investigations you’ll probably rely on stakeholder interviews quite heavily. Stakeholder interviews shouldn’t be a new thing to any design professional worth their salt. But the authors do offer some practical pointers to keep in mind.
First of all, keep the interview somewhat open-ended by conducting it in a semi-structured manner. This lets you ask the things you want to know while creating the opportunity for new and unexpected insights to emerge.
Laddering, that is, repeatedly asking the question “Why?”, can get you quite far.
The most important thing, before interviewing stakeholders, is to have a good understanding of the subject at hand. Demarcate it using criteria that can be explained to outsiders. Use descriptions of issues or tasks for participants to engage in, so that the subject of the investigation becomes more concrete.
Two things I find interesting here. First of all, we are encouraged to map the relationship between design trade-offs, value conflicts and stakeholder groups. The goal of this exercise is to be able to see how stakeholder groups are affected in different ways.
The second useful suggestion for technical investigations is to build flexibility into a product or service’s technical infrastructure, because new values and value conflicts can emerge over time. As designers, we are often no longer around once a system is deployed, so it is good practice to enable stakeholders to adapt our design to their evolving needs. (I was very much reminded of the approach advocated by Stewart Brand in How Buildings Learn.)
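The mapping exercise between design trade-offs, value conflicts and stakeholder groups could be captured in something as simple as a lookup structure. A minimal sketch, with entirely hypothetical trade-offs, values, and stakeholder groups (loosely inspired by the smart charge point case):

```python
# Hypothetical mapping: each design trade-off pits two values against each
# other, and each value matters to particular stakeholder groups.
trade_offs = {
    "log all charge sessions": ("transparency", "privacy"),
    "require account sign-up": ("accountability", "autonomy"),
}

stakeholders_by_value = {
    "transparency": {"citizens", "auditors"},
    "privacy": {"citizens"},
    "accountability": {"operators", "auditors"},
    "autonomy": {"citizens"},
}

def affected_groups(trade_off):
    """Union of stakeholder groups touched by a trade-off's value conflict."""
    value_a, value_b = trade_offs[trade_off]
    return stakeholders_by_value[value_a] | stakeholders_by_value[value_b]

print(affected_groups("log all charge sessions"))  # → {'citizens', 'auditors'}
```

Even a toy structure like this makes visible, at a glance, which groups a given trade-off touches, and in how many different ways, which is exactly the point of the exercise.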
When discussing matters of ethics in design with peers I often notice a reluctance to widen the scope of our practice to include these issues. Frequently, folks argue that since it is impossible to foresee all the potential consequences of design choices, we can’t possibly be held accountable for all the terrible things that can happen as a result of a new technology being introduced into society.
I think that’s a misunderstanding of what ethical design is about. We may not always be directly responsible for the consequences of our designs, both good and bad. But we are responsible for what we choose to make part of our concerns as we practice design. This should include the values considered important by the people impacted by our designs.
In the 1996 article mentioned at the start of this post, Friedman concludes as follows:
“As with the traditional criteria of reliability, efficiency, and correctness, we do not require perfection in value-sensitive design, but a commitment. And progress.” (Friedman 1996)
I think that is an apt place to end it here as well.
Friedman, Batya, Peter Kahn, and Alan Borning. “Value sensitive design: Theory and methods.” University of Washington technical report (2002): 02–12.
Le Dantec, Christopher A., Erika Shehan Poole, and Susan P. Wyche. “Values as lived experience: evolving value sensitive design in support of value discovery.” Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2009.
Borning, Alan, and Michael Muller. “Next steps for value sensitive design.” Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2012.
Friedman, B., P. Kahn, and A. Borning. “Value sensitive design and information systems.” Human–Computer Interaction in Management Information Systems: Foundations (2006): 348–372.
Returning to what is something of an annual tradition, these are the books I’ve read in 2017. I set myself the goal of getting to 36 and managed 38 in the end. They’re listed below with some commentary on particularly memorable or otherwise noteworthy reads. To make things a bit more user-friendly I’ve gone with four broad buckets, although as you’ll see, within each the picks range across genres and subjects.
I always have one piece of fiction or narrative non-fiction going. I have a long-standing ‘project’ of reading cult classics. I can’t settle on a top pick for the first category so it’s going to have to be a tie between Lowry’s alcohol-drenched tale of lost love in pre-WWII Mexico, and Salter’s unmatched lyrical prose treatment of a young couple’s liaisons as imagined by a lecherous recluse in post-WWII France.
When I feel like something lighter I tend to seek out sci-fi written from before I was born. (Contemporary sci-fi more often than not disappoints me with its lack of imagination, or worse, nostalgia for futures past. I’m looking at you, Cline.) My top pick here would be the Strugatsky brothers, who blew me away with their weird tale of a world forever changed by the inexplicable visit by something truly alien.
I’ve also continued to seek out works by women, although I’ve been less strict with myself in this department than in previous years. Here I’m ashamed to admit it took me this long to finally read anything by Woolf, because Mrs Dalloway is every bit as good as they say it is. I recommend seeking out the annotated Penguin edition for additional insights into the many things she references.
I’ve also sometimes picked up a newer book because it popped up on my radar and I was just really excited about reading it. Most notably Dolan’s retelling of the Iliad in all its glorious, sad and gory detail, updated for today’s sensibilities.
Each time I read a narrative treatment of history or current affairs I feel like I should be doing more of it. All of these are recommended but Kapuściński towers over all with his heart-wrenching first-person account of the Iranian revolution.
A few books on design and technology here, although most of my ‘professional’ reading was confined to academic papers this year. I find those to be a more effective way of getting a handle on a particular subject. Books published on my métier are notoriously fluffy. I’ll point out Löwgren for a tough but rewarding read on how to do interaction design in a non-dogmatic but reflective way.
I got into leftist politics quite heavily this year and tried to educate myself a bit on contemporary anti-capitalist thinking. Fisher’s book is a most interesting and also amusing diagnosis of the current political and economic world system through a cultural lens. It’s a shame he’s no longer with us; I wonder what he would have made of recent events.
I decided to work my way through a bunch of roleplaying game books all ‘powered by the apocalypse’ – a family of games which I have been aware of for quite a while but haven’t had the opportunity to play myself. I like reading these because I find them oddly inspirational for professional purposes. But I will point to the original Apocalypse World as the one must-read: Baker remains one of the designers I am absolutely in awe of for the ways in which he combines system and fiction in truly inventive ways.
The Perilous Wilds, Jason Lutes
Urban Shadows: Political Urban Fantasy Powered by the Apocalypse, Andrew Medeiros
Dungeon World, Sage LaTorra
Apocalypse World, D. Vincent Baker
I don’t usually read poetry, for reasons similar to why I basically stopped reading comics a while back: I can’t seem to find a good way of discovering worthwhile things to read. The collection below was a gift, and a delightful one.
As always, I welcome suggestions for what to read next. I’m shooting for 36 again this year and plan to proceed roughly as I’ve been doing lately—just meander from book to book with a bias towards works that are non-anglo, at least as old as I am, and preferably weird or inventive.
Earlier this year I coached Design for Interaction master students at Delft University of Technology in the course Research Methodology. The students organised three seminars for which I provided the claims and assigned reading. In the seminars they argued about my claims using the Toulmin Model of Argumentation. The readings served as sources for backing and evidence.
The claims and readings were all related to my nascent research project about machine learning. We delved into both designing for machine learning, and using machine learning as a design tool.
Below are the readings I assigned, with some notes on each, which should help you decide if you want to dive into them yourself.
The only non-academic piece in this list. It served the purpose of getting all students on the same page with regard to what machine learning is, its applications in interaction design, and common challenges encountered. I still can’t think of any other single resource that is as good a starting point for the subject.
Fiebrink’s Wekinator is groundbreaking, fun and inspiring so I had to include some of her writing in this list. This is mostly of interest for those looking into the use of machine learning for design and other creative and artistic endeavours. An important idea explored here is that tools that make use of (interactive, supervised) machine learning can be thought of as instruments. Using such a tool is like playing or performing, exploring a possibility space, engaging in a dialogue with the tool. For a tool to feel like an instrument requires a tight action-feedback loop.
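The “instrument” idea hinges on a tight action-feedback loop: the user adds labelled examples one at a time and immediately experiences the model’s new behaviour. A toy sketch of that loop (this is not Wekinator’s actual API; the class, gestures, and labels are invented for illustration), using a one-nearest-neighbour learner so retraining is instant:

```python
import math

# Toy sketch of interactive supervised machine learning as an instrument:
# add a labelled example, immediately hear/see the changed response, adjust.
class NearestNeighbourInstrument:
    def __init__(self):
        self.examples = []  # (feature_vector, label) pairs added live

    def add_example(self, features, label):
        # "Training" is just remembering; the model updates instantly,
        # which keeps the action-feedback loop tight.
        self.examples.append((features, label))

    def predict(self, features):
        # 1-nearest-neighbour: respond with the label of the closest example.
        def distance(example):
            vec, _ = example
            return math.dist(vec, features)
        _, label = min(self.examples, key=distance)
        return label

# The performance loop: demonstrate a gesture, check the response, adjust.
instrument = NearestNeighbourInstrument()
instrument.add_example([0.1, 0.2], "soft tone")
instrument.add_example([0.9, 0.8], "loud tone")
print(instrument.predict([0.2, 0.3]))  # → soft tone
```

Because every added example changes the mapping immediately, using such a tool feels like exploring a possibility space in dialogue with the model, rather than running a batch training job and waiting.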
A really good survey of how designers currently deal with machine learning. Key takeaways include that in most cases, the application of machine learning is still engineering-led as opposed to design-led, which hampers the creation of non-obvious machine learning applications. It also makes it hard for designers to consider ethical implications of design choices. A key reason for this is that at the moment, prototyping with machine learning is prohibitively cumbersome.
The second Fiebrink piece in this list, which is more of a deep dive into how people use Wekinator. As with the chapter listed above this is required reading for those working on design tools which make use of interactive machine learning. An important finding here is that users of intelligent design tools might have very different criteria for evaluating the ‘correctness’ of a trained model than engineers do. Such criteria are likely subjective and evaluation requires first-hand use of the model in real time.
Bostrom, Nick, and Eliezer Yudkowsky. 2014. “The Ethics of Artificial Intelligence.” In The Cambridge Handbook of Artificial Intelligence, edited by Keith Frankish and William M Ramsey, 316–34. Cambridge: Cambridge University Press. doi:10.1017/CBO9781139046855.020.
Bostrom is known for his somewhat crazy but thought-provoking book on superintelligence, and although a large part of this chapter is about the ethics of artificial general intelligence (which at the very least is still a ways off), the first section discusses the ethics of current “narrow” artificial intelligence. It makes for a good checklist of things designers should keep in mind when they create new applications of machine learning. Key insight: when a machine learning system takes on work with social dimensions (tasks previously performed by humans), the system inherits its social requirements.
Finally, a feet-in-the-mud exploration of what it actually means to design for machine learning with the tools most commonly used by designers today: drawings and diagrams of various sorts. In this case the focus is on using machine learning to make an interface adaptive. It includes an interesting discussion of how to balance the use of implicit and explicit user inputs for adaptation, and how to deal with inference errors. Once again the limitations of current sketching and prototyping tools are mentioned, and related to the need for designers to develop tacit knowledge about machine learning. Such tacit knowledge will only be gained when designers can work with machine learning in a hands-on manner.
I provided this to students so that they get some additional grounding in the various kinds of prototyping that are out there. It helps to prevent reductive notions of prototyping, and it makes for a nice complement to Buxton’s work on sketching.
Some of the papers refer to machine learning as a “design material” and this paper helps to understand what that idea means. Software is a material without qualities (it is extremely malleable, it can simulate nearly anything). Yet, it helps to consider it as a physical material in the metaphorical sense because we can then apply ways of design thinking and doing to software programming.