On autonomy, design, and AI

In my thesis, I use autonomy to build the normative case for contestability. It so happens that this year’s theme at the Delft Design for Values Institute is also autonomy. On October 15, 2024, I participated in a panel discussion on autonomy to kick things off. For that panel, I gathered some notes on autonomy that go beyond the conceptualization I used in my thesis, and I thought it might be helpful and interesting to share some of them here in adapted form.

The notes I brought included, first of all, a summary of the ecumenical conceptualization of autonomy concerning automated decision-making systems offered by Alan Rubel, Clinton Castro, and Adam Pham (2021). They conceive of autonomy as effective self-governance: to be autonomous, we need authentic beliefs about our circumstances and the agency to act on our plans. For algorithmic systems, they offer a “reasonable endorsement test”: the degree to which a system can be said to respect autonomy depends on its reliability, the stakes of its outputs, the degree to which subjects can be held responsible for inputs, and the distribution of burdens across groups.

Second, I collected some notes from several pieces by James Muldoon, which get into notions of freedom and autonomy developed in socialist republican thought by the likes of Luxemburg, Kautsky, and Castoriadis (2020, 2021a, 2021b). This story of autonomy is sociopolitical rather than moral, which makes it quite appealing to someone like me who is interested in non-ideal theory in a realist mode. The account of autonomy Muldoon offers is one where individual autonomy hinges on greater group autonomy and stronger bonds of association between those producing and consuming technologies. Freedom is conceived of as collective self-determination.

Third and finally, there is the connected idea of relational autonomy, which is to a degree part of the account offered by Rubel et al., though the conceptions cited here are more radical in how they seek to create distance from liberal individualism (e.g., Christman, 2004; Mhlambi & Tiribelli, 2023; Westlund, 2009). On this view, the individual capacity for autonomous choice is shaped by social structures, and freedom becomes realized through networks of care, responsibility, and interdependence.

That’s what I am interested in: accounts of autonomy that are not premised on liberal individualism and that give us some alternative handle on the problem of the social control of technology in general and of AI in particular.

From my point of view, the implications of all this for design and AI include the following.

First, to make a fairly obvious but often overlooked point, the degree to which a given system impacts people’s autonomy depends on various factors. It makes little sense to make blanket statements about AI destroying our autonomy and so on.

Second, in value-sensitive design terms, you can think about autonomy as a value to be balanced against others, at least if you take the position that all values can in principle be considered equally important. Or you can consider autonomy more like a precondition for people to live with technology in concordance with their values, in which case autonomy takes precedence over other values. The sociopolitical and relational accounts above point in this direction.

Third, suppose you buy into the radical democratic idea of technology and autonomy. In that case, it follows that it makes little sense to admonish individual designers about respecting others’ autonomy. They may be asked to privilege technologies in their designs that afford individual and group autonomy. But designers also need organization and emancipation more often than not. So it’s about building power: the power of workers inside the organizations that develop technologies, and the power of communities that “consume” those same technologies.

With AI, the fact is that, in the cases I look at, the communities that AI is brought to bear on have little say in the matter. The buyers and deployers of AI could and should be made more accountable to the people subjected to it.

Towards a realist AI design practice?

This is a version of the opening statement I contributed to the panel “Evolving Perspectives on AI and Design” at the Design & AI symposium that was part of Dutch Design Week 2024. I had the pleasure of joining Iohanna Nicenboim and Jesse Benjamin on stage to explore what could be called the post-GenAI possibility space for design. Thanks also to Mathias Funk for moderating.

The slide I displayed:

My statement:

  1. There’s a lot of magical thinking in the AI field today. It assumes intelligence is latent in the structure of the internet. Metaphors like AGI and superintelligence are magical in nature. AI practice is also very secretive. It relies on demonstrations. This leads to a lack of rigor and political accountability (cf. Gilbert & Lambert in VentureBeat, 2023).
  2. Design in its idealist mode is easily fooled by such magic. For example, in a recent report, the Dutch Court of Audit states that 35% of government AI systems are not known to meet expectations (cf. Raji et al., 2022).
  3. What is needed is design in a realist mode. Realism focuses on who does what to whom in whose interest (cf. Geuss, 2008, 23 in von Busch & Palmås, 2023). Applied to AI, the question becomes: who gets to do AI to whom? This isn’t to say we should consider AI technologies completely inert. AI mediates our being in the world (Verbeek, 2021). But we should also not consider it an independent force that’s just dragging us along.
  4. The challenge is to steer a path between wholesale cynical rejection on the one hand and naive, optimistic, unconditional embrace on the other.
  5. In my own work, what that looks like is using design to make things that allow me to go into situations where people are building and using AI systems, and then using those things as instruments to ask questions related to human autonomy, social control, and collective freedom in the face of AI.
  6. The example shown is an animated short depicting a design fiction scenario involving intelligent camera cars used for policy execution in urban public space. I used this video to talk to civil servants about the challenges facing governments who want to ensure citizens remain in control of the AI systems they deploy (cf. Alfrink et al., 2023).
  7. Why is this realist? Because the work looks at how some groups of people use particular forms of actually existing AI to do things to other people. The work also foregrounds the competing interests that are at stake. And it frames AI as neither fully autonomous nor fully passive, but as a thing that mediates people’s perceptions and actions.
  8. There are more examples besides this, but I will stop here. I just want to reiterate that I think we need a realist approach to the design of AI.

Machine Learning for Designers’ workshop

On Wednesday, Péter Kun, Holly Robbins, and I taught a one-day workshop on machine learning at Delft University of Technology. We had about thirty master’s students from the industrial design engineering faculty. The aim was to get them acquainted with the technology through hands-on tinkering, with the Wekinator as the central teaching tool.

Photo credits: Holly Robbins

Background

The reasoning behind this workshop is twofold.

On the one hand, I expect designers will find themselves working on projects involving machine learning more and more often. The technology has certain properties that differ from traditional software. Most importantly, machine learning is probabilistic instead of deterministic. It is important that designers understand this, because otherwise they are likely to make bad decisions about its application.
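To make that contrast concrete, here is a minimal, hypothetical sketch in plain Python (not part of the workshop materials): a traditional rule always maps an input to the same fixed answer, while a learned model yields a probability estimate whose value depends on parameters fitted to data.

```python
import math

def deterministic_rule(temperature):
    # Traditional software: the same input always yields the same, fixed answer.
    return "on" if temperature > 25 else "off"

def learned_classifier(temperature, weight=0.8, bias=-20.0):
    # A (hypothetical) learned model instead yields a probability estimate.
    # The weight and bias stand in for parameters fitted to training data;
    # the output is a confidence, never a hard guarantee.
    return 1.0 / (1.0 + math.exp(-(weight * temperature + bias)))

print(deterministic_rule(26))             # "on", always
print(round(learned_classifier(26), 2))   # ≈ 0.69, a confidence, not a verdict
```

A designer reasoning about the second kind of system has to plan for the cases where a confident-looking estimate is simply wrong.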

The second reason is that I have a strong sense machine learning can play a role in the augmentation of the design process itself. So-called intelligent design tools could make designers more efficient and effective. They could also enable the creation of designs that would otherwise be impossible or very hard to achieve.

The workshop explored both ideas.

Photo credits: Holly Robbins

Format

The structure was roughly as follows:

In the morning, we started out by providing a very broad introduction to the technology. We talked about the basic premise of (supervised) learning: providing examples of inputs and desired outputs, and training a model based on those examples. To make these concepts tangible, we then introduced the Wekinator and walked the students through getting it up and running using basic examples from the website. The final step was to invite them to explore alternative inputs and outputs (such as game controllers and Arduino boards).
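That basic premise can be captured in a few lines. Here is a toy sketch in plain Python using a nearest-neighbor lookup (a deliberate simplification, not the Wekinator’s actual internals): “training” stores the example pairs, and prediction returns the output paired with the closest stored input.

```python
import math

def train(examples):
    # "Training" in this toy version is just storing (input, output) pairs.
    return list(examples)

def predict(model, x):
    # Map a new input to the output of its nearest training example.
    nearest = min(model, key=lambda ex: math.dist(ex[0], x))
    return nearest[1]

# Example pairs linking a 2-D input (say, a mouse position) to a desired
# output (say, a synth setting), in the spirit of a Wekinator session.
model = train([
    ((0.0, 0.0), "low"),
    ((1.0, 1.0), "high"),
])

print(predict(model, (0.1, 0.2)))  # closest to (0, 0), so "low"
```

The Wekinator wraps the same input-examples-to-output-mapping idea in a GUI and more capable models, which is what makes it such an effective teaching tool.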

In the afternoon, we provided a design brief, asking the students to prototype a data-enabled object with the set of tools they had acquired in the morning. We assisted with technical hurdles where necessary (of which there were more than a few) and closed out the day with demos and a group discussion reflecting on their experiences with the technology.

Photo credits: Holly Robbins

Results

As I tweeted on the way home that evening, the results were… interesting.

Not all groups managed to put something together in the admittedly short amount of time they were provided with. They were most often stymied by getting an Arduino to talk to the Wekinator. Max was often picked as a go-between, because the Wekinator receives OSC messages over UDP, whereas the quickest way to get an Arduino to talk to a computer is over serial. But Max in my experience is a fickle beast and would more than once crap out on us.
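The OSC-over-UDP leg of that chain is simple enough to do without Max. Below is a hypothetical Python bridge sketching the idea: it hand-encodes a minimal OSC message and sends it to the Wekinator’s default listening port (6448) at its input address (/wek/inputs); the serial-reading side (e.g., via pyserial) is only indicated in a comment.

```python
import socket
import struct

def osc_message(address, *floats):
    """Encode a minimal OSC message: a null-padded address string, a type
    tag string like ',ff', then big-endian 32-bit floats."""
    def pad(b):
        # OSC strings are null-terminated and padded to 4-byte multiples.
        return b + b"\x00" * (4 - len(b) % 4)
    msg = pad(address.encode()) + pad(("," + "f" * len(floats)).encode())
    for f in floats:
        msg += struct.pack(">f", f)
    return msg

def send_to_wekinator(values, host="127.0.0.1", port=6448):
    # Wekinator listens for OSC input on UDP port 6448 by default.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(osc_message("/wek/inputs", *values), (host, port))

# Usage: read sensor values from the Arduino over serial (e.g., with
# pyserial) and forward one reading per UDP datagram:
# send_to_wekinator([0.5, 0.25])
```

A small script like this would have spared the students the Max dependency, which is one direction the toolset improvements below could take.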

The groups that did build something mainly assembled prototypes from the examples on hand. That is fine, but since we were mainly working with the examples from the Wekinator website, these tended towards the interactive instrument side of things. We were hoping for explorations of IoT product concepts. Those required more hand-rolling, which was only achievable for the students on the higher end of the technical expertise spectrum (and the more tenacious ones).

The discussion yielded some interesting insights into mental models of the technology and how they are affected by hands-on experience. A comment I heard more than once was: why is this considered learning at all? The Wekinator was not perceived to be learning anything. When challenged on this by reiterating the underlying principles, it became clear that the black-box nature of the Wekinator hampers appreciation of some of the very real achievements of the technology. It seems (for our students at least) machine learning is stuck in a grey area between too-high expectations and too-low recognition of its capabilities.

Next steps

These results, and others, point towards some obvious improvements that can be made to the workshop format, and to teaching design students about machine learning more broadly.

  1. We can improve the toolset so that some of the heavy lifting involved with getting the various parts to talk to each other is made easier and more reliable.
  2. We can build examples that are geared towards the practice of designing IoT products and are ready for adaptation and hacking.
  3. And finally, and probably most challengingly, we can make the workings of machine learning more transparent so that it becomes easier to develop a feel for its capabilities and shortcomings.

We do intend to improve and teach the workshop again. If you’re interested in hosting one (either in an educational or professional context), let me know. And stay tuned for updates on this and other efforts to get designers to work in a hands-on manner with machine learning.

Special thanks to the brilliant Ianus Keller for connecting me to Péter and for allowing us to pilot this crazy idea at IDE Academy.

References

Sources used during preparation and running of the workshop:

  • The Wekinator – the UI is infuriatingly poor, but when it comes to getting started with machine learning this tool is unmatched.
  • Arduino – I have become particularly fond of the MKR1000 board. Add a lithium-polymer battery and you have everything you need to prototype IoT products.
  • OSC for Arduino – CNMAT’s implementation of the open sound control (OSC) encoding. A key puzzle piece for getting the above two tools talking to each other.
  • Machine Learning for Designers – my preferred introduction to the technology from a designerly perspective.
  • A Visual Introduction to Machine Learning – a very accessible visual explanation of the basic underpinnings of computers applying statistical learning.
  • Remote Control Theremin – an example project I prepared for the workshop, demoing how to have the Wekinator talk to an Arduino MKR1000 with OSC over UDP.

Artificial intelligence, creativity and metis

Boris pointed me to CreativeAI, an interesting article about creativity and artificial intelligence. It offers a really nice overview of the development of the idea of augmenting human capabilities through technology. One of the claims the authors make is that artificial intelligence is making creativity more accessible, because tools with AI in them support humans in a range of creative tasks in a way that shortcuts the traditional requirement of long practice to acquire the necessary technical skills.

For example, ShadowDraw (PDF) is a program that helps people with freehand drawing by guessing what they are trying to create and showing a dynamically updated ‘shadow image’ on the canvas, which people can use as a guide.

It is an interesting idea, and in some ways this kind of software does indeed lower the threshold for people to engage in creative tasks. Such tools are good examples of artificial intelligence as partner instead of master or servant.

While reading CreativeAI I wasn’t entirely comfortable, though, and I think my discomfort may have been caused by two things.

One is that I care about creativity, and I think that a good understanding of it and a daily practice of it, in the broad sense of the word, improves lives. I am also in some ways old-fashioned about it: I think the joy of creativity stems from the infinitely high skill ceiling involved and the never-ending practice it affords. Let’s call it the Jiro perspective, after the sushi chef made famous by a wonderful documentary.

So the claim that creative tools with AI in them can shortcut all of this lifelong, joyful toil produces a degree of panic in me, although that is probably a Pastoral worldview which would be better to abandon. In a world eaten by software, it’s better to be a Promethean.

The second reason might hold more water, but really it is more of an open question than something I have researched in any meaningful way. I think there is more to creativity than just the technical skill required, and as such the CreativeAI story runs the risk of being reductionist. While reading the article I was also slowly but surely making my way through one of the final chapters of James C. Scott’s Seeing Like a State, which is about the concept of metis.

It is probably the most interesting chapter of the whole book. Scott introduces metis as a form of knowledge different from that produced by science. Here are some quick excerpts from the book that provide a sense of what it is about; I really can’t do the richness of his description justice here, as I am trying to keep this short.

The kind of knowledge required in such endeavors is not deductive knowledge from first principles but rather what Greeks of the classical period called metis, a concept to which we shall return. […] metis is better understood as the kind of knowledge that can be acquired only by long practice at similar but rarely identical tasks, which requires constant adaptation to changing circumstances. […] It is to this kind of knowledge that [socialist writer] Luxemburg appealed when she characterized the building of socialism as “new territory” demanding “improvisation” and “creativity.”

Scott’s argument is about how authoritarian high-modernist schemes privilege scientific knowledge over metis. His exploration of what metis means is super interesting to anyone dedicated to honing a craft, or to cultivating organisations conducive to the development and application of craft in the face of uncertainty. There is a close link between metis and the concept of agility.

So, circling back to artificially intelligent tools for creativity: I would be interested in exploring not only how we can diminish the need for acquiring the required technical skills, but also how we can accelerate the acquisition of the practical knowledge required to apply such skills in the ever-changing real world. I suggest we expand our understanding of what it means to be creative, but without losing the link to actual practice.

For the ancient Greeks, metis became synonymous with a kind of wisdom and cunning best exemplified by such figures as Odysseus and, notably, also Prometheus. The latter in particular exemplifies the use of creativity towards transformative ends. This is the real promise of AI for creativity in my eyes: not to simply make it easier to reproduce things that used to be hard to create, but to create new kinds of tools which have the capacity to surprise their users and to produce results that were impossible to create before.