Kerry Mackereth

Os Keyes on Avoiding Universalism and 'Silver Bullets' in Tech Design

In this episode we chat to Os Keyes, an Ada Lovelace fellow and adjunct professor at Seattle University, and a PhD student at the University of Washington in the department of Human Centered Design & Engineering. We discuss everything from avoiding universalism and silver bullets in AI ethics to how feminism underlies Os’s work on autism and AI and automatic gender recognition technologies.


Anton Grabolle / Better Images of AI / Classification Cupboard / CC-BY 4.0


Reading List


By Our Guest


Rincón, C., Keyes, O., & Cath, C. (2021). "Speaking from Experience: Trans/Non-Binary Requirements for Voice-Activated AI". Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1), Article 132, 27 pages. https://doi.org/10.1145/3449206


Keyes, O., Huston, J., & Durbin, M. (2019). “A mulching proposal: Analysing and improving an algorithmic system for turning the elderly into high-nutrient slurry”. CHI 2019. Retrieved from https://ironholds.org/resources/papers/mulching.pdf


Keyes, O. (2020). "Automating Autism: Disability, Discourse, and Artificial Intelligence". Journal of Sociotechnical Critique, 1(1).


Keyes, O., Peil, B., Williams, R., & Spiel, K. (2020). "Reimagining (Women's) Health: HCI, Gender and Essentialised Embodiment". ACM Transactions on Computer-Human Interaction (TOCHI), 27(4).


Our Guest Recommends


Cusick, C. M. (2019). "Testifying Bodies: Testimonial Injustice as Derivatization". Social Epistemology, 33(2), 111–123. https://doi.org/10.1080/02691728.2019.1577919


Garvey, C. (2019). "Artificial Intelligence and Japan's Fifth Generation: The Information Society, Neoliberalism, and Alternative Modernities". Pacific Historical Review, 88(4), 619–658. https://doi.org/10.1525/phr.2019.88.4.619


Garry, A., & Pearsall, M. (Eds.). (1996). Women, Knowledge, and Reality: Explorations in Feminist Philosophy (2nd ed.). Routledge.


Harding, S. (1992). "Rethinking Standpoint Epistemology: What Is 'Strong Objectivity?'". The Centennial Review, 36(3), 437–470. http://www.jstor.org/stable/23739232


Norberg, A., & O'Neill, J. (1996). Transforming Computer Technology: Information Processing for the Pentagon, 1962–1986. Johns Hopkins University Press.


Roland, A., & Shiman, P. (2002). Strategic computing: DARPA and the quest for machine intelligence, 1983–1993. MIT Press.


Transcript


KERRY MACKERETH:

Hi! We're Eleanor and Kerry. We're the hosts of The Good Robot podcast. Join us as we ask the experts: what is good technology? Is it even possible? And what does feminism have to bring to this conversation? If you wanna learn more about today's topic, head over to our website, where we've got a full transcript of the episode and a specially curated reading list with work by, or picked by, our experts. But until then, sit back, relax, and enjoy the episode.


ELEANOR DRAGE:

Today we’re talking to Os Keyes, an Ada Lovelace fellow and adjunct professor at Seattle University, and a PhD student at the University of Washington in the department of Human Centered Design & Engineering. We discuss everything from avoiding universalism and silver bullets in AI ethics to how feminism underlies Os’s work on autism and AI and gender recognition, as well as their approach to teaching.


OS KEYES:

Hey. Absolutely, and thank you for having me. So my name is Os Keyes, I'm a PhD student at the University of Washington in the department of Human Centered Design & Engineering, and, to answer everyone's next question, I have no idea what that department name means either; I've been here for four years trying to find someone who can tell me. Nobody can. I'm also an Ada Lovelace fellow, and an adjunct professor at Seattle University, where I am paid to, as Richard Rorty put it, corrupt the kids. My interest in gender, technology and feminism took a long route. Basically I started off in my PhD programme generally interested in questions of the ethics of technology, particularly the ethics of data science and AI.


And I also started my PhD programme with the intention that I would, you know, take advantage of the fact that it was a complete break from my previous life - in the sense that previously I was an industry data scientist - to come out as trans and to start going through those rigmaroles. And so my very naive idea was that I would start in the academy, come out, then just do the data ethics work which I originally intended to do, and then everything would be fine. And what of course happened instead was that I entered the academy, came out, spent three to four years continually having to justify my own existence to my colleagues, to the literature I was reading, to the people writing it, and ended up, you know, having to become a gender and technology person. Because, the way I think of it, it turns out that even being taken seriously as someone with something useful to say requires you to first read all of the gender studies literature so that you can throw it at people. And if I'm going to basically be an amateur gender studies scholar I might as well be professional and get paid for it, you know.


More seriously, feminist approaches to things from, like, an epistemic point of view have always been of interest to me - the only book I brought with me from my previous life was an edited volume by Ann Garry called Women, Knowledge and Reality, which is this classic feminist philosophy - but it's definitely something that kicked up a pace as a result of, you know, existing in a space, existing in a form, where it was no longer optional to think about, in how I interact with other people as well as inside my own head, and existing in a space where, you know, a lot of the time it is feminist methods and feminist theory and thinking that is pushing questions around ethics and technology forward.


Because - and I'll shut up after this, I promise - when you look at the history of feminist thought you are also looking, in some ways, at the history of radical thought. You are looking at a domain of thought that isn't just, you know, 'what if we weren't misogynistic?', but also: what does addressing social injustice look like? What are Marxist approaches to this? What do Marxist approaches leave out? What would anarchist approaches be? What is the role of the state? What is the role of embodiment? Arguably, what is the role of technology? Questions of what technology does to human interactions have always been heavily intertwined with feminism. And so the answer to, I guess, how did I end up in feminism and technology and gender and technology is in my own life, but also in part that I don't believe it is actually possible to be someone who thinks about ethics and technology who doesn't engage with feminist literature around it - it is akin to being a taxi driver who doesn't engage with having wheels. Like, strictly speaking, you've still got most of what you need, but you're not going to be going anywhere fast.

ELEANOR DRAGE:

What a great analogy, and I think we both agree that feminist theory is very good at driving innovations in radical thinking. Our podcast is called The Good Robot, so I wanted to ask you next: what do you think good technology is, and is it even possible?


OS KEYES:

That's a big one. With my amateur political theorist hat on, I would prefer not to prescribe 'this is what a good…', these are the specific attributes of a good technology, specifically because I know that any answer I come up with will be contingent and very structured by who I am and where I am. But what I can say very broadly is that my focus is more on that second part, how do we get there, because that is something that is often under-considered in popular discussion of technology, and also in popular discussion of what makes technology good or bad. And because my own leanings are very, like, anarcho-communist, frankly, something that is important to me is prefiguration: the idea that the way we go about doing a thing should match the ethos we want to find in the technology. You can't have feminist technology that is exploitatively made, that's a contradiction in terms, even if it is for putatively feminist ends. And so I tend to focus on that question of: how do we get there? And to me the answer requires, sort of, three big things.


The first is an awareness of, and an addressing of, the intertwined nature of power, knowledge and technology, and taking almost like a meta-view of that. So not... almost not just saying, like, who gets to be directly involved in the design of technology in some kind of Sheryl Sandberg 'lean in' kind of way, but more saying, like, what are the broader cultural notions, like financial incentives, and so on and so forth, that drive what products get funded and what products don't, and [which] get deployed and [which] don't.


Second is an avoidance of universalism and a support [for] almost fragile and contradictory forms of democracy in technology. I think there is a tendency in Western thought as a whole, but in technology in particular, to sort of be universalist, to say, like, oh, this is the… you know, the phrase in the tech industry is 'this is the silver bullet', right? Unless you're killing werewolves, silver bullets aren't really what you need. And the vital thing about werewolves is that they don't exist. We're always calling things silver bullets, or unicorns, both of which relate to fantastical concepts beyond practical reality, and I feel like this is telling. And I suspect that a lot of the harm of technology comes not from anything inherent to the design, but simply from universalist hubris. 'Oh, we have designed this thing to fix this problem everywhere, in every context it appears, for everyone it appears for.' Congrats, you've designed a thing that will not do that, because that is impossible. And so one of the things that I try and emphasise in my writing and my teaching is that it is rare to impossible to come up with one answer to any question in social spaces, and technology design is a social space. And so - and this sort of heavily overlaps with questions of economic incentives, economic structure and power - one of the things that I really want to emphasise is the importance of sort of small-scale and collective and relative technology design. And the fact that not only do you not have to build something that will have a billion users, you probably shouldn't. If you build something that has a billion users, you're probably screwing over about 75% of them at a minimum, and giving most of the remainder a better, but still subpar, experience. What I would rather see is people playing around with ways of designing technology and ways of deploying and sustaining technology that are more community-oriented, more local, more relational.


And finally, that third point is around solidarity and sort of affective relations. I think that we spend a lot of time talking about how we design technology as if technology is almost like a car engine, and we debate, like, who gets to be the mechanic or what kind of car we are building. But I think a big part of it is also, like, what are the relations between mechanics? What do we understand as the essential components of a car? The limitation of a purely relational form of design is simply that sometimes we do need standardisation, we do need to bridge across different spaces. And the answer to how we do that in an ethical way starts with looking really seriously at the relationships we have with technology, and with each other as people, around technology.


KERRY MACKERETH:

Fantastic, thank you so much Os. The questions you're raising around scale and universalism are such important points, and I really resonate with what you're saying about the silver bullet - 'all we need is one more piece of innovative tech which is somehow going to solve all the other problems that the tech has created'. So thinking about that, could we think about that idea of harms: what kinds of harm do you think emerge from trying to create these solve-it-all technologies, or from technologies that emerge from the tech sector more broadly?


OS KEYES:

Totally, so you already touched on one of them in the question, right, which is this sort of direct material harm, and I think there are countless examples of that. You know, you can look at the really obvious, big, shiny case studies which I think people tend to focus on: things like facial recognition leading to people getting arrested and thrown in jail, or automated decision systems for things like, you know, identifying benefits fraudsters or child welfare cases, leading to people getting thrown out of their house or losing their kids or losing their parents. These are very obvious, direct forms of material harm. But there are also, I think, sort of separate harms, discursive harms - harms not because they directly cause some kind of obvious, immediate violence, not 'algorithm makes decision = you lose house', but instead decisions that change how the space is framed, change how you see yourself, change how other people see you, or change how you see a particular concept, and so alter the conditions of possibility that are available.


So to use a really, like, sort of trite hypothetical, right: the material harm of facial recognition is fairly obvious, but there is a discursive harm that comes with, amongst other things, campaigns against it that insist the solution is fairness, that we need, like, fair facial recognition systems. And this harm is multiple: first, that it makes it a technical problem with a technical solution; second, that it maintains the idea that you can tell inherently who people are by looking at their face; third, because it doesn't challenge in any way the idea that we should have always-on HD surveillance cameras on every pole; and fourth, because it positions the tech sector as the saviours of the tech sector - and in fact, tech methods as the saviours of the excesses of tech methods. Because if your conclusion is 'we need fairness', then your next conclusion is 'and that is why we collected a load more surveillance data of people', [and] 'we're not doing this because we're racist, we're doing this because we're anti-racist, we just happened to be legitimising and covering for and in fact enhancing a deeply racist system that we are contributing to'.


So, that kind of action - it's, you know, harder to directly look at it and say that it is immediately causing material harm, but it absolutely is causing harm, both because of the way that it shapes our conversations around, you know, what are appropriate interventions, and in the ways that it shapes, in many cases, how we see ourselves. Right, like the question of infrastructural neutrality, for example, or technological neutrality, is not just a poor philosophical choice. It also means, like, how are people who are caught in the wheels of technology to understand the apportionment of blame? If algorithms are their own thing, for example, and so the developers can't be held responsible, and if the algorithm can't be held responsible because technology is neutral, then who is responsible? And is the answer going to be: I should change, I should live my life a different way, it is in some way my fault?


I mean, I think it's big, frankly, because both shape the things that are available to us, both shape the tools we have for addressing harms. You know, with material harms this is fairly clear: generally speaking, people who have been structurally impoverished have fewer resources with which to fight that structural impoverishment. And discursively, people who have been sort of declared to be, and are culturally understood to be, not people, or not capable of expertise in a particular area, you know, are taken less seriously in declaring that they are in fact people and can participate. You know, one of the projects I've previously worked on was looking at the way that AI researchers frame autistic people and autistic personhood. And one of the points I made there was, you know, they're doing this in the context of diagnostic algorithms that are the subject of their own paper, which are, like, heinous. But one of the points I sort of brought up is that a lot of the framing of autism was that people who have autism can't communicate and can't know themselves and can't know other people. And you get this issue that is: if your framing of a population is 'you can't communicate, and you can't know anything', how exactly are you meant to challenge that? Because knowing stuff, and being proven to know stuff, requires people to first listen to, or take what you have to say as, at all legitimate. And so the analogy I use is that it's like climbing out of quicksand - not just in the sense that it's hard, but also in the sense that the thing keeping you down is not just the quicksand, it is, in a weird way, your own mass. Like, you are having to fight twice as hard because of not just the substance but how the substance is pulling on you, shaping you, limiting your ability to move.


And at the same time, I think that one of the ways that becomes clear is not just in our understanding of what problems should be addressed but also in our understanding of our understanding - like, what factors are relevant in fixing a thing? How are our assumptions about that already shaped by the cultural frame that we're in, by the age that we're in, to return to [Sandra] Harding? And I've been working on a project on that recently, actually, which looked at the question of the role of emotion and affect and feeling in structuring things like algorithmic audit processes. And this isn't something that people have really theorised about, because I guess people like the idea of processes: when you say 'process', you think of something standardised and repeatable and consistent, and not something that is inherently messy and relational and complex. And so people much prefer making Gantt charts and nice little diagrams where, like, an algorithmic audit, say, or any other process of addressing injustices should at most be as complex as an Ikea manual. Like, it should have pictographic instructions of 'you do exactly these things in exactly these lines, and please don't drop it on your face'.


ELEANOR DRAGE:

You've gestured towards your work a little bit, and you're incredibly prolific: you've written important work on autism and AI, on trans and non-binary requirements for voice-activated AI, and on how automatic gender recognition is trans-exclusive, as well as hundreds of Wikipedia articles. So in this work, how does feminism figure? What does it mean for you in your research?


OS KEYES:

It's interesting to hear those brought up as examples, because it feels like on a day-to-day basis I'm mostly known as the person who wrote a paper featuring Henry Kissinger being carried off by a drone, because I also did that. I contain multitudes. And I would be remiss not to note that, you know, on the voice-activated AI project I was lucky enough to work with Cami Rincón and Corinne Cath; it actually originated as Cami's master's thesis, and they are the first author and did the bulk of the work. But in terms of how feminism features, that's an interesting one. Historically, prior to like a year ago, I would have said that the way feminism appeared in my work was almost as a weapon, just in the sense that it was a repertoire of tools and techniques for identifying and addressing injustices: through scholarship, through highlighting injustice, through proposing better ways to approach it. And this is still the case, right - like, I think if you look at sort of my very old work, God, it seems weird to say very old stuff, from 2018 - the pandemic has been approximately three centuries long, so I'm going to qualify that as very old - you know, there's a lot of sort of pointers to and links into feminist theory there. And it's obviously about gender, which is, like, a classic topic of feminism.


But then my more recent work around autism doesn't appear to be about gender at all, but is drawing from feminism in a very different way, which is that it's drawing from feminist investigations of the question of who can know and how we construct knowers - and sort of epistemic status, as the philosophers put it. But these days the way I've been thinking about it a lot more, and the place I've been trying to do a lot better job of modelling it, is almost in how I understand the way I do my work. And I don't just mean that in the sense of making sure that I'm always citing in an even-handed manner, and I'm not treating participants like garbage and/or calling them subjects - which I always hear in, like, a Gollum voice whenever I'm reading a paper; 'subjects' and 'females', those are the two words which can only be heard in this, like, moist cave-dweller voice - but also in the sense of thinking about the approach I take to critique and the approach I take to relationships with other academics and with scholars. And this sounds fuzzy and poorly thought out because I'm still, like, thinking it out.


But I think that one of the most important things that we can do as academics doesn't actually come from writing papers. Papers are almost the excuse - they're an excuse for relationships: relationships with the reader, relationships with the people who turn up to the conference talk, relationships with students that you're lucky enough to teach and are given, like, the privilege to interact with as a result of, you know, being someone who is published in papers and so, like, meets the academic - frankly, fairly phallic - requirements of, like, 'your CV must be this long to be a real person'. You know, all these things are privileges and all these things are opportunities, and to me, the most important thing right now, and the place that I've been trying to sort of emphasise feminist means and scholarship, is almost in those relationships, and in how I frame my approach to them, and in what I make possible. Because you can write something that is heavily informed by feminist theory, and cites all the right people, and is, like, tackling misogyny.


But if you're also a complete dickhead, how can you point to that paper and say that your career trajectory is feminist and your outcomes are feminist? Or, you know, if your conclusion is just straight-up nihilism, and if you're oriented in a way where you might be producing work that is citing the right people, but is ultimately work that is about you, that exists only to tear down and not to rebuild in its place - it's increasingly difficult for me to look at work like that and treat it as feminist scholarship in the sense of representing a genuinely new way to do things and a better way to do things, instead of simply being a reaction to, and in some ways a continuation of, the old. And so, yeah, the place that it comes out right now in most of my research is just in trying to make sure that the relationships that I'm building from my work are good ones, trying to make sure that the conclusion people come away with is hope - is a kind of optimistic nihilism rather than a pessimistic nihilism.


I would much rather produce hope in 20 undergrads than citations in 20 articles, and that's where I'm focusing right now. Doing work that is thoughtful, that is caring, and that puts love out into the world is, in like many cases, going to be responded to in kind. And so I don't think that there is a contradiction between me, you know, doing research that uses all the right methods, and trying to make spaces for other people to have thoughts. And so right now the way it comes through is in trying to make sure that there is a good space for them; that when they show up the kettle is on the boil and there is tea and coffee and hot chocolate - because I don't drink tea or coffee - and it's less fancy than, like, conference presentations, but to me they are part and parcel of the same thing. You can't have good work without good relations, and good work is work that promotes those relations.


KERRY MACKERETH:

Wow, thank you. And then finally, what do you think feminism can bring to industry practice in and around technology and AI? How do you think feminism can uniquely address some of these problems raised by the tech industry?


OS KEYES:

I think that the main thing that feminism can bring to industry - two things, I guess, both of which come from, like, in some ways the 70s feminist movement and then the postcolonial feminist movement, rather than from, you know, feminist philosophy of technology specifically - are questions of, honestly, consciousness raising and solidarity. Like, I think that the two biggest issues we're running into around change right now are: 1) understanding what the limitations are on where we are - like, knowing what we don't know - and 2) understanding how we take a lot of different forms of exploitation, and a lot of different sort of fragmented and atomised populations who are suffering in different ways and to different degrees under technology, and forge bonds of solidarity between them. And I think that these two things go hand in hand. You know, if you have a "solution" to, like, facial recognition that doesn't have anything to say about, like, the Mechanical Turk workers who, yes, are coding this, but are also being just as exploited as you are, then that's a solution that you need to replace with a better one. Or, inversely, if you say, well, you know, we are the technologists, we know things, we will come up with technological solutions to these problems - we will not reflect on 'how did we get to believe that these were the correct solutions?' Or: okay, we think we know what the actual, like, problem is, but is it an issue of we didn't know that this was happening, like it's a bug in a bug tracker somewhere, or is it an issue of our worldview needs to fundamentally change and we, like, actually need to sit and think deeply?


ELEANOR DRAGE:

Os, thank you so much for being on the podcast and I can’t wait to listen to this back again and take in all your thoughts.


OS KEYES:

Thank you for having me, both of you, it’s been great.


ELEANOR DRAGE:

This episode was made possible thanks to our generous funder, Christina Gaw. It was written and produced by Dr Eleanor Drage and Dr Kerry Mackereth, and edited by Laura Samulionyte.



