Kerry Mackereth

Using Feminist Chatbots to Fight Trolls With Sarah Ciston

In this episode, we talk to Sarah Ciston, an artist, coder, writer, and critical AI scholar. We asked Sarah to talk about this badass chatbot they created called Ladymouth, which responds to trolls and incels on hate forums. We discussed the difficult labor of content moderation and the long-lasting effects of trying to do feminist work online. We also talk about the surprising things that incels and feminists have in common and whether you can use AI to change people's minds and establish common humanity at scale.


Sarah Ciston (they/she) is a Mellon Fellow and PhD Candidate in Media Arts and Practice at the University of Southern California, where they lead Creative Code Collective—a student community for co-learning programming using approachable, interdisciplinary strategies. They are also an Associated Researcher in the AI & Society Lab at the Humboldt Institute for Internet and Society in Berlin. Their research investigates how to bring intersectionality to artificial intelligence by employing queer, feminist, and anti-racist theories, ethics, and tactics at every stage of development and implementation. Their tactical media projects also include a natural language processing interface that ‘rewrites’ the inner critic and a chatbot that explains feminism to online misogynists.

 

Reading List:

 

Chun, W. H. K. and Friedland, S. (2015) 'Habits of Leaking: Of Sluts and Network Cards', differences, 26(2), pp. 1–28. Available at: https://doi.org/10.1215/10407391-3145937


Ciston, S. (2019) ‘ladymouth: Anti-Social-Media Art As Research’, Ada (Eugene, Or.), 2019(15). Available at: https://doi.org/10.5399/uo/ada.2019.15.5.


Muldoon, J., Cant, C. and Graham, M. (2024) Feeding the Machine: The Hidden Human Labour Powering AI. Edinburgh: Canongate Books.

 

Transcript:

 

Kerry: Hi, I'm Dr. Kerry McInerney. Dr. Eleanor Drage and I are the hosts of the Good Robot podcast. Join us as we ask the experts, what is good technology? Is it even possible? And how can feminism help us work towards it? If you want to learn more about today's topic, head over to our website, www.thegoodrobot.co.uk, where we've got a full transcript of the episode and a reading list. We love hearing from listeners, so feel free to tweet or email us. And we'd also so appreciate you leaving us a review on the podcast app. But until then, sit back, relax, and enjoy the episode.

 

Eleanor: In this episode, we talked to Sarah [00:01:00] Ciston, an artist, coder, writer, and critical AI scholar.


We asked Sarah to talk about this badass chatbot they created called Ladymouth, which responds to trolls and incels on hate forums. We discussed the difficult labor of content moderation and the long-lasting effects of trying to do feminist work online.


We also talk about the surprising things that incels and feminists have in common and whether you can use AI to change people's minds and establish common humanity at scale.

We hope you enjoy the show.


Kerry: So thank you so much for joining us today. Just to kick us off, could you tell us a little bit about who you are, what you do, and what's brought you to thinking about gender, feminism, and technology?


Sarah: Sure. Thank you for having me. I'm very excited to be part of this podcast. I'm an artist, a coder, a writer, and a critical AI researcher.


And I'm interested in these topics because they affect me. [00:02:00] I'm always thinking about intersectionality and technology, in some respects as a queer person in particular, but also in general. I feel like for many people there's no way to opt out of thinking about this. It's very tangible and very embodied.


So I'm sometimes also an academic, but I try to make tangible tools and artworks and resource guides, sometimes zines, and also like facilitate community spaces for rethinking machine learning.


Eleanor: Awesome. It's a joy to have you on the podcast. Can you help us answer our three good robot questions?


So what is good technology? Is it even possible? And how can feminism help us work towards it?


Sarah: Sure. I think of technologies as tools. So they're only as good or as bad as how they're being used, right? But I do [00:03:00] think that ways of thinking are baked into the technology. So how they're designed and how they're implemented are going to be shaped by the culture and the society, the processes that have been baked into us.

The example of this that I love: I recently learned, from trying to buy a teapot for a friend, that for a long time there were no left-handed versions of the side-handled teapots in Japan, the kyusu, because children had been taught in school not to be left-handed. So you couldn't find a left-handed teapot.


So instead of designing for people, they're shaping people to fit the technology. I feel like we need to be thinking the other way around, not just how to make technology good or good for what, but also good for whom and who's left out when we're [00:04:00] talking about good tech in general.


And to the second part, how can feminism help us do that? Intersectional feminist practices, not just the theories and the ethics but the tactics, can help us answer these questions, and also really help us remember that there are many communities who have been thinking about this for generations before the questions became digital.


Kerry: Absolutely. And I love the example you give of a teapot that never caters to left-handedness. I think it's such a simple example of something we encounter all the time in our environments.


I'm thinking also of things like guitars and scissors. As someone who is right-handed, I never thought about this until I was close friends with people who were left-handed and was like, wow, that's just such a limitation of my experience and how I've moved through the world, that I've never had to think about that.


I've been born into a world and grown up in a space where my kind of handedness is [00:05:00] totally normal. And I found that really important and quite useful. But we wanted to have you on the podcast because, amongst your amazing artistic and scholarly pursuits, we came across a chatbot that you made called Ladymouth.


And for our listeners, I highly recommend that you check it out, because Ladymouth is both quite poignant and hilarious at the same time. It's a chatbot that responds to internet trolls with quotes from prominent feminist thinkers like bell hooks or Gloria E. Anzaldúa, and we just thought this project was so interesting. One of the things that really struck us about it was how hilariously detached Ladymouth is. Most of us, if we're in online spaces getting harassed or dealing with sexist or racist trolls, naturally get really emotional.

And Ladymouth doesn't. And it reminds me as well of Wendy Chun, who we also had on the podcast at one point, so listeners, check out that episode, Wendy is fantastic. Wendy Chun and Sarah Friedland argued, and this is a direct quote from their 2015 work, that 'we need to create a way to occupy networks that [00:06:00] thrive in the shadowy space between identity and anonymity that thrive through repetition'.


So what are, to you, Sarah, the liberatory possibilities of having something like Ladymouth that just repeats and detaches in its engagement with trolls?


Sarah: Hi, that's such a great question. And the project was absolutely inspired by that piece that you're quoting, so it's perfect that that's what you picked up on.


There's a few different ways that I think about this. First, it's in the complete absurdity of the project. It's never going to solve misogyny; it's completely detached from trying to do that. So it's this performance of absurdity. And also, because it's a bot and it's not me or any of us who've been in that position, it doesn't have to be polite.


It doesn't have to deal with the weight of those feelings. It doesn't have to be instructive. When I would first give talks about this project, people would ask, [00:07:00] wouldn't it succeed more if it were kinder, or more focused on trying to teach people better? Sure. But it doesn't have to do that. That's the job of a different bot. Someone in person, with an embodied physicality, might have to protect themselves in that situation to be safe; this doesn't have that. So there's that kind of literal detachment. And my advisor at the time even taught me how to keep the project completely air-gapped and separate from my own digital footprint, and was advising me to do this project under a pseudonym, which I didn't do and probably should have done. But it was the space where I thought that I could be detached from my bodily stakes in it. And so there's also this other kind of liberatory potential, in that I see it as an artistic experiment. It's not an app.


It's not a tech [00:08:00] startup, but it's this imaginatory exercise. And it's an art piece that is also meant as a template that could be adapted for other groups or situations. So I always imagine it as this like potential force multiplier of thinking about how we can come at the problems of tech using tech, but from these kinds of oblique angles.


Eleanor: I really value the importance of the absurd in combating nastiness of all kinds. I was once sitting at lunch at my college at Cambridge, whose core demographic of life fellows is very old and white. One of them asked me whose guest I was, and I said that I was the resident joker, and then made a weird face.


And he was confused, and I was, I don't know what, but it was better than shouting or making them feel really [00:09:00] embarrassed. And afterwards, of course, it had the right effect, in that they asked somebody else who I was, because I didn't want to have to do that job. So I love the absurd as a way of coming back against something that is almost unspeakable and is emotionally difficult to handle when it becomes a pattern. You say that 'Ladymouth makes herself a target, wasting misogynists' time in tiny increments'. I love that, and it gets us all to think about the numerous unpaid hours that women spend doing social work, or that people spend combating racism. So tell us about the value of Ladymouth as a time waster, this sort of little irritant, mosquito-like time waster for online trolls.


Sarah: Sure, yeah. I was trying to think of it as this kind of, not quite trolling the trolls, but nagging the trolls, if we feminized that trope, and that it would be [00:10:00] this kind of automating and multiplying of what feels like the risky emotional labor of engaging misogyny.


I wanted it to be happening, but I didn't want to be doing it; I was fearful of doing it. And not just talking about misogyny, of course, but racism, anti-trans, anti-queer rhetoric, all of it. And so it was imagining that even if it was just a matter of minutes that these people were spending arguing with a robot, that would be time that they would not be making stupid comments on an article by a Black female journalist or stalking a trans person on Twitter.


Just occupying somebody's time a little bit. There's also, possibly, the hope that they would reflect on the statements that were there, but even just this act of taking up that [00:11:00] time, I felt, was worthwhile. And then again, to the absurdity: there was something comic and absurd about people yelling at robots, or at least there was in 2015, and if it felt absurd for people to yell at and shame a robot, you might notice that it should be absurd for people to yell at and shame other people.


And we were dismissing that it was going on all the time anyway, and that it's horrible.


Kerry: I think what's also really interesting about Ladymouth is that it does show the amount of work that goes into content moderation as well, because Ladymouth makes something that is very dark, which is internet harassment, quite funny, but in doing so it surfaces that behavior.


And as you've said, it's absurd to yell at a bot, but also, why are people yelling and creating such toxic spaces? And unfortunately, over the past few years, we've seen the ways in which content moderation has been incredibly psychologically damaging, the way it's often outsourced to places in the [00:12:00] Global South and deeply underpaid, or unremunerated.


So for example, Kenyan content moderators sued Meta for failing to provide them with adequate pay, training, or health care, and employees are regularly required to watch really horrific and violent imagery as part of this job. And if people who are listening to this want to know more, there's a new book out called Feeding the Machine, which explores the economy of what they call data work, all the work that goes on behind the data that we use and the content that we consume. But what does Ladymouth have to say about content moderation, or the kind of labor that is required to keep platforms quote unquote safe?


Sarah: I so appreciate you drawing out this connection because I feel this question in my body.


Because content moderators are treated as these kinds of invisible human bots, we could reflect on that work as a kind of human version of Ladymouth. They have to see these [00:13:00] ultra-concentrated amounts of platform sludge so that we are protected from it and don't have to see it.


But it's designed to mask the real state of the platforms, and more importantly, the world that we're in that's producing them. And that's horrific. Interestingly, in the opposite mode, Ladymouth is designed to go into that space and then pull examples out of it and demonstrate those, going into mostly unmoderated places and bringing that back up to examine it.


So I feel it in my body, because that is what it started out being about: this experience of seeing it and reading it and speaking it and trying to share it, and that immediate effect that it has physically, which I wanted to convey in the pieces that I made with Ladymouth, with the results of it.


So it's hard to even fathom the [00:14:00] degree to which content moderators have been affected by the work that they're doing.


Eleanor: Absolutely. You even ended up feeling for Ladymouth, right?


Sarah: Yeah, unexpectedly. It was very much designed to be detached. I'm not anthropomorphizing it, I'm not using female pronouns with it, and it's in the other room doing its thing, but I was constantly thinking about it when I was doing other tasks, and worrying about it, and very much feeling a personal stake in it, a care and tending, for a lot of different reasons. I also ended up very much having to be involved in the forums that I was trying to avoid by designing Ladymouth, because in order to create the bot I had to know what keywords to look for; I had to understand the language of the space that I was trying to avoid.


So I spent a lot of time in the red pill and men's rights [00:15:00] subreddits in order to build it. So it really had the opposite effect on me from the one it was designed for.
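For readers curious about the mechanics Sarah describes here, the following is a minimal sketch of a keyword-triggered reply bot of this kind. It is not Ladymouth's actual code: the trigger phrases, the placeholder quotations, and the reply_to function are illustrative assumptions only.

```python
# A hypothetical, simplified sketch of a keyword-triggered reply bot in the spirit
# of Ladymouth: scan incoming posts for curated trigger phrases and answer with a
# quotation. NOT the real project's code; all names and lists below are placeholders.

import random

# Placeholder trigger phrases an operator might curate from the forums in question.
TRIGGER_KEYWORDS = ["females", "friendzone", "red pill"]

# Placeholder stand-ins for the curated quotations the real project drew on.
FEMINIST_QUOTES = [
    "<quotation from bell hooks>",
    "<quotation from Gloria E. Anzaldúa>",
]

def reply_to(post: str) -> str | None:
    """Return a quotation if the post contains a trigger keyword, otherwise None."""
    text = post.lower()
    if any(keyword in text for keyword in TRIGGER_KEYWORDS):
        return random.choice(FEMINIST_QUOTES)
    return None

if __name__ == "__main__":
    sample_posts = [
        "females only respect the red pill",   # would trigger a reply
        "what's everyone having for dinner?",  # ignored
    ]
    for post in sample_posts:
        print(post, "->", reply_to(post))
```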


Eleanor: It's really interesting. All this kind of protection work has downstream effects. What do you do to unwind?


Do you take a bath? Do you have a cup of tea? Watch something? What do you do?


Sarah: I don't know. I always get that question and I never have a good answer. I just stopped doing performances with the bot after a while.


Eleanor: And you said that you were going to try to put these two audiences up against each other, to have Ladymouth come up against these horrible incels.


And that wasn't going to result in both sides trying to see eye to eye. It was really just to see what happened when these very different sources of knowledge were forced to enter into conversation. Can you talk to us a little bit about what happened?


Sarah: Yeah. And I was thinking of it more as [00:16:00] putting the forms of discourse in space together rather than trying to pit people against each other.


But I think that a few surprising things happened. First of all, it was mostly the expected all-caps yelling and hate speech and vitriol that you would very much expect, but there were a few instances where people in good faith would try to engage with the logic of the quotes from feminist scholars that were posted.


So that was really interesting. And there were also instances of people trying to express legitimate hurt and emotion and shame. So it was interesting for me to notice what came up in terms of where the roots of this mindset were coming from, and where it even had parallels, strangely enough, with what [00:17:00] feminism is calling for. The blame might be placed in a different place: they might be blaming women for all of the problems, and I might be calling that patriarchy and institutional, systemic structures, but some of the concerns are related. So that was a very strange feeling to reckon with. There's a lot to say about that that goes much deeper, but I think it all points to how this kind of work actually doesn't scale. It requires these kinds of one-on-one interactions that require finding some common humanity, not automated solutions.


Kerry: And I think it's quite an interesting conversation, then, around techno-solutionism, or how increasingly, with the massive hype around AI, we see so much energy being put into these kinds of AI-for-diversity solutions, which claim we can scale up the imposition and the sort of [00:18:00] grounding of procedures and algorithms and patterns that will somehow make equality happen. And I think Ladymouth is a really interesting example of a project that didn't try to do that, that was engaged in this very interesting feminist project of refusal and repetition, and yet at the same time really undermines this idea that justice and equality can be programmed into a bot and then spread out for all to enjoy, and, you know, Merry Christmas, the Hallmark credits close. And I was also really touched, because I hadn't really thought about it, and again this comes back to the way that we don't always think about data work, about what it must've been like for you training Ladymouth, and, like you say, having to go into these communities and locate it and be like, what do these interventions look like?


And yeah, I guess the emotional, but also just the embodied experience of this. Something that you have said is that our bodies contain the voices of those we come up against on social media and in life, and that they resurge, or [00:19:00] they're accessed, in moments of doubt or stress or anger.


So yeah, how did you cope with this kind of bodily experience of it? Did you find that the work you did on Ladymouth tended to resurge? I feel like you've implicitly said that, by the fact that you said you've stopped performing with her now. But yeah, do you feel like you still hold those experiences in some way?


Sarah: Yes, I do. I had already been thinking about language processing as both a computational process and an embodied, unconscious process, and trying to think about this social way of learning, learning language initially and absorbing language more broadly, and how digital experiences and artificial intelligence were shifting these processes.


So the projects that I've done since then have all continued to deal with these questions. I have another [00:20:00] project that's looking at the idea of the inner voice and the inner critic, and directly at how that's shaped through our participation in community, and also at how machine learning might intervene in that, in an also absurd way.


And I also was really concerned, at the beginning of the Ladymouth project, with not trying to draw a distinction between online and offline, with this idea that online speech is not a big deal, that you can just close your laptop and it doesn't matter, and online threats aren't real.


So I had this sense already that everything I was reading and seeing online, I was absorbing in a very physical, embodied way.


Eleanor: So you've also looked into the history of chatbots, and people don't often know that chatbots have a long history.


And for example I love thinking about the origins of AI generated love [00:21:00] poetry, which began in 1952 with Christopher Strachey's automated, queer, strange love poems that he would pin on the Manchester University notice board in the hallways for his colleagues to realize he'd been up to no good in the middle of the night again.


And I like to feel that he was using these to express some kind of critique of heterosexual norms, because he was gay and couldn't profess his love like this. And then, ironically, I was on online dating apps for a really long time, and people would see AI in my job title, spec, whatever it was, and would then send me GPT poems, and they were so awful. But anyway, there was a history of them that was redemptive for me. So tell us about Eliza, the very first chatbot, and why you and some collaborators are diving back into her source code.


Sarah: Sure. Yeah. Eliza was a chatbot from [00:22:00] 1966, approximately. There's a few different versions.


And it became really more influential than its creator, Joseph Weizenbaum, expected, in part because it was this uncanny mimic of a therapist. And it has been pretty widely replicated and immortalized in pop culture; you see strains of it even today. But strangely enough, the original source code for it had been missing.


It was never actually available to reference, and then my collaborators rediscovered it in the MIT archives in 2020. So we have been analyzing the code and finding all sorts of misconceptions about it and connecting it with the history of this software. It's interesting to me to look back to these examples, like the love letters and Eliza, because the abilities [00:23:00] of something like Eliza came from very simple techniques, which Weizenbaum was really admitting, and he was trying to caution people not to over-attribute intelligence to this bot.


But we're still thinking about new systems, emergent systems, as of course intelligent, even though if you look under the hood they're maybe not as simple as Eliza, but they're still based on statistical processes and rule-based processes; they're not intelligent. So I think it's helpful to look at that arc of history.


It was also one of the inspirations for Ladymouth and this sort of long history of feminized bots and assistants.
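To make the simplicity of those techniques concrete, here is a toy illustration of the kind of rule-based pattern substitution Eliza relied on. The few regex rules below are simplified stand-ins, not the rediscovered 1966 source; the point is only that a handful of reassembly templates can feel uncannily responsive.

```python
# A toy illustration of rule-based pattern substitution in the style of Eliza.
# These rules are simplified, hypothetical stand-ins for demonstration only.

import re

RULES = [
    (re.compile(r"\bi need (.+)", re.IGNORECASE),
     "Why do you need {0}?"),
    (re.compile(r"\bi am (.+)", re.IGNORECASE),
     "How long have you been {0}?"),
    (re.compile(r"\bmy (mother|father) (.+)", re.IGNORECASE),
     "Tell me more about your {0}."),
]

DEFAULT = "Please go on."

def respond(utterance: str) -> str:
    """Return the first matching reassembly template, or a stock prompt."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT

if __name__ == "__main__":
    for line in ["I am tired of arguing online", "My mother never listens", "Hello"]:
        print(">", line)
        print(respond(line))
```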


Kerry: I was going to say, because I feel like even at the origin point of Eliza, the fact that it's called Eliza, this very female-gendered name in the Western world, performing this kind of therapeutic or caring [00:24:00] role, in some ways we've just never really lost that.


And I'm really grateful we got a little bit of a chance to talk about the history behind Ladymouth, and lots of time to dive into Ladymouth itself; it's such a fascinating project. I'd also just be interested, though, to hear about the future of Ladymouth, because you made Ladymouth multiple years ago. I think it was 2015 when you said you first came up with the project.


It's now 2024 at the time of recording, and since then there have been huge computational changes, particularly large language models, and in general when it comes to the field of language tech. To what extent have these changes made you rethink Ladymouth?


Or, what does Ladymouth mean in the age of LLMs?


Sarah: Yeah. When I made Ladymouth, I was not a very good programmer; it was one of my first projects. So I've always carried this torch for Ladymouth 2.0, but now that I do work with more sophisticated systems, and with the lessons of Eliza, I don't know that [00:25:00] Ladymouth 2.0 is necessary. I made it in 2015, as you said, pre-Trump; the word incel did not exist yet. So it's really weird to see something that you've made as an absurd exercise become more relevant. I wish it were less relevant, unfortunately. But I think that it works in its simplicity.


It doesn't need to be more than it is. So I don't know. I think the ways that I'd be interested to expand it would be to bring in other voices and more contributors to the project. It needs more humanness, maybe, more than it needs more machineness. So I think that rather than LLMs making me think again about Ladymouth, it's Ladymouth, and things like Eliza, that help me think more about LLMs, and again, remind me not to get wowed by the [00:26:00] techniques of the new systems, because they're just iterations on a theme rather than a shiny new revolution.


Eleanor: Well thank you so much for coming on and talking to us. Your work is fantastic.


Sarah: Thank you so much for having me. It's really fun.


This episode was made possible thanks to the generosity of Christina Gaw and the Mercator Foundation. It was produced by Eleanor Drage and Kerry McInerney and edited by Eleanor [00:27:00] Drage.


