Kerry Mackereth

Arjun Subramonian on Queer Approaches to AI and Computing

In this episode we talk to Arjun Subramonian, a Computer Science PhD student at UCLA conducting machine learning research and a member of the grassroots organisation Queer in AI. We discuss why they joined Queer in AI, how Queer in AI is helping build artificial intelligence directed towards better, more inclusive, and queer futures, why ‘bias’ cannot be seen as a purely technical problem, and why Queer in AI rejected Google sponsorship.


Arjun Subramonian (pronouns: they/them) is a brown queer, agender PhD student at the University of California, Los Angeles. Their research focuses on graph representation learning, fairness, and machine learning (ML) ethics. They're a core organizer of Queer in AI, co-founded QWER Hacks, and teach machine learning and AI ethics at Title I schools in LA. They also love to run, hike, observe and document wildlife, and play the ukulele.


Reading List


Ashwin, William Agnew, Umut Pajaro, Hetvi Jethwani, and Arjun Subramonian. "Rebuilding Trust: Queer in AI Approach to Artificial Intelligence Risk Management." 2021.


Sunipa Dev, Masoud Monajatipoor, Anaelia Ovalle, Arjun Subramonian, Jeff M Phillips, and Kai-Wei Chang. "Harms of Gender Exclusivity and Challenges in Non-Binary Representation in Language Technologies." arXiv preprint, 2021.


Jacques, Juliet. Trans: A Memoir. Verso, 2016.


Davis, Jenny L., Apryl Williams, and Michael W. Yang. "Algorithmic Reparation." Big Data & Society 8, no. 2 (2021): 205395172110448.


Keyes, Os. "The Misgendering Machines: Trans/HCI Implications of Automatic Gender Recognition." Proceedings of the ACM on Human-Computer Interaction 2, no. CSCW (2018): 1-22.


Transcript:

KERRY MACKERETH:

Hi! We're Eleanor and Kerry. We're the hosts of The Good Robot podcast, and join us as we ask the experts: what is good technology? Is it even possible? And what does feminism have to bring to this conversation? If you wanna learn more about today's topic, head over to our website, where we've got a full transcript of the episode and a specially curated reading list with work by, or picked by, our experts. But until then, sit back, relax, and enjoy the episode.


ELEANOR DRAGE:

Today we’re talking to Arjun Subramonian, a Computer Science PhD student at UCLA conducting machine learning research and a member of the grassroots organisation Queer in AI. In this episode we discuss why they joined Queer in AI, how Queer in AI is helping build artificial intelligence directed towards better, more inclusive, and queer futures, why ‘bias’ cannot be seen as a purely technical problem, and why Queer in AI rejected Google sponsorship. We hope you enjoy the show!


KERRY MACKERETH:

So thank you so much for joining us here today. Could you tell us a little bit about who you are, what you do, and what brought you to the organisation Queer in AI?

ARJUN SUBRAMONIAN:

Yeah, hi, my name is Arjun, I use they/them pronouns. I'm currently a first year PhD student at the University of California, Los Angeles. My research is more on how bias emerges from graph structure and natural language phenomena. So I'm interested in things like: how can we use mathematical theory to look at the structure of social networks? How does that translate into biases in machine learning systems? And how do those biases translate into harms? As well as trying to integrate queer perspectives as much as possible into the way that we look at language, and how models, especially the large language models we are using now, interact with language in ways that carry all these biases against the LGBTQI+ community. So I think those are my general research interests. I also have a problem where I just think everything is very interesting, and so I go and explore a lot of other topics as well, recently auditing datasets. I'm very interested in looking at the benchmarks that are being used in NLP and graph machine learning and learning what problems they have: what are the issues with their conceptualization, operationalization, metrics, etc.? There's a whole lot going on research-wise, but I do want to talk about how I got involved with Queer in AI.
So just a little background I want to start with: when I was young, I grew up here, in the Silicon Valley. And so I had a lot of access to computer science and artificial intelligence from a very young age; there's a lot of privilege that comes with growing up as a brown male-presenting individual in the Silicon Valley. And to be honest, I thought it was the greatest thing ever. I was like, this seems to work, this is fantastic, it's automating everything, what could possibly be wrong? And of course, as someone who has that much privilege, you kind of grow up and you realise, wait, this is actually really terrible. So as I came to UCLA, I was exploring my identity, I was being more out as queer, I was figuring out my gender, and I was figuring out things around neurodivergence, etc. And I was realising that there are so many ways in which these models just don't work for people who share my identity. And then on top of that, I was seeing all this amazing work by Timnit Gebru and Joy Buolamwini about how machines just are not able to see Black individuals, and it's even worse for Black women. So what are the ways in which intersectional identities are further marginalised by artificial intelligence? That was when it kind of clicked: wow, this is a serious problem, and with my privilege, as someone who grew up in the Silicon Valley, I need to really use that to be inspecting the ways that biases are learned and amplified and propagated by these systems. So I think that's shaped a lot of my research. But then I realised that it's not enough to just do academic research on this, because academia is inherently a very violent and oppressive institution. And so we need to be creating external structures that kind of co-opt the work that's being done in academia, because you can't really be studying bias without understanding diversity and inclusion and advocacy, and social structures.
And yeah, I think Queer in AI does a great job of this: we are directly doing community education and empowerment, we're working with queer people who are at the margins of artificial intelligence development, looking at things like queerness and caste, and queerness and race, and just what are their experiences with AI? What is the knowledge that they can share with us, so that we can learn from each other and build better artificial intelligence going into the future? I think Queer in AI has been great because of those community education and empowerment initiatives. We also have tonnes of work to get more queer people into the machine learning academic pipeline, like graduate application aid programmes and mentoring programmes. And we also do a lot of policy work to inform people who are in power about ways that they can make artificial intelligence more queer-inclusive, so we can collectively build better queer AI futures. But I know there's going to be some more discussion of the work that Queer in AI does later, so I'm going to save more details for that time.

ELEANOR DRAGE:

Fantastic, thank you. And the purpose of this is also to get people to go out and search Queer in AI and see what you guys are doing and join in. It's really great that you brought up how universities are really connected to capitalist structures, to the private sector. None of these institutions exist in isolation from each other, so that's a really important thing to remember. And also, you brought up how bias and issues to do with AI ethics are really involved with diversity and inclusion - you can't see those two things as separate. And yet, in a study that Kerry and I conducted recently with a technology multinational, we found that very few engineers actually think about diversity and inclusion; they think of that as a human problem, whereas AI ethics is a tech problem, and those two things don't meet at all. We're going to ask you our $3 billion questions next, which are: what is good technology? Is it even possible? And how can queer ideas and activism help us work towards it? And I'm sure you have lots of ideas. You mentioned before auditing and benchmarks, these two perhaps not so sexy things that no one has brought up on the show before! Kerry and I think a lot about auditing - perhaps it's auditing that is our step to good technology. What do you think?

ARJUN SUBRAMONIAN:

Yes, so with good technology, I want to start answering this question by deconstructing what the word good means. I think it's a very common practice in AI to come up with these very totalizing terms, like good, efficient, powerful, superhuman, right? We have all these terms which make it seem as if everyone understands what these words mean, when in reality we're implicitly centering a certain group of individuals. So I think good - there was actually a recent work that was talking about how we can't not work on some of these models just because they're oppressing certain communities, because that means we're ignoring the positive side effects. But then we were like, good for whom, right? Who is this model good for? And I think that is how we need to work towards making good technology. And again, good in quotes - I think we need to be constantly critical of who the technology is good for, who is benefiting from the technology. So I don't have a very clear answer to what good technology is. But ideally, I think it would be a technology that allows people at the margins, people who have been excluded from AI development since its conception - to be honest, a lot of marginalised communities have been part of technology for a long time, and then they've been actively marginalised in the process of development - so trying to bring these communities back in and ask them what they need in order to build technology that benefits them. And once we de-centre those in power from the definition of good - I know this is being done in a very implicit way already - but once we recognise that technology can have disparate impacts on different people, and that it can be bad for some and good for others, we can recognise that if we bring marginalised communities more actively into AI development then they can build technology that is good for them. And part of this is decentralising the development of AI, because we can't just have Google and Microsoft developing all these products that are, again in quotes, “good for everyone”, because that'll inherently make them good for those in power and not for those at the margins. So I think that's how we get towards good technology, and that's what good technology is. And regarding your second question, about how queer ideas and activism help us work towards it: this totalizing, and the centralization of power in AI development, is very typical of Western epistemology when it comes to science. I think you mentioned how people see ethics and diversity and inclusion as separate issues, and I think Western epistemology has kind of moved towards making ethics an abstract academic concept, like we see with deontology. There's no mention of anything related to social structures or diversity and inclusion in that; there are no Western frameworks for thinking about how we need to be including those at the margins. I think implicitly all these frameworks are centering just white men - white men are the human that is being considered and nobody else is; everyone else is basically being ignored. There are so many different kinds of epistemologies we can bring in - Black feminist epistemology, Indigenous epistemology.
But particularly for queer ideas, I think there's this misconception that queer ideas are irrelevant to AI people. They're like, what does it matter who you're in love with or what your gender is? But queerness is about so much more than just sexuality and gender. Of course, as you know, it's about taking these categories that have been constructed, and then dismantling them, and realising that these categories don't hold for everyone. And these categories are actually being used very heavily in AI development for the purpose of surveillance, for the purpose of compromising people's privacy. And all of this is scaling at the cost of context - a lot of these models are not personalised at all, people's identities are not being considered in the development of these systems; it's always just, how can we group people in a way that makes it easy to surveil them, makes it easy to categorise them? And so I think that's really where queer epistemology comes in really nicely. And of course, activism is also super important, because I do think that the people who are most familiar with queer epistemology are not necessarily queer academics, they're the people with lived experiences. If you grew up in a society where you never feel like you fit into the boxes that you're put in, then I think you will naturally have a lot more insight into the development of AI and the ways that it could harm people who share your identity.

KERRY MACKERETH:

Absolutely. And I think this is something that feminist and queer theories and approaches in this field really bring us: this real attentiveness and care for lived experience, and for collective and collaborative forms of knowing. And for our lovely listeners, epistemology - vocab for the day - is sort of how you know things, or how you come to know the things that you take to be true or take to be knowledge. So I was really interested in, well, firstly, all your different areas of interest - you said that you love to squirrel into all sorts of things and find everything interesting, and they all sound really, really fascinating. But one part of your work that really resonates, I think, with Eleanor's and my work, and Professor Jude Browne's here at Cambridge, is thinking through these ideas of bias and de-biasing, and how, like you said, this has become such a pillar of AI ethics work. But in our work we've found that a lot of people see bias as this kind of mechanical or mathematical phenomenon that can just be stripped or removed from a system - so, fundamentally, as a technical problem. So I want to ask you, what do you think are the benefits and the limits of approaching bias, or ethics more broadly, as a matter of technical fixes?

ARJUN SUBRAMONIAN:

Yeah, that's an excellent question. This is something that I could probably talk about for a really long time, and because there's a time limit, I'm going to try to crunch it in a little bit. So bias inherently is not a purely technical problem, and that's where the problem starts: people come up with all these mathematical formulations and conceptual notions of bias that strip bias from the sociotechnical context in which our models live. And I think part of the reason is just because it's simple. It's really nice to be able to divorce yourself from social context and just say, I can run these algorithms and remove bias. And of course, it's not going to work. Some of the things that come to mind off the top of my head are the ways in which we operationalize bias in AI. Everything is almost parity-based, right - we ask, is the performance of this model the same for men as it is for women? And inherently there, you're already seeing that non-binary people are not part of the equation. Is that the kind of justice that you actually care about? Obviously, that's not the only kind of justice - there's distributive justice, representational justice - but for some reason we're focusing on parity-based justice, because it's just so easy to compare two numbers and say, are they the same? There are also issues with the categories of bias that we've created. We often talk about overrepresentation or underrepresentation or stereotypes, but what about the ways in which certain languages are not even included in language models to begin with? What about compute bias - a lot of people don't even have access to training artificial intelligence. And these are kind of moving more towards the social context in which models exist, right? Underrepresentation and overrepresentation have a more statistical ring to them; they're not really explicitly mentioning the affected groups. But if we start talking about the languages a language model lacks, or a lack of compute, then we inherently start moving closer to the problems - some people are just financially excluded from the development of AI, and these models are only meant to serve English-speaking people, and not just English-speaking people, but people who have these prestige dialects of English as well. And then, now that we have all these issues with the conceptualization and operationalization of bias, we start moving into the idea that you can actually remove bias from systems. And that also has some issues, because so much of ethics is based on the idea that you can run an algorithm and then get rid of discrimination, without considering the downstream use case of the model or long term monitoring of the model, and then also somehow creating this veneer of discrimination-free AI just on the basis of these algorithms - that just leads to so many issues. And I think to some extent it almost ends up further reinforcing harm, because I don't think a lot of these de-biasing algorithms actually work. It's not like we can go around de-biasing humans, right? We have to constantly be critical of the ways in which we're interacting with other people, of the systemic issues that affect marginalised communities.
And yeah, clearly, none of these are taken into account. In fact, with a lot of these bias definitions and de-biasing methods, there's no room for repairing anything - they're not accommodating of reparative techniques. You remove bias, but then you're not really engaging with the historical or social context; what I want to say is that there's no reparation based on historical or social context, because that context isn't even considered to begin with. And the final thing I want to say about bias, really quickly, is that with a lot of these discussions about bias, I think the thing that we need to do right now is talk about power centralization. That's the biggest thing we actually need to do to get closer to combating bias. If we want to talk about bias, we actually need to talk about harms, because that's closer to the social context. Why are these harms coming about? Because we're putting the development of AI in the hands of a few powerful individuals. And ultimately, we need to go about destroying capitalism, destroying big corporations. That's kind of going to be the solution, I think, if we go down this road of looking at how we actually tackle bias. How do we take compute and put it in the hands of communities who actually know how to use that compute for their own benefit, instead of continuing to uphold large corporations that have no vested interest in any of these communities, and are only building AI for the sake of surveillance capitalism?
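[For readers who want to see what a parity-based check actually looks like, here is a minimal sketch in Python. The data and function names are entirely hypothetical, not drawn from the episode or any particular library; the point is just to show how this kind of metric reduces "bias" to a gap between the groups that happen to appear in the data, so anyone erased at data collection time, such as non-binary people, never enters the calculation.]

```python
# Hypothetical toy example of a parity-based fairness check (not from the episode,
# and not any particular library's API) -- it only compares the groups it can see.
from collections import defaultdict

def per_group_accuracy(y_true, y_pred, groups):
    """Accuracy computed separately for each demographic group in `groups`."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

def accuracy_parity_gap(y_true, y_pred, groups):
    """Largest accuracy difference between any two groups present in the data."""
    accs = per_group_accuracy(y_true, y_pred, groups)
    return max(accs.values()) - min(accs.values()), accs

# Toy labels and predictions; the group annotations only contain "man" and "woman",
# so anyone erased at data-collection time never appears in the metric at all.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1, 1, 1]
groups = ["man", "woman", "man", "woman", "man", "woman", "man", "woman"]

gap, accs = accuracy_parity_gap(y_true, y_pred, groups)
print(accs)  # {'man': 0.5, 'woman': 1.0}
print(gap)   # 0.5 -- a single number that says nothing about who is missing entirely
```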

KERRY MACKERETH:

That's so fascinating. And I really love the way that you're thinking through how we do the kind of restorative work that needs to happen, rather than just trying to strip bias or strip the problem out of the system. But I was also really interested in what you were saying around the use of these more statistical terms like over-representation and under-representation, because, as our regular listeners might know, I work in Asian diaspora studies, and I think this language of over-representation is one that is often levied against members of the Asian diaspora as a way of saying know your place, or of describing the kinds of very contingent and limited forms of power that diasporic people from Asia experience - some of us occupy, in many different ways, positions of relative privilege. And so even just seeing how these discourses of under- and over-representation play out in the bias debates is absolutely fascinating to me.


ELEANOR DRAGE:

Yeah, absolutely. I also really liked that you brought up reparative ways of doing justice. A lot of listeners will be familiar with the term reparative justice, or reparations, in the context of South Africa, for example, and post-apartheid reparative work. There's a great paper, if you are a paper-reading kind of person, called “Algorithmic Reparation” by Jenny Davis, Apryl Williams and Michael Yang. And that talks about the need to redress harms, to actively respond to them by doing something to push the scales of this balance in the other direction, which seems to me what you're talking about there. I know that where we do this work is really important. And what's really interesting about you discussing this in a computational way, a technical way, as well as in an activist sense, is that you can understand both sides, right: how technical fixes work and the ways people think ‘how can we fix this computationally’, and also the activist way of thinking, or the kind of humanities end of the spectrum, which is more where Kerry and I are. And you've worked both in big tech and outside of it. I know that there are loads of young engineers and budding computer scientists who think, where should I work to do ethics work? Where is the best place for me to be? Should I be in the corporations, or should I be outside of them? And I guess where you are really orients the way that you act on different kinds of harms - as you said, different kinds of institutions have different investments. So what has been your experience? Is big tech just using language like ‘a lack of compute’ as a euphemism for harm, and not really doing as much as they should? Or is there really something that can be done while working in these kinds of companies? What do you think?

ARJUN SUBRAMONIAN:

Yeah, that's a very interesting, and also very complex, question. So I have worked on AI ethics at some of these bigger tech companies, and I do think there is some benefit to being in these companies. One is just having the financial resources to go about doing your research - especially for a lot of people in marginalised communities, it's really nice to just be able to do work that addresses your community and get paid for it. That's not something that happens very often in academia; it's almost always underappreciated or looked down upon if you do research that concerns, say, the Asian American diaspora, or the queer community, because this is kind of seen as part of the fringe of academic research. And the other thing is, I've also realised that I'm getting a lot more insight into the kind of awful ways in which AI development operates by being part of these bigger tech companies. Because you interact with other teams, and you see that there are all these not-great practices involving the way that people talk about bias, or the way people don't inspect their datasets, or they just run their models and generate a bunch of results and say, look, we're beating humans on this task - and the ways in which they're deploying these models: you get a lot more visibility into it by being at these companies. So I think that's also helped shape my research. At the same time, though, I think it's almost the best of both worlds to also be involved in these activism spaces, because they kind of complement each other. As part of Queer in AI, I get a lot more opportunities to do grassroots work, to work with the communities that are being impacted negatively by AI, who are being excluded from AI. And that obviously shapes my perspective as well, just as much as poor development practices, if not more. I guess what I want to say is that by having more visibility and doing this kind of activism work, you can kind of address these issues within bigger tech companies. I'm not saying that I actually feel like I have any sense of efficacy in doing so - I don't feel like I'm going to change a company from the inside, and I don't think that's ever possible because of just how these corporations exist. But I do think it has some impact when I get to go to work and share my perspective as a queer individual, and how AI affects me, with my team. And then I know that at least touches a few individuals and definitely changes the way that they do research. And I guess the last thing I want to say about this is, I also can tell when I don't want to send people in my community to a company - so obviously, we do things like reject sponsorship if we feel like the company is a terrible environment, or is actively harming queer people, for example.

KERRY MACKERETH:

And on that note, we'd actually be really interested to hear a bit more about why Queer in AI rejected Google sponsorship. We'd love to hear what the rationale was and how the decision was made.

ARJUN SUBRAMONIAN:

Yeah, so I think we made the active push to do so following the firing of Dr Timnit Gebru, and then subsequently Meg Mitchell, but also April Curley and so many other folks who have just been consistently harmed by Google over the past few years. I think we recognised that Google's ethical AI team was one of these amazing organisations that people looked up to, and they were doing just fantastic research, very critical research, in the space of AI. And the ability of Google to go in and just snap their fingers and get rid of this team, and put Dr Gebru and Dr Mitchell's lives in danger, totally rip their lives apart, and not even acknowledge they did anything wrong - I think that was when we were like, this is obviously not okay, this is not something that we can… we can't continue our partnership with Google, because it's very clear that they understand what power they have, and are not willing to take any accountability for their actions. And in addition to that, there are a lot of other reasons that we pushed for this rejection of sponsorship. You might have heard about these issues with Google Scholar and the deadnaming of trans authors. This is an issue that continues to not be fixed. We have tried to work with Google to just give trans authors the agency to change their own names - you would think that it would be easy, but clearly there are so many other things that come up; people have told us, ‘we want to ensure accuracy’. And so not allowing people to change their names is not great, and again, this is just this current theme of putting marginalised communities at harm for the sake of the company's bottom line. There's absolutely no vested interest in any of these companies in actually benefiting people at the margins. They do not care. And yeah, we've tried so many things. They proposed this tool where they would allow trans authors to go talk to publishers, and if they change their name with the publisher, then the publisher will update Google Scholar and the name will change in the publication on Google Scholar - but there are just so many bottlenecks in that process. Publishers are not very responsive either. So now we're stripping trans authors of their dignity, we're making them go through this ongoing corrective epistemic labour of just needing to change their name, exposing them to violence - there are safety issues here. And Google is aware of all of this. We have made it very clear to them that these are all the issues where Scholar has failed us; #scholarhasfailedus has been up for a long time, and Google is aware of that as well. But their continuing refusal to make any changes in this direction… it just speaks so much. It's not an accident, it's very intentional. This is what they care about. And so I think that was another big reason we wanted to separate from Google. This is not related to the rejection of sponsorship, but more recently, their cancelling of a talk by a Dalit speaker on capitalism and caste discrimination has also been, I think, kind of telling of the same problem: they are willing to say that they're diverse and inclusive as long as it keeps their company happy and defends their reputation. As soon as there's any sense that people are unhappy with them, or they could potentially lose anything, then they take a step back. And so they cancelled this talk.
But of course, there are still Dalit, Bahujan, and so many other oppressed-caste individuals at Google who are suffering from lack of promotion, a toxic workplace environment, etc., just because Google is looking to protect itself, and not the people at the margins within their company.

ELEANOR DRAGE:

Yeah, that seems to be the message that we're getting from lots of people: that it does diversity and justice up to a point, but don't rock the boat. And on the point of deadnaming, I keep, for some reason, recommending this fantastic book by Juliet Jacques called Trans: A Memoir. She is a writer for The Guardian, and she talks about jumping through these many administrative hoops when transitioning - and deadnaming is when you call a trans person by their birth name, or a name that was previously used that they don't go by anymore. And it's a huge issue. And then Os Keyes is another amazing trans scholar who talks about how this process works with data and how technology has made this administrative process even more toxic and confusing and painful for people. So anyway, it's just wonderful that you've made this kind of decision and you've thought about it so carefully. I know that there are major benefits that come with being sponsored by a big, wealthy, powerful institution, so really, it can be a sacrifice in some ways too. So that's brilliant that people are taking these kinds of decisions really seriously. Just to finish, can you tell us briefly about some of the things that Queer in AI is doing? And what can listeners do if they're keen to join or want to get involved in some way? How can they do that?

ARJUN SUBRAMONIAN:

Yeah, of course. So just really quickly, to give a high-level overview of some of the work we do: our mission at Queer in AI is to look at the ways we can advance research at the intersections of queerness and AI, while also fostering a strong community of queer and trans researchers and bringing visibility to and celebrating their work. Some of the methods that we use to achieve this - I mentioned earlier community education and empowerment. This involves running workshops at conferences, having talks from people at the intersections of different queer identities about their experiences with artificial intelligence, or just academia in general, to build this collective knowledge about the ways in which we are constantly being excluded from artificial intelligence. We also provide research venues at these conferences, so that people in our community whose work is often rejected from these machine learning or AI conferences have a place to share their ideas. We also have these collective aid programmes. For the graduate application aid programme, we get donations in order to support just the application fees of queer scholars - it's kind of shocking how expensive they are. We had about $72,000 in donations last year, and even with that, we were only able to help around, I think, 100 applicants who were applying to grad school. It's just such an expensive process, and a lot of queer people are financially cut off from their families, and there are extra expenses from managing oppression and trauma. And this is just compounded for queer applicants from the Global South - apparently the GRE fees alone are three months of an average salary in Ethiopia. So it's very intentionally exclusionary. We have this programme to help queer people, and you can actually go to our website, look for this programme, and donate to it. That's one really great way that you can help us, because yeah, we need more queer people in AI - it would be fantastic if there were, because that's kind of how we get closer to good technology. We found that the programme allowed 60% of applicants to take admissions tests, 50% to avoid skipping buying essentials, and a third to avoid skipping groceries and bills. So it's a very pressing issue. And I just quickly wanted to touch upon the mentoring programmes that we have, which are also in the same line, as well as the research and policy work that we do. We try to write as much as we can about queer harms and take a very critical approach to AI, share our ideas, and do it in a non-academic environment, because we think research really needs to come from our organisation if we're actually going to make an impact on the way that AI interacts with queer communities. And we also do some advocacy. Queer inclusivity at conferences is a big thing - we've gone from revising registration form questions about gender and pronouns, very simple things, all the way to trying to institute formal name change procedures for conference proceedings. And we talk to a lot of different organisations, like Semantic Scholar, about ways to make their platforms queer-inclusive. And we do our best to take all this knowledge that we're sharing with these conferences and organisations and disseminate it, so we try to have as up-to-date a website as possible, and we do Twitter threads.
But yeah, take a look at our Twitter, take a look at our website, if you want to learn more about some of the issues that we discuss and the work that we do. And please donate - donations are just so helpful. It's nice when we can get money from people who care, and not companies like Google.

KERRY MACKERETH:

Absolutely. So everyone listening, please go ahead, check out these pages and donate. Arjun, thank you so much - this has been such a fascinating episode. It's really been such a pleasure to be able to chat with you, and we hope to chat with you again soon.

ARJUN SUBRAMONIAN:

Thank you so much. It was a pleasure talking to both of you.


